--- author: - | M. Sharif [^1] and Sehar Aziz [^2]\ Department of Mathematics, University of the Punjab,\ Quaid-e-Azam Campus, Lahore-54590, Pakistan title: '**On the Physical Properties of the Plane Symmetric Self-Similar Solution**' --- This paper discusses some of the physical properties of plane symmetric self-similar solutions of the first kind (i.e., homothetic solutions). We are interested in calculating the expansion, the acceleration, the rotation, the shear tensor, the shear invariant, and the expansion rate (given by Raychaudhuri’s equation). We check these properties both in co-moving and non-co-moving coordinates (only in the radial direction). Further, the singularity structure of such solutions will be explored. This analysis provides some interesting features of self-similar solutions.\ [**Keywords:**]{} Self-similar solutions, Properties\ [**PACS:**]{} 04.20.Jb I. Introduction {#i.-introduction .unnumbered} =============== The similarity assumption reduces the complexity of partial differential equations as it turns the governing partial differential equations into relatively simple ordinary differential equations. The Einstein field equations (EFEs) $$\begin{aligned} R_{ab}-\frac{1}{2}g_{ab}R=\kappa T_{ab},\quad (a,b=0,1,2,3)\end{aligned}$$ are non-linear partial differential equations. Self-similar solutions have been shown to be very useful in solving this set of equations. Self-similarity refers to the fact that the spatial distribution of characteristics of motion remains similar to itself at all times during the motion. Similarity solutions were first studied in General Relativity (GR) by Cahill and Taub \[1\]. They assumed that the solution was such that the dependent variables were essentially functions of a single independent variable constructed as a dimensionless combination of the independent variables. 
In the simplest situation, a similarity solution is invariant under the transformation $r\rightarrow ar$, $t\rightarrow at$ for any constant $a$. Geometrically, they showed that the existence of a similarity of the first kind in this situation could be invariantly formulated in terms of the existence of a homothetic vector. A natural generalization of homothety, called kinematic self-similarity, exists and is defined by the existence of a kinematic self-similar (KSS) vector field. A KSS vector $\xi$ satisfies the following conditions: $$\begin{aligned} \pounds_{\xi}h_{ab}&=& 2\delta h_{ab},\\ \pounds_{\xi}u_{a}&=& \alpha u_{a},\end{aligned}$$ where $h_{ab}$ is the projection tensor, and $\alpha$ and $\delta$ are constants. The similarity transformation is characterized by the scale-independent ratio $\alpha/\delta$, which is known as the similarity index. By using a similarity index, Carter and Henriksen \[2,3\] defined other kinds of self-similarity: namely, second, zeroth, and infinite kinds. In the context of kinematic self-similarity, homothety is considered as the first kind. Several authors have explored KSS perfect-fluid solutions. The only barotropic equation of state compatible with self-similarity of the first kind is $p=k\rho$. Carr \[4\] has classified self-similar perfect-fluid solutions of the first kind for the dust case ($k=0$). The case $0<k<1$ has been studied by Carr and Coley \[5\]. Coley \[6\] has shown that the Friedmann-Robertson-Walker (FRW) solution is the only spherically symmetric homothetic perfect-fluid solution in the parallel case. McIntosh \[7\] has shown that a stiff fluid ($k=1$) is the only perfect fluid compatible with homothety in the orthogonal case. Benoit and Coley \[8\] have studied analytic spherically symmetric solutions of the EFEs coupled with a perfect-fluid and admitting a KSS vector of the first, second, or zeroth kind. Carr et al. 
\[9\] considered the KSS vector associated with the critical behavior observed in the gravitational collapse of a spherically symmetric perfect fluid with equation of state $p=k\rho$. Carr et al. \[10\] further investigated the solution space of self-similar spherically symmetric perfect-fluid models and the physical aspects of these solutions. They combined the state space description of the homothetic approach with the use of the physically interesting quantities arising in the co-moving approach. Maeda et al. \[11\] discussed the classification of the spherically symmetric KSS perfect-fluid and dust solutions. Recently, Sharif and Sehar investigated the classification of cylindrically symmetric \[12\] and plane symmetric \[13\] KSS perfect-fluid and dust solutions. The existence of self-similar solutions of the first kind is related to conservation laws and to the invariance of the problem with respect to the group of similarity transformations of quantities with independent dimensions. This can be characterized in GR by the existence of a homothetic vector. Perveen \[14\] classified plane symmetric Lorentzian manifolds according to their homotheties and found different solutions admitting 5, 7, or 11 homotheties. Among these solutions, two correspond to 5 homotheties and five admit 7 homotheties. The only solution admitting 11 homotheties is the Minkowski metric. Recently, Sharif and Sehar explored the physical properties of the spherically symmetric self-similar solution of the first kind \[15\] and the cylindrically symmetric self-similar solution of the first kind \[16\]. This is the third paper in the series. Here, we are extending the same analysis to the plane symmetric self-similar solution of the first kind. The paper can be outlined as follows: In Section II, we shall write down the self-similar solutions of the plane symmetric spacetime. 
Section III is devoted to a discussion of the physical properties of these solutions both in co-moving and non-co-moving coordinates. In section IV, we shall explore the singularity structure of these solutions. Finally, we shall summarize and discuss all the results in Section V. II. Plane Symmetric Self-Similar Solutions of the First Kind {#ii.-plane-symmetric-self-similar-solutions-of-the-first-kind .unnumbered} ============================================================ The general plane symmetric spacetime is given by the line element \[17\] $$ds^2=e^{2\nu(t,x)}dt^2-e^{2\lambda(t,x)}dx^2 -e^{2\mu(t,x)}(dy^2+dz^2),$$ where $\nu$, $\lambda$ and $\mu$ are arbitrary functions of $t$ and $x$. Perveen \[14\] classified plane symmetric Lorentzian manifolds by homotheties and found self-similar solutions of the first kind. There are two classes of such solutions, one admitting 5 homotheties and the other admitting 7 homotheties. This paper is devoted to discussing the physical properties of these solutions. The first metric is given by $$ds^2=e^{2\nu(x)}dt^2- dx^2-e^{2\mu(x)}(dy^2+dz^2).$$ This metric has the following two solutions having 5 and 7 homotheties, respectively: $$ds^2=(\frac{x}{x_0})^{2A} dt^2- dx^2-(\frac{x}{x_0})^{2B} (dy^2+dz^2),$$ where $A\neq B,~B\neq0,~A\neq1$, and $$ds^2=(\frac{x}{x_0})^{2A} (dt^2-dy^2-dz^2)- dx^2,$$ where $A\neq0$. The second metric has the form $$ds^2=dt^2-e^{2\lambda(t)} dx^2-e^{2\mu(t)}(dy^2+dz^2).$$ This metric has the following two solutions with 5 and 7 homotheties, respectively: $$ds^2=dt^2-(\frac{t}{t_0})^{2A} dx^2-(\frac{t}{t_0})^{2B} (dy^2+dz^2),$$ where $A\neq B,~B\neq0,~A\neq1$, and $$ds^2=dt^2-(\frac{t}{t_0})^{2A} (dx^2+dy^2+dz^2),$$ where $A\neq0$. 
The metric $$ds^2=e^{2f(x)}dt^2-dx^2-e^{2\frac{t}{a}+2f(x)}(dy^2+dz^2),$$ where $a\neq0$, has only one solution admitting 7 homotheties, and that solution is given by $$ds^2=(\frac{x}{x_0})^2 dt^2- dx^2-(\frac{x}{x_0})^2 e^{2\frac{t}{a}}(dy^2+dz^2).$$ The metric $$ds^2=dt^2-e^{2f(t)}dx^2-e^{2\frac{x}{a}+2f(t)}(dy^2+dz^2),$$ where $a\neq0$, also has one solution with 7 homotheties, and that solution is given by $$ds^2= dt^2-(\frac{t}{t_0})^2 dx^2-(\frac{t}{t_0})^2 e^{2\frac{x}{a}}(dy^2+dz^2).$$ Finally, the metric $$ds^2=dt^2-dx^2-e^{2(at+bx)}(dy^2+dz^2)$$ yields the following solution with 7 homotheties: $$ds^2= dt^2- dx^2-e^{2a(t+x)}(dy^2+dz^2).$$ Thus, we have a total of seven self-similar solutions that are given by Eqs.(6), (7), (9), (10), (12), (14), and (16). These can be divided into two classes admitting 5 and 7 homotheties, respectively. III. Kinematics of the Velocity Field {#iii.-kinematics-of-the-velocity-field .unnumbered} ===================================== This section is devoted to discussing the kinematical properties of the self-similar solutions of the first kind both in co-moving and non-co-moving coordinates. These properties \[17\] can be listed as follows: The volume behavior of the fluid can be determined by the expansion scalar defined by $$\Theta=u^{a}_{;a}.$$ The acceleration can be defined as $$\dot{u}_a=u_{a;b} u^b,$$ where $u_a$ is the velocity four-vector. 
The rotation is given by $$\omega_{ab}=u_{[a;b]}+ \dot{u}_{[a}u_{b]}.$$ The shear tensor, which describes the distortion of the fluid flow that leaves the volume invariant, is given by $$\sigma_{ab}=u_{(a;b)}+\dot{u}_{(a}u_{b)} -\frac{1}{3}\Theta h_{ab}.$$ The shear scalar, which gives the measure of anisotropy, is defined by $$\sigma=\sigma_{ab}\sigma^{ab}.$$ The expansion rate with respect to proper time is given by Raychaudhuri’s equation \[18\] $$\frac{d\Theta}{d\tau}=-\frac{1}{3}\Theta^2-\sigma_{ab}\sigma^{ab} +\omega_{ab}\omega^{ab}-R_{ab}u^{a}u^{b}.$$ We now discuss these properties for the self-similar solutions given in the previous section. 1. Kinematic Properties in Co-moving Coordinates {#kinematic-properties-in-co-moving-coordinates .unnumbered} ------------------------------------------------ First, we evaluate the kinematical properties of the self-similar solutions in co-moving coordinates. For this purpose, we have divided the solutions into two classes. Class 1 has solutions that admit 5 homotheties while class 2 has solutions that admit 7 homotheties. ### A. Class 1 {#a.-class-1 .unnumbered} This class has two solutions given by Eqs.(6) and (9). We start the discussion of kinematic properties for the solution given by Eq.(6). The expansion scalar is zero for this spacetime in co-moving coordinates. 
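These co-moving kinematic quantities can be checked symbolically. The sketch below is our illustration (not part of the paper; the variable names are ours): it builds the metric of Eq.(6), takes the normalized co-moving four-velocity, and computes the expansion and acceleration from the Christoffel symbols.

```python
import sympy as sp

t, x, y, z, A, B, x0 = sp.symbols('t x y z A B x0', positive=True)
co = [t, x, y, z]
n = range(4)

# Metric of Eq.(6): ds^2 = (x/x0)^{2A} dt^2 - dx^2 - (x/x0)^{2B}(dy^2 + dz^2)
g = sp.diag((x/x0)**(2*A), -1, -(x/x0)**(2*B), -(x/x0)**(2*B))
gi = g.inv()

# Christoffel symbols: Gam[a][b][c] = Gamma^a_{bc}
Gam = [[[sp.simplify(sum(gi[a, d]*(sp.diff(g[d, b], co[c])
                                   + sp.diff(g[d, c], co[b])
                                   - sp.diff(g[b, c], co[d])) for d in n)/2)
         for c in n] for b in n] for a in n]

# Normalized co-moving four-velocity u^a (only a time component)
u = [(x/x0)**(-A), 0, 0, 0]

# Expansion Theta = u^a_{;a}
Theta = sp.simplify(sum(sp.diff(u[a], co[a])
                        + sum(Gam[a][a][b]*u[b] for b in n) for a in n))

# Acceleration \dot{u}_a = u_{a;b} u^b, with u_a = g_{ab} u^b
ul = [sp.simplify(sum(g[a, b]*u[b] for b in n)) for a in n]
acc = [sp.simplify(sum((sp.diff(ul[a], co[b])
                        - sum(Gam[c][a][b]*ul[c] for c in n))*u[b] for b in n))
       for a in n]

print(Theta)   # vanishing co-moving expansion, as stated in the text
print(acc[1])  # the only non-zero acceleration component, -A/x
```

The same machinery, with the appropriate metric and four-velocity substituted, reproduces the corresponding quantities for the other solutions.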
The only non-zero component of the acceleration turns out to be $$\dot{u}_1=-\frac{A}{x}.$$ The non-vanishing rotation component becomes $$\omega_{01}=2\frac{A}{x}(\frac{x}{x_0})^A.$$ The shear component is $$\sigma_{01}=-\omega_{01}$$ while the shear invariant becomes $$\sigma=-4\frac{A^2}{x^2}=-4{\dot{u}_1}^2.$$ With Raychaudhuri’s equation, the rate of change of expansion takes the following form: $$\frac{d\Theta}{d\tau}=4\frac{A^2}{x^2} -\frac{A(A+2B-1)}{x^2}(\frac{x}{x_0})^{2B-2A}.$$ For the self-similar solution given by Eq.(9), the expansion scalar is given by $$\Theta=\frac{(A+2B)}{t}.$$ The acceleration and the rotation turn out to be zero for this metric. The shear components are given as $$\begin{aligned} \sigma_{11}=\frac{(2B-5A)}{3t}(\frac{t}{t_0})^{2A},\quad \sigma_{22}=\frac{(A-4B)}{3t}(\frac{t}{t_0})^{2B}=\sigma_{33},\end{aligned}$$ and the shear invariant becomes $$\sigma=\frac{4B^2+3A^2-4AB}{t^2}.$$ The rate of expansion takes the form $$\frac{d\Theta}{d\tau}=-\frac{1}{3t^2}(7A^2+10B^2+3A+6B-8AB).$$ ### B. Class 2 {#b.-class-2 .unnumbered} Class 2 has five solutions that are given by Eqs.(7), (10), (12), (14) and (16). In this class, for the first solution, the expansion scalar and rotation are zero. The only acceleration component is given by $$\dot{u}_1=-\frac{A}{x}.$$ The only component of the shear becomes $$\sigma_{01}=-2\frac{A}{x}(\frac{x}{x_0})^A,$$ and the shear invariant takes the form $$\sigma=-4\frac{A^2}{x^2}.$$ The expansion rate takes the following form: $$\frac{d\Theta}{d\tau}=\frac{A}{x^2}(A+1).$$ For the second solution, given by Eq.(10), the expansion scalar is given by $$\Theta=\frac{3A}{t}.$$ The acceleration and the rotation are zero. 
The non-zero components of the shear are $$\sigma_{11}=-\frac{A}{t}(\frac{t}{t_0})^{2A}=\sigma_{22}=\sigma_{33},$$ and the shear invariant becomes $$\sigma=\frac{3A^2}{t^2}.$$ The rate of change of expansion takes the form $$\frac{d\Theta}{d\tau}=-\frac{3A}{t^2}(A+1).$$ The third solution in class 2 has the following expansion scalar: $$\Theta=\frac{2x_0}{ax}.$$ The only acceleration component is given by $$\dot{u}_1=\frac{1}{x}$$ while the rotation turns out to be zero. The non-vanishing components of the shear are $$\begin{aligned} \sigma_{11}=\frac{2x_0}{3ax},\quad \sigma_{22}=-\frac{4x}{3ax_0}e^{2\frac{t}{a}}=\sigma_{33},\end{aligned}$$ and the shear invariant becomes $$\sigma=\frac{4{x_0}^2}{a^2x^2}.$$ The rate of change of expansion has the following form: $$\frac{d\Theta}{d\tau}=-\frac{2}{3a^2x^2}(5{x_0}^2+3a^2).$$ For the fourth solution, given by Eq.(14), the expansion scalar is given by $$\Theta=\frac{1}{t}.$$ The acceleration and the rotation are zero for this metric. The components of the shear are $$\sigma_{22}=-\frac{5t}{3t^2_0}e^{2\frac{x}{a}}=\sigma_{33}$$ while the shear invariant becomes $$\sigma=\frac{50}{9t^2}.$$ The expansion rate is given, by using Raychaudhuri’s equation, as $$\frac{d\Theta}{d\tau}=-\frac{53}{9t^2}.$$ The last solution in this class, given by Eq.(16), has the expansion scalar $$\Theta=2a.$$ The acceleration and the rotation become zero. The components of the shear are $$\sigma_{22}=-\frac{4a}{3}e^{2a(t+x)}=\sigma_{33},$$ and the shear invariant becomes $$\sigma=\frac{32a^2}{9}.$$ The expansion rate is $$\frac{d\Theta}{d\tau}=-\frac{26a^2}{9}.$$ 2. Kinematic Properties in Non-co-moving Coordinates {#kinematic-properties-in-non-co-moving-coordinates .unnumbered} ---------------------------------------------------- This section is devoted to discussing the same properties of the self-similar solutions in non-co-moving coordinates only in the radial direction. ### A. 
Class 1 {#a.-class-1-1 .unnumbered} For the first metric in class 1, given by Eq.(6), we obtain non-zero expansion as follows: $$\Theta=-\frac{(A+2B)}{x}.$$ The non-zero components of the acceleration turn out to be $$\dot{u}_0=\frac{A}{x}(\frac{x}{x_0})^A,\quad \dot{u}_1=-\frac{A}{x}.$$ The non-zero rotation component is $$\omega_{01}=\frac{A}{x}(\frac{x}{x_0})^A=\dot{u}_0.$$ The components of the shear are $$\begin{aligned} \sigma_{00}=4\frac{A}{x}(\frac{x}{x_0})^{2A},\quad \sigma_{01}=\frac{2}{3x}(\frac{x}{x_0})^A (B-4A),\nonumber\\ \sigma_{11}=\frac{4}{3x}(A-B),\quad \sigma_{22}=-\frac{1}{3x}(\frac{x}{x_0})^{2B}(A+8B)=\sigma_{33},\end{aligned}$$ and the measure of anisotropy is given by $$\sigma=\frac{2}{9x^2}(49A^2+16AB+70B^2).$$ The rate of change of expansion using Raychaudhuri’s equation becomes $$\frac{d\Theta}{d\tau}=-\frac{1}{9x^2}(101A^2+134B^2+62AB-18B)-\frac{A}{x}.$$ For the second metric in this class, we obtain non-zero expansion as follows: $$\Theta=\frac{A+2B}{t}.$$ The non-zero components of acceleration are $$\dot{u}_0=-\frac{A}{t},\quad \dot{u}_1=\frac{A}{t}(\frac{t}{t_0})^A,$$ and the non-zero rotation component is $$\omega_{01}=\frac{A}{t}(\frac{t}{t_0})^A=\dot{u}_1.$$ The non-vanishing components of the shear are $$\begin{aligned} \sigma_{00}=-\frac{2A}{t},\quad \sigma_{01}=\frac{(10A+2B)}{3t}(\frac{t}{t_0})^A,\nonumber\\ \sigma_{11}=-\frac{(10A-4B)}{3t}(\frac{t}{t_0})^{2A},\quad \sigma_{22}=\frac{(A-4B)}{3t}(\frac{t}{t_0})^{2B} =\sigma_{33},\end{aligned}$$ and the shear invariant is given by $$\sigma=\frac{2}{9t^2}(19A^2-68AB+22B^2).$$ The expansion rate becomes $$\frac{d\Theta}{d\tau}=-\frac{1}{9t^2}(41A^2+38B^2-106AB+18B)-\frac{A}{t}.$$ ### B. 
Class 2 {#b.-class-2-1 .unnumbered} For the first metric in this class, we obtain non-zero expansion as follows: $$\Theta=-\frac{3A}{x}.$$ The non-zero components of the acceleration turn out to be $$\dot{u}_0=\frac{A}{x}(\frac{x}{x_0})^A,\quad \dot{u}_1=-\frac{A}{x},$$ and the non-zero rotation component is $$\omega_{01}=\frac{A}{x}(\frac{x}{x_0})^A=\dot{u}_0.$$ The non-vanishing components of the shear are $$\begin{aligned} \sigma_{00}=4\frac{A}{x}(\frac{x}{x_0})^{2A},\quad \sigma_{01}=-\frac{2A}{x}(\frac{x}{x_0})^A,\quad \sigma_{22}=-\frac{3A}{x}(\frac{x}{x_0})^{2A}=\sigma_{33}\end{aligned}$$ while the measure of anisotropy is given by $$\sigma=\frac{30A^2}{x^2}.$$ The rate of change of expansion becomes $$\frac{d\Theta}{d\tau}=-\frac{A}{x^2}(33A+2)-\frac{A}{x}.$$ The second solution in this class gives the non-zero expansion $$\Theta=\frac{3A}{t}.$$ The non-zero components of the acceleration are $$\dot{u}_0=-\frac{A}{t},\quad \dot{u}_1=\frac{A}{t}(\frac{t}{t_0})^A,$$ and the non-zero rotation component is $$\omega_{01}=\frac{A}{t}(\frac{t}{t_0})^A=\dot{u}_1.$$ The components of the shear become $$\begin{aligned} \sigma_{00}=-\frac{2A}{t},\quad \sigma_{01}=\frac{4A}{t}(\frac{t}{t_0})^A,\nonumber\\ \sigma_{11}=-\frac{2A}{t}(\frac{t}{t_0})^{2A},\quad \sigma_{22}=-\frac{A}{t}(\frac{t}{t_0})^{2A} =\sigma_{33},\end{aligned}$$ and the shear invariant is given by $$\sigma=-\frac{6A^2}{t^2}.$$ The rate of expansion becomes $$\frac{d\Theta}{d\tau}=-\frac{A}{t}+\frac{A}{t^2}(3A-2).$$ For the third solution, the non-zero expansion is given by $$\Theta=-\frac{3}{x}+\frac{2x_0}{ax}.$$ The components of the acceleration are $$\dot{u}_0=\frac{1}{x_0},\quad \dot{u}_1=-\frac{1}{x},$$ and the rotation component is $$\omega_{01}=\frac{1}{x_0}=\dot{u}_0.$$ The components of the shear take the form $$\begin{aligned} \sigma_{00}=\frac{4x}{{x_0}^2},\quad \sigma_{11}=\frac{4x_0}{3ax},\quad \sigma_{01}=-\frac{2}{3ax_0}(3a+x_0),\nonumber\\ 
\sigma_{22}=-\frac{x}{3a{x_0}^2}(9a+4x_0)e^{2\frac{t}{a}}=\sigma_{33}\end{aligned}$$ while the measure of anisotropy is $$\sigma=\frac{2}{9a^2x^2}(135a^2+22{x_0}^2+60ax_0).$$ The expansion rate becomes $$\frac{d\Theta}{d\tau}=-\frac{1}{9a^2x^2}(261a^2-14{x_0}^2 -156ax_0)-\frac{1}{x}+\frac{2}{a^2{x_0}^2}(a^2-{x_0}^2)e^{2\frac{t}{a}}.$$ The fourth self-similar solution (Eq.(14)) gives the expansion as follows: $$\Theta=\frac{3a-2t_0}{at}.$$ The acceleration components are $$\dot{u}_0=-\frac{1}{t},\quad \dot{u}_1=\frac{1}{t_0}$$ while the non-zero rotation component is $$\omega_{01}=-\frac{1}{t_0}=-\dot{u}_1.$$ We obtain the following non-vanishing components of shear: $$\begin{aligned} \sigma_{00}=-\frac{2}{t},\quad \sigma_{11}=-\frac{2t}{3a{t_0}^2}(3a+2t_0),\quad \sigma_{01}=\frac{2}{at_0}(a+t_0),\nonumber\\ \sigma_{22}=-\frac{t}{3a{t_0}^2}e^{2\frac{x}{a}}(3a+8t_0)=\sigma_{33},\end{aligned}$$ and the shear invariant is $$\sigma=\frac{1}{9a^2t^2}(45a^2+44t^2_0+24at_0).$$ The expansion rate becomes $$\frac{d\Theta}{d\tau}=\frac{1}{t}-\frac{2}{9a^2t^2}(45a^2+19t^2_0-6at_0).$$ The expansion, the acceleration, and the rotation vanish for the last solution in this class. The shear components are $$\sigma_{22}=-4ae^{2a(t+x)}=\sigma_{33},$$ and the shear invariant is $$\sigma=32a^2.$$ The rate of expansion becomes $$\frac{d\Theta}{d\tau}=-30a^2.$$ IV. Singularities {#iv.-singularities .unnumbered} ================= In this section, we shall explore the singularities of the self-similar solutions in classes 1 and 2. The Kretschmann scalar is defined by $$K=R_{abcd}R^{abcd},$$ where $R_{abcd}$ is the Riemann tensor. For the solution given by Eq.(6), the Kretschmann scalar reduces to $$K=\frac{2}{x^4}[A^2(A^2+1+2B^2-2A)+B^2(3B^2+2-4B)].$$ It is clear that $K$ diverges at $x=0$. It follows that the solution is singular at $x=0$. For the solution given by Eq.(7), the Kretschmann scalar becomes $$K=\frac{2A^2}{x^4}(6A^2+3-6A),$$ which shows that the spacetime singularity lies at $x=0$. 
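The singularity behavior can also be verified symbolically. The sketch below is our illustration (the overall numerical coefficient of $K$ depends on curvature conventions, but the $1/x^4$ divergence does not): it computes the Kretschmann scalar for the metric of Eq.(7) from first principles.

```python
import sympy as sp

t, x, y, z, A, x0 = sp.symbols('t x y z A x0', positive=True)
co = [t, x, y, z]
n = range(4)

# Metric of Eq.(7): ds^2 = (x/x0)^{2A}(dt^2 - dy^2 - dz^2) - dx^2
f = (x/x0)**(2*A)
g = sp.diag(f, -1, -f, -f)
gi = g.inv()

# Christoffel symbols: Gam[a][b][c] = Gamma^a_{bc}
Gam = [[[sp.simplify(sum(gi[a, d]*(sp.diff(g[d, b], co[c])
                                   + sp.diff(g[d, c], co[b])
                                   - sp.diff(g[b, c], co[d])) for d in n)/2)
         for c in n] for b in n] for a in n]

def R(a, b, c, d):
    """Riemann tensor component R^a_{bcd}."""
    r = sp.diff(Gam[a][d][b], co[c]) - sp.diff(Gam[a][c][b], co[d])
    r += sum(Gam[a][c][e]*Gam[e][d][b] - Gam[a][d][e]*Gam[e][c][b] for e in n)
    return sp.simplify(r)

# Kretschmann scalar K = R_{abcd} R^{abcd}; since the metric is diagonal,
# raising and lowering indices only multiplies by diagonal metric entries.
K = sp.simplify(sum(g[a, a]*gi[b, b]*gi[c, c]*gi[d, d]*R(a, b, c, d)**2
                    for a in n for b in n for c in n for d in n))
print(sp.factor(K * x**4))  # a non-zero function of A alone, so K ~ 1/x^4
                            # and the curvature diverges at x = 0
```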
The solution given by Eq.(9) has the Kretschmann scalar $$K=\frac{2}{t^4}(A^2(A-1)^2+2B^2(B-1)^2+2A^2B^2+B^4).$$ From here, it follows that the spacetime is singular at $t=0$. For the metric given by Eq.(10), the Kretschmann scalar reduces to $$K=\frac{6A^2}{t^4}(2A^2+1-2A).$$ It is clear that $K$ diverges at $t=0$. Hence, the solution is singular at $t=0$. For the solution given by Eq.(12), we obtain $$K=\frac{6}{a^4 x^4}(a^2-{x_0}^2)^2,$$ which gives the spacetime singularity at $x=0$. The solution given by Eq.(14) has the following Kretschmann scalar: $$K=\frac{6}{a^2t^4}(t^2_0-a^2)^2.$$ It follows that the spacetime singularity lies at $t=0$. For the last solution, given by Eq.(16), the Kretschmann scalar reduces to $$K=4a^4.$$ It turns out to be constant, which shows that this solution is singularity free. V. Conclusion {#v.-conclusion .unnumbered} ============= Self-similar solutions in GR are very important, and discussing their physical features is interesting. Keeping this point in mind, we explored some kinematic properties and the singularity features of such solutions representing plane symmetric spacetime. We discussed the properties both in co-moving and non-co-moving coordinates (only in the radial direction). This provided a comparison of these properties in the two coordinate systems. We explored the acceleration, the expansion, the rotation, the shear, the rate of change of expansion, and finally the singularity structure. First, we discussed class 1, which has two solutions. For the first solution, given by Eq.(6), we found zero expansion in co-moving coordinates and positive/negative expansion in non-co-moving coordinates depending upon the values of the constants $A$ and $B$. The acceleration and the rotation had only one component in co-moving coordinates while two components of the acceleration existed in non-co-moving coordinates. The rotation component remained the same in both coordinates, except for a factor of one-half in non-co-moving coordinates. 
We found the shear invariant to be negative in co-moving coordinates and positive/negative in non-co-moving coordinates, depending on the values of the constants. The rate of change of expansion could be positive/negative, depending upon the values of $A$ and $B$ in co-moving coordinates and negative in non-co-moving coordinates. For the second solution, given by Eq.(9), we obtained the same expansions in both coordinates, which could be positive/negative. These solutions had vanishing acceleration and rotation in co-moving coordinates and one rotation and two acceleration components in non-co-moving coordinates. The shear invariant was positive in co-moving coordinates whereas the rate of expansion was negative. In non-co-moving coordinates, these quantities could be positive/negative, depending on the values of the constants. Now, we discuss the solutions of class 2. We notice that in co-moving coordinates, the solutions given by Eqs.(7) and (12) have a non-zero acceleration component while the remaining solutions have zero acceleration. In non-co-moving coordinates, all solutions have non-zero acceleration components. Also, all solutions have zero rotation in co-moving coordinates while only one solution (Eq.(16)) has zero rotation in non-co-moving coordinates. The solution in Eq.(7) has zero expansion in co-moving coordinates whereas all other solutions have positive/negative expansion. In non-co-moving coordinates, the expansion is positive/negative for the solutions in Eqs.(7), (10), (12), and (14) and zero for the solution in Eq.(16). The shear invariant is negative in co-moving coordinates and positive in non-co-moving coordinates for the solution in Eq.(7). The expansion rate is positive in co-moving coordinates, but it is negative in non-co-moving coordinates. For the solution in Eq.(10), we have a positive shear scalar in co-moving coordinates and a negative one in non-co-moving coordinates. 
The rate of expansion is negative in co-moving coordinates and positive in non-co-moving coordinates. For the solution in Eq.(12), the shear invariant is positive in both coordinates, and the expansion rate is negative in both coordinates. The shear scalar is positive and the rate of expansion is negative in both coordinates for the solution in Eq.(14). For the solution in Eq.(16), the shear invariant is positive, and the expansion rate is negative in both coordinates. Finally, we discuss the singularity structure for these solutions. The solutions given by Eqs.(6), (7), and (12) are singular at $x=0$ while the solutions given by Eqs.(9), (10), and (14) are singular at $t=0$. The solution in Eq.(16) is singularity free. We have noticed from the above discussion that the kinematic quantities are relatively simple in co-moving coordinates as compared to those in non-co-moving coordinates. It is worth mentioning that the expansion of the solution in Eq.(16) turns out to be positive in co-moving coordinates, but it is zero in non-co-moving coordinates. [**ACKNOWLEDGMENTS**]{} One of us (SA) acknowledges the enabling role of the Higher Education Commission Islamabad, Pakistan, and appreciates its financial support through the [*Merit Scholarship Scheme for Ph.D. Studies in Science and Technology (200 Scholarships)*]{}. We thank Mr. Tariq Ismaeel, who brought some important literature to our attention. [**REFERENCES**]{} [\[1\]]{} M.E. Cahill and A.H. Taub, Commun. Math. Phys. [**21**]{}, 1(1971). [\[2\]]{} B. Carter and R.N. Henriksen, Annales De Physique [**14**]{}, 47(1989). [\[3\]]{} B. Carter and R.N. Henriksen, J. Math. Phys. [**32**]{}, 2580(1991). [\[4\]]{} B.J. Carr, Phys. Rev. [**D62**]{}, 044022(2000). [\[5\]]{} B.J. Carr and A.A. Coley, Phys. Rev. [**D62**]{}, 044023(2000). [\[6\]]{} A.A. Coley, Class. Quant. Grav. [**14**]{}, 87(1997). [\[7\]]{} C.B.G. McIntosh, Gen. Relat. Gravit. [**7**]{}, 199(1975). [\[8\]]{} P.M. Benoit and A.A. Coley, Class. Quant. Grav. 
[**15**]{}, 2397(1998). [\[9\]]{} B.J. Carr, A.A. Coley, M. Goliath, U.S. Nilsson and C. Uggla, Class. Quant. Grav. [**18**]{}, 303(2001). [\[10\]]{} B.J. Carr, A.A. Coley, M. Goliath, U.S. Nilsson and C. Uggla, Phys. Rev. [**D61**]{}, 081502(2000). [\[11\]]{} H. Maeda, T. Harada, H. Iguchi and N. Okuyama, Prog. Theor. Phys. [**108**]{}, 819(2002); ibid. [**110**]{}, 25(2003). [\[12\]]{} M. Sharif and Sehar Aziz, Int. J. Mod. Phys. [**D14**]{}, 1527(2005). [\[13\]]{} M. Sharif and Sehar Aziz, submitted for publication. [\[14\]]{} Sadia Perveen, M.Phil. Dissertation (Quaid-i-Azam University Islamabad, 2003). [\[15\]]{} M. Sharif and Sehar Aziz, Int. J. Mod. Phys. [**D14**]{}, 73(2005). [\[16\]]{} M. Sharif and Sehar Aziz, Int. J. Mod. Phys. [**A**]{}(2005) (arXiv:gr-qc/0504102). [\[17\]]{} H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers and E. Herlt, *Exact Solutions of Einstein’s Field Equations* (Cambridge University Press, 2003). [\[18\]]{} R.M. Wald, *General Relativity* (University of Chicago Press, Chicago, 1984). [^1]: msharif@math.pu.edu.pk [^2]: sehar$\_$aziz@yahoo.com
--- abstract: | We develop median statistics that provide powerful alternatives to $\chi^2$ likelihood methods and require fewer assumptions about the data. Application to astronomical data demonstrates that median statistics lead to results that are quite similar to, and almost as constraining as, those of $\chi^2$ likelihood methods, but with somewhat more confidence since they do not assume Gaussianity of the errors or that their magnitudes are known. Applying median statistics to Huchra’s compilation of nearly all estimates of the Hubble constant, we find a median value $H_0=67 {{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$. Median statistics assume only that the measurements are independent and free of systematic errors. This estimate is arguably the best summary of current knowledge because it uses all available data and, unlike other estimates, makes no assumption about the distribution of measurement errors. The 95% range of purely statistical errors is $\pm 2{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$. The high degree of statistical accuracy of this result demonstrates the power of using only these two assumptions and leads us to analyze the range of possible systematic errors in the median, which we estimate to be roughly $\pm 5{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$ (95% limits), dominating over the statistical errors. Using a Bayesian median statistics treatment of high-redshift Type Ia supernovae (SNe Ia) apparent magnitude versus redshift data from Riess et al., we find the posterior probability that the cosmological constant $\Lambda > 0$ is 70 or 89%, depending on the prior information we include. We find the posterior probability of an open universe is about 47% and the probability of a spatially flat universe is 51 or 38%. Our results generally support the observers’ conclusions but indicate weaker evidence for $\Lambda > 0$ (less than 2 $\sigma$). Median statistics analysis of the Perlmutter et al. 
high-redshift SNe Ia data show that the best-fit flat-$\Lambda$ model is favored over the best-fit $\Lambda = 0$ open model by odds of $366:1$; the corresponding Riess et al. odds are $3:1$ (assuming in each case prior odds of $1:1$). A scalar field with a potential energy with a “tail” behaves like a time-variable $\Lambda$. Median statistics analyses of the SNe Ia data do not rule out such a time-variable $\Lambda$, and may even favor it over a time-independent $\Lambda$ and a $\Lambda=0$ open model. author: - 'J. Richard Gott, III, Michael S. Vogeley, Silviu Podariu, and Bharat Ratra' title: 'Median Statistics, $H_0$, and the Accelerating Universe' --- Introduction {#intro} ============ Statistics that require the fewest assumptions about the data are often the most useful. Gott & Turner (1977, also see Gott 1978) used median mass-to-light ratios for groups of galaxies in comparing with N-body simulations to estimate ${\Omega_{\rm M}}$. Median mass-to-light ratios were preferable to mean mass-to-light ratios because they were less sensitive to the effects of background contamination and unlucky projection effects. At the IAU meeting in Tallinn, Estonia, Ya. B. Zeldovich commented on this choice. He noted that in Russia some watches were not made very well, so when three friends meet they compare the times on their watches — one says “it’s 1 o’clock”, the second says, “it’s 5 minutes after 1,” the third says “it’s 5 o’clock”. Take the median! Perhaps no one has ever stated the benefits of the median over the mean better than Zeldovich[^1]. In this paper we develop median statistics and apply them to high-redshift SNe Ia apparent magnitude versus redshift data which recently provided evidence for an accelerating universe. We also apply these statistics to estimates of the Hubble constant and the mass of Pluto. 
The usual hypotheses made when using data in a $\chi^2$ analysis are that (1) individual data points are statistically independent, (2) there are no systematic effects, (3) the errors are Gaussianly distributed, and (4) one knows the standard deviation of these errors. These are four extraordinarily potent hypotheses, which lead to powerful results if the four conditions are indeed true. We will show that even the first two conditions alone can lead to powerful results — allowing us to drop the third and fourth conditions.[^2] Recent analyses of supernovae distances by Riess et al. (1998, hereafter R98) and Perlmutter et al. (1999a, hereafter P99) use all four hypotheses. These authors combine apparent magnitude versus redshift data for distant supernovae with data on nearby supernovae in $\chi^2$ analyses to derive likelihood ratios for different cosmological models (defined by their values of ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$, the nonrelativistic matter and cosmological constant $\Lambda$ energy density contributions to $\Omega$, respectively). Using some additional Bayesian assumptions, which we discuss below, R98 conclude that the probability that ${\Omega_{\Lambda}}> 0$ is 99.5% (using MLCS data for all 16 high-redshift SNe Ia including SN 1997ck at redshift $z = 0.97$). P99 conclude that ${\Omega_{\Lambda}}> 0$ with 99.8% confidence (their fit C). These results rely on the important assumption that the errors are normally distributed, as is apparent in their derivation of confidence limits in their $\chi^2$ analyses. This is a somewhat troubling assumption since the errors in corrected supernovae luminosities are not likely Gaussianly distributed. While there seems to be a rather strong upper limit on supernovae luminosities, there seems to be a longer low luminosity tail (Höflich et al. 1996). 
This does not directly imply that the errors in the corrected supernovae luminosities are non-Gaussian, but does indicate that the population of supernovae could include outliers in the luminosity distribution that are not as well-calibrated by the training set. This is related to the possible concern that, when corrected supernovae luminosities are calculated using a training set of order the same size (roughly two dozen for the R98 MLCS method) as the data set to be corrected, one can never be sure that one is not encountering some supernovae that are odd and do not fit the training set. The limits of assuming a normal distribution are illustrated by a penguin parable adapted from a discussion by Hill (1992). Suppose one measured the weights of a million adult penguins and found them to have a mean weight of 100 lbs with a standard deviation of 10 lbs. Further suppose that the observed data’s distribution fits a Gaussian distribution perfectly. Of the million penguins measured, suppose that, consistent with a Gaussian distribution, the heaviest one weighs 147.5 lbs. What is the probability of encountering, on measuring the next adult penguin, a penguin weighing more than 200 lbs? One might be tempted to say that it was $P = 10^{-23}$, by simply fitting the normal distribution and calculating the probability of obtaining an upward 10 $\sigma$ fluctuation. But this would be wrong. There could be a second species of penguin, all of whose adults weighed over 200 lbs which simply had a population a million times smaller, so that one had not encountered one yet. In this case, the probability of encountering a penguin weighing over 200 lbs is $10^{-6}$. Even data that fits a normal distribution perfectly cannot be used to extend the range beyond that of the data itself. 
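The naive calculation in the parable is easy to reproduce. The sketch below (ours, not part of the original argument) evaluates the upper-tail probability of a 10 $\sigma$ Gaussian fluctuation using only the standard library:

```python
from math import erfc, sqrt

# Naive (and, as the parable shows, misleading) answer: treat the observed
# N(100 lbs, 10 lbs) fit as the whole story and ask for P(weight > 200 lbs).
mu, sigma = 100.0, 10.0
z = (200.0 - mu) / sigma             # a 10-sigma upward fluctuation
p_tail = 0.5 * erfc(z / sqrt(2.0))   # Gaussian upper-tail probability

print(f"P(weight > 200 lbs | Gaussian fit) = {p_tail:.1e}")  # of order 1e-23
```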
The correct answer, suggested by Hill’s (1992) argument, which does not depend on assumption (3), is that the probability that the 1,000,001st penguin weighs more than any of the million penguins measured so far is simply $P = 1/1,000,001$. This is according to hypothesis (1) that all the data points are independent. A priori, each of the 1,000,001 penguins must have an equal chance ($1/1,000,001$) of being the heaviest one. Thus, the last one must have a probability of $1/1,000,001$ of weighing over 147.5 lbs. Beyond that, the data do not say anything. This may be relevant in the supernova case. Using a training set of a little more than two dozen supernovae to correct 16 distant supernovae, one might encounter a distant supernova that goes beyond the training set, in other words, one that is odd. Indeed, supernovae classifications like Ia pec, as well as Arp’s famous catalog of peculiar galaxies, are warnings that in astronomy one does encounter peculiar objects as rare events. If we fail to recognize them as a separate class, we may unduly pollute a mean indicator — another reason for using median statistics, which make no assumptions about the distribution and which are less influenced by such outliers. Clearly, given sufficient information about a penguin (supernova), we should be able to identify it as a different species (supernova class) and thereby avoid skewing the results. Our concern is what happens when the information is not sufficient. When Gaussian errors are assumed, one of the great benefits is that the errors decrease as $N^{-1/2}$, where $N$ is the number of measurements. Thus, with the 16 high-redshift R98 supernovae one can get estimates of ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ that are 4 times as accurate as with a single measurement. 
In this paper we show how median statistics takes advantage of a similar $N^{-1/2}$ factor to produce accurate results, even while not relying on hypotheses (3) and (4) that the errors are Gaussian with known standard deviation. Indeed, as we shall show, hypotheses (1) and (2) are sufficiently powerful by themselves to produce results that are only slightly less constraining than those from $\chi^2$ analyses that also assume hypotheses (3) and (4), but in which we may have more confidence because two significant and perhaps questionable assumptions have been dropped. In Section \[medstats\] we outline how median statistics can be used with $N$ estimates, which we illustrate with examples from the Cauchy distribution and another look at the penguin problem. We apply our methods to estimates of the Hubble constant in Section \[hubble\] and to estimates of the mass of Pluto in Section \[pluto\]. In Section \[supern\] we perform a simple binomial analysis of the 16 high-redshift R98 SNe Ia measurements. In Section \[bayes\] we present a more complete Bayesian analysis of these data. Constraints on ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ from the larger P99 data set are discussed in Section \[perlmutter\]. Section \[quint\] discusses median statistics SNe Ia constraints on a time-variable $\Lambda$. In Section \[conclude\] we summarize our conclusions. Median statistics {#medstats} ================= We assume hypotheses (1) and (2), that our measurements of a given quantity are independent and that there are no systematic effects. Suppose we were to take a large finite number of measurements. We will then assume — call this related hypothesis (2a) — that the median value thus obtained as the number $N$ of measurements tends to infinity will be the true value. We are thus excluding some “complex" distributions, e.g., a symmetric double hatbox model with a gap in the middle. 
The accuracy of hypothesis (2a) may be limited by discreteness in the measurements that prevents the data set from including the true median (see section 3 for an example of this problem, in which we analyze the Hubble constant data, which are tabulated as integer values). An extreme example of discreteness would be the case of a sample of numbers generated by coin flips in which heads = 1 and tails = 0. If we obtain 49 1’s and 51 0’s, then the median is 0 but the 95% confidence limits must include both 0 and 1. If we make a large number of measurements and there are no systematic effects we might naturally expect half to be above the true value and half to be below the true value. So we will suppose that after some very large number of measurements, as $N$ tends to infinity, there would be a true median (TM). Now by hypothesis (1) each individual measurement will be statistically independent, thus, each has a 50% chance to be above or below TM. Suppose we make $N$ independent measurements $M_i$ where $i = 1,...,N$. Where is TM likely to be? The probability that exactly $n$ of the $N$ measurements are higher than TM is given by the binomial distribution, $P = 2^{-N} N! / [n!(N - n)!]$, because there is a 50% chance that each measurement is higher than TM and they are independent. Thus, if we have taken $N$ measurements $M_i$ and these are later ranked by value such that $M_j > M_i$ if $j > i$, then the probability that the true median TM lies between $M_i$ and $M_{i+1}$ is $$P = {2^{-N} N!\over i!(N - i)!} ,$$ where we set $M_0 = -\infty$ and $M_{N+1}= +\infty$. For example, if $N = 16$, our confidence that TM lies between $M_8$ and $M_9$ is 19.6%. Importantly, the distribution of TM is much narrower than the distribution of the measurements themselves. For comparison the probability that the [**next**]{} individual measurement we take will lie between $M_i$ and $M_{i+1}$ is just $$P = {1\over N+1} .$$ If we set $r = i/N$ we can define $M(r) = M_i$.
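Eq. (1) can be checked directly; this small sketch (ours) evaluates the $N = 16$ example quoted above:

```python
from math import comb

def p_tm_between(N, i):
    """Eq. (1): probability that the true median TM lies between the
    i-th and (i+1)-th ranked of N independent measurements."""
    return comb(N, i) / 2 ** N

# The example in the text: with N = 16 measurements, the confidence that
# TM lies between M_8 and M_9:
p = p_tm_between(16, 8)
print(f"{p:.1%}")  # 19.6%
```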
Then the distribution width can be defined by the variable $r$. Any measurement $m$ may be associated with a value $r$ by the inverse function $r(m)$ such that $M(r) = m$ with suitable interpolation applied. In the limit of large $N$ we find that the expectation value of $r$ for the next measurement $m$ is $\langle r \rangle = 0.5$ and its standard deviation is $\langle r^2 - \langle r\rangle ^2\rangle ^{1/2} = 1/(12)^{1/2}$ (since the distribution is uniform in $r$ over the interval 0 to 1). On the other hand, in the limit of large $N$ the expectation value of $r$ for TM is $\langle r\rangle = 0.5$ and its standard deviation is $\langle r^2 - \langle r\rangle ^2\rangle ^{1/2} = 1/(4N)^{1/2}$ (in fact, in the limit of large $N$ the distribution in $r$ approaches a Gaussian distribution with the above mean and standard deviation). Thus, as we take more measurements we see that the standard deviation in $r$ of the TM is proportional to $N^{-1/2}$. If we use median statistics, we find that our precision in determining TM (as measured by the percentile $r$ in the distribution of measurements) improves like $N^{-1/2}$ as $N$ grows larger. Thus, median statistics achieves the factor of $N^{1/2}$ improvement with sample size that we expect from mean Gaussian statistics. Statistics of samples drawn from a Cauchy distribution illustrate the robustness of the median for even a pathological parent population. If $\theta$ is a uniform random variable in the range from $-\pi/2$ to $+\pi/2$, then the probability distribution function of $x=x_0 + \tan \theta$ is a Cauchy distribution, $f(x) \propto 1/[1+(x-x_0)^2]$. This is a distribution with infinite variance, thus samples from this parent population are plagued by extreme outliers. However, the median is quite well-behaved and the uncertainty in the median, unlike the variance, is appropriately narrow. 
As an example, we generate a sample of 101 uniform random values of $\theta$, then compute the statistics of the set $\{x_i\}, i=1,...,101$ where $x=5+\tan \theta$. Using standard formulae we find that the mean of our sample is $\overline{x}=9.58$, the standard deviation is $\sigma_x=54.8$, and the standard deviation of the mean is $\sigma_{\overline{x}} = 5.45$, thus the 95% confidence limits on the mean are $-0.32 < \overline{x} < 19.48$. For comparison, the median of this sample is $x_{med}=4.818$ and the 95% confidence limits on the true median, following eq. (1), are $4.41 < x_{TM} < 5.11$. The median is nearly immune to the “outliers” in our test sample, which included $x=-35.17$ and $x=552.57$. Let us apply median statistics to the previously mentioned penguin problem. Suppose we measure the mass of 1,000,000 penguins and find that they follow a normal distribution with a mean of 100 lbs and a standard deviation of 10 lbs. Thus we have $${\rm mean} = 100\, {\rm lbs}$$ and applying the standard formula the standard deviation of the mean is $$\sigma_{\rm mean} = 10\, {\rm lbs}/(999,999)^{1/2} = 0.01\, {\rm lbs}.$$ We would hence deduce with 95% (2 $\sigma$) confidence that the true mean for the population of penguins lies between 99.98 and 100.02 lbs. But this result will be true only if the distribution in penguin masses beyond the limits seen in the first million penguins is well behaved, in particular falling off more rapidly than 1/mass. Suppose one penguin in a million weighs 100,000,000 lbs. Since we have examined only 1 million penguins, there is an appreciable chance ($P = e^{-1} = 0.38$) that we would have missed one of the supermassive ones. Yet, these supermassive penguins make the true mean = 200 lbs. So, even if the already measured data is well behaved, it is easy to be fooled by extreme cases falling beyond the observed distribution. One is less likely to be fooled about the median mass. 
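Combining the $1/(4N)^{1/2}$ percentile spread of the true median derived above with the observed weight distribution shows how tightly the million penguins pin down the median. A sketch (ours), assuming as in the parable that the observed weights fit $N(100, 10)$:

```python
from math import sqrt
from statistics import NormalDist

N = 1_000_000
sigma_r = 1.0 / sqrt(4 * N)       # std. dev. of the true median's percentile r
r_lo, r_hi = 0.5 - 2 * sigma_r, 0.5 + 2 * sigma_r   # 95% (2 sigma) range in r

# Translate percentiles into weights via the observed distribution, which
# in the parable happens to fit a Gaussian with mean 100 lbs, sigma 10 lbs:
obs = NormalDist(mu=100.0, sigma=10.0)
w_lo, w_hi = obs.inv_cdf(r_lo), obs.inv_cdf(r_hi)

print(f"r in [{r_lo}, {r_hi}] -> weight in [{w_lo:.3f}, {w_hi:.3f}] lbs")
```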
In the above example we would deduce that, with $N = 1,000,000$, the expected $r$ value of TM and its standard deviation would be 0.5 and 0.0005 respectively. Thus, we would say with 95% (2 $\sigma$) confidence that TM has an $r$ value between 0.499 and 0.501. In other words, we expect the true median weight of penguins to lie between the weight of the 499,000th and the 501,000th most massive of the million measured penguins. These are distributed approximately normally so the 499,000th most massive weighs 99.975 lbs and the 501,000th weighs 100.025 lbs. Thus, with 95% confidence we would say that the true median lies between 99.975 lbs and 100.025 lbs. Note that these limits are only slightly less constraining than the 95% confidence limits derived on the mean earlier. Furthermore, these limits are not invalidated by the supermassive one-in-a-million penguins. Their existence only changes TM to 100.000025 lbs. If one’s data points are independent and there are no systematic effects, the median value is not going to be greatly perturbed by data points lying beyond the range of observed values — whereas the mean can always be significantly perturbed. In short, the 95% confidence limits on the true median are not much wider than those derived for the mean (assuming a Gaussian distribution), and they are more secure since the hypothesis of a Gaussian distribution is dropped. Hubble Constant {#hubble} =============== Approaches to Hubble Constant Statistics ---------------------------------------- The history of attempts to estimate the Hubble constant invites the application of statistics that are robust with respect to non-Gaussianity in the error distribution. Until recently, many published estimates of the Hubble constant, $H_0 = 100 h{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$, differed by several times their quoted uncertainty range. 
The most famous historical contradiction was that between estimates published by Sandage, Tammann and collaborators, typically $h=0.5\pm 0.05$, and de Vaucouleurs and collaborators, typically $h=0.9\pm 0.1$. What should one believe when reputable astronomers have published values that differ by 4 $\sigma$? If a priori we gave equal weight to the two groups, our technique would ignore the quoted errors and allocate a chance $P=25\%$ that $h < 0.5$, $P=50\%$ that $0.5 < h < 0.9$, and $P=25\%$ that $h > 0.9 $ — which is probably reasonable if these were the only available data. Another approach to determining $H_0$ from a collection of measurements is to filter out, or at least give smaller weight to, “wrong” observations and use only the best published estimates. “Wrong” in this context means observational values plus their errors that are unlikely to prove correct given the other data in hand. Press (1997) develops an elegant Bayesian technique using this approach, beginning with 13 reputable measurements of $H_0$, and finds a mean of $h=0.74$ and 95% confidence (2 $\sigma$ around the mean) range $0.66 < h < 0.82$. Our approach to analyzing the same Hubble constant data set is like that suggested by Zeldovich — use all the data and take the median. If we apply our median statistics method to the same set of 13 $H_0$ measurements used by Press (1997), we obtain a median of $h=0.73$ and 97.8% confidence limits of $0.55 < h < 0.81$. These results are nearly identical to Press’s result, without any assumption of Gaussianity or even looking at the 13 error estimates of the observers. Note that our uncertainties are not symmetric and the range quoted is not exactly 95% because we do not assume Gaussianity and therefore we do not interpolate probabilities between the estimates (one could use these limits as conservative estimates of the 95% limits, since the 95% confidence region lies somewhere between them). 
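The 97.8% figure follows from eq. (1) under the assumption (ours; the text does not state the ranks explicitly) that $h = 0.55$ and $h = 0.81$ are the 3rd- and 11th-ranked of the 13 estimates:

```python
from math import comb

# Confidence that the true median of N = 13 estimates lies between the
# 3rd- and 11th-ranked values: sum the eq.-(1) weights of the interior
# intervals (M_3, M_4), ..., (M_10, M_11).
N = 13
conf_13 = sum(comb(N, i) for i in range(3, 11)) / 2 ** N
print(f"{conf_13:.1%}")  # 97.8%
```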
Our treatment assumes that the rank of the next measurement is random, thus the next measurement is equally likely to land between any of the previous measurements (or below/above the smallest/largest). For those who would like a Bayesian treatment, this assumed distribution of the next measurement is our prior for the median, which we multiply by the binomial likelihood of observing $N$ tails/heads to determine the probability distribution of the true median given the previous data. 331 Estimates of the Hubble Constant ------------------------------------ We now apply our median statistics method to 331 published measurements of the Hubble constant (Huchra 1999). After deleting four entries in the table from 1924 and 1925 that lacked actual estimates of $H_0$, the June 15, 1999 version of this catalog contained 331 published estimates, the most recent dated 1999.458. These have a large range, including Hubble’s early high values (near $h = 5$) and, on the low end, values as small as $h=0.24$ inferred from measurements of the Sunyaev-Zeldovich effect in clusters (McHardy et al. 1990). However the relative likelihood of the true median as defined by these measurements and using eq. (1) is very narrow, as indicated by Figure 1. The published estimates were tabulated as integer values in ${{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$ and there are many identical estimates. Figure 1 shows the relative likelihood of the true median of $H_0$ in bins centered on these integral values. The median value of the 331 measurements is $h = 0.67$; arguably this is an extremely reasonable estimate of the Hubble constant. The 95% statistical confidence limits are approximately $0.65 < h < 0.69$, obtained by integrating over the tails of the binomial likelihood distribution. These are surprisingly narrow limits. This result illustrates that the assumptions of independence and lack of systematic errors alone are very powerful.
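We cannot reproduce Huchra's table here, but the width of the 95% region follows from eq. (1) alone. A sketch (ours; the function name is illustrative) finds the central window of ranks that captures the true median of $N = 331$ measurements with at least 95% probability:

```python
from math import comb

def median_rank_window(N, conf=0.95):
    """Grow a symmetric window of eq.-(1) intervals outward from the center
    until its total probability reaches conf.  Returns (lo, hi, coverage):
    the true median lies between the ranked values M_lo and M_{hi+1}."""
    w = [comb(N, i) / 2 ** N for i in range(N + 1)]
    lo, hi = (N - 1) // 2, (N + 1) // 2   # central interval indices
    coverage = sum(w[lo:hi + 1])
    while coverage < conf and lo > 0:
        lo, hi = lo - 1, hi + 1
        coverage += w[lo] + w[hi]
    return lo, hi, coverage

lo, hi, cov = median_rank_window(331)
print(f"TM between M_{lo} and M_{hi + 1} with {cov:.1%} confidence")
```

For $N = 331$ the window spans only a few dozen ranks around the middle value; because the tabulated estimates cluster tightly near $h = 0.67$, this translates into the narrow $0.65 < h < 0.69$ interval quoted above.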
These purely statistical uncertainties are certainly a lower bound on the true errors, both because the entries in Huchra’s table are not independent measurements and because of systematic uncertainties in various methods for measuring $H_0$. Below we discuss the possible impact of systematic effects on our median statistics estimate of the Hubble constant. The strong effect of including or removing a small number of estimates illustrates how the mean can be biased upwards or downwards by a few extreme values, while the median remains insensitive to these outliers. The mean of the 331 $H_0$ estimates is $h=0.80$ with 95% limits $0.76<h<0.84$, inconsistent with our median statistics. After excluding the 10 estimates published before Sandage’s (1958) paper that discusses Hubble’s confusion of HII regions for bright stars, the median of the remaining 321 estimates is again $0.67$ with 95% limits $0.65<h<0.69$. However, the mean of this culled sample is $h=0.68$ with 95% limits $0.66<h<0.70$, perfectly consistent with median statistics. The result seems obvious; removing the systematically high estimates makes sense because we are aware of systematic errors of the type that Sandage (1958) points out. A strength of median statistics is robustness when we lack such knowledge. For comparison with the median and mean, we find that the mode of the 331 estimates is $h=0.55$. 11 of the 19 estimates with this value were published by Sandage, Tammann, and collaborators. The importance of using the median to estimate the true value of $H_0$ from a sample of estimates becomes apparent when we consider the arbitrariness of the Hubble relation, $v=H_0r$. A trivial rewriting of this relation as $r=\tau_H v$ describes identical physics. Had the relation first been written in this form, we would all be trying to measure the Hubble time, $\tau_H=1/H_0$. 
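The contrast between the two statistics under the reparametrization $\tau_H = 1/H_0$ can be seen with a few hypothetical estimates (illustrative values of ours, not the Huchra data):

```python
from statistics import mean, median

# Five hypothetical H_0 estimates, in units of 100 km/s/Mpc:
h = [0.50, 0.55, 0.67, 0.80, 0.90]
tau = [1.0 / x for x in h]   # the same measurements, quoted as Hubble times

mean_h, inv_mean_tau = mean(h), 1.0 / mean(tau)
med_h, inv_med_tau = median(h), 1.0 / median(tau)

# The mean depends on which parametrization was averaged; the median does
# not, because x -> 1/x is monotonic and simply reverses the ranking.
print(f"mean:   {mean_h:.3f} vs {inv_mean_tau:.3f}")
print(f"median: {med_h:.3f} vs {inv_med_tau:.3f}")
```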
However, using the mean to estimate these parameters would give inconsistent answers because the mean of a sample of $H_0$ estimates is not the same as the inverse of the mean of $\tau_H$ estimates, $\bar{H_0} \neq 1/\bar{\tau_H}$. For the sample of 331 $H_0$ estimates, the mean of $H_0$ yields $h=0.80$ but the inverse of the mean of $\tau_H$ yields $h=0.66$ (excluding the ten pre-1958 estimates yields $h=0.68$ and $h=0.65$, respectively). In contrast, the median yields identical estimates, thus guaranteeing that the central values of $H_0$ and $\tau_H$ obey the correct relation $\tau_H = 1/H_0$. Systematic Effects and Uncertainties in the Median of $H_0$ ----------------------------------------------------------- The small range of the 95% confidence interval (statistical errors only) may cause one to immediately and rightly object that these $H_0$ estimates are neither independent nor free of systematic errors. Regarding the latter objection alone, we might consider the set of $H_0$ measurements as 331 “Russian watches.” As long as the same systematic effect does not plague an overwhelming number of the measurements, the median of the measurements should be relatively robust (certainly more so than the mean). The lack of independence of these measurements implies that the same systematic bias might affect at least one group of published estimates but, again, this should not strongly affect the median unless this group of estimates is a significant fraction of the 331. In fact we find that similar systematics could affect the majority of the estimates and so we must evaluate this effect. Of course, the real concern about independence and systematic errors is their impact on the confidence intervals, which are remarkably small. 
That the 95% confidence interval of purely statistical errors ranges over $\pm 2{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$, while astronomers have long argued over differences of $\pm 10{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$ and larger, merely points out that systematic effects are the likely dominant source of uncertainty. In the following discussion we attempt to assess the possible impact of systematic errors on our median statistics analysis. Exhaustive analysis of systematic errors in measurement of the Hubble constant is obviously beyond the scope of this paper. Many of the methods of $H_0$ estimation have well-known possible systematics. First, the overwhelming majority of methods are tied to the LMC distance scale and/or calibration of the Cepheid period-luminosity relation. Of the 331 estimates, probably not many more than 52 (those based on the CMB, the Sunyaev-Zeldovich effect, and gravitational lensing time delays) are certain to be independent of the LMC and Cepheid observations. Thus, roughly 84% of the sample of estimates could share similar systematic uncertainty. Many recent measurements, roughly 130 of the 331 ($\sim 39\%$), are specifically tied to the HST Cepheid distance scale. So, the data set clearly violates the above assumption that no group of measurements subject to the same systematic effect forms a significant fraction of the sample. Below we address the impact of this lack of independence. The “Cepheid-free” methods each have their own possible systematic errors. Estimation of $H_0$ using the Sunyaev-Zeldovich effect in clusters typically assumes that the gas in clusters is smoothly distributed; clumpiness in the gas would cause the true $H_0$ to be lower than estimated. Clusters are more likely to be included in optical cluster catalogs and targeted for observation of the S-Z effect if they are prolate along the line-of-sight; such a projection effect would cause $H_0$ to be larger than estimated (e.g., Sulkanen 1999). 
Using gravitational lens time delays to estimate $H_0$ requires assumption of a model for the mass distribution in the lens, which can be non-trivial. Changes in the mass model have substantial impact on the derived $H_0$ and could push the estimates up or down, depending on the assumed model (for discussion regarding 0957+561 see, e.g., Falco, Gorenstein, & Shapiro 1991, Kochanek 1991). Uncertainty in the LMC distance seems likely to be the dominant source of error in the majority of $H_0$ estimates. The LMC distance modulus assumed in many recent $H_0$ analyses (e.g., Mould et al. 2000) is $m-M=18.5$, corresponding to an LMC distance of 50 kpc. This distance modulus has quite large uncertainty; some recently published values span from $m-M=18.1$ (Stanek, Zaritsky, & Harris 1998) to 18.7 (Feast, Pont, & Whitelock 1998). Because a shortening of the distance scale by $\delta(m-M)=0.1$ corresponds to a 4.7% upward shift in $H_0$, this could be a very large effect. When Mould et al. use the histogram of recent LMC distance moduli to model the effect of this uncertainty, they infer a possible bias of 4.5% (in the sense that the true value of $H_0$ would lie above the estimate arrived at when Gaussian errors were assumed) and a total uncertainty ($1\sigma$) of 12% in the value of $H_0$ from combining all the Key Project results. We can assess the uncertainty in the LMC distance modulus and the resulting uncertainty in the true median of $H_0$ in the same way that we analyze the $H_0$ data themselves; we apply median statistics to the distribution of published $m-M$. Examining Gibson’s (2000) compilation of 38 recent measurements of the LMC distance, we find that the median of these is $m-M=18.39$ with 95% confidence limits $18.3 < (m-M) < 18.52$. 
This median lies below the nominal $m-M=18.5$ partly due to the number of recent measurements that use red clump stars, which typically yield $m-M\sim 18.3$ or smaller (this tail of smaller $m-M$ is also what causes the possible shift of $H_0$ by 4.5% in the modeling performed by Mould et al.). To estimate the range of systematic uncertainty in the LMC distance modulus due to different methods we apply median statistics to these data, but give equal weight to different methods (as we shall do below with $H_0$). Grouping the 38 LMC estimates into 11 different methods (taking the median of estimates among each group), the median of the methods (the median of medians) is $m-M=18.46$ with 95% limits $18.26 < (m-M) < 18.64$. This median agrees with the median of values in the histogram in Figure 1 of Mould et al. (2000). Relative to $H_0$ estimated with the nominal LMC distance modulus of $m-M=18.5$, the median statistics of different methods implies that $H_0$ could be shifted upwards by 1.9% with a 95% confidence range that spans from 8.0% below to 9.6% above the revised median. It is reasonable to assume that a range of LMC $m-M$ have been used in the past; too small a value led to erroneously large $H_0$ and vice versa. Correcting this ensemble of estimates to use the true value would therefore narrow the distribution of $H_0$ estimates and might cause a small shift in the median. However, to evaluate the possible impact of the LMC distance modulus uncertainty on our median statistics estimate of $H_0$, let us suppose that all but the 52 “Cepheid-free” estimates had used the [*same*]{} value, $m-M=18.5$, to calibrate the distance scale. Since many workers are known to have used other distance moduli (e.g., de Vaucouleurs advocated a shorter distance scale, using $m-M=18.4$ or smaller; de Vaucouleurs 1993), this assumption may lead us to overestimate the impact on the median.
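The percentage shifts quoted above follow from the standard scaling of luminosity distance with distance modulus, $d \propto 10^{(m-M)/5}$, with $H_0 \propto 1/d$. A sketch (ours):

```python
def h0_shift(dm_assumed, dm_true):
    """Fractional shift in H_0 if the true LMC distance modulus is dm_true
    but dm_assumed was used: distance scales as 10**(dm/5) and H_0 as its
    inverse, so H_0(true) = H_0(assumed) * 10**((dm_assumed - dm_true)/5)."""
    return 10.0 ** ((dm_assumed - dm_true) / 5.0) - 1.0

print(f"{h0_shift(18.5, 18.40) * 100:+.1f}%")   # 0.1 mag shortening: +4.7%
print(f"{h0_shift(18.5, 18.46) * 100:+.1f}%")   # shift to revised median: +1.9%
print(f"{h0_shift(18.46, 18.26) * 100:+.1f}%")  # upper 95% limit: +9.6%
print(f"{h0_shift(18.46, 18.64) * 100:+.1f}%")  # lower 95% limit: -8.0%
```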
If all the $H_0$ estimates that could be plagued by dependence on the LMC distance modulus were to shift in identical fashion, then we estimate the effect of the 8.0% lower and 9.6% upper 95% systematic limits by multiplying these bounds by 0.84. Thus, using the distribution of LMC distance moduli to model the systematic uncertainty in $H_0$ and assuming that all but 52 of the estimates suffer from this same systematic uncertainty yields a possible shift of $\delta h = 0.01$ and 95% systematic errors of $(-0.045,+0.055)$, roughly 7.5% in either direction. Other systematic effects on the Cepheid distance scale include metallicity effects on the period-luminosity relation and uncertainties in photometry. If metallicity corrections to the Cepheid zeropoint based on spectroscopic \[O/H\] abundance (Kennicutt et al. 1998) are applied to the HST Key Project Cepheids, their summary estimate of $H_0$ decreases by 4.5% (Mould et al. 2000). On the other hand, use of Stetson’s (1998) WFPC2 calibration would cause Cepheid calibration based on HST to be revised in such a way as to shift $H_0$ upwards by 4%. Another photometric uncertainty concerns blending of Cepheids (Mochejska et al. 2000; cf. Gibson, Maloney, & Sakai 2000); photometric blending of Cepheids with other stars would cause the distance modulus to be underestimated, thus overestimating $H_0$. Of the four possible sources of systematic error in the HST Cepheid distance scale that we have mentioned, two (LMC distance modulus, WFPC2 calibration) might increase $H_0$ while two (metallicity effects, photometric blending) might decrease $H_0$. The magnitude of these effects varies and we do not consider them equally likely, so assuming mutual cancellation of these effects is not justified. However, we would be quite unlucky if all these or other systematic effects fell in the same direction. As shown by Mould et al.
(2000), the LMC distance scale uncertainty is the dominant source of systematic error in $H_0$ estimated by the HST Key Project. We conclude that the systematic error on the median value of $H_0$ (which is not the same as the uncertainty on any one measurement, nor any one group of measurements, such as those of the HST Key Project) due to uncertainty in the LMC distance modulus and/or Cepheid calibration is of order the LMC effect described above, roughly 7.5% or $\delta h =0.05$ in either direction at the 95% confidence level. Historically, debate among workers in the field has often centered on the impact of systematic effects on measurement of $H_0$, with the expected tendency (on which progress in science keenly relies) of each group to point out systematics that might plague the others’ measurements. We remain agnostic regarding these debates and examine the possible effects of systematic effects on $H_0$ by analyzing the distribution of $H_0$ estimates, grouping these estimates by method and/or by research group. If systematic effects in one method or group dominate the 331 published estimates, then excluding them should shift the median statistics estimate. Huchra classifies the published estimates into 18 primary types by method and 5 secondary types by author or group of authors. Using these types to group the estimates, we examine the dependence of $H_0$ statistics on the methods employed and the investigators who report the estimates. The 5 secondary types, their number, and the median of estimates in each type are as follows: No Type ($N=216$, median $h=0.68$), HST Key Project or KP team member ($N=40$, median $h=0.73$), Theory ($N=3$, median $h=0.47$), Sandage, Tammann, and collaborators ($N=51$, median $h=0.55$), de Vaucouleurs or van den Bergh, and collaborators ($N=21$, median $h=0.95$). The median of “No Type” is $h=0.68$, thus excluding results published by the best-known workers in the field would have no impact on the median value of $H_0$. 
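Giving each of the five secondary types listed above one vote, the median of the type medians can be checked directly (a sketch; the short labels are abbreviations of ours):

```python
from statistics import median

# Medians of the five secondary (author/group) types quoted in the text:
type_medians = {
    "No Type": 0.68,
    "HST Key Project": 0.73,
    "Theory": 0.47,
    "Sandage, Tammann, et al.": 0.55,
    "de Vaucouleurs / van den Bergh et al.": 0.95,
}
med_of_medians = median(type_medians.values())
print(med_of_medians)  # 0.68
```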
The median of the type medians is also $h=0.68$. One might also be curious about the effect of excluding a particular group’s work. [*Excluding*]{} each group in turn renders the following medians and 95% confidence limits: no HST KP $h=0.65$, $0.62<h<0.68$; no Sandage and Tammann $h=0.68$, $0.65<h<0.70$; no de Vaucouleurs and van den Bergh $h=0.66$, $0.64<h<0.69$. Thus, completely excluding any one of these renowned investigators or teams would shift the median by at most $\delta h=0.02$. The history of a number of fundamental constants shows a systematic trend with time. Certainly this is the case when we compare very high early estimates of $H_0$ with more modern values, because of systematic effects like those pointed out by Sandage (1958). Has such a trend continued? Excluding measurements before 1990 yields a median value $h=0.65$ and 95% limits $0.62<h<0.68$. Further culling the sample to include only “HST era” measurements (post 1996) yields a median $h=0.65$ with 95% limits $0.62<h<0.67$. We can extend this “historical” analysis by examining only measurements too recent to have been included in the original version of this manuscript. Between June 15, 1999 and August 2, 2000, 46 entries were added to Huchra’s catalog, one of these being the value of $h=0.67$ in the preprint of this paper. How would our analysis treat an entry such as ours as it appears in Huchra’s table? We would take the central value seriously, but ignore the quite small uncertainty. Excluding our own value, the median of the remaining 45 new entries is $h=0.69$ with 95% confidence limits of $0.65<h<0.71$. The mean of these same entries is $h=0.67$ with 95% confidence limits $0.65<h<0.70$. Thus, comparison with our estimate of the median above shows that the 45 new estimates are entirely consistent with the median of the previous 331. Systematic differences between the results of different methods of measuring $H_0$ are also apparent in these data. 
Grouping the estimates by method and applying median statistics yields an estimate of the range of systematic errors that separate the methods. This approach also addresses the possible concern that our analysis of the 331 estimates gives equal weight to each publication, including proceedings and summary articles that restate previous results. Table 1 lists the primary types into which Huchra (1999) classifies the published estimates. Columns 2 and 3 list the number in each type and the median of estimates for each type (mean of the central two for even numbers of estimates). The median of the 18 methods is $h=0.70$, slightly higher than the median of all 331 estimates. The 95% uncertainty range $0.645<h<0.745$ of the median of methods includes the median of all 331 estimates, $h=0.67$. This result is unchanged by excluding the questionable “Irvine” (not a method, but rather a meeting), “No Type” and “CMB fit” values. If the median value for each method is an accurate representation of the result of applying that method, then these confidence limits on the median of methods are indicative of the range of systematic error among different methods, roughly 7% or $\delta h = 0.05$ in either direction. It is improbable that the systematic errors in the various methods all go in the same direction; therefore, correction of systematic errors in all the methods is likely to narrow the distribution of $H_0$ estimates and might shift the median. Some of the systematic spread in the methods may be due to different assumed LMC distance moduli (so the 7% systematic range here is not independent of the uncertainty due to the LMC distance discussed above) or to freedom from that distance scale calibration (the “global” methods such as gravitational lensing and S-Z tend to yield smaller estimates of $H_0$ than the locally-calibrated methods).
This range of systematic error is slightly smaller than the 7.5% range from possible LMC distance modulus/Cepheid calibration uncertainties computed above. We expect this to be so, because the 7.5% range assumed that all but 52 of the 331 estimates would suffer from identical systematics, which is clearly an overestimate. We conclude that the true median of 331 estimates of $H_0$ is $h=0.67\pm 0.02 (95\% \ {\rm statistical}) \pm 0.05 (95\% \ {\rm systematic})$, where the systematic error, also derived using median statistics, is dominated by uncertainty in the LMC distance modulus. This allows for a systematic error range that is slightly larger than that inferred by examining the range of estimates produced using different methods of $H_0$ measurement. Our estimate of $h=0.67$ is arguably the best current summary of our knowledge of $H_0$. It is in reasonable agreement with most recent estimates, is based on almost all measurements, and makes no assumptions about the distribution of errors from individual measurements.

Mass of Pluto {#pluto}
=============

The history of mass estimates for Pluto is an extreme example illustrating the effects of systematic errors. Early measurements of the mass of the Pluto-Charon system were obtained by observing perturbations in the orbit of Neptune. Errors in the orbit of Neptune dominated the analysis and these were mistaken for the influence of Pluto. These errors led to many measurements of Pluto’s mass that were larger than an Earth mass. This was, of course, a systematic error. Later, when Charon was discovered, the mass of the Pluto-Charon system could be measured with great accuracy. If we examine 60 published values of the mass of Pluto-Charon (Marcialis 1997) we obtain a median mass of approximately 0.7 Earth masses with 95% confidence limits between 0.1 and 1.0 Earth masses (see Figure 2). This is incorrect because of a now well-known systematic error, similar to the mistake made by Hubble in his estimates of $H_0$.
If we examine only the 28 measurements taken after 1950 (we pick that date simply to divide the century and the data set roughly in half), we obtain a median value of 0.00246 Earth masses, which is almost exactly the currently accepted value, with 95% limits from 0.00236 to 0.08 Earth masses. Lacking knowledge of the Neptune systematic, this extreme trend with time would alone provide a strong clue that systematic errors dominated the uncertainty in $M_{\rm Pluto}$. Such a trend would not be readily apparent using the mean. Even after culling the pre-1950 data the mean is still too high: $M_{\rm Pluto}=0.157$ Earth masses with standard deviation $0.060$. This strongly contrasts with the case of the Hubble constant in the previous section, in which excluding the pre-1958 data, which were contaminated by Hubble’s systematic error of mistaking HII regions for stars, does not change the median. The lesson here is that median statistics are more robust than the mean but are not immune to systematic errors. The point of examining these Pluto data is to show a case where even median statistics fail; there is no magic bullet for faulty data sets.

SNe Ia Data and Binomial Constraints on ${\Omega_{\rm M}}$ in the ${\Omega_{\Lambda}}= 0$ Model {#supern}
===============================================================================================

Recent analyses reported by the High-$z$ Supernova Search Team (R98) and the Supernova Cosmology Project (P99) place extremely stringent constraints on cosmological models, including evidence for a positive cosmological constant at the many $\sigma$ level. It is important to examine the sensitivity of these results to the use of $\chi^2$ analyses and the assumptions underlying this approach. Both for this purpose and simply to demonstrate the use of median statistics, here we apply median statistics to the R98 high-redshift SNe Ia data.
These data are, with the exception of SN 1997ck at redshift $z = 0.97\,$[^3], of excellent quality, and the size of the high-$z$ part of this data set, $N=16$, lends itself to a clear pedagogical discussion of median statistics. In what follows we use the MLCS data of R98, and set $h = 0.652$ (their calibrated value and consistent with our median). In Section \[perlmutter\] below, we apply our median statistics analysis to the larger set of high-$z$ SNe Ia from P99. In the following analyses we use the most recent R98 and P99 data to constrain cosmological parameters. We emphasize that, like the analyses done by R98 and P99, our median statistics analyses rely on hypothesis (2), that there are no systematic effects in the data. A number of astrophysical processes and effects (the mechanism responsible for the supernova, evolution, environmental effects, intergalactic dust, etc.) could, in principle, strongly affect our conclusions (see, e.g., Aguirre 1999; Drell, Loredo, & Wasserman 2000; Sorokina, Blinnikov, & Bartunov 2000; Wang 2000; Höflich et al. 2000; Simonsen & Hannestad 1999; Totani & Kobayashi 1999; Aldering, Knop, & Nugent 2000). We note that our estimate of $H_0$ in section \[hubble\] is consistent with that found by R98 from an analysis of their MLCS data, $h = 0.652 \pm 0.013$ (1 $\sigma$ statistical error only); thus we use the R98 value of $h=0.652$ in the likelihood analysis of the supernovae below in order to vary only the statistical method applied to these data. Using each supernova observation to estimate ${\Omega_{\rm M}}$, we apply our median statistic method to obtain a robust estimate of ${\Omega_{\rm M}}$. Let us first consider the case where we assume that ${\Omega_{\Lambda}}= 0$; these are Friedmann big bang models characterized by the value of ${\Omega_{\rm M}}$.
Each of the 16 distant supernovae produces an independent estimate of ${\Omega_{\rm M}}$ — the value of ${\Omega_{\rm M}}$ such that the supernova’s estimated brightness (from looking at the shape of its light curve) and its predicted brightness (given its $z$) agree exactly. Presumably, if we did an enormous number of such measurements, the true median value of ${\Omega_{\rm M}}$ obtained would give us our best estimate of the true value of ${\Omega_{\rm M}}$, assuming as always that there are no systematic effects. Listed in the left column of Table 2 are the 16 estimates of ${\Omega_{\rm M}}$ from the 16 supernovae – ranked in order of their value. In the middle column is the confidence (using eq. \[1\] above) that the true median ($\Omega_{TM}$) lies between the corresponding values just above and just below it in the column on the left. Thus, there is a 0.00153% chance that the value of $\Omega_{TM}$ is greater than 5.96, and a 2.78% chance that $0.0426 < \Omega_{TM} < 0.206$, and so forth. The 99.6% confidence limits on $\Omega_{TM}$ are $-1.60 < \Omega_{TM} < 0.656$. The ${\Omega_{\rm M}}= 1$, ${\Omega_{\Lambda}}= 0$ model is ruled out at the 99.6% confidence level. This is a dramatic result. R98 likewise rule out this model at a similarly high confidence level but we have done so without assuming that the errors are Gaussian. The chance that $\Omega_{TM} < 0$ (which would be unphysical, and indicate that a simple Friedmann model with ${\Omega_{\Lambda}}= 0$ was inadequate) is between 89.5% and 96.2% so we can not say that the ${\Omega_{\Lambda}}= 0$ Friedmann models with ${\Omega_{\rm M}}> 0$ are ruled out at the 95% confidence level. (It would be correct to say that they are ruled out at the 89.5% level however.) This compares with the more dramatic R98 statement that there is a 99.5% probability that ${\Omega_{\Lambda}}> 0$ and that therefore all ${\Omega_{\Lambda}}= 0$ models with ${\Omega_{\rm M}}> 0$ are ruled out. 
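The probabilities just quoted follow directly from eq. (1): with $N=16$ ranked estimates, the chance that the true median falls in the gap above the $k$-th ranked value is $\binom{16}{k}/2^{16}$. A quick numerical check of the two numbers cited in the text:

```python
from math import comb

N = 16
# chance the true median lies above all 16 estimates (i.e., above 5.96):
p_above_all = comb(N, N) / 2**N
# chance it lies between the 4th and 5th ranked estimates
# (the 0.0426 < Omega_TM < 0.206 gap quoted from Table 2):
p_gap = comb(N, 4) / 2**N
print(f"{100 * p_above_all:.5f}%")  # 0.00153%
print(f"{100 * p_gap:.2f}%")        # 2.78%
```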
These data are not sufficient to cause median statistics to rule out the ${\Omega_{\Lambda}}= 0$ models (with 95% confidence). If we argued that we know independently that ${\Omega_{\rm M}}> 0.0426$ from nucleosynthesis results and masses in groups and clusters of galaxies, and from large scale structure, then with this additional constraint we could argue that the acceptable ${\Omega_{\Lambda}}= 0$, ${\Omega_{\rm M}}> 0$ models are ruled out at the 96.2% confidence level. This is just slightly above the 95% confidence level.

Bayesian Constraints on ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ from 16 R98 High-$z$ SNe Ia {#bayes}
===============================================================================================

Observational data favor low density cosmogonies. The simplest low-density models have either flat spatial hypersurfaces and a constant or time-variable cosmological “constant” $\Lambda$ (see, e.g., Peebles 1984; Peebles & Ratra 1988; Sahni & Starobinsky 2000; Steinhardt 1999; Carroll 2000; Binétruy 2000), or open spatial hypersurfaces and no $\Lambda$ (see, e.g., Gott 1982, 1997; Ratra & Peebles 1994, 1995; Kamionkowski et al. 1994; Górski et al. 1998). In this and the next section we consider a more general model with a constant $\Lambda$ that has open, closed, or flat spatial hypersurfaces. Two of the currently favored models lie along the lines ${\Omega_{\Lambda}}= 0$ or ${\Omega_{\rm M}}+ {\Omega_{\Lambda}}= 1$ in the two-dimensional (${\Omega_{\Lambda}}$, ${\Omega_{\rm M}}$) parameter plane of this more general model. In Section 8 we consider a model with a time-variable $\Lambda$. We can translate the binomial results (such as those derived in the previous section) into Bayesian constraints. Bayesian statistics says that the posterior probability of a particular model after analyzing the data at hand is proportional to the prior probability of that model multiplied by the likelihood of obtaining the observational data given that model.
Consider a model with ${\Omega_{\Lambda}}= 0$ and ${\Omega_{\rm M}}= 6$. For this model, all 16 supernovae estimates of ${\Omega_{\rm M}}$ are lower than the true value ${\Omega_{\rm M}}= 6$ (see Table 2), thus all 16 distant supernovae would have intrinsic luminosities that are fainter than we expect. Since each represents independent data and we are assuming no systematic effects, that means that the likelihood of this happening is $1/2^{16}$ (since each individual supernova has an independent probability of 1/2 of being fainter than we expect based on the low redshift supernovae). Suppose we next consider a model with ${\Omega_{\Lambda}}= 0$ and ${\Omega_{\rm M}}= 2$. Table 2 shows that for this model there is 1 supernova that is too bright and 15 that are too faint. The likelihood of obtaining this result is $16/2^{16}$ according to the binomial distribution. The relative likelihood in column 3 of the table is given as 16. The normalized likelihood is equal to the relative likelihood in the table divided by $2^{16}$. From Table 2 we would conclude that if we initially found a model with ${\Omega_{\Lambda}}= 0$, ${\Omega_{\rm M}}= 6$ and a model with ${\Omega_{\Lambda}}= 0$, ${\Omega_{\rm M}}= 2$ to be a priori equally likely, with odds of $1:1$, then after consulting the supernovae data we would give odds of $16:1$ in favor of the ${\Omega_{\rm M}}= 2$ model over the ${\Omega_{\rm M}}= 6$ model. One can perform similar analyses for models with other values of ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ by examining Figure 3. These plots show, for each supernova, the locus of values of (${\Omega_{\rm M}}$, ${\Omega_{\Lambda}}$) that predict the corrected apparent brightness (see, e.g., Goobar & Perlmutter 1995). 
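The relative likelihoods in Table 2 are, up to the common normalization $1/2^{16}$, binomial coefficients in the number of too-bright supernovae. A minimal check of the $16:1$ odds (a sketch; the bright/faint counts are those stated in the text):

```python
from math import comb

def relative_likelihood(n_bright, n_total=16):
    # likelihood, up to the common factor 1/2**n_total, of seeing
    # n_bright supernovae too bright and the rest too faint
    return comb(n_total, n_bright)

# Omega_M = 6 model: all 16 SNe too faint;  Omega_M = 2 model: 1 too bright
odds = relative_likelihood(1) / relative_likelihood(0)
print(odds)  # 16.0 -> posterior odds of 16:1 given equal priors
```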
To compute the likelihood of a particular model (value of ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$), count the number of SNe Ia that are too bright/faint for the model, compute the binomial likelihoods, and apply the prior (note that 2 SNe Ia lie off the bottom and 4 off the top of the linear scale plot, Figure 3$b$). Figure 4 allows one to do this “by eye”. In this figure, the greyscale intensity at each point in the (${\Omega_{\rm M}}$, ${\Omega_{\Lambda}}$) plane is proportional to the binomial likelihood of the observed SNe Ia being brighter/fainter than predicted by the model with that pair of values of (${\Omega_{\rm M}}$, ${\Omega_{\Lambda}}$). Not surprisingly, the favored region in this plane is similar to that found by R98. Solid lines in Figure 4 show 1, 2, and 3 $\sigma$ likelihood contours derived from a $\chi^2$ analysis (Podariu & Ratra 2000). If we limit ourselves to consideration of flat cosmologies then ${\Omega_{\rm M}}+ {\Omega_{\Lambda}}=1$ and the allowed models lie along the long-dashed “flat universe” lines in Figures 3 and 4. To examine this region in detail, Figure 5 plots the relative likelihoods of $\Omega_{TM}$ lying in ranges of ${\Omega_{\rm M}}$ bounded by the intersection of the loci in Figure 3$a$ with the flat universe line. Irrespective of the assumed prior, the best-fit flat-$\Lambda$ model has ${\Omega_{\rm M}}\sim 0.3$, in agreement with R98 and P99. One must adopt reasonable prior probabilities to perform a more complete Bayesian analysis. If the prior probability was $P = 100\%$ that the ${\Omega_{\rm M}}= 1$, ${\Omega_{\Lambda}}= 0$ fiducial Friedmann model was correct, then no matter what data was examined, after examining that data one would still conclude with $100\%$ certainty that the ${\Omega_{\rm M}}= 1$, ${\Omega_{\Lambda}}= 0$ model was correct. This is because the prior probability of all other models is zero, and zero times even a high likelihood is still zero. Thus this is a bad prior. 
Priors should be as agnostic as possible to allow, as much as possible, the data rather than the prior to determine the odds. Vague or “non-informative” priors are appropriate in this situation (Press 1989). An appropriate vague prior for an unbounded variable $x$ that can be positive, zero, or negative is uniform in $x$: $P(x)dx \propto dx$. But for an unbounded variable $x$ that must be positive the correct vague prior is the Jeffreys (1961) prior, which is uniform in the logarithm of $x$: $P(x)dx \propto d\ln x = dx/x$ (Berger 1985). (If $x$ must be positive then the variable $\ln x$ can be positive, zero, or negative and therefore should be distributed uniformly in $d\ln x$ via the previous rule.) That the vague prior for a number that is positive and unbounded should be uniform in the logarithm is well established and related to the rule that the first digits of positive numbers in a data table (like lengths of rivers) should be distributed according to the space they occupy on the slide rule (i.e., uniform in the logarithm). Thus in any data set involving positive numbers we should expect as many to have a first digit of 1 as the sum of those starting with 2 or 3, and as the sum of those starting with 4, 5, 6, or 7. R98 and P99 assume a vague prior with ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ as free parameters and the prior probability proportional to $d{\Omega_{\rm M}}d{\Omega_{\Lambda}}$. This would be appropriate for variables that could take on positive, zero, or negative values. This may be reasonable for ${\Omega_{\Lambda}}$ since people certainly consider both positive and zero values of ${\Omega_{\Lambda}}$ plausible. Since we know little about what sets the level of the vacuum density it could conceivably be negative as well. However, ${\Omega_{\rm M}}$ must be positive. No one envisions a negative or zero density of matter. In fact, R98 and P99 do not consider models with ${\Omega_{\rm M}}$ not positive.
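The first-digit rule invoked above is Benford's law, which follows from log-uniformity: the probability of leading digit $d$ is $\log_{10}(1+1/d)$. A quick check of the groupings claimed in the text:

```python
from math import log10

def benford(d):
    # probability of leading digit d under a log-uniform (Jeffreys) prior
    return log10(1 + 1 / d)

p1 = benford(1)                                # first digit 1
p23 = benford(2) + benford(3)                  # first digit 2 or 3
p4567 = sum(benford(d) for d in (4, 5, 6, 7))  # first digit 4, 5, 6, or 7
# each group spans the same length on a logarithmic (slide-rule) scale
assert abs(p1 - p23) < 1e-12 and abs(p1 - p4567) < 1e-12
print(f"{p1:.4f}")  # 0.3010
```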
Standard Friedmann models with negative values of ${\Omega_{\rm M}}$ would easily explain the data with maximum likelihood (e.g., $-0.303 < {\Omega_{\rm M}}< -0.266$ from Table 2) but these are not considered because ${\Omega_{\rm M}}\le 0$ is thought to be unphysical. It is also clear that ${\Omega_{\rm M}}$ is not a priori bounded above. Thus, the appropriate vague prior for ${\Omega_{\rm M}}$ is uniform in $\ln{\Omega_{\rm M}}$. In other words, a priori there should be an equal probability of finding ${\Omega_{\rm M}}$ between 0.1 and 0.2 or finding ${\Omega_{\rm M}}$ between 0.2 and 0.4. More generally, the log prior allows an equal chance of the universe being either open or closed. So if both ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ are free parameters, we should expect a priori, before examining the data, for $P({\Omega_{\Lambda}},{\Omega_{\rm M}}) d{\Omega_{\Lambda}}d{\Omega_{\rm M}}$ to be proportional to $d{\Omega_{\Lambda}}d{\Omega_{\rm M}}/{\Omega_{\rm M}}$. This is more favorable to low density models than the prior R98 and P99 have chosen[^4]. Perhaps a more serious problem with the prior adopted by R98 and P99 is that it gives zero weight to the flat-$\Lambda$ model and the ${\Omega_{\Lambda}}= 0$ model. After assuming $P({\Omega_{\Lambda}}, {\Omega_{\rm M}}) d{\Omega_{\Lambda}}d{\Omega_{\rm M}}$ proportional to $d{\Omega_{\Lambda}}d{\Omega_{\rm M}}$, R98 state that the posterior probability that ${\Omega_{\Lambda}}> 0$ is 99.5%. But what this really means is that, according to their prior, the posterior probability that ${\Omega_{\Lambda}}< 0$ is 0.5% and that the probability that ${\Omega_{\Lambda}}= 0$ is 0%. Furthermore, the posterior probability that ${\Omega_{\Lambda}}+ {\Omega_{\rm M}}=1$ is also 0%. This is because the prior probability of ${\Omega_{\Lambda}}= 0$ or ${\Omega_{\Lambda}}+ {\Omega_{\rm M}}= 1$ is zero because these are lines of zero area in the ${\Omega_{\Lambda}}, {\Omega_{\rm M}}$ plane. 
Clearly, the prior adopted by R98 and P99 is not reasonable. This shortcoming of these analyses has also been noted by Drell et al. (2000). Occam’s razor suggests that models that are simpler must have higher prior probability. One suggestion often used is that the prior probability for a model with $N$ free parameters is $P = (1/2)^{N+1}$. Thus the prior probability that the correct model is one with no free fitting parameters is 50%. The prior probability that the correct model is one with one free fitting parameter is 25%, and with two free fitting parameters is 12.5% and so forth. The infinite sum, up to $N = \infty$ equals 100% as expected. Having additional free parameters to fit the data always makes fitting any data easier and there has to be a penalty for this. The ${\Omega_{\rm M}}= 1$, ${\Omega_{\Lambda}}= 0$ Einstein-de Sitter model is one with no free fitting parameters. For this reason it has been called the fiducial cold dark matter model. The steady-state model also is a model with no free fitting parameters — this was one of the attractions of this model for its proponents. The steady-state model is spatially flat and expands exponentially, $a(t)\propto \exp(t/r_0)$. Geometrically, this model is identical to a ${\Omega_{\Lambda}}= 1, {\Omega_{\rm M}}= 0$ model. If the steady-state model were correct 12 supernovae would be too bright and 4 would be too faint, giving a likelihood of $1820/2^{16}$. The ${\Omega_{\rm M}}= 1$ Einstein-de Sitter model by contrast has 14 supernovae too faint and 2 supernovae too bright, giving a likelihood of $120/2^{16}$. If we regarded the odds between these two competing models as a priori $1:1$ before examining the supernovae data, we would give posterior odds of $15:1$ in favor of the steady-state model after examining the supernovae data. Let us illustrate our Bayesian technique by considering how we would have evaluated the competing models in the early 1960’s, if the supernovae data had been available then. 
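The two likelihoods quoted for these zero-parameter models, and the resulting $15:1$ odds, are simple binomial arithmetic:

```python
from math import comb

N = 16
L_steady = comb(N, 12)  # steady state: 12 SNe too bright, 4 too faint
L_eds = comb(N, 2)      # Einstein-de Sitter: 2 too bright, 14 too faint
print(L_steady, L_eds)          # 1820 120
print(round(L_steady / L_eds))  # 15 -> posterior odds of 15:1 at 1:1 priors
```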
At that time ${\Omega_{\Lambda}}> 0$ models were not popular. The two main zero-parameter models were the ${\Omega_{\rm M}}= 1$, ${\Omega_{\Lambda}}= 0$ Einstein-de Sitter model and the steady-state model. The only popular one-parameter model was the ${\Omega_{\Lambda}}= 0$ Friedmann model with ${\Omega_{\rm M}}$ as a free parameter. These may be considered the Friedmann models with ${\Omega_{\rm M}}\neq 1$, because in this one-parameter family the ${\Omega_{\rm M}}= 1$ model is a set of measure zero. Now, if zero-parameter models as a group were considered to have prior probability of 50%, and one-parameter models as a group had a prior probability of 25%, then we would assign a prior probability of 25% to the steady-state model, 25% to the Einstein-de Sitter model, 25% to the ${\Omega_{\Lambda}}= 0$, ${\Omega_{\rm M}}\neq 1$ Friedmann model, and 25% to more complicated models with 2 or more free parameters. If we were to discount more complicated models and renormalize, then we would have prior probabilities of 33.3% for the steady-state model, 33.3% for the Einstein-de Sitter model, and 33.3% for the ${\Omega_{\Lambda}}= 0$, ${\Omega_{\rm M}}\neq 1$ Friedmann model. Independent measurements of the mass in clusters of galaxies would suggest a minimum value of ${\Omega_{\rm M}}$ of 0.05, and Hubble diagrams to measure $q_0$ from galaxies indicate a maximum value of ${\Omega_{\rm M}}= 4$. Since the prior for ${\Omega_{\rm M}}$ is distributed uniformly in $\ln{\Omega_{\rm M}}$ we can calculate the prior probabilities of finding ${\Omega_{\rm M}}$ in different ranges. These prior values will be revised by the likelihoods after examining the supernova data.
Table 3 lists the prior probabilities for the different models, and how these values would be revised by multiplying the priors in each model (and over each range of ${\Omega_{\rm M}}$ in the ${\Omega_{\Lambda}}= 0$ Friedmann models with ${\Omega_{\rm M}}\neq 1$) by the relative likelihoods from Table 2 above, and renormalizing the results to give a total probability of 100%. The steady-state model and the $0.05 < {\Omega_{\rm M}}< 0.2$ models would be the only ones to gain ground due to the supernovae data. Ranking the 5 models in order of posterior probability, we would see that at the 95% confidence level (that our reduced list still included the correct model) we could only rule out the $1 < {\Omega_{\rm M}}< 4$ models. The others would remain in contention. The steady-state model would have been favored by the supernova data. It is of course an accelerating model, but one that is no longer in contention. Today the models in contention are different. The steady-state model has no Big Bang and is ruled out by the cosmic microwave background. The only zero-parameter model still in contention is the Einstein-de Sitter model, so by Occam’s Razor it gets 50% of the prior probability. There are two one-parameter models in contention, the ${\Omega_{\Lambda}}= 0$ open model with $0.05 < {\Omega_{\rm M}}< 4$, and the ${\Omega_{\rm M}}+ {\Omega_{\Lambda}}= 1$ flat-$\Lambda$ model with $-1 < {\Omega_{\Lambda}}< 0.95$. Together, these one-parameter models must get 25% of the prior probability. The two-parameter model has both ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ variable, with $0.05 < {\Omega_{\rm M}}< 4$, and $-1 < {\Omega_{\Lambda}}< 1$. ${\Omega_{\Lambda}}$ can be either positive or negative, and so its prior probability is uniform in $d{\Omega_{\Lambda}}$. ${\Omega_{\rm M}}$ must be positive and so is distributed uniformly in $d\ln {\Omega_{\rm M}}$. This is the only two-parameter model under consideration, so its prior probability must be 12.5%.
The prior probability of other more complicated models would be 12.5%. We can renormalize to give unit probability to the sum of just the models under consideration (with 0, 1, and 2 free parameters). Summing the prior probabilities listed in Table 4, we find prior probabilities of 18.6% for open, 71.4% for flat, and 9.96% for closed. We have prior probabilities of 14.5% for $\Lambda < 0$, 71.4% for $\Lambda = 0$, and 14.1% for $\Lambda > 0$. After observing the supernovae, the zero-parameter Einstein-de Sitter model suffers greatly, dropping to 9.37%, though still not ruled out by the usual 95% criterion. The greatest beneficiaries of the supernova results are $\Lambda>0$ models; flat $\Lambda>0$ models rise from 6.97% to 41.53% while open $\Lambda>0$ models rise from a mere 3.34% to 27.48%. Almost as impressive are quite low ${\Omega_{\rm M}}$ open (${\Omega_{\Lambda}}= 0$) models, that rise in probability from 4.52% to 13.33%. Table 5 summarizes this analysis. First, let us examine the evidence for a non-zero cosmological constant. We find that the posterior probability of $\Lambda>0$ is 70%. This result differs from the 99.5% claimed by R98. A posteriori, $\Lambda=0$ models have a 27% probability of being correct, thus such models are still quite viable, in agreement with the conclusion of Drell et al. (2000). R98 find 0% probability for such models, because they disallowed this possibility in their prior. Similar to R98, we find that $\Lambda<0$ models are ruled out at greater than 97% confidence. Is the universe open or closed? We rule out closed-universe models with greater than 98% confidence, but the odds are evenly split between flat and open models. The 16 SNe Ia slightly decrease the probability of flat models, from our prior of 71% to a posterior probability of 51.5%, while significantly increasing the probability of open models, from our prior of 18.6% to a posterior probability of 47%. 
Alternatively, we could be more conservative since, because of age considerations and the amount of power on large scales in galaxy clustering, it could be argued that the only models currently under serious discussion are those with $0.05 \leq {\Omega_{\rm M}}< 1$, and with $0 \leq {\Omega_{\Lambda}}< 1$. That eliminates all parameter-free models, leaving the flat-$\Lambda$ (${\Omega_{\rm M}}+ {\Omega_{\Lambda}}= 1$) model and the ${\Omega_{\Lambda}}= 0$ open model as the only one-parameter models, and admits the two-parameter model in which ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ both vary. The one-parameter models together have a prior probability that is twice that of the two-parameter model, again by Occam’s razor. Since there are two competing one-parameter models, all three models must have equal prior probability of 33.3%. This reflects fairly well the prior probabilities as thought of by astronomers today, before seeing the supernova data. Again since $\Lambda$ may be zero, we use a prior that is uniform in $d{\Omega_{\Lambda}}$ for both the flat-$\Lambda$ model and the two-parameter model. Figure 4$b$ shows the relative likelihood for models in this more restricted (${\Omega_{\rm M}}$, ${\Omega_{\Lambda}}$) space, with greyscale intensity proportional to the likelihood as in Figure 4$a$. For comparison, the plotted lines show the confidence regions computed in similar fashion to R98 (derived in Podariu & Ratra 2000). Table 6 presents the priors and results after including the SNe Ia data for this restricted modern analysis. This gives prior probabilities of 56.1% for open, 33.3% for flat, and 10.5% for closed. It also gives prior probabilities of 33.3% for $\Lambda = 0$ and 66.7% for $\Lambda > 0$. Under these restricted conditions the ${\Omega_{\Lambda}}= 0$ Friedmann models with $0.2 < {\Omega_{\rm M}}< 1$ suffer the most, but end with the same $\sim 3\%$ posterior probability as under the broader analysis above.
Open universe $\Lambda>0$ models substantially increase in probability after considering the supernova data. This analysis uses more (non-SNe Ia) astrophysical evidence in the computation of the prior probabilities, thus restricting ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ to smaller ranges than those considered reasonable in the previous analysis. So far, we have used a log prior for ${\Omega_{\rm M}}$, in keeping with the positive-definite property of the density of matter. To examine the sensitivity of our median statistics results to this choice of prior for ${\Omega_{\rm M}}$, we also compute posterior probabilities using a uniform prior for ${\Omega_{\rm M}}$. This is useful because our analysis above differs from those of R98 and P99 both in its use of median statistics and in the choice of prior. Table 7 repeats the analysis summarized in Table 4, this time with prior probabilities that are uniform in $d{\Omega_{\rm M}}\, d{\Omega_{\Lambda}}$. When so little prior probability is assigned to low values of ${\Omega_{\rm M}}$, low density Friedmann models do not fare as well. Flat-$\Lambda$ models with ${\Omega_{\Lambda}}>0$ fare better (59.2% vs. 41.5%) with a uniform prior, not because of a larger prior probability nor because of a higher average likelihood (both actually decrease somewhat relative to the log prior case), but rather because the average likelihood of other models decreases when more weight is given to the high ${\Omega_{\rm M}}$ region of the (${\Omega_{\rm M}}$, ${\Omega_{\Lambda}}$) plane. The total posterior probability of all $\Lambda>0$ models is only marginally higher (76.9% vs. 70.2%) than in the log prior case. Thus, our conclusions about the cosmological constant are relatively insensitive to the choice of prior for ${\Omega_{\rm M}}$. 
However, the posterior probabilities for the flat and closed models are now significantly larger, 75.1% and 11.1% respectively (compared to 51.5% and 1.5% in the previous analysis), while the odds for the open case are significantly reduced to 13.7% (from 47%). If, as in R98, we were to adopt a uniform prior and limit ourselves to two-parameter models, then after renormalizing the posterior probabilities in Table 7 we would find a 94.3% chance that $\Lambda >0$, comparable to 99.5% in R98. These results are qualitatively consistent with those found from the $\chi^2$ analyses of Podariu & Ratra (2000), showing that median statistics lead to quite similar (but slightly more conservative) results while relying on fewer hypotheses. Again we would argue that our choice of priors is superior to those chosen in R98 and we have included these last estimates only to show the direct action of the median statistics. That some of these results depend significantly on the choice of prior indicates that better data are needed to convincingly constrain cosmological parameters.

Binomial Constraints on ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ from 42 P99 High-$z$ SNe Ia {#perlmutter}
===============================================================================================

One nice thing about median statistics is that they are extraordinarily easy to apply. P99 have recently published data on 42 high-$z$ SNe Ia[^5]. They have shown plots of the data versus several cosmological models; thus one can read the answer right off their graph of magnitude residuals versus cosmological models (their Figure 2). We ignore the error bars and simply ask how many data points are below or above each cosmological model line. In other words, we examine how many supernovae are too bright or too faint given a particular cosmological model. The results are given in Table 8. For example, for the ${\Omega_{\rm M}}=1, {\Omega_{\Lambda}}=0$ model, 4 supernovae are too bright and 38 are too faint.
If this model is correct, the likelihood of obtaining this result is the same as tossing 42 coins and having 4 come up heads and 38 come up tails. The open model, with ${\Omega_{\rm M}}=0$ and ${\Omega_{\Lambda}}=0$, has 10 supernovae too bright and 32 too faint. This is presumably the best-fitting ${\Omega_{\Lambda}}= 0$ model. The best-fitting flat-$\Lambda$ model is, according to P99, one with ${\Omega_{\rm M}}=0.28$ and ${\Omega_{\Lambda}}=0.72$, where 21 supernovae are too bright and 21 too faint. Interestingly, this is also a best-fitting model using the median statistic! No result is more likely than 21 supernovae too bright and 21 too faint. The “steady-state” model with ${\Omega_{\rm M}}=0$ and ${\Omega_{\Lambda}}=1$ has 31 supernovae too bright and 11 too faint. If the number of supernovae too bright is $B$ and the number too faint is $F$, then according to eq. (1) the relative likelihood of obtaining this result in a given model is proportional to $2^{-42}(42!)/(B!F!)$. Table 8 gives the relative likelihoods normalized to the open model. According to Bayesian statistics, our posterior probabilities for each model after examining the P99 data would be proportional to our prior probabilities times the likelihoods in Table 8. Today our prior probability for the “steady-state” model is near zero, since it has no Big Bang and cannot explain the cosmic microwave background. If we restrict attention to open and flat-$\Lambda$ models, we see that even if the ${\Omega_{\rm M}}=1, {\Omega_{\Lambda}}=0$ model is favored a priori by a factor of 2 because it is a simpler zero-parameter model, it is still strongly ruled out after examining the P99 data because the likelihood for this model in Table 8 is so low. If a priori we regarded the best-fitting flat-$\Lambda$ model and the best-fitting open model as equally likely (prior odds of $1:1$), then after examining the P99 data we should favor the flat-$\Lambda$ model by odds of $366:1$.
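The relative likelihoods in Table 8 follow directly from this binomial formula; the sketch below reproduces them in Python (the constant $2^{-42}$ cancels when normalizing to the open model). The bright counts are those read off Figure 2 of P99, as quoted above.

```python
from math import comb

N = 42  # total number of P99 high-z SNe Ia

def likelihood(B):
    """Binomial likelihood of B SNe too bright and N - B too faint,
    for a model that predicts the median (p = 1/2 per SN)."""
    return comb(N, B) * 0.5**N

# Number of SNe too bright for each model (Omega_M, Omega_Lambda):
counts = {
    "Einstein-de Sitter (1, 0)": 4,
    "open (0, 0)": 10,
    "flat-Lambda (0.28, 0.72)": 21,
    "steady-state (0, 1)": 31,
}

# Relative likelihoods normalized to the open model (cf. Table 8):
norm = likelihood(counts["open (0, 0)"])
for model, B in counts.items():
    print(model, likelihood(B) / norm)
```

The flat-$\Lambda$ model comes out $\sim$366 times more likely than the open one, and the Einstein-de Sitter model about $10^4$ times less likely, matching the odds quoted above.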
This is an impressive result and does not assume that the errors are Gaussian or that their magnitude is known. It does rely on the assumption that the data points are independent and, very importantly, that there are no systematic effects. A modest systematic effect in the high-redshift supernovae would reverse these odds. The middle panel of Figure 2 in P99 shows that ten SNe Ia lie between the curves for the ${\Omega_{\rm M}}=0.28, {\Omega_{\Lambda}}=0.72$ and ${\Omega_{\rm M}}=0, {\Omega_{\Lambda}}=0$ models. The largest magnitude residual between these curves is approximately $\Delta m = 0.14$, so a systematic shift of $-0.14$ mag would cause the data to strongly favor the ${\Omega_{\rm M}}=0, {\Omega_{\Lambda}}=0$ model over the ${\Omega_{\rm M}}=0.28, {\Omega_{\Lambda}}=0.72$ model, with the same odds that now favor the best-fitting flat-$\Lambda$ model. We emphasize, however, that we are not suggesting that there is evidence for such a shift in magnitude. For comparison, using the same prior, the odds favoring the best-fit flat-$\Lambda$ model (with ${\Omega_{\rm M}}=0.24$ and ${\Omega_{\Lambda}}=0.76$) over the best-fit open model (with ${\Omega_{\rm M}}=0$ and ${\Omega_{\Lambda}}=0$) are $3:1$ after examining the R98 data. The implication of these analyses is clear: the SNe Ia data sets are now large enough to achieve powerful statistical results. Confidence in such results will grow with more detailed investigation of possible systematic effects.
Constraints on a Time-Variable Cosmological “Constant” {#quint}
======================================================

While the restricted one-dimensional models (flat-constant-$\Lambda$ and open) discussed in the previous two sections are consistent with most recent observations, the flat-constant-$\Lambda$ model seems to be in conflict with a number of observations, including: (1) analyses of the rate of gravitational lensing of quasars and radio sources by foreground galaxies, which require a rather large ${\Omega_{\rm M}}\geq 0.38$ at 2 $\sigma$ in this model (see, e.g., Falco, Kochanek, & Muñoz 1998); and (2) analyses of the number of large arcs formed by strong gravitational lensing by clusters (Bartelmann et al. 1998; also see Meneghetti et al. 2000; Flores, Maller, & Primack 2000)[^6]. In the near future, measurements of the cosmic microwave background (CMB) anisotropy, thought to be generated by zero-point quantum fluctuations during inflation (see, e.g., Fischler, Ratra, & Susskind 1985), will provide a tight determination of cosmological parameters. See, e.g., Kamionkowski & Kosowsky (1999), Rocha (1999), Page (1999) and Gawiser & Silk (2000) for recent reviews of the field. While it has been suggested, largely from $\chi^2$ comparisons of CMB anisotropy measurements and model predictions (Ganga, Ratra, & Sugiyama 1996), that a spatially-flat model is favored over an open one (see, e.g., Lineweaver 1999; Dodelson & Knox 2000; Peterson et al. 2000; Page 1999; Melchiorri et al. 2000; Tegmark & Zaldarriaga 2000; Knox & Page 2000; Le Dour et al. 2000), such suggestions must be viewed as tentative (see discussion in Ratra et al. 1999 and references therein); see, however, Lange et al. (2000). More reliable constraints follow from model-based maximum likelihood analyses of CMB anisotropy data (see, e.g., Górski et al. 1995; Ganga et al. 1997, 1998; Rocha et al. 1999).
But this method has not yet been applied to enough data sets to provide robust statistical constraints. A spatially-flat model with a time-variable $\Lambda$ can probably be reconciled with some of the observations that conflict with a large constant $\Lambda$ (e.g., Peebles & Ratra 1988; Ratra & Quillen 1992; Perlmutter, Turner, & White 1999b; Wang et al. 2000; Efstathiou 1999; Podariu & Ratra 2000; Waga & Frieman 2000). We emphasize, however, that most current observational indications are tentative and not definitive. At present, the only consistent model for a time-variable $\Lambda$ is that which uses a scalar field ($\phi$) with a scalar field potential $V(\phi)$ (Ratra & Peebles 1988). In this paper we focus on the favored scalar field model in which the potential $V(\phi) \propto \phi^{-\alpha}$, $\alpha > 0$, at low redshift (Peebles & Ratra 1988; Ratra & Peebles 1988)[^7]. A scalar field is mathematically equivalent to a fluid with a time-dependent speed of sound (Ratra 1991). This equivalence may be used to show that a scalar field with potential $V(\phi) \propto \phi^{-\alpha}$, $\alpha > 0$, acts like a fluid with negative pressure and that the $\phi$ energy density behaves like a cosmological constant that decreases with time. We emphasize that in the analysis here we do not make use of the time-independent equation of state fluid approximation to the scalar field model for a time-variable $\Lambda$, as has been done in a number of recent papers (see discussion in Podariu & Ratra 2000; also see Waga & Frieman 2000). The SNe Ia data also place constraints on a time-variable $\Lambda$. Here we consider only spatially flat models. For each SN Ia, there is a locus of values of $\alpha$ and ${\Omega_{\rm M}}$ that predict the corrected apparent magnitude. These curves define regions of different likelihood in the $\alpha-{\Omega_{\rm M}}$ plane. Figure 6 shows this plane, with greyscale intensity proportional to the binomial likelihood (eq. 
\[1\]) using 16 R98 high-$z$ SNe Ia. We now compute the posterior odds of the time-variable $\Lambda$ model versus the time-independent $\Lambda$ model, allowing only spatially flat cosmologies. The prior odds are set as in Section \[bayes\], thus we penalize complicated models by the prior $P\propto (1/2)^{N+1}$ where $N$ is the number of parameters. The time-variable $\Lambda$ model has two parameters, $\alpha$ and ${\Omega_{\rm M}}$, while the flat-constant-$\Lambda$ model has but one. Thus the prior odds are $2:1$ in favor of the constant $\Lambda$ model before examining the SNe Ia data. We focus on the range $0 \leq \alpha \leq 8$ since for larger $\alpha$ the time-variable $\Lambda$ model approaches the Einstein-de Sitter one (Peebles & Ratra 1988). For computational simplicity we also focus on the range $0.05 \leq {\Omega_{\rm M}}\leq 0.95$. To compare the time-variable $\Lambda$ model with the flat-constant-$\Lambda$ model, we compute average likelihoods for $\alpha>0$ and $\alpha=0$, adopting a uniform prior for both parameters over the ranges $0 \leq \alpha \leq 8$ and $0.05 \leq {\Omega_{\rm M}}\leq 0.95$. For the R98 data, the ratio of average likelihoods is $2:1$ in favor of the constant $\Lambda$ model, thus the posterior odds are $3.9:1$. Applying the same analysis to the P99 data, we find that these data favor the constant $\Lambda$ model by $18:1$ over the time-variable model. If we adopt logarithmic priors for both $\alpha$ and ${\Omega_{\rm M}}$ (in this case setting a lower bound $\alpha>0.01$ to the time-variable model — these results are insensitive to any $0.01<\alpha_{\rm min}<0.1$), the R98 data favor the constant-$\Lambda$ model by odds of $1.7:1$. However, log priors cause the P99 data to favor the time-variable $\Lambda$ model by $3.6:1$. 
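The prior-averaged likelihoods behind these odds can be sketched as follows. The likelihood function below is a stand-in, peaked at small $\alpha$ and moderate ${\Omega_{\rm M}}$ (the paper's actual likelihoods come from the binomial statistic over the SNe residuals), so only the mechanics, not the numerical odds, carry over:

```python
from math import exp

def average_likelihood(like, alphas, omegas, log_prior=False):
    """Average L(alpha, Omega_M) over a parameter grid, weighting each
    point by a uniform prior or by a logarithmic prior
    (weight proportional to 1/(alpha * Omega_M))."""
    num = den = 0.0
    for a in alphas:
        for om in omegas:
            w = 1.0 / (a * om) if log_prior else 1.0
            num += w * like(a, om)
            den += w
    return num / den

# Stand-in likelihood, peaked at small alpha and Omega_M ~ 0.3 (illustrative):
like = lambda a, om: exp(-0.5 * ((a / 2.0)**2 + ((om - 0.3) / 0.2)**2))

alphas = [0.01 + i * (8.0 - 0.01) / 199 for i in range(200)]  # alpha > 0.01
omegas = [0.05 + i * 0.9 / 199 for i in range(200)]           # 0.05 - 0.95

uniform = average_likelihood(like, alphas, omegas, log_prior=False)
logarithmic = average_likelihood(like, alphas, omegas, log_prior=True)
# The logarithmic prior upweights the small-alpha, small-Omega_M corner,
# raising the average likelihood for likelihoods of this shape.
print(uniform, logarithmic)
```

This makes explicit why the verdict can flip between priors: the logarithmic weighting concentrates the prior mass exactly where a few well-matched SNe can dominate the average.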
The latter occurs because there are several SNe Ia in the P99 data set whose brightnesses would be matched by quite small (but non-zero) values of $\alpha$ and ${\Omega_{\rm M}}$, so giving larger weight to this region in the $\alpha-{\Omega_{\rm M}}$ plane strongly increases the average likelihood for the time-variable $\Lambda$ model. The strong dependence of the results on the prior distribution for the parameters indicates that better data are needed to convincingly constrain these parameters. We also compare the time-variable $\Lambda$ model to the open one with $0.05<{\Omega_{\Lambda}}<1$ and ${\Omega_{\Lambda}}=0$. For the R98 data the posterior odds are $3.2:1$ and $1.9:1$ in favor of the time-variable $\Lambda$ model when we adopt uniform and logarithmic priors, respectively, for $\alpha$ and ${\Omega_{\rm M}}$. These results motivate further consideration of time-variable $\Lambda$ models. Conclusions {#conclude} =========== Applications of median statistics that we present in this paper demonstrate that statistical independence and freedom from systematic errors are by themselves extremely powerful hypotheses. Perhaps to the surprise of most who survived Freshman Physics laboratory, we find that median statistics leads to strong constraints on models even though this method does not make use of the other two of four assumptions required for standard $\chi^2$ data analysis, those that require Gaussianity and knowledge of the errors. When applied to some of the astronomical data we consider, the median statistics results are dramatic enough to make one question even the first two hypotheses — independence and freedom from systematic error. Median statistics are relatively robust to bad data but when median statistics yield such strong results this could be a warning that the assumptions of independence and freedom from systematics should be carefully examined. 
Median statistics analysis of 331 Hubble constant estimates, from Huchra’s (1999) compilation, yields a value of $H_0=67{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$. This value is quite reasonable and in agreement with many recent estimates, including those obtained from the R98 SNe Ia data that we examine. Based on nearly all available data, this is arguably the best available current summary of our knowledge of the Hubble constant. Such a summary statistic is useful when one needs a consensus value for a cosmological simulation or similar application or, as in the case of the Hayden Planetarium, simply to present a value that is representative of current knowledge (the Planetarium chose $H_0=70{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}},\, {\Omega_{\rm M}}=1/3,\, {\Omega_{\Lambda}}=2/3$, just one significant figure for each constant). The formal, purely statistical 95% confidence interval that results from median statistics, $65-69{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$, is indeed narrow, which highlights the power of our assumptions. If they were truly independent and free of systematics, the extant estimates of $H_0$ would clearly be numerous enough to determine it to this precision. Systematic effects do, of course, dominate the error budget for the Hubble constant. The vast majority of the published estimates share possible systematic uncertainty through the LMC distance scale and/or calibration of the Cepheid period-luminosity relation (as many as 279 of the 331 could be so affected). We apply median statistics to the distribution of different methods for measuring the LMC distance modulus and find a median value $m-M=18.46$ with 95% confidence limits $18.26<(m-M)<18.64$. This range of distance moduli implies that the systematic error in our estimate of the median of $H_0$ could be as large as 7.5% in either direction (95% limits).
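The purely statistical interval quoted above follows from order statistics alone: if the estimates are independent, the number falling below the true median is binomially distributed, so the true median lies between the $k$-th and $(k+1)$-th of the $n$ sorted values with probability $C(n,k)/2^n$. A sketch of the rank computation (the $H_0$ values themselves are not reproduced here):

```python
from math import comb

def median_ci_ranks(n, conf=0.95):
    """Return 1-indexed ranks (i, j) such that the true median lies
    between the i-th and j-th sorted estimates with probability >= conf,
    assuming the n estimates are independent and unbiased.
    Coverage of (x_i, x_j) is sum over k = i..j-1 of C(n, k) / 2**n."""
    total = 2**n
    i, j = n // 2, n // 2 + 1        # start from the central gap
    covered = comb(n, i) / total     # the k = i term
    while covered < conf:            # widen symmetrically until covered
        i -= 1
        j += 1
        covered += (comb(n, i) + comb(n, j - 1)) / total
    return i, j, covered

i, j, c = median_ci_ranks(331)
print(i, j, round(c, 3))  # a band of a few dozen central ranks
```

For the 331 Huchra estimates this band of central ranks maps onto the narrow $65$-$69{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$ interval quoted above, illustrating how little beyond independence the statistical error bar assumes.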
Grouping the 331 estimates of the Hubble constant by method and applying median statistics to the distribution of methods, we infer that the 95% confidence range of systematic error due to differences between methods is 7% (the median is $H_0=70{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$ with 95% range $64.5 - 74.5 {{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$). To be conservative, we take the somewhat larger of these two estimates of systematic uncertainty and quote a total error budget on the true median of $H_0$ of $H_0=67 \pm 2 (95\% \ {\rm statistical}) \pm 5 (95\% \ {\rm systematic}){{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$. Thus, systematic errors clearly dominate over the purely statistical errors. Of some interest is the dependence, or near lack thereof, of median statistics of $H_0$ on the authors of the papers or the year of publication. Completely excluding all the work of any of the best-known investigators or groups – Sandage, Tammann, and collaborators, de Vaucouleurs or van den Bergh, or the HST Key Project – has at most a $2{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$ effect on the median. The set of estimates attributed to none of these groups has median $H_0=68{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$; this value is also the median of the medians from each group. Recent $H_0$ estimates (post 1990 or post 1996) differ only slightly from the median of all estimates, shifting the median to $H_0=65{{\rm \, km\, s}^{-1}{\rm Mpc}^{-1}}$ with confidence limits that include the value estimated from the full data set. Our analyses of constraints on ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ from recently published high-$z$ SNe Ia data from R98 and P99 generally support the conclusions of these groups. Although our results differ in detail, our median statistics prefer the same region in the (${\Omega_{\rm M}}$, ${\Omega_{\Lambda}}$) plane as did these earlier analyses. Because we abandon the assumption of Gaussianity, the statistical power of our results is somewhat smaller. 
If the assumption of Gaussianity is valid, then somewhat stronger constraints (with confidence similar to limits found by R98 and P99) could be obtained but these would not be identical to those of R98 and P99 because we assume a different prior. In agreement with R98 and P99, the ${\Omega_{\rm M}}= 1$ Einstein-de Sitter model is strongly ruled out. The reason for this strong result is simply that the majority of the SNe Ia are too faint for the model. Using only the binomial likelihoods that the observed SNe Ia are too bright/faint for a given model, we find that the 16 R98 high-$z$ SNe Ia rule out the Einstein-de Sitter model at the 99.6% confidence level. A similar analysis rules out ${\Omega_{\Lambda}}=0$ models at 89%. We apply a more complete Bayesian treatment to the 16 R98 SNe Ia, including appropriate priors for ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$, and for models with varying numbers of free parameters. The posterior probability that $\Lambda > 0$ is between 70 and 89%, depending on how we bound the parameter space using prior information (compare Tables 4 and 6). The posterior probability of an open universe is about 47% and the probability of a flat universe is either 51 or 38%. These results differ in detail from those of R98 (and a similar conclusion holds for the results of P99), whose analysis used a uniform prior for ${\Omega_{\rm M}}$ and made no allowance for the zero- or one-parameter models. The constraints on ${\Omega_{\Lambda}}$ are not sensitive to our use of a logarithmic prior for ${\Omega_{\rm M}}$, although the uniform prior does strongly discriminate against low ${\Omega_{\rm M}}$ models and significantly increases the odds of a flat model over an open one (also see Podariu and Ratra 2000). To determine the significance of constraints on ${\Omega_{\rm M}}$ and ${\Omega_{\Lambda}}$ from a larger data set, we apply median statistics to the 42 high-$z$ SNe Ia reported by P99. 
Here we simply count the number of SNe Ia that lie brighter/fainter than predicted by different models and compute the binomial likelihoods of these events. The likelihood of the best-fitting flat-$\Lambda$ model (with ${\Omega_{\rm M}}=0.28$ and ${\Omega_{\Lambda}}=0.72$) is 366 times that of the best-fitting open model (with ${\Omega_{\rm M}}=0$ and ${\Omega_{\Lambda}}=0$). Thus, if a priori we regarded the flat-$\Lambda$ model and the open model as equally likely ($1:1$), then after examining the P99 data we should favor the flat-$\Lambda$ model by odds of $366:1$. (A similar analysis of the R98 data results in odds of $3:1$ in favor of the flat-$\Lambda$ model.) That we can achieve such dramatic constraints from median statistics alone indicates that it might be wise to carefully examine the possible effects of systematic errors. Although we do not mean to suggest that there is evidence for such an effect, we caution that a systematic shift of only $0.14$ mag would reverse these odds. Using similar techniques, we use the SNe Ia to evaluate the posterior probabilities of a time-variable cosmological “constant” compared to a flat-constant-$\Lambda$ model. Using uniform priors for the distribution of the parameters $\alpha$ and ${\Omega_{\rm M}}$, the R98 and P99 data favor the constant $\Lambda$ model over the time-variable $\Lambda$ one by posterior odds of $3.9:1$ and $18:1$, respectively. If we adopt logarithmic priors for the parameters, the R98 data favor the constant $\Lambda$ model by somewhat smaller odds, $1.7:1$, but the P99 data actually favor a time-variable $\Lambda$ by $3.6:1$. A similar analysis shows that the R98 data mildly favor a time-variable $\Lambda$ model over an open universe with $\Lambda=0$, by posterior odds of $3.2:1$ or $1.9:1$ assuming uniform or logarithmic priors, respectively, on $\alpha$ and ${\Omega_{\rm M}}$. We conclude that the data in hand are not good enough to convincingly constrain these parameters.
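The rejection confidence levels quoted earlier (99.6% for the Einstein-de Sitter model and 89% for ${\Omega_{\Lambda}}=0$ models, from the 16 R98 SNe Ia) come from two-sided binomial tail probabilities. A minimal sketch; the 2-versus-14 split used below is illustrative only, chosen because it reproduces the 99.6% figure:

```python
from math import comb

def rejection_cl(n, b):
    """Confidence level at which a model is rejected when only b of n SNe
    fall on one side of its predicted line (two-sided binomial test,
    p = 1/2 per SN under the model)."""
    m = min(b, n - b)
    tail = sum(comb(n, k) for k in range(m + 1)) / 2**n
    return 1.0 - min(1.0, 2.0 * tail)  # clip so an even split gives CL = 0

print(round(rejection_cl(16, 2), 3))  # 2 bright vs 14 faint -> 0.996
```

The clipping guards the central case (e.g. 8 versus 8), where naive doubling of the tail would exceed unity.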
Given the simplicity of median statistics and their freedom from the sometimes-questionable assumption of Gaussianity, we find it surprising that such methods have not been applied more frequently. At the very least, this approach is useful for early analyses of data sets, before one has gathered the evidence to justify methods that require stronger hypotheses. As our examples illustrate, when applied even to larger data sets, median statistics provide a check on more complicated methods. When the results of median statistics seem questionable, analyses that rely on a larger number of assumptions are likely to be even more in doubt. We suggest that one follow the advice of Zeldovich. Take the median!

We acknowledge valuable discussions with I. Wasserman. We thank J. Huchra for his compilation of Hubble constant estimates and R. Marcialis for providing the Pluto-Charon data. We are indebted to the referee, B. Gibson, for detailed comments which helped improve the paper. We also thank C. Dudley, A. Gould, A. Riess and L. Weaver for helpful comments on the manuscript. JRG acknowledges support from NSF grant AST-9900772. MSV acknowledges support from the AAS Small Research Grant program, from John Templeton Foundation grant 938-COS302, and from NSF grant AST-0071201. SP and BR acknowledge support from NSF CAREER grant AST-9875031.

**Table 1.** Median $H_0$ estimates grouped by method (from Huchra’s 1999 compilation).

| Method | $N$ | Median $H_0$ |
|---|---:|---:|
| No Type | 1 | 85 |
| SNe II | 2 | 66.5 |
| Global Summary | 70 | 70 |
| B Tully-Fisher | 21 | 57 |
| CMB fit | 1 | 30 |
| $D_n-\sigma$ | 9 | 75 |
| SB Fluctuations | 8 | 82 |
| Glob. Cluster LF | 12 | 76.5 |
| IR Tully-Fisher | 16 | 85 |
| Irvine meeting | 5 | 67 |
| Grav. Lensing | 26 | 64.5 |
| Novae | 3 | 69 |
| Other | 54 | 70 |
| Plan. Nebulae LF | 3 | 87 |
| I, R Tully-Fisher | 11 | 74 |
| SNe I | 55 | 60 |
| Tully-Fisher | 9 | 73 |
| Sunyaev-Zeldovich | 25 | 55 |

**Table 2.** Sorted estimates interleaved with the probability (per cent) that the true median lies in each interval, and the corresponding number of combinations $C(16,k)$.

| Sorted value | Probability (%) | Combinations |
|---:|---:|---:|
| | 0.00153 | 1 |
| 5.96 | | |
| | 0.0244 | 16 |
| 1.68 | | |
| | 0.183 | 120 |
| 0.656 | | |
| | 0.854 | 560 |
| 0.206 | | |
| | 2.78 | 1,820 |
| 0.0426 | | |
| | 6.67 | 4,368 |
| $-$0.0136 | | |
| | 12.2 | 8,008 |
| $-$0.165 | | |
| | 17.5 | 11,440 |
| $-$0.266 | | |
| | 19.6 | 12,870 |
| $-$0.303 | | |
| | 17.5 | 11,440 |
| $-$0.310 | | |
| | 12.2 | 8,008 |
| $-$0.349 | | |
| | 6.67 | 4,368 |
| $-$0.724 | | |
| | 2.78 | 1,820 |
| $-$1.33 | | |
| | 0.854 | 560 |
| $-$1.60 | | |
| | 0.183 | 120 |
| $-$1.62 | | |
| | 0.0244 | 16 |
| $-$2.53 | | |
| | 0.00153 | 1 |

**Table 3.**

| Model | Prior (%) | Posterior (%) |
|---|---:|---:|
| Steady-state | 33.3 | 67.3 |
| $1 < {\Omega_{\rm M}}< 4$ | 10.5 | 0.642 |
| ${\Omega_{\rm M}}= 1$ | 33.3 | 4.44 |
| $0.2 < {\Omega_{\rm M}}< 1$ | 12.2 | 6.35 |
| $0.05 < {\Omega_{\rm M}}< 0.2$ | 10.5 | 21.3 |

**Table 4.**

| Model | Prior (%) | Posterior (%) |
|---|---:|---:|
| $1 < {\Omega_{\rm M}}< 4$ | 4.52 | 0.34 |
| ${\Omega_{\rm M}}= 1$ | 57.1 | 9.37 |
| $0.2 < {\Omega_{\rm M}}< 1$ | 5.25 | 3.92 |
| $0.05 < {\Omega_{\rm M}}< 0.2$ | 4.52 | 13.33 |
| $0 < {\Omega_{\Lambda}}< 0.95$ | 6.97 | 41.53 |
| $-1 < {\Omega_{\Lambda}}< 0$ | 7.33 | 0.60 |
| Open, ${\Omega_{\Lambda}}> 0$ | 3.34 | 27.48 |
| Closed, ${\Omega_{\Lambda}}> 0$ | 3.81 | 1.15 |
| Open, ${\Omega_{\Lambda}}< 0$ | 5.51 | 2.23 |
| Closed, ${\Omega_{\Lambda}}< 0$ | 1.63 | 0.05 |

**Table 5.**

| Model class | Posterior (%) |
|---|---:|
| $\Lambda > 0$ | 70.16 |
| $\Lambda = 0$ | 26.96 |
| $\Lambda < 0$ | 2.88 |
| Flat | 51.50 |
| Open | 46.96 |
| Closed | 1.54 |

**Table 6.**

| Model | Prior (%) | Posterior (%) |
|---|---:|---:|
| $0.2 < {\Omega_{\rm M}}< 1$ | 17.9 | 2.58 |
| $0.05 < {\Omega_{\rm M}}< 0.2$ | 15.4 | 8.75 |
| $0 < {\Omega_{\Lambda}}< 0.95$ | 33.3 | 38.35 |
| Open, ${\Omega_{\Lambda}}> 0$ | 22.8 | 36.16 |
| Closed, ${\Omega_{\Lambda}}> 0$ | 10.5 | 14.16 |

**Table 7.**

| Model | Prior (%) | Posterior (%) |
|---|---:|---:|
| $1 < {\Omega_{\rm M}}< 4$ | 10.96 | 0.94 |
| ${\Omega_{\rm M}}= 1$ | 57.1 | 15.10 |
| $0.2 < {\Omega_{\rm M}}< 1$ | 2.86 | 2.63 |
| $0.05 < {\Omega_{\rm M}}< 0.2$ | 0.57 | 2.47 |
| $0 < {\Omega_{\Lambda}}< 0.95$ | 6.97 | 59.20 |
| $-1 < {\Omega_{\Lambda}}< 0$ | 7.31 | 0.82 |
| Open, ${\Omega_{\Lambda}}> 0$ | 0.81 | 7.73 |
| Closed, ${\Omega_{\Lambda}}> 0$ | 6.33 | 10.00 |
| Open, ${\Omega_{\Lambda}}< 0$ | 2.63 | 0.87 |
| Closed, ${\Omega_{\Lambda}}< 0$ | 4.53 | 0.20 |

**Table 8.** Numbers of P99 SNe Ia too bright ($B$) and too faint ($F$) for each model, and relative likelihoods normalized to the open model.

| ${\Omega_{\rm M}}$ | ${\Omega_{\Lambda}}$ | $B$ | $F$ | Relative likelihood |
|---:|---:|---:|---:|---:|
| 1 | 0 | 4 | 38 | 0.000076 |
| 0 | 0 | 10 | 32 | 1 |
| 0.28 | 0.72 | 21 | 21 | 366 |
| 0 | 1 | 31 | 11 | 2.9 |

**REFERENCES**

Aguirre, A. 1999, , 525, 583
Albrecht, A., & Skordis, C. 2000, , 84, 2076
Aldering, G., Knop, R., & Nugent, P. 2000, , 119, 2110
Amendola, L. 1999, Phys. Rev. D, 60, 043501
Barreiro, T., Copeland, E.J., & Nunes, N.J. 2000, , 61, 127301
Bartelmann, M., Huss, A., Colberg, J.M., Jenkins, A., & Pearce, F.R. 1998, A&A, 330, 1
Bartolo, N., & Pietroni, M. 2000, , 61, 023518
Battye, R.A., Bucher, M., & Spergel, D. 1999, , submitted
Berger, J.O. 1985, Statistical Decision Theory and Bayesian Analysis (New York: Springer-Verlag), 82
Bertolami, O., & Martins, P.J. 2000, , 61, 064007
Binétruy, P. 2000, in The Early Universe, in press
Brax, P., & Martin, J. 2000, , 61, 103502
Carroll, S.M. 2000, Living Rev. Relativity, submitted
Cheng, Y.-C.N., & Krauss, L.M. 2000, Int. J. Mod. Phys. A, 15, 697
Chiba, T. 1999, Phys. Rev. D, 60, 083508
Chiba, M., & Yoshii, Y. 1999, , 510, 42
Choi, K. 1999, hep-ph/9912218
Cooray, A.R. 1999, , 524, 504
de la Macorra, A. 1999, hep-ph/9910330
de Ritis, R., Marino, A.A., Rubano, C., & Scudellaro, P. 2000, Phys. Rev. D, 62, 043506
de Vaucouleurs, G. 1993, , 415, 10
Dodelson, S., & Knox, L. 2000, , 84, 3523
Drell, P.S., Loredo, T.J., & Wasserman, I. 2000, , 530, 593
Efstathiou, G. 1999, , 310, 842
Falco, E.E., Gorenstein, M.V., & Shapiro, I.I. 1991, , 372, 364
Falco, E.E., Kochanek, C.S., & Muñoz, J.A. 1998, , 494, 47
Feast, M., Pont, F., & Whitelock, P. 1998, , 298, L43
Ferreira, P.G., & Joyce, M. 1998, Phys. Rev. D, 58, 023503
Fischler, W., Ratra, B., & Susskind, L. 1985, Nucl. Phys. B, 259, 730
Flores, R.A., Maller, A.H., & Primack, J.R. 2000, , 535, 555
Fujii, Y. 2000, Phys. Rev. D, 62, 064004
Ganga, K., Ratra, B., Gundersen, J.O., & Sugiyama, N. 1997, , 484, 7
Ganga, K., Ratra, B., Lim, M.A., Sugiyama, N., & Tanaka, S.T.
1998, ApJS, 114, 165
Ganga, K., Ratra, B., & Sugiyama, N. 1996, , 461, L61
Gawiser, E., & Silk, J. 2000, Phys. Rept., 333-334, 245
Gibson, B.K. 2000, Mem. Soc. Astron. Italiana, in press
Gibson, B.K., Maloney, P.R., & Sakai, S. 2000, , 530, L5
Goobar, A., & Perlmutter, S. 1995, , 450, 14
Górski, K.M., Ratra, B., Stompor, R., Sugiyama, N., & Banday, A.J. 1998, ApJS, 114, 1
Górski, K.M., Ratra, B., Sugiyama, N., & Banday, A.J. 1995, , 444, L65
Gott, J.R. 1978, in The Large Scale Structure of the Universe, ed. M.S. Longair and J. Einasto (Dordrecht: Reidel), 63
Gott, J.R. 1982, Nature, 295, 304
Gott, J.R. 1997, in Critical Dialogues in Cosmology, ed. N. Turok (Singapore: World Scientific), 519
Gott, J.R., & Turner, E.L. 1977, , 213, 309
Hill, B.M. 1992, in Bayesian Analysis in Statistics and Econometrics, eds. P.K. Goel & N.S. Iyengar (New York: Springer-Verlag), 43
Höflich, P., Khokhlov, A., Wheeler, J.C., Phillips, M.M., Suntzeff, N.B., & Hamuy, M. 1996, , 472, L81
Höflich, P., Nomoto, K., Umeda, H., & Wheeler, J.C. 2000, , 528, 590
Holden, D.J., & Wands, D. 2000, Phys. Rev. D, 61, 043506
Huchra, J.P. 1999, compilation at http://cfa-www.harvard.edu/$\sim$huchra/, collected as part of the NASA/HST Key Project on the Extragalactic Distance Scale
Jeffreys, H. 1961, Theory of Probability (Oxford: Oxford University Press)
Kamionkowski, M., & Kosowsky, A. 1999, Ann. Rev. Nucl. Part. Sci., 49, 77
Kamionkowski, M., Ratra, B., Spergel, D.N., & Sugiyama, N. 1994, , 434, L1
Kaufmann, R., & Straumann, N. 2000, Ann. Phys., 11, 507
Kennicutt, R.C., Jr., et al. 1998, , 498, 181
Knox, L., & Page, L. 2000, , 85, 1366
Kochanek, C.S. 1991, , 382, 58
Lange, A.E., et al. 2000, , submitted
Le Dour, M., Douspis, M., Bartlett, J.G., & Blanchard, A. 2000, A&A, submitted
Lineweaver, C.H. 1999, in Gravitational Lensing: Recent Progress and Future Goals, ed. T. Brainerd and C. Kochanek, in press
Lucchin, F., & Matarrese, S. 1985, , 32, 1316
Marcialis, R.L. 1997, in Pluto and Charon, ed. S.A. Stern and D.J. Tholen (Tucson: University of Arizona Press)
Masiero, A., & Rosati, F. 1999, in Venice 99 — Neutrino Telescopes, 2, 169
McHardy, I.M., Stewart, G.C., Edge, A.C., Cooke, B., Yamashita, K., & Hatsukade, I. 1990, , 242, 215
Melchiorri, A., et al. 2000, , 536, L63
Meneghetti, M., Bolzonella, M., Bartelmann, M., Moscardini, L., & Tormen, G. 2000, , 314, 338
Mochejska, B.J., Macri, L.M., Sasselov, D.D., & Stanek, K.Z. 2000, , 120, 810
Mould, J.R., et al. 2000, , 529, 786
Özer, M. 1999, , 520, 45
Page, L.A. 1999, astro-ph/9911199
Peebles, P.J.E. 1984, , 284, 439
Peebles, P.J.E., & Ratra, B. 1988, ApJ, 325, L17
Perlmutter, S., et al. 1999a, , 517, 565 (P99)
Perlmutter, S., Turner, M.S., & White, M. 1999b, , 83, 670
Peterson, J.B., et al. 2000, , 532, L83
Podariu, S., & Ratra, B. 2000, , 532, 109
Press, S.J. 1989, Bayesian Statistics (New York: Wiley), 15
Press, W. 1997, in Unsolved Problems in Astrophysics, eds. J.N. Bahcall & J.P. Ostriker (Princeton: Princeton University Press), 49
Ratra, B. 1989, , 40, 3939
Ratra, B. 1991, , 43, 3802
Ratra, B., & Peebles, P.J.E. 1988, , 37, 3406
Ratra, B., & Peebles, P.J.E. 1994, , 432, L5
Ratra, B., & Peebles, P.J.E. 1995, Phys. Rev. D, 52, 1837
Ratra, B., & Quillen, A. 1992, MNRAS, 259, 738
Ratra, B., Stompor, R., Ganga, K., Rocha, G., Sugiyama, N., & Górski, K.M. 1999, ApJ, 517, 549
Riess, A.G., et al. 1998, , 116, 1009 (R98)
Rocha, G. 1999, in Dark Matter in Astrophysics and Particle Physics 1998, ed. H.V. Klapdor-Kleingrothaus & L. Baudis (Bristol: Institute of Physics Publishing), 238
Rocha, G., Stompor, R., Ganga, K., Ratra, B., Platt, S.R., Sugiyama, N., & Górski, K.M. 1999, ApJ, 525, 1
Sahni, V., & Starobinsky, A. 2000, Int. J. Mod. Phys. D, 9, 373
Sandage, A. 1958, , 127, 513
Simonsen, J.T., & Hannestad, S. 1999, A&A, 351, 1
Sorokina, E.I., Blinnikov, S.I., & Bartunov, O.S. 2000, Astron. Lett., 26, 67
Stanek, K.Z., Zaritsky, D., & Harris, J. 1998, , 503, L131
Steinhardt, P.J. 1999, in Proceedings of the Pritzker Symposium on the Status of Inflationary Cosmology, in press
Steinhardt, P.J., Wang, L., & Zlatev, I. 1999, Phys. Rev. D, 59, 123504
Stetson, P. 1998, , 110, 1448
Sulkanen, M.E. 1999, , 522, 59
Tegmark, M., & Zaldarriaga, M. 2000, , in press
Totani, T., & Kobayashi, C. 1999, , 526, L65
Waga, I., & Frieman, J.A. 2000, , 62, 043521
Waga, I., & Miceli, A.P.M.R. 1999, , 59, 103507
Wang, L., Caldwell, R.R., Ostriker, J.P., & Steinhardt, P.J. 2000, ApJ, 530, 17
Wang, Y. 2000, , 536, 531
Wetterich, C. 1995, , 301, 321

**FIGURE CAPTIONS**

Figure 1
Figure 2
Figure 3a
Figure 3b
Figure 4a
Figure 4b
Figure 5
Figure 6

[^1]: The mean after removal of outliers could also prove useful, but that is another story. We note that while the mean is the quantity that minimizes the sum of the squares of the deviations of the measurements, the median is the quantity that minimizes the sum of absolute values of the deviations of the measurements.

[^2]: The $\chi^2$ method may be generalized to take account of correlations, thus dropping hypothesis (1), but this requires knowledge of the covariance matrix. While the assumption of Gaussianity is not required for parameter estimation by simply maximizing the likelihood of $\chi^2$, this assumption is required for computing the confidence region of the parameters.

[^3]: Including or excluding this SN does not qualitatively alter the conclusions (R98; Podariu & Ratra 2000). It is included in our analyses here.

[^4]: Podariu & Ratra (2000) illustrate the effect of such a non-informative prior on the confidence contours derived from $\chi^2$ analyses.

[^5]: While their primary analysis (fit C) makes use of only 38 of these SNe, we use all 42 in our analyses here. As discussed in P99, including or excluding the 4 “suspicious” SNe does not dramatically alter the conclusion.
[^6]: Note that the constraints on the flat-constant-$\Lambda$ model from gravitational lensing of quasars (not radio sources) might be less restrictive than previously thought (see, e.g., Chiba & Yoshii 1999; Cheng & Krauss 2000), and semi-analytical analyses of large-arc statistics lead to a different conclusion (Cooray 1999; Kaufmann & Straumann 2000).

[^7]: Other potentials have also been considered, e.g., an exponential potential (see, e.g., Lucchin & Matarrese 1985; Ratra & Peebles 1988; Ratra 1989; Wetterich 1995; Ferreira & Joyce 1998), but such models are inconsistent with observational data. A potential $\propto \phi^{-\alpha}$ plays a role in some high energy particle physics models (see, e.g., Masiero & Rosati 1999; Albrecht & Skordis 2000; de la Macorra 1999; Brax & Martin 2000; Choi 1999). Discussions of these and related models are given by Steinhardt, Wang, & Zlatev (1999), Chiba (1999), Amendola (1999), de Ritis et al. (2000), Fujii (2000), Holden & Wands (2000), Bartolo & Pietroni (2000), and Barreiro, Copeland, & Nunes (2000). Özer (1999), Waga & Miceli (1999), Battye, Bucher, & Spergel (1999), and Bertolami & Martins (2000) discuss other possibilities.
--- abstract: 'We investigate the origin of the evolution of the population-averaged size of quenched galaxies (QGs) through a spectroscopic analysis of their stellar ages. The two most favoured scenarios for this evolution are either the size growth of individual galaxies through a sequence of dry minor merger events, or the addition of larger, newly quenched galaxies to the pre-existing population (i.e., a progenitor bias effect). We use the 20k zCOSMOS-bright spectroscopic survey to select *bona fide* quiescent galaxies at $0.2<z<0.8$. We stack their spectra in bins of redshift, stellar mass and size to compute stellar population parameters in these bins through fits to the rest-frame optical spectra and through Lick spectral indices. We confirm a change of behaviour in the size-age relation below and above the $\sim10^{11} \mathrm{M}_\odot$ stellar mass scale: In our $10.5 < \log \mathrm{M_*/M_\odot} < 11$ mass bin, over the entire redshift window, the stellar populations of the largest galaxies are systematically younger than those of the smaller counterparts, pointing at progenitor bias as the main driver of the observed average size evolution at sub-10$^{11} \mathrm{M}_\odot$ masses. In contrast, at higher masses, there is no clear trend in age as a function of galaxy size, supporting a substantial role of dry mergers in increasing the sizes of these most massive QGs with cosmic time. Within the errors, the \[$\alpha$/Fe\] abundance ratios of QGs are $(i)$ above-solar over the entire redshift range of our analysis, hinting at universally short timescales for the buildup of the stellar populations of QGs, and $(ii)$ similar at all masses and sizes, suggesting similar (short) timescales for the whole QG population and strengthening the role of mergers in the buildup of the most massive QGs in the Universe.' author: - 'Martina Fagioli, C. Marcella Carollo, Alvio Renzini, Simon J. 
Lilly, Masato Onodera and Sandro Tacchella' bibliography: - 'master.bib' title: 'Minor Mergers or Progenitor Bias? The Stellar Ages of Small and Large Quenched Early-Type Galaxies' --- Introduction ============ The observed evolution with cosmic time in the population-averaged size of Quenched Galaxies (QGs, here often also referred to as ‘passive’ or ‘quiescent’ galaxies, as opposed to ‘star-forming’ galaxies) at fixed stellar mass has received a lot of attention in the past decade (e.g., @daddi2005 [@trujillo2007; @cimatti2008; @vandokkum2008; @cassata2011; @carollo2013], hereafter C13, @poggianti2013 [@vanderwel2014]). The median half-light radius of QGs is about a factor $\sim$3–5 larger in the local universe than at redshift $z\sim 2$ [@newman2012]. The size growth scales as roughly $(1+z)^{-1}$, and it is similar to the rate of growth of the sizes of dark matter halos, but is somewhat steeper than the latter. This has sparked an intense debate concerning the physical mechanism behind this size evolution. There are two main scenarios to which the evolution of the size-mass relation has been ascribed: the growth of individual QGs through a series of dry minor merger events, or the continuous addition of larger, recently quenched, galaxies, at later epochs. This effect is an example of so-called ‘progenitor bias’ in the sense that the population changes because of a change in membership rather than through changes in individual members (e.g., @franx2008 [@newman2012]; C13; @poggianti2013 [@belli2015]). In the individual growth scenario, the compact cores of QGs would remain constant in mass within a few kiloparsecs, but would accrete extended stellar envelopes around them (@cimatti2008 [@hopkins2009; @naab2009; @cappellari2013a]). Contrary to major mergers, minor gas-poor (dubbed ‘dry’) mergers could have a key role: for a given amount of added mass, mergers with higher mass-ratios (i.e. 
minor mergers) result in a larger size increase (@villumsen1983 [@hilz2012]; see also @taylor2010 [@feldmann2010; @szomoru2011; @mclure2013; @vanderwel2014]). This scheme would require $\sim 10$ dry mergers with $\sim 1:10$ mass ratio to account for the observed growth in size [@naab2009; @vandesande2013]. The mergers are required to be dry since ‘wet’ mergers, involving gas-rich galaxies, are expected to lead to central star formation and therefore to a reduction of the half light radius of the primary galaxy. At a mass ratio of 1:10, the companion of a $10^{11}\mathrm{M}_\odot$ galaxy is a $10^{10}\mathrm{M}_\odot$ galaxy. Galaxies of this mass are generally gas-rich systems (e.g., @santini2014 [@genzel2015]). Therefore, the sequence and number of the required dry mergers, without a substantial contribution of wet mergers, is quite problematic, an aspect which has been largely ignored so far. Regardless of whether the merger scenario can explain the observed effect, the possible effects of progenitor bias must anyway be taken into consideration. An implicit assumption of the individual size growth view is that galaxies which are being quenched at different epochs have similar properties. If they do not, this could lead to a progenitor bias effect. In the context of the evolution of the average size-mass relation, the addition of newly quenched galaxies to a pre-existing population of QGs could lead to an observed growth of the average size of the population even if individual early-type galaxies do not grow at all. This is particularly important in light of the observed increase by about one order of magnitude of the comoving number density of massive (i.e., $\gtrsim10^{11}\mathrm{M}_\odot$) QGs from $z=2$ to the present epoch (e.g. @ilbert2010 [@cassata2013; @muzzin2013]). Tracing the evolution of the number density of QGs of different sizes offers clues towards discriminating between the two scenarios.
Different studies agree well on the evolution of the number densities of the smallest and densest QGs at stellar masses above $\sim10^{11} \mathrm{M}_\odot$, where a steady decrease is observed with cosmic time. At lower masses, however, different authors report different results. For example, C13 did not find any change in the number density of their ‘compact’ galaxies at masses $10.5 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11$; they report instead a substantial increase in the number density of large QGs. The constancy of the compact population and the increase in the large population led those authors to advance the progenitor bias interpretation. At similar $10.5 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11$ masses, however, [@vanderwel2014] report a strong decrease in the number density for compact QGs since $z=1.5$, and therefore interpret the observed disappearance of these objects at the lower redshifts as an indication of a growth in size of individual QGs. In comparing results from different studies, it is however important to note that the adopted definition of stellar mass is an important factor when discussing the evolution of the size-mass relation. In C13, and also in this paper, we will define the stellar masses to be the integral of the star formation rate (SFR). These are about 0.2 dex larger than the commonly used definition which subtracts the mass returned to the interstellar medium, i.e. the mass of surviving stars plus compact stellar remnants. The former has the feature of remaining constant after the galaxy ceases star formation, whereas the latter continually decreases. Thus, when comparing the properties of quenched galaxies in a given mass bin across cosmic time, one should clearly use the former.
This effect explains part of the discrepancies found in the different number density analyses: effectively, high-redshift galaxies are given a spuriously high mass, which leads to them appearing to be too small and to have a higher number density at high redshift. Another factor that leads to different estimates for the evolution of the number densities of small QGs is the definition of the bins in which the densities are computed, in particular whether a single size threshold is used to compare number densities at different redshifts, or whether the bins are defined along the size-mass relation at each given redshift (which, due to its evolution, implies a comparison between populations of different sizes). Number densities alone however are not conclusive. C13 and @damjanov2015a agree that, at masses below the $\sim10^{11} \mathrm{M}_\odot$ scale, the number densities of compact QGs remain constant since at least $z \sim 1$; these authors however reached different conclusions on the origin of this constancy. @damjanov2015a proposed that the compact QG population is continuously replenished with younger members, so as to compensate for the shift towards larger sizes of individual galaxies due to mergers. In contrast, C13 argued that the compact population remains stable since $z\sim1$ and the newly-accreted members of the population have increasingly larger sizes at steadily lower redshifts. These two interpretations can be easily tested through the average ages of the populations involved. If the increase of the median size is due to the addition of newly-quenched galaxies that are progressively larger towards lower redshifts, then, at any epoch, the stellar populations of larger QGs should be *younger* than those of smaller QGs of similar mass.
On the other hand, if individual QGs grow their sizes through mergers and the number density of compact QGs remains more or less constant due to the continuous production of compact QGs, then, at any epoch, *smaller* QGs should be younger on average than their larger relatives of similar mass. Therefore, the stellar ages of the galaxies offer a powerful discriminant between these two scenarios (see e.g. also @onodera2012 [@belli2014b; @keating2015; @yano2016]). C13 studied the colors of compact and large $<10^{11} \mathrm{M}_\odot$ QGs at different redshifts, and found that, at any epoch, larger QGs appear to be bluer than their smaller counterparts; it is this result that led those authors to conclude that, at these masses, the stellar populations of larger QGs are younger than those of smaller QGs and thus that the evolution in size of the whole population is to a large extent ascribable to the addition of recently quenched, larger QGs. Galaxies quenched at later epochs are indeed expected to have larger sizes than the ones quenched earlier as (progenitor) star-forming galaxies also experience an evolution in their average size with cosmic time (e.g., @newman2012). Stellar ages determined on the basis of a single rest-frame optical color (as done in C13), however, heavily suffer from the well-known degeneracy between age, metallicity and also, possibly most problematically, dust effects [@worthey1994]. We therefore push the analysis of the stellar ages of small and large QGs below and above the evidently important mass threshold of $10^{11} \mathrm{M}_\odot$ further here, using more robust spectroscopic measurements of stellar population properties.
Our primary goal is to test whether and to what extent progenitor bias is driving the increase of the average size of passive galaxies as a function of stellar mass; specifically, we use two mass bins with boundaries $10.5 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11$ and $11 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11.5$. We also use the spectroscopic diagnostics to study the ratio of different elements in the attempt to constrain the timescales of buildup of the stellar populations of quenched galaxies of different masses and sizes. Even spectroscopically, however, residual degeneracies between the effects of age and metallicity continue to afflict the derived galaxy ages, which are therefore not straightforward to obtain. In the last few years, a number of ‘full spectral fitting’ codes (e.g., [@ocvirk2006b; @ocvirk2006a; @koleva2009; @cappellari2004], STECKMAP, ULySS and pPXF, respectively) have been developed in order to address this issue. In the full spectral fitting technique, a set of templates is used to fit the overall shape of the spectrum. The most recent full spectral fitting codes do not fit the overall shape of the continuum itself, thereby avoiding common problems such as flux calibration and extinction; instead, a polynomial function is used to model the shape of the continuum. Full spectrum fitting codes are good at handling the impact of the age-metallicity degeneracy as they maximize the information used from the whole observed spectrum (@koleva2008 [@sanchezblazquez2011; @beasley2015; @ruizlara2015]). We therefore adopt this methodology to derive our fiducial stellar population ages in this paper. Besides the full spectral fitting analysis, however, we have also used the Lick line-strength indices to get independent estimates of ages and metallicities.
The Lick system of spectral line indices is a commonly used method to determine ages and metallicities of stellar populations (e.g., @burstein1984 [@gonzalez1993; @carollo1994; @worthey1994; @worthey1997; @trager1998; @trager2000; @trager2005; @korn2005; @poggianti2001; @thomas2003a; @thomas2003b; @schiavon2007; @thomas2011; @onodera2012; @onodera2014]). The system consists of a set of 25 optical absorption line indices, spanning a wavelength range from $\sim$ 4080 to $\sim$ 6400 Å. The absorption features are particularly useful because they are largely insensitive to dust attenuation [@macarthur2005]. Even so, age-dating based on the Lick indices is not free from degeneracy effects. Its main pitfall is that most indices are sensitive to all the basic population parameters, namely age, metallicity and the ratio of $\alpha$-elements to iron. Propagation of the errors can generate spurious correlations or anticorrelations [@kuntschner2001; @thomas2005; @renzini2006]. For example, an underestimate in the strength of a Balmer line (mainly sensitive to age), due e.g. to partial filling in by an emission line, may lead to an overestimate of the age; with an overestimated age, the procedure is then forced to underestimate the metallicity in order to match the strength of the metal lines. In this way, a spurious age-metallicity anti-correlation can be generated. A strength of our analysis is thus that it attempts to mitigate the intrinsic degeneracies by using and comparing for cross-validation both methodologies, i.e., the full spectral fitting approach and the Lick indices approach. We include in our study a brief examination of the $\alpha$-element to iron abundance ratios (i.e., \[$\alpha$/Fe\]) in QGs of different masses and sizes.
The \[$\alpha$/Fe\] ratio is a well-known diagnostic to constrain formation timescales (@matteucci1986 [@pagel1995]), since $\alpha$-elements such as O, Ne, Mg, Si, S, Ar, Ca, and Ti (i.e., nuclei that are built up with $\alpha$-particles) are delivered mainly by core collapse (CC) supernova explosions of massive stars and thus on much shorter timescales than elements such as Fe and Cr, which come predominantly from the delayed explosion of Type Ia supernovae (e.g., @nomoto1984 [@woosley1995; @thielemann1996]). Enhanced values of the \[$\alpha$/Fe\] ratio indicate a short formation timescale.\ The paper is organized as follows. In Section \[dataset\] we describe the data set, the zCOSMOS-bright 20k catalog, and its features. Section \[sampselmeasurements\] summarizes the basic measurements and presents the spectroscopic sample selection in detail. Section \[analysis\] presents the steps we took in the course of our analysis. Section \[sizemass\] describes the binning in mass, size and redshift, and Section \[stacking\] describes the stacking procedure that was used to obtain average spectra as a function of redshift, mass and size. The fitting used to correct for the emission lines contribution is described in Section \[emcorrection\]. Section \[fullspectralfitting\] describes how we derived ages with full spectral fitting using `pPXF` and Section \[lickmeasurements\] describes how we measured the Lick strengths and derived the stellar population parameters from them. In Section \[results\] we present our results, followed by a discussion in Section \[discussion\]. In Section \[conclusion\] we summarize our paper and present the conclusions.\ Throughout this paper we adopt a $\Lambda$-dominated Cold Dark Matter ($\Lambda$CDM) cosmology, with $\Omega_m\,=\,0.3$, $\Omega_\Lambda\,=\,0.7$ and $H_0\,=\,70$ km$\,$s$^{-1}\,$Mpc$^{-1}$. All magnitudes are given in the AB system. We use ‘dex’ to refer to the anti-logarithm, so that 0.3 dex represents a factor of 2.
Data Set {#dataset} ======== We use the zCOSMOS-bright 20k sample, to which however we apply several cuts in order to limit the redshift range to the $0.2 \le z \le 0.8$ interval, the mass range to the $10.5 < \log\mathrm{M}_*/\mathrm{M}_\odot < 11.5$ interval, and to ensure that the selected galaxies are *bona fide* passive systems. In Figure \[fig: sampsel\] we show a schematic view of the selection criteria that we have applied to the original sample, and in the next sections we describe in detail the steps which lead to the final selection. The 20k zCOSMOS-bright catalog ------------------------------ The spectra we employed come from the full zCOSMOS-bright 20k catalog [@lilly2007; @lilly2009]. Here we briefly summarize the data. zCOSMOS-bright consists of about 20,000 galaxy spectra selected to have $I_{\rm AB}<$ 22.5 across the full 1.7 deg$^2$ in the COSMOS field [@scoville2007]. The zCOSMOS project [@lilly2007] is a large redshift survey of galaxies undertaken on the ESO VLT. The bright part uses the VIMOS MR grating with a resolution of $R\sim600$ and a pixel size of $\approx2.553$ Å. The VIMOS wavelength coverage spans from 5500 to 9700 Å. The greatest advantage of the zCOSMOS spectroscopic survey is to combine high quality spectra with a compilation of multi-wavelength imaging of the COSMOS survey data set, including *HST*/ACS data [@Koekemoer2007]. Therefore we have available high quality spectroscopic redshifts, photometrically derived quantities and high resolution images. The typical redshift uncertainty in zCOSMOS-bright is $\pm$ 110 km s$^{-1}$ [@lilly2007]. A confidence class parameter is introduced to estimate the reliability of the redshift assignment. Also, objects flagged with confidence class 4 usually show high quality spectra. 
The spectroscopic redshifts are compared with photometric redshifts derived from the COSMOS multi-band photometric data and a decimal number is used to flag the agreement or otherwise between the photometric and spectroscopic redshifts. For a complete description of confidence classes the reader is referred to [@lilly2007; @lilly2009]. The S-COSMOS MIPS 24 $\mu$m catalog ----------------------------------- The S-COSMOS survey [@sanders2007] is a deep infrared imaging survey, which comprises IRAC 3.6, 4.5, 5.8 and 8.0 $\mu$m and MIPS 24, 70 and 160 $\mu$m observations including the entire 2 deg$^2$ of the COSMOS-ACS field. It has been carried out with the Spitzer Space Telescope as part of the Spitzer Cycle 2 and 3 Legacy Programs. In Cycle 2, the COSMOS field has been mapped at 24 $\mu$m. The observations performed in Cycle 3 mapped the entire COSMOS area reaching deeper flux limits, down to a flux density limit S$_{24\,\mu m}\approx$ 0.08 mJy. Measurements and Sample Selection {#sampselmeasurements} ================================= Redshifts --------- We selected objects within a restricted redshift range of $0.2<z<0.8$ so as to be complete in mass down to $\log\, \mathrm{M_*/M}_\odot =10.5$ at $z = 0.8$ [@pozzetti2010]. In order to achieve a high signal-to-noise ratio, adequate for a Lick-based stellar population analysis, we further restricted the sample to confidence Classes 3 and 4 (including secondary objects in Classes 23 and 24, but excluding objects with broad emission lines, Classes 13 and 14). Class 3 and 4 spectra have a very secure redshift assignment (reliability $>99.8\%$). Almost all the galaxies in our sample (98.5%) have been flagged with the .5 decimal number, indicating an agreement between spectroscopic and photometric redshift to within $0.08(1+z)$ (a subsequent visual inspection of all of the final set of objects confirmed the correctness of the assigned spectroscopic redshifts).
After this first selection, we end up with a sample of 9,208 objects. Stellar Masses {#stellarmasses} -------------- Our sample is matched with that from C13 to get structural parameters, whose derivation has been fully described in that paper. The stellar masses are derived with the `ZEBRA+` [@oesch2010] code from synthetic SED fitting to 11 photometric broad bands from 3832 Å (u\*, CFHT) to 4.5 $\mu$m (*Spitzer*/IRAC channel 2). A set of star formation history models with exponentially declining SFRs, with metallicities ranging from 0.05 to 2 Z$_\odot$, decay timescales from $\tau\sim$ 0.05 to 9 Gyr, and ages from 0.01 to 12 Gyr, constitutes the SED library. This was constructed with the [@bruzual2003] (BC03) stellar population synthesis code with a Chabrier initial mass function [@chabrier2003]. We use dust reddening from [@calzetti2000] with $E(B-V)$ as a free parameter. As discussed in the Introduction, we stress that our stellar masses are defined as the time-integral of the SFRs, which are on average around 0.2 dex higher than stellar masses which exclude the mass that is subsequently returned to the interstellar medium. Sizes ----- We adopt the half-light radius ($\mathrm{r}_{1/2}$) as an estimate of the size of our galaxies. The procedure is fully described in C13. The sizes of our galaxies have been measured with the software `ZEST+` (*Zurich Estimator of Structural Types Plus*), an extended version of `ZEST` [@scarlata2007]. The main advantage is that it measures the half-light radii within elliptical apertures, instead of the circular apertures of `SExtractor` [@bertin1996]. `ZEST+` requires as input the total apparent flux of each galaxy, which is taken from the 2.5 R$_{\mathrm{Kron}}$ [@kron1980] value from `SExtractor`. Using a file with the positions of close objects, `ZEST+` first replaces the segmentation maps of companion galaxies with random sky values, and then estimates the local sky background of the galaxy.
Finally, the code outputs the semi-major axis of the corresponding elliptical aperture that encloses half of the output total flux. In Figure \[fig: measurements\] we show the distributions of stellar masses and sizes before and after the mass cut. Spectroscopic Sample Selection ------------------------------ ![image](fig_1.pdf){width="160mm"} ![Equivalent widths of the \[OII\] line (EW\[OII\]) measured on 1000 stacked spectra, each constructed from a random subsample of 100 of our visually inspected sample of galaxies. The definitions of the continuum and line bandpasses are from [@balogh1999]. This tests the success in defining a set of galaxies that are free of emission lines. The typical stack shows a negative equivalent width, and the maximum ever seen is +1.5 Å, well below the 5 Å limit which is commonly used in the literature to separate star-forming and passive galaxies (e.g., [@mignoli2009; @moresco2010]).[]{data-label="fig: equivwidth"}](fig_2.pdf){width="88mm"} To proceed towards our goal of studying the properties of QGs, we give special attention to the separation between star-forming and quiescent galaxies. A number of studies have used photometric information to achieve this separation; a UVJ color-color diagram has been found to be particularly effective in this direction (@wuyts2007 [@patel2009; @williams2009; @whitaker2011; @muzzin2013; @moresco2013]). A cleaner separation can however clearly be achieved using spectroscopic diagnostics (see e.g., @onodera2014). Our selection of quiescent galaxies was done by choosing objects which show no, or only very weak, emission lines, with the following procedure: - Identify the expected wavelength of H$\alpha$, H$\beta$ and \[OII\] 3727 Å.
- Define a continuum region on either side of this location to be between 1 and 5 times the FWHM $\Delta\lambda$ of the instrumental resolution ($\mathrm{R}=\lambda/\Delta\lambda=600$) and compute the mean ($\langle f_{\mathrm{cont}}\rangle$) and the standard deviation ($\sigma$) of the continuum per pixel. We then consider a line to be detected in emission if the peak of the line exceeds three times the noise per pixel, $$\max\,(f_{\mathrm{line}}-\langle f_{\mathrm{cont}}\rangle)\, >\,3\sigma.$$ This is straightforwardly applied for H$\alpha$ and \[OII\], and the object is excluded if either line is detected. The case of $\mathrm{H}\beta$ is complicated by stellar absorption lines. If neither the \[OII\] nor the H$\alpha$ line is in the observed wavelength range, we use an empirical calibration of the ratio of the peaks of \[OII\] and $\mathrm{H}\beta$, \[OII\]$\simeq 2.8\mathrm{H}\beta-2$, which yields an equivalent threshold of $\gtrsim 1.8\sigma$, using the standard deviation of the continuum from the blue side of the feature only, due to the proximity of \[OIII\] 5007 Å. Our wavelength range included $\mathrm{H}\alpha$ for 1042 galaxies and \[OII\] 3727 Å for 3826 objects. For the rest of the galaxies in the sample, we use H$\beta$ as the criterion to exclude emission line objects. As a result of this selection, we find a total of 2,094 galaxies showing no detected emission lines.\ \ To further check against the presence of star-forming objects, we cross-match these 2,094 galaxies with the S-COSMOS MIPS 24 $\mu$m Photometry Catalog (October 2008) and with C-COSMOS (Chandra COSMOS) [@elvis2009], identifying sources within $2\arcsec$ [see also @caputi2008]. We find 257 galaxies having a possible detection in the MIPS catalog and 28 in the X-ray. We discarded 38 objects which have a MIPS detection and for which subsequent visual inspection of the spectra revealed emission lines that had not been found by the automatic algorithm described above.
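The emission-line detection criterion described above (continuum sidebands between 1 and 5 FWHM from the expected line position; a detection if the line peak exceeds the mean continuum by more than $3\sigma$) can be sketched as follows. This is an illustrative Python/NumPy reimplementation with our own function name and windowing choices, not the survey's actual selection code:

```python
import numpy as np

def line_detected(wave, flux, line_center, fwhm, n_sigma=3.0):
    """Toy version of the emission-line test in the text: a line counts as
    detected if its peak exceeds the mean continuum by more than n_sigma
    times the per-pixel continuum scatter. Wavelengths are in Angstrom."""
    offset = np.abs(wave - line_center)
    # Continuum sidebands: between 1 and 5 FWHM away from the line center.
    cont = (offset > fwhm) & (offset < 5.0 * fwhm)
    f_cont = flux[cont].mean()
    sigma = flux[cont].std()
    # Line window: within one FWHM of the expected line center.
    peak = flux[offset <= fwhm].max()
    return bool(peak - f_cont > n_sigma * sigma)
```

For example, a strong synthetic [OII] line on a noisy flat continuum is flagged, while a featureless continuum is not.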
The emission lines were hidden at the edges of the wavelength range, where the signal-to-noise ratio is lower and fringing may alter the shape of the continuum. None of the objects with X-ray detection show emission features in the zCOSMOS spectra and therefore we keep those galaxies in the quiescent sample. As the selection of a purely quiescent sample is crucial for our analysis, because star-forming objects are expected to be larger than quiescent ones, we visually inspected the spectra of all the galaxies in the sample. We further discard 51 galaxies for which visible emission lines had not been detected by the automatic code. At the end of this selection, 2,005 objects have been defined as purely quiescent galaxies. We stress that this selection has been made purely on the basis of the spectra (and the MIPS and X-ray catalogues) with no reference to the images, sizes, or morphologies of the galaxies. As a final check that we have succeeded in excluding all galaxies with significant emission lines, we stack the spectra of a subsample consisting of a randomly chosen $5\%$ of the final set of 2,005 galaxies and measure the Equivalent Width of \[OII\] $(\mathrm{EW([OII])}$), with continuum and line passbands defined as in @balogh1999. We repeat this procedure 1000 times. Figure \[fig: equivwidth\] shows the distribution of the EW(\[OII\]) of these stacked spectra. The EW(\[OII\]) of the stacked spectra is in every case far below the 5 Å limit which is commonly used to separate star-forming from quiescent galaxies (e.g., [@mignoli2009; @moresco2010], see also [@balogh1999] for the typical values of EW(\[OII\]) they found for quiescent galaxies). In fact, the maximum EW for the \[OII\] emission line in these 1000 trials was $<2$ Å, which corresponds roughly to a $\log\mathrm{(sSFR/Gyr}^{-1}) < -2$ [see also Figure 5 in @moresco2013], and the mean is below zero.
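The stack-and-measure check above can be sketched as follows. The [OII] bandpass limits below are quoted from memory in the style of @balogh1999 and should be verified against that paper; the function names, grid, and sign convention (positive EW for emission, as used in the text) are our own illustrative choices:

```python
import numpy as np

# Illustrative [OII] bandpasses (Angstrom), Balogh et al. (1999)-style:
# blue continuum, line, red continuum. Verify against the original paper.
BLUE, LINE, RED = (3653.0, 3713.0), (3713.0, 3741.0), (3741.0, 3791.0)

def ew_oii(wave, flux):
    """EW([OII]) in Angstrom, positive for emission. The continuum level
    is the mean flux in the two flanking bandpasses."""
    def band_mean(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return flux[m].mean()
    f_cont = 0.5 * (band_mean(*BLUE) + band_mean(*RED))
    m = (wave >= LINE[0]) & (wave <= LINE[1])
    dlam = np.median(np.diff(wave))
    return float(np.sum(flux[m] / f_cont - 1.0) * dlam)

def stacked_ew_distribution(spectra, wave, n_trials=1000, n_per_stack=100,
                            seed=1):
    """Monte-Carlo check as in the text: stack random subsamples and
    collect the EW([OII]) values of the stacks."""
    rng = np.random.default_rng(seed)
    ews = []
    for _ in range(n_trials):
        idx = rng.choice(len(spectra), size=n_per_stack, replace=False)
        stack = np.mean([spectra[i] for i in idx], axis=0)
        ews.append(ew_oii(wave, stack))
    return np.array(ews)
```

A purely quiescent sample should yield an EW([OII]) distribution clustered near (or below) zero, as seen in Figure [fig: equivwidth].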
In Figure \[fig: sampsel\], where we show the steps of our sample selection, we also give the number of galaxies remaining in the sample at each of the steps. The final step is to apply a cut in stellar mass at $10.5<$ log M/M$_\odot <11.5$, which should be complete in the redshift range $0.2<z<0.8$, applying the procedure described in @pozzetti2010. Figure \[fig: measurements\] shows the distribution of the stellar masses of the passive sample, before the cut, and of the redshifts and sizes before and after the mass cut is applied. Analysis ======== Sample Binning {#sizemass} -------------- In order to study the average properties of galaxies in our sample, we split them into bins of redshift, stellar mass and size. We use three equal redshift bins within the interval $0.2 < z < 0.8$, with $\Delta z=0.2$. We cut the redshift interval at $z =0.8$ in order to be complete in mass down to $\log\mathrm{M_*/M_\odot} =10.5$. We then divide the mass range into two bins of width 0.5 dex, as in C13, and further divide the sample into three size bins, following two different procedures. Quiescent galaxies are known to follow a tight relation between their size (r$_{1/2}$) and stellar mass (M$_*$), which evolves with redshift (@daddi2005 [@williams2010; @newman2012; @patel2013; @mosleh2013; @vanderwel2014; @tacchella2015a]). Our final goal is to compare the average stellar population parameters for a sample of ‘small’ and ‘large’ QGs. Therefore, we fit a size-mass relation for the different redshift bins, finding minimal or no variation in the slope, as expected (e.g., @vanderwel2014). We therefore fix the slope at the value obtained in the central redshift bin, $0.4 < z< 0.6$, as shown in the central panel in the lefthand figure in Figure \[fig: number\]. The resulting values of the intercepts at different redshifts are given in Table \[fig: intercept\]. As expected, we find a decrease in size (at given mass) with increasing redshift.
Our first approach to binning in size constructs bins relative to this evolving mean relation. This is shown in the lefthand plot in Figure \[fig: number\]. The two dashed lines, with the same slope as our fitted size-mass relation, split the sample into $35:30:35 \%$ of the galaxies (across the whole redshift range). We define the galaxies lying in these areas as ‘small’, ‘intermediate size’ and ‘large’ galaxies. The bold numbers in Figure \[fig: number\] show how many passive galaxies we stacked as a function of redshift, size and mass. We define this binning as ‘size-mass cut’.

  ----------------- ---------------------
  Redshift          Slope ($\alpha$)
  $0.2 < z < 0.8$   0.63
  Redshift          Intercept ($\beta$)
  $0.2 < z < 0.4$   5.76
  $0.4 < z < 0.6$   6.06
  $0.6 < z < 0.8$   6.43
  ----------------- ---------------------

  : Intercept and slope in the size–mass relation as a function of redshift ($\log\,\mathrm{r}_{1/2} = \alpha\log\,\mathrm{M_*}-\beta$) \[fig: intercept\]

This cut is useful to compare the average stellar ages among different sizes in the same redshift bin. However, the comparison of stellar ages between different redshift intervals becomes difficult, as different populations of galaxies may enter into the definition of small and large as the mean relation evolves. To make such a comparison, we also apply a different binning in size, which we define as ‘horizontal cut’ (as shown in the righthand panels in Figure \[fig: number\]). In this case, we define, for each mass bin, three bins in size that are the same at all three redshifts. Specifically, for the mass bin $10.5<\log \mathrm{M}_*/\mathrm{M}_\odot<11$, we define as ‘small’ the galaxies having $r_{1/2} < 2$ kpc, ‘intermediate’ those with $2<r_{1/2}<5$ kpc, and ‘large’ those with $r_{1/2}>5$ kpc. For more massive galaxies ($11<\log \mathrm{M}_*/\mathrm{M}_\odot<11.5$), small galaxies have $\mathrm{r}_{1/2}<4.5$ kpc and large galaxies have $\mathrm{r}_{1/2}>7.5$ kpc.
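As an illustration, the ‘size-mass cut’ amounts to computing each galaxy's vertical offset from the redshift-dependent mean relation (slope and intercepts from Table [fig: intercept]) and splitting the sample at the 35th and 65th percentiles of those offsets. This is a schematic sketch with invented function names, not the analysis code of the paper:

```python
import numpy as np

# Slope and intercepts from Table [fig: intercept]:
# log r_1/2 = ALPHA * log M* - beta(z).
ALPHA = 0.63

def classify_sizes(log_mass, log_r, z):
    """Label galaxies 'small'/'intermediate'/'large' relative to the
    evolving size-mass relation, splitting the sample 35:30:35 by the
    offset from the mean relation at each galaxy's redshift."""
    beta = np.select(
        [(z >= 0.2) & (z < 0.4),
         (z >= 0.4) & (z < 0.6),
         (z >= 0.6) & (z <= 0.8)],
        [5.76, 6.06, 6.43])
    offset = log_r - (ALPHA * log_mass - beta)
    lo, hi = np.percentile(offset, [35, 65])
    return np.where(offset < lo, "small",
                    np.where(offset > hi, "large", "intermediate"))
```

Applied to a synthetic sample scattered around the $0.4<z<0.6$ relation, the split recovers the intended 35:30:35 proportions.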
This allows us to track the evolution of the stellar ages of galaxies with the same size and the same stellar masses (i.e., integrated SFRs, see Section \[stellarmasses\]) through different cosmic epochs. Stacking -------- ![image](fig_5.pdf){width="158mm"} We computed the average stacked galaxy and noise spectra for the galaxies in each bin of redshift, stellar mass and size. First, we de-redden the individual spectra for Galactic extinction following the extinction curve for diffuse gas from [@odonnell1994] with $R_{\rm V} = 3.1$, and using the Galactic $E(B - V)$ values from the maps of [@schlegel1998]. We find a typical correction factor of $f_{\mathrm{obs}}\sim0.96f_{\mathrm{dered}}$. We do not correct for any internal dust extinction in the galaxies, since they are expected to be passive systems with negligible dust. We then de-redshift the spectra to the rest frame and normalize them by the mean flux at rest frame $4,100< \lambda < 4,700$ Å. The spectra are then interpolated onto a 1 Å linearly spaced wavelength grid. The associated noise spectra, which we use as weights during the stacking procedure, are normalized by the same factor as the object spectra and interpolated onto the same rest-frame grid, with the uncertainties combined in quadrature. The stacked spectra are averages of the individual spectra, weighted by the signal-to-noise ratio of each spectrum at each wavelength. In Figure \[fig: allspectra\] we show the stacked spectra of all bins of our analysis, i.e., all redshifts, mass bins, and size bins. In each panel we show two spectra, respectively for the size-mass binning and for the horizontal-cut binning. To highlight the quality of the stacked spectra, we replot in Figure \[fig: stacked\] the spectra of only the small galaxies, this time however with an expanded horizontal axis.
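A minimal sketch of these stacking steps (de-redshift, normalize to the rest-frame 4,100–4,700 Å mean flux, resample onto a 1 Å grid, inverse-variance weighted average) might look like the following; the grid limits, function signature, and weighting details are our own assumptions, and Galactic de-reddening is taken to have been applied beforehand:

```python
import numpy as np

def stack_spectra(waves, fluxes, noises, redshifts, grid=None):
    """Weighted stack of observed-frame spectra, following the steps
    described in the text (illustrative sketch, not the paper's code)."""
    if grid is None:
        grid = np.arange(3500.0, 5500.0, 1.0)  # rest-frame grid, 1 A steps
    num = np.zeros_like(grid)
    den = np.zeros_like(grid)
    for w, f, n, z in zip(waves, fluxes, noises, redshifts):
        w_rest = w / (1.0 + z)  # de-redshift to the rest frame
        # Normalize flux and noise by the rest-frame 4100-4700 A mean flux.
        norm = f[(w_rest > 4100.0) & (w_rest < 4700.0)].mean()
        fi = np.interp(grid, w_rest, f / norm, left=np.nan, right=np.nan)
        ni = np.interp(grid, w_rest, n / norm, left=np.nan, right=np.nan)
        bad = np.isnan(fi) | np.isnan(ni)
        # Inverse-variance weights; pixels outside coverage get zero weight.
        weight = np.where(bad, 0.0, 1.0 / np.where(bad, 1.0, ni) ** 2)
        num += np.where(bad, 0.0, fi) * weight
        den += weight
    return grid, np.where(den > 0, num / np.where(den > 0, den, 1.0), np.nan)
```

Because each spectrum is normalized before averaging, a stack of flat spectra with different absolute fluxes returns a unit continuum.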
The main absorption lines that we have used for our measurements of ages are marked with vertical grey bands; also shown, with vertical dashed lines, are the spectral features that were excluded from the computation of the stellar ages, as discussed in Section \[lickmeasurements\]. Because of the range of redshifts in the sample, the stacked spectra at all redshifts overlap approximately in the wavelength range $4,150 < \lambda < 4,800$ Å (shaded yellow area in Figure \[fig: stacked\]). We fit our stacked spectra with `pPXF` [@cappellari2004], adopting stellar templates from version 9.1 of the MILES library, which consists of 985 stars, whose spectra cover a range of 3,525–7,500 Å at 2.51 Å (FWHM) spectral resolution [@sanchez2006; @falconbarroso2011]. During the `pPXF` fit, we use a $4^{th}$-order additive and $4^{th}$-order multiplicative polynomial correction for the spectral slope of our template. Additive polynomials are introduced to correct low-frequency differences in shape between the galaxy and the templates [@cappellari2004]. The multiplicative polynomials are included to ensure that the results are insensitive to the normalization or flux calibration of galaxy and stellar template spectra [@Kelson2000]. The polynomial degree is chosen to maximize the quality of the fit, which is also confirmed by a visual inspection. We find velocity dispersions between 125 and 215 km s$^{-1}$, after subtracting the instrumental velocity dispersion in quadrature. We adopt the same procedure as [@cappellari2009] and compute $\sigma_{\mathrm{stack}}$ using $\sigma_{\mathrm{stack}}\approx c\Delta z/(1 + z)$, where $c$ is the speed of light and $\Delta z=110\mathrm{\,km\,s}^{-1}$ is the zCOSMOS-bright redshift error. We find $\sigma_{\mathrm{stack}}\sim60$–$90\mathrm{\,km\,s^{-1}}$, depending on the redshift. The best-fit templates are shown as dashed red lines in Figure \[fig: stacked\] and Figure \[fig: allspectra\].
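For reference, the redshift-error broadening and the quadrature correction just described amount to the following arithmetic (a sketch with our own function names; the 110 km s$^{-1}$ constant is the zCOSMOS-bright redshift error quoted in the text, already expressed as a velocity so that the factor of $c$ cancels):

```python
import math

def sigma_stack(dz_kms=110.0, z=0.5):
    """Velocity scatter imprinted on a stack by the survey redshift error,
    sigma_stack ~ c * dz / (1 + z), with dz given as a velocity in km/s."""
    return dz_kms / (1.0 + z)

def intrinsic_sigma(sigma_measured, sigma_broadening):
    """Remove a known broadening term (e.g. the instrumental resolution
    or sigma_stack) in quadrature."""
    return math.sqrt(sigma_measured**2 - sigma_broadening**2)
```

At the edges of our redshift window this gives $\sigma_{\mathrm{stack}}\approx92$ km s$^{-1}$ at $z=0.2$ and $\approx61$ km s$^{-1}$ at $z=0.8$, consistent with the 60–90 km s$^{-1}$ range quoted above.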
For each of the stacked spectra of Figure \[fig: allspectra\], we show greatly expanded in Figure \[fig: allres\] the residuals obtained after subtraction of the best-fit stellar-superposition templates. The regions around the Balmer lines, \[OIII\] and \[OII\] have been masked using the `goodpixels` keyword of `pPXF`. For the small galaxies, the residual spectra are also shown along the bottom of each panel in Figure \[fig: stacked\]. In each panel of Figure \[fig: allspectra\], the upper spectrum refers to the cut along the size-mass relation, the lower one to the horizontal cut. The horizontal-cut spectra have been shifted by subtracting a constant value from the normalized spectra. This is also done for the residuals in Figure \[fig: allres\]. The different background colors in both figures indicate different redshift bins. All of the stacked spectra are clearly dominated by the typical features of an old stellar population, such as a strong 4000 Å break, the G-band at 4300 Å, the Mg feature at 5180 Å and the Balmer absorption features. Nevertheless, the residuals from essentially all of the stacks show weak, narrow emission-line contamination. For masses $10.5<\log\,\mathrm{M/M}_\odot<11$ (first column of residuals in Figure \[fig: allres\]), the emission-line fill-in appears to be stronger for larger galaxies, especially in the H$\beta$, H$\gamma$ and \[OII\] regions. The same does not seem to apply to the more massive galaxies (second column of residuals in Figure \[fig: allres\]). We fit the residual emission lines and compute the equivalent widths of \[OII\] and H$\beta$. From the relation between sSFR, EW(H$\alpha$) and EW(\[OII\]) we obtain $\log\mathrm{(sSFR/Gyr}^{-1}) < -2$ (see for example @moresco2013 for the relation between these quantities in zCOSMOS galaxies).
In deriving this we assume, for the conversion between H$\alpha$ and H$\beta$ fluxes, the Balmer line ratios of case B recombination with a temperature of $10^4$ K, a typical electron density $\leq\,10^4$ cm$^{-3}$, and no reddening [@osterbrock2006]. This is a very low sSFR, $<10^{-11}$ yr$^{-1}$, which corresponds closely to the inverse age of the universe at $z\sim0.3$. ![image](fig_7.pdf){width="158mm"} Emission Lines Correction {#emcorrection} ------------------------- We correct for the emission-line contribution that is visible in Figure \[fig: allres\] by fitting a Gaussian to the lines in the residual spectrum, while simultaneously fitting the residual continuum level, which is very close to zero. We find velocity dispersions for the residual H$\beta$, H$\gamma$ and H$\delta$ emission lines that are consistent with the instrumental resolution. We then subtract the fitted emission lines from the observed spectra. In some bins, especially the highest-redshift one, the correction is not applied, as the residuals corresponding to the Balmer lines are barely distinguishable from the noise. In order to check whether we are over-subtracting the lines, we also fit Gaussians with the continuum level fixed at $1\sigma$. We find minimal or no variations between the two methods. In the rest of our analysis, to derive stellar ages for the different stacked spectra, we use the spectra with their emission-line component subtracted. Full Spectral Fitting {#fullspectralfitting} --------------------- We first estimate ages by using `pPXF` for a full spectral fitting analysis, using a set of SSP templates with solar metallicity and a Salpeter IMF from BC03, and excluding templates with ages older than the age of the universe at each redshift or younger than 1 Gyr. The metallicity of early-type galaxies at these masses is expected to be solar with a scatter of 0.1 dex (see also [@gallazzi2005; @gallazzi2014; @thomas2010; @conroy2012]).
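Returning briefly to Section \[emcorrection\]: the removal of a residual emission line can be sketched with numpy alone, using moment estimates of the Gaussian parameters in place of the simultaneous Gaussian-plus-continuum least-squares fit described above (a simplification of ours, not the paper's procedure):

```python
import numpy as np

SQRT2PI = np.sqrt(2.0 * np.pi)

def fit_gaussian_moments(wave, resid):
    """Estimate amplitude, centroid and width of an emission line
    from the moments of the residual spectrum (uniform grid)."""
    dw = wave[1] - wave[0]
    flux = resid.sum() * dw                                  # line flux
    mu = (wave * resid).sum() * dw / flux                    # centroid
    sigma = np.sqrt(((wave - mu) ** 2 * resid).sum() * dw / flux)
    amp = flux / (sigma * SQRT2PI)                           # peak value
    return amp, mu, sigma

def subtract_line(wave, spec, resid, mu0, half_width=30.0):
    """Fit one line in the residual spectrum near mu0 (in Angstrom)
    and subtract its Gaussian model from the stacked spectrum."""
    sel = np.abs(wave - mu0) < half_width
    amp, mu, sigma = fit_gaussian_moments(wave[sel], resid[sel])
    model = amp * np.exp(-0.5 * ((wave - mu) / sigma) ** 2)
    return spec - model
```

The moment estimates are exact for a noise-free Gaussian well inside the fitting window; on real residuals a least-squares fit, as used in the text, is more robust to noise.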
During the `pPXF` fit we use a 10th-order multiplicative polynomial correction and no additive polynomials. We use the emission-line-subtracted spectra as described in Section \[emcorrection\], without masking the regions of the Balmer lines but masking out the regions corresponding to \[OIII\] and \[OII\]. We compute the mass-weighted age among the best-fit templates found by `pPXF`. To derive the error (error bars in Figures \[fig: age\_tot\] and \[fig: results\_age\]) on our age estimation, we use the *jackknife* technique: $$\sigma_{full}^2 = \frac{N-1}{N}\sum_{i=1}^{N} (\mathrm{Age_{full}}-\mathrm{Age_{full}}_{(i)})^2$$ where $N$ is the number of objects stacked in each bin, $\mathrm{Age_{full}}$ is the age measured on the stack of all $N$ spectra, and $\mathrm{Age_{full}}_{(i)}$ is the age measured on a stacked spectrum of $N-1$ spectra, obtained by removing the $i^{\rm th}$ spectrum. The ages obtained with this method are shown in Figures \[fig: age\_tot\] and \[fig: results\_age\] and listed in Table \[fig: allvalues\]. Lick Indices Analysis {#lickmeasurements} --------------------- To derive the strengths of the spectral absorption features we use the bandpasses and pseudo-continua for each index from [@trager1998]. The resolution of the 20 indices that are covered in our different redshift ranges varies from 10.9 Å ($\sim340$ km/s) for H$\delta_\mathrm{A}$ to 8.4 Å ($\sim200$ km/s) for Fe5406 (see Table \[indexused\] for the indices we use in each redshift bin). The zCOSMOS-bright spectra have a resolution that is better than that of the defining Lick spectra at some wavelengths and worse at others.
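The jackknife error defined above can be written generically for any statistic computed on a stack; here `measure` is a hypothetical stand-in for the full stack-and-fit pipeline that returns an age from a set of spectra:

```python
import numpy as np

def jackknife_error(sample, measure):
    """sigma^2 = (N-1)/N * sum_i (theta_all - theta_(i))^2,
    with theta_(i) the statistic recomputed without element i."""
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    theta_all = measure(sample)                       # statistic on all N
    devs = [theta_all - measure(np.delete(sample, i)) # leave-one-out
            for i in range(n)]
    return np.sqrt((n - 1.0) / n * np.sum(np.square(devs)))
```

For the sample mean this reduces exactly to the standard error $s/\sqrt{N}$, which is a useful check of the implementation.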
To correct the values measured on our spectra to the Lick resolution, we apply the following correction: $$I_{\mathrm{corr}}\,=\,I_{\mathrm{obs}}\,\frac{I_{\mathrm{best}}^{\mathrm{Lick}}}{I_{\mathrm{best}}^{\mathrm{obs}}},$$ where $I_{\mathrm{best}}^{\mathrm{obs}}$ is the index measured on the `pPXF` best-fit template at the observed velocity dispersion, and $I_{\mathrm{best}}^{\mathrm{Lick}}$ is the same index measured on the original `pPXF` best-fit template convolved to the Lick resolution. The best-fit templates we use are those derived in Section \[stacking\] with the MILES stellar library. For all indices, the wavelength-dependent Lick resolutions are taken from [@schiavon2007]. We derive the stellar population parameters from the corrected indices $I_{\mathrm{corr}}$. In computing $I_{\mathrm{best}}^{\mathrm{Lick}}$ at the correct Lick resolution, the template FWHM of 2.51 Å has been taken into account. In order to check whether the index values depend on the choice of the templates, we derived, for those cases where the zCOSMOS-bright resolution was higher than the original Lick resolution, stellar population parameters from the spectra directly convolved down to the Lick resolution, without using the best-fit templates. For these test cases, there are no significant variations from the values reported here. To derive errors on the indices, we use the *jackknife* technique as explained in Section \[fullspectralfitting\].
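In code, the correction to the Lick system is a one-liner, while taking the MILES template (FWHM = 2.51 Å) down to a coarser Lick resolution requires a Gaussian kernel obtained by quadrature subtraction; the assumption of Gaussian line-spread functions is ours:

```python
import math

FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # ~1/2.3548

def correct_to_lick(i_obs, i_best_obs, i_best_lick):
    """I_corr = I_obs * I_best^Lick / I_best^obs (equation above)."""
    return i_obs * i_best_lick / i_best_obs

def broadening_sigma(fwhm_lick, fwhm_template=2.51):
    """Sigma (Angstrom) of the Gaussian kernel that degrades the
    template to a target Lick FWHM, assuming Gaussian resolutions."""
    return math.sqrt(fwhm_lick**2 - fwhm_template**2) * FWHM_TO_SIGMA
```

For example, reaching the 8.4 Å FWHM Lick resolution of Fe5406 requires a kernel of $\sigma\approx3.4$ Å.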
\[fig: indexused\]

| Redshift | Available indices |
|----------|-------------------|
| $0.2 < z < 0.4$ | **Ca4227**, **G4300**, **H$\gamma_\mathrm{A}$**, **H$\gamma_\mathrm{F}$**, **Fe4383**, Ca4455, **Fe4531**, C$_2$4668, H$\beta$, Fe5015, **Mg$_1$**, **Mg$_2$**, **Mg$b$**, **Fe5270**, **Fe5335**, **Fe5406** |
| $0.4 < z < 0.6$ | **H$\delta_\mathrm{A}$**, H$\delta_\mathrm{F}$, CN$_1$, CN$_2$, **Ca4227**, **G4300**, **H$\gamma_\mathrm{A}$**, **H$\gamma_\mathrm{F}$**, **Fe4383**, Ca4455, **Fe4531**, C$_2$4668, H$\beta$, Fe5015, **Mg$_1$**, **Mg$_2$**, **Mg$b$**, **Fe5270**, **Fe5335** |
| $0.6 < z < 0.8$ | **H$\delta_\mathrm{A}$**, H$\delta_\mathrm{F}$, CN$_1$, CN$_2$, **Ca4227**, **G4300**, **H$\gamma_\mathrm{A}$**, **H$\gamma_\mathrm{F}$**, **Fe4383**, Ca4455, **Fe4531**, C$_2$4668 |

: Indices available in each redshift bin; boldface marks those used to derive the stellar population parameters. \[indexused\]

We derive stellar population parameters as in [@onodera2014], comparing our measured indices with those computed on SSP models with variable $\alpha$-abundances [@thomas2011]. The code builds a 3D grid of the [@thomas2011] model values for age, \[Z/H\], and \[$\alpha$/Fe\], with a uniform interval of 0.02 dex. The model spans the parameter space $0.1 < \mathrm{age/Gyr} < 15$, $-2.25 < [\mathrm{Z/H}] < 0.67$, and $-0.3 < [\alpha/\mathrm{Fe}] < 0.5$, and adopts the Salpeter IMF. The best-fit values of the three stellar population parameters are derived by comparing our corrected indices with the [@thomas2011] models and finding the set of parameters which gives the minimum $\chi^2$: $$\chi^2\,=\sum\,\frac{(I_{\mathrm{Thomas}}\,-\,I_{\mathrm{measured}})^2}{\sigma^2_{\mathrm{Lick}}}$$ where $\sigma_{\mathrm{Lick}}$ is the error bar on our Lick indices computed with the *jackknife* procedure. This allows us to make use of all the available indices at the same time and returns best-fit values for all three free parameters. We use all the indices available in each redshift bin, with a few exceptions. The indices Ca4455, H$\delta_{\mathrm{F}}$, Fe5015 and H$\beta$ have been excluded following the recommendation of [@thomas2011].
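The grid-based $\chi^2$ minimization can be sketched as below, with the 3D model grid flattened into an $(M, K)$ array of index predictions for $M$ (age, \[Z/H\], \[$\alpha$/Fe\]) triplets and $K$ measured indices; this array layout is our assumption, not a description of the actual code:

```python
import numpy as np

def best_fit_ssp(indices_meas, index_errors, grid_params, grid_indices):
    """Minimize chi^2 = sum_k (I_model,k - I_meas,k)^2 / sigma_k^2
    over a flattened grid of model parameter triplets.

    indices_meas, index_errors: (K,) measured indices and their errors;
    grid_params: (M, 3) parameter triplets; grid_indices: (M, K) models.
    """
    chi2 = np.sum((grid_indices - indices_meas) ** 2 / index_errors ** 2,
                  axis=-1)
    best = int(np.argmin(chi2))
    return grid_params[best], chi2[best]
```

Because all indices enter a single $\chi^2$, every available index constrains all three parameters simultaneously, as noted in the text.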
The iron line Fe5015 is problematic due to its proximity to the strongest emission line of the \[OIII\] doublet at 5007 Å. The Balmer line H$\beta$ is also known to be problematic for deriving ages, as it is affected by fill-in from the emission line more than the higher-order Balmer lines (see also [@poole2010; @onodera2014]). Ca4455 is found to be mostly sensitive to the Ca abundance [@korn2005; @thomas2011] and its use should therefore be treated with caution, as is also demonstrated by comparison with globular clusters [@korn2005]. We also excluded other indices sensitive to the C abundance, namely CN$_1$, CN$_2$ and C$_{2}$4668, as the available models do not treat the C abundance as a free parameter (see also @thomas2011 [@onodera2014]). We used Ca4227, G4300, H$\gamma_\mathrm{A}$, H$\gamma_\mathrm{F}$, Fe4383 and Fe4531 at all redshifts, as these indices lie in the common wavelength range highlighted in yellow in Figure \[fig: stacked\]. The Balmer lines are the indices most sensitive to age variations. The index Fe4383 is particularly sensitive to the iron abundance and is also influenced by the total metallicity \[Z/H\] and by the magnesium abundance. Fe4531 and Ca4227 are also sensitive to the total metallicity. Ca4227 is moreover dominated by variations in the Ca and C abundances and shows a very weak dependence on $\alpha$ variations. The same can be said of the G4300 band, whose main contributors are the C, O and Fe abundances. We included H$\delta_{\mathrm{A}}$, Mg$_1$, Mg$_2$, Mg$b$, Fe5270, Fe5335 and Fe5406, when available. H$\delta_{\mathrm{A}}$ is sensitive to age variations, despite being more affected by metallicity than H$\gamma_{\mathrm{A}}$ and H$\beta$, and is also important for estimating the \[$\alpha$/Fe\] ratio, as higher-order Balmer indices like H$\delta_{\mathrm{A}}$ are sensitive to $\alpha$ enhancement [@thomas2004].
Mg$_1$, Mg$_2$ and Mg$b$ are the major indicators of the $\alpha$-element abundance, although they are also affected by \[Z/H\] and by the C and Fe abundances. The Mg$_2$ band-pass also covers the Mg$b$ lines; the sensitivity of this index is therefore higher than that of Mg$_1$. Fe5270, Fe5335 and Fe5406 show similar trends and are sensitive mostly to the Fe, \[Z/H\] and Mg abundances. Fe5406 is only available at low redshift ($0.2<z<0.4$). Its strength is however weaker than that of the other two, and its inclusion does not modify the results. In deriving stellar population parameters we first leave \[Z/H\] as a free parameter. However, the results are strongly affected by the spurious age–metallicity anticorrelation mentioned in the Introduction, especially in the redshift bin $0.4<z<0.6$. In an alternative approach, we restrict the \[Z/H\] values to those from the mass–metallicity relation of [@gallazzi2005], obtained in the local Universe but evidently applicable also up to $z=0.7$ because of the apparent lack of evolution in this redshift range [@gallazzi2014]. We convert their masses to our integrated-SFR masses with a correction factor of $\sim0.2$ dex. We derive ages using both the indices from the original stacked spectra and those from spectra corrected for the emission lines as described in Section \[emcorrection\]. As expected, slightly younger ages are on average obtained from the emission-corrected Balmer indices, with a stronger correction for the large galaxies in the less massive bin (see the residuals in Figure \[fig: allres\]). The effects of the age-metallicity degeneracy are much less severe in the emission-line-corrected spectra than in the uncorrected ones, though still visible. This is most likely because it was more difficult to find a unique solution for age and \[Z/H\] when the uncorrected Balmer indices were forcing older ages. The results we obtain with the Lick system are listed in Table \[fig: allvalues\].
Results ======= Before examining the age difference between large and small galaxies, we look first at the ages of passive galaxies, at each redshift, as a function of their stellar mass but independent of their sizes. [@thomas2010] have shown that, at least at redshift zero, there is a clear mass-age relation for quenched galaxies, in the sense that more massive quenched galaxies are on average older than their less massive counterparts. In Figure \[fig: age\_tot\] we show the stellar population ages derived from the stacked spectra of our quenched galaxies, in bins of stellar mass and redshift, using in each mass-redshift bin all the galaxies, regardless of size. The more massive galaxies in our sample (filled black circles) indeed have older ages than the less massive ones (empty circles) at any given redshift, in agreement with the previous studies at $z\sim0$, but now extending this result out to $z=0.8$. The shaded areas represent the difference in the ages obtained when applying the correction for emission-line fill-in from the continuum level or from $1\sigma$ above the continuum. The decrease in age seen in the redshift bin $0.4<z<0.6$ might conceivably be related to the fact that these galaxies lie in a less dense region of the universe with respect to the known over-densities of COSMOS at $z=0.3$ [@knobel2009] and $z=0.7$ [@guzzo2007; @scoville2007]. The over-dense regions are also visible in Figure \[fig: measurements\]. Overall this test, which reproduces the well-known mass-age relation for quenched galaxies and extends it to $z>0$, gives us confidence that our measurements of ages, at least in a comparative sense, are fairly robust. We therefore turn to the main question of our paper, which is the age comparison between quenched galaxies of different sizes at constant redshift and stellar mass.
The stellar ages of small and large quenched galaxies ----------------------------------------------------- ### Visual Inspection of spectral ratios {#visualinspection} Before proceeding to a more quantitative analysis, we note that significant differences between the spectra of large and small galaxies are readily visible ‘by eye’. In Figure \[fig: ratio\_large\_small\] we show the ratios of the spectra of large to small galaxies. At the lower masses, the overall redness of the small-galaxy spectrum relative to the large-galaxy one is clearly visible, as are the much deeper Calcium H and K lines and the suppression of flux shortward of 4000 Å in the small-galaxy spectrum. These features are all expected for older ages. The situation with the Balmer lines is less clear cut, as stronger absorption would be expected in the younger object, but this could be weakened by the residual emission which can be seen in Figure \[fig: allres\]. Interestingly, the same differences are not seen in the higher-mass stacked spectra. In the ratio of large to small galaxies for the more massive galaxies (bottom panels in Figure \[fig: ratio\_large\_small\]), deeper Ca H$\&$K lines for the smaller galaxies can still be seen in the redshift bin $0.4<z<0.6$. On the other hand, no hint of deeper H and K lines or of a stronger 4000 Å break is visible at higher $z$, showing therefore a less clear trend of age with size than in the less massive objects. Similar behavior is seen at all redshifts, whether the spectra are stacked with the horizontal cuts or with the cuts along the size-mass relation. ### Ages of small and large quenched galaxies: Quantitative estimates from the full spectral fitting and Lick indices approaches In Figure \[fig: results\_age\] we show our best-fit stellar population ages as a function of redshift for the large and small galaxies, obtained with the full spectral fitting technique using `pPXF`.
The spectra have been corrected for the emission-line contribution visible in the residuals in Figure \[fig: allres\] and fitted with a combination of SSPs with solar metallicity. The results show a clear trend in the ages of the less massive galaxies. At masses $10.5<\log\mathrm{M_*/M_\odot<11}$ (left panel), the red symbols (small galaxies) are the oldest in each redshift bin, while the blue points (large galaxies) are the youngest objects at each redshift, with the age difference between small and large galaxies being especially evident in the lowest redshift bin. The same trends are visible for both the size-mass and the horizontal cuts. The Lick-based analysis described in Section \[lickmeasurements\] shows the same trends, though overall younger ages are returned for both the small and the large galaxies, as expected since the models of [@thomas2011] return luminosity-weighted ages. However, the Lick-derived ages reported in Table \[fig: allvalues\], which were computed with metallicity as a free parameter, should be treated with some caution because of the strong age-metallicity degeneracy. For masses $11<\log\mathrm{M_*/M_\odot<11.5}$, the behavior is different, and no clear difference between the ages of galaxies of different sizes is visible. This lack of difference is present both in the full spectral fitting and in the Lick analysis. We stress that the results shown here are derived using the integral of the SFR as the definition of stellar mass. However, repeating the analysis in bins of stellar mass computed including the mass loss due to the return of material to the ISM through stellar evolution (which we have argued is not strictly correct) gives the same qualitative results: in the lower mass bin larger galaxies are younger than smaller ones, and no clear trend with age is visible in the higher mass bin.
In Figure \[fig: mgfe\] and Figure \[fig: mgfe\_horizontal\] we show an example of a Lick index-index diagram, $\langle\mathrm{Fe}\rangle$ (the mean of the Fe5270 and Fe5335 indices) versus Mg$_2$ (see for example @burstein1984 [@carollo1994; @fisher1996]), for the size-mass and the horizontal cut, respectively. We show results only for the two lowest redshift bins, as at the highest redshift we do not have the wavelength coverage for Mg$_2$, Fe5270 and Fe5335. The model lines are from [@thomas2011]. Dotted lines show loci of constant age (2, 4, 6 and 9 Gyr) and of constant \[$\alpha$/Fe\], all computed at fixed solar metallicity. Small galaxies (red squares) and large galaxies (blue triangles), while clearly showing differences in age, do not show differences in \[$\alpha$/Fe\] as a function of size. The black points show the index values for galaxies with the same mass and redshift binning as the others in the same plot, but with no split by size. All the galaxies appear to have super-solar $\alpha$-to-iron ratios (\[$\alpha$/Fe\]$>0.3$), as expected for massive early-type galaxies (e.g., @leeworthey2005 [@thomas2005; @choi2014]).
**Size-Mass cut**

| Redshift | Size | Spectral Fitting ($10.5{<}\log\mathrm{M}{<}11$) | Lick, fixed \[Z/H\] | Lick, free \[Z/H\] | Spectral Fitting ($11{<}\log\mathrm{M}{<}11.5$) | Lick, fixed \[Z/H\] | Lick, free \[Z/H\] |
|---|---|---|---|---|---|---|---|
| $0.2<z<0.4$ | Small | $0.84(0.85)^{+0.05}_{-0.05}$ | $0.64(0.68)^{+0.04}_{-0.02}$ | $0.70(0.74)^{+0.08}_{-0.08}$ | $0.72(0.73)^{+0.02}_{-0.02}$ | $0.44(0.46)^{+0.02}_{-0.02}$ | $0.12(0.12)^{+0.02}_{-0.02}$ |
| | Large | $0.50(0.51)^{+0.10}_{-0.10}$ | $0.28(0.30)^{+0.04}_{-0.04}$ | $0.02(0.04)^{+0.06}_{-0.04}$ | $0.72(0.74)^{+0.02}_{-0.02}$ | $0.42(0.44)^{+0.04}_{-0.02}$ | $0.20(0.22)^{+0.08}_{-0.06}$ |
| $0.4<z<0.6$ | Small | $0.49(0.51)^{+0.04}_{-0.04}$ | $0.42(0.44)^{+0.02}_{-0.02}$ | $0.40(0.46)^{+0.10}_{-0.06}$ | $0.56(0.60)^{+0.05}_{-0.05}$ | $0.48(0.50)^{+0.02}_{-0.04}$ | $0.60(0.60)^{+0.12}_{-0.10}$ |
| | Large | $0.35(0.36)^{+0.02}_{-0.02}$ | $0.30(0.32)^{+0.02}_{-0.02}$ | $0.30(0.32)^{+0.10}_{-0.08}$ | $0.41(0.44)^{+0.05}_{-0.05}$ | $0.30(0.34)^{+0.02}_{-0.02}$ | $0.32(0.36)^{+0.04}_{-0.06}$ |
| $0.6<z<0.8$ | Small | $0.56(0.59)^{+0.03}_{-0.03}$ | $0.36(0.36)^{+0.02}_{-0.02}$ | $0.44(0.44)^{+0.18}_{-0.14}$ | $0.59(0.61)^{+0.03}_{-0.03}$ | $0.26(0.26)^{+0.04}_{-0.04}$ | $0.34(0.34)^{+0.28}_{-0.22}$ |
| | Large | $0.42(0.49)^{+0.06}_{-0.06}$ | $0.24(0.32)^{+0.02}_{-0.02}$ | $0.18(0.28)^{+0.08}_{-0.08}$ | $0.65(0.67)^{+0.03}_{-0.03}$ | $0.36(0.40)^{+0.02}_{-0.02}$ | $0.48(0.52)^{+0.16}_{-0.14}$ |

**Horizontal cut**

| Redshift | Size | Spectral Fitting ($10.5{<}\log\mathrm{M}{<}11$) | Lick, fixed \[Z/H\] | Lick, free \[Z/H\] | Spectral Fitting ($11{<}\log\mathrm{M}{<}11.5$) | Lick, fixed \[Z/H\] | Lick, free \[Z/H\] |
|---|---|---|---|---|---|---|---|
| $0.2<z<0.4$ | Small | $0.80(0.79)^{+0.12}_{-0.12}$ | $0.64(0.58)^{+0.02}_{-0.02}$ | $0.60(0.48)^{+0.06}_{-0.06}$ | $0.81(0.83)^{+0.03}_{-0.03}$ | $0.60(0.64)^{+0.04}_{-0.04}$ | $0.36(0.42)^{+0.12}_{-0.10}$ |
| | Large | $0.50(0.52)^{+0.10}_{-0.10}$ | $0.30(0.30)^{+0.04}_{-0.04}$ | $0.04(0.04)^{+0.06}_{-0.04}$ | $0.82(0.82)^{+0.01}_{-0.01}$ | $0.64(0.66)^{+0.08}_{-0.08}$ | $0.30(0.28)^{+0.20}_{-0.14}$ |
| $0.4<z<0.6$ | Small | $0.43(0.48)^{+0.07}_{-0.07}$ | $0.32(0.36)^{+0.02}_{-0.04}$ | $0.36(0.44)^{+0.10}_{-0.08}$ | $0.55(0.57)^{+0.06}_{-0.06}$ | $0.38(0.38)^{+0.02}_{-0.04}$ | $0.50(0.52)^{+0.14}_{-0.12}$ |
| | Large | $0.35(0.37)^{+0.02}_{-0.02}$ | $0.30(0.32)^{+0.02}_{-0.02}$ | $0.34(0.38)^{+0.12}_{-0.08}$ | $0.48(0.50)^{+0.04}_{-0.04}$ | $0.40(0.42)^{+0.02}_{-0.02}$ | $0.44(0.50)^{+0.08}_{-0.06}$ |
| $0.6<z<0.8$ | Small | $0.54(0.56)^{+0.03}_{-0.03}$ | $0.32(0.34)^{+0.02}_{-0.04}$ | $0.40(0.40)^{+0.26}_{-0.24}$ | $0.57(0.59)^{+0.03}_{-0.03}$ | $0.26(0.26)^{+0.04}_{-0.04}$ | $0.70(0.42)^{+0.22}_{-0.30}$ |
| | Large | $0.46(0.58)^{+0.06}_{-0.06}$ | $0.20(0.30)^{+0.02}_{-0.04}$ | $0.20(0.38)^{+0.22}_{-0.14}$ | $0.64(0.64)^{+0.02}_{-0.02}$ | $0.36(0.38)^{+0.04}_{-0.04}$ | $0.64(0.60)^{+0.24}_{-0.30}$ |

: Log (Age/Gyr) derived from the full spectral fitting and from the Lick indices (with \[Z/H\] fixed or free), for the low-mass ($10.5<\log\mathrm{M}<11$) and high-mass ($11<\log\mathrm{M}<11.5$) bins, for the size-mass and horizontal cuts. \[fig: allvalues\]

Discussion ========== The analysis presented in the
previous sections highlights an important change in the behavior of QGs with masses above and below $\sim10^{11} \mathrm{M}_\odot$: below this mass threshold, and specifically in our $10.5<\log\mathrm{M}_*/\mathrm{M}_\odot<11$ mass bin, the smaller galaxies have systematically older stellar populations than their larger counterparts at each cosmic epoch. This is true for both our definitions of [*smallness*]{} and [*largeness*]{}, i.e., considering both the size-mass cut (relative sizes below the evolving size-mass relation) and the horizontal cut (absolute sizes below a constant threshold in kpc). In contrast, in our high-mass bin above 10$^{11} \mathrm{M}_\odot$, we do not detect any distinct trend for QGs of different sizes. We discuss the implications of these different behaviors in more detail below. The size-age relation of $M<10^{11} M_\odot$ QGs at $0.2 \leq z \leq 0.8$ ------------------------------------------------------------------------- Important implications for the origin of the observed evolution of the average size of QGs at masses $< 10^{11}\mathrm{M}_\odot$ can be inferred by combining information on the evolution of the relative number densities of smaller and larger galaxies with the fact that, in this mass bin, the former are older than the latter. In C13, the number density of compact galaxies (i.e., r$_{1/2}<2$ kpc) remains stable since $z=1$. This means either that the compact galaxies are growing in size but some other process is continuously adding new compact QGs (e.g., @damjanov2015a), keeping the number density constant by replacing the galaxies that grow out of that size range, or that the growth of the QG population occurs through the addition of larger, newly-quenched galaxies to a stable population of compact galaxies. These scenarios differ in the sizes of the new members of the QG population.
In the first case, the new galaxies are compact, and the ages of the smaller galaxies should be systematically younger than those of larger galaxies at the same redshift. In the second case, the new galaxies are larger, and the ages of the smaller galaxies would be systematically older. The results of this paper clearly support the second scenario. Compact galaxies are *older* than larger ones. This spectroscopic result is consistent with the purely photometric trends found by C13 in the same redshift range as here, and also with the similar findings of @saracco2011 at $z\sim2$ and of @vanderwel2009 at $z=0$ (see however @yano2016 for opposite results obtained with photometry). Summarizing, the younger ages of the larger galaxies, together with the evidence that their number density is increasing with cosmic time, push towards the conclusion that progenitor bias drives a large part of the apparent evolution of the median size of passive galaxies below $10^{11} \mathrm{M}_\odot$. This in itself reflects the changes in the size-mass relation of their star-forming progenitors (@toft2007 [@buitrago2008; @kriek2009]). As shown in @lilly2016, half of the size difference between star-forming galaxies and QGs of a given mass is explained by progenitor bias effects, in which the sizes of star-forming galaxies of a given mass increase with cosmic time, and the other half by differential radial fading of the stellar populations of the star-forming progenitors after quenching (see also @tacchella2015b [@carollo2016]). About half of the average size evolution of QGs thus emerges naturally by considering progenitor effects.
Our results therefore imply that the increase with cosmic time of the number density of QGs at these ‘low’ masses, which is well quantified in the observed mass functions of quenched galaxies at different redshifts (@williams2009 [@ilbert2010; @ilbert2013; @dominguezsanchez2011]), is mostly driven by the addition of a population of larger, later-quenched objects, with the smaller galaxies keeping a stable number density through cosmic time. This is further substantiated by the fact that, between our highest and lowest redshift bins, the $\Delta$age of the compact galaxies (selected at constant absolute size through the horizontal cut) is consistent with passive evolution of their stellar populations, and therefore with the idea that this population has hardly added any new members. In contrast, the smaller difference in ages for the larger galaxies is consistent with the idea that new, younger, i.e., newly quenched, galaxies have been added to this population. The size-age relation of $M>10^{11} M_\odot$ QGs at $0.2 \leq z \leq 0.8$ ------------------------------------------------------------------------- The signatures of progenitor bias are less apparent (if present at all) at masses above $10^{11} \mathrm{M}_\odot$ (for us, specifically, in our $11<\log\mathrm{M}_*/\mathrm{M}_\odot<11.5$ mass bin). This difference has already been pointed out by C13, and is not unexpected. There is a host of evidence that this mass represents a threshold in the importance of merging: above and below it, QGs have respectively boxy or disky isophotes, cores or cusps, flat or steep metallicity gradients, and are slow or fast rotators (see e.g. @bender1988 [@carollo1993; @faber2007; @cappellari2013a]). Similar results are also found in hydrodynamical simulations within a $\Lambda$CDM universe (@hopkins2009 [@feldmann2010]).
Furthermore, as stressed in @peng2010, imposing that galactic evolution obeys a continuity equation requires that very massive passive galaxies at $10^{11}$ M$_\odot$ must generally have undergone significant merging after quenching (see their Figure 16). Also, at these very high masses, C13 report a decrease of $\sim30\%$ in the number density of the most compact objects between the high and low redshift bins, suggesting that at these masses the contribution of newly-quenched objects is less important. Using our spectroscopic sample, we confirm the trends of C13 for the number-density evolution of large and small QGs in the low-mass bin, but find that the number density of small galaxies at high masses also remains stable, as at low masses. Our definition of small galaxies at higher masses differs however from that of C13, as we adopt a threshold of $<4.5$ kpc (instead of 2.5 kpc as in C13) to identify small systems. Another element of caution in handling our spectroscopically-estimated number densities comes from the possible incompleteness biases of the spectroscopic survey, although zCOSMOS was designed to yield a high and fairly uniform sampling rate across most of the field (about 70$\%$), and has delivered a high success rate in measuring redshifts [@lilly2009]. The timescales for the stellar mass growth in QGs of different masses and sizes ------------------------------------------------------------------------------- Our investigation of the \[$\alpha$/Fe\] abundance ratios for the different subsamples of QGs in our sample, i.e., QGs at masses below and above $10^{11} \mathrm{M}_\odot$ and, within the same mass bin, of different sizes, shows that these ratios are generally enhanced relative to the solar value. This is a well-known property of massive QGs at redshift $z=0$ (@carollo1993 [@carollo1994; @leeworthey2005; @thomas2005; @choi2014]), and has been hinted to hold also at higher redshifts in very small samples [@onodera2014].
Our study shows that, at least at masses above $10^{10.5} \mathrm{M}_\odot$, such enhanced abundance ratios, which support short timescales for the buildup of the stellar populations of QGs, had already been achieved by $z=0.6$. Furthermore, within the errors and in the mass and redshift range of our analysis, we find that the abundance ratios of QGs do not show any substantial dependence on mass, size or redshift. This suggests rather universal formation timescales for the stellar populations of massive QGs of all sizes over a broad range of stellar masses, implying short gas-consumption timescales for the formation of the bulk of their stellar populations. (Note that the use of \[$\alpha$/Fe\] to infer quenching timescales depends strongly on the amount of mass produced during quenching.) The result of @thomas2005, of a dependence of the abundance ratios on stellar mass, could nevertheless be recovered considering two points: $(i)$ their analysis extends down to significantly lower masses, and $(ii)$ they analyse the $z=0$ population, which will have added a non-negligible number of galaxies at the masses of our study in the intervening several billion years. The constancy of the \[$\alpha$/Fe\] ratios of the QG populations above and below the $10^{11} \mathrm{M}_\odot$ threshold at the redshifts of our study contrasts with the different behaviors seen in their age-size relations, and is well explained if the most massive population originates from dry mergers of the less massive population. This further strengthens the argument for a major role of such mergers in the emergence with cosmic time of the most massive QGs in the Universe, at masses $>10^{11}\mathrm{M}_\odot$. Summary and Concluding Remarks {#conclusion} ============================== The observed average size of QGs is $\sim3-5$ times larger today than at $z\sim2$.
There are two main scenarios which have been proposed to explain this evolution: the size growth of individual galaxies, or the progenitor bias introduced if newly formed members of the population are larger than the previous members. In this work, we measure the stellar ages of QGs in order to distinguish between these two scenarios, which make quite different predictions for the variation of stellar population age with size. If the driver of this evolution is the addition of large newly-quenched objects, then larger QGs will be younger. If the driver is the size growth of individual galaxies, and small galaxies are being replaced by newly quenched objects, then the larger galaxies will be older. Given that, at any epoch and stellar mass, star-forming galaxies are on average larger than passive galaxies, purity (and completeness) of the QG samples is clearly crucial when attempting the measurement of their stellar ages (see e.g., @keating2015). The polluting presence of star-forming galaxies in QG samples would bias the latter towards larger sizes, while also biasing their age estimates towards younger values. In this work we have selected our QG samples using galaxies securely identified as quiescent. Starting from the 20k zCOSMOS-bright catalog, we selected galaxies with absent, or very weak, emission lines. We stacked the spectra in bins defined in size, stellar mass and redshift, in order to study the average stellar population properties of the sample galaxies. Two binning schemes were used in size: a relative one, normalised to the evolving mean mass-size relation, and an absolute one, constructed at fixed physical size. We then used `pPXF` to derive best-fit ages from BC03 solar SSP templates. 
To further check our age results, we also computed absorption line strengths, following the wavelength definitions of features and pseudo-continua of the Lick system of spectral lines, and derived values for age, \[Z/H\] and \[$\alpha$/Fe\] by comparing our results to the [@thomas2011] models. Our robust spectroscopic selection for quenched galaxies is much cleaner than the more frequently adopted color-color selections; in addition, we carefully checked the spectrum of each object by visual inspection, to ensure the absence of star-formation tracers. We are thus confident that contamination by star-forming galaxies is negligible in the results that we have discussed above. Reassuringly, the average age of QGs (not discriminating in size) increases with cosmic time. Turning to the ages as a function of sizes, we find that the $10^{11} \mathrm{M}_\odot$ mass scale is a ‘threshold’ above and below which the size-age relation changes behavior, as already pointed out in C13. Below $10^{11}{\mathrm{M}}_\odot$, larger galaxies have systematically younger ages than smaller ones. The $\Delta$age between small and large galaxies becomes more significant towards lower redshift. The $\Delta$age from the highest to lowest redshift bin in the small-size QG population is in good agreement with a passive evolution of its stellar populations. The younger ages of the larger galaxies at each redshift argue for newly quenched objects to be systematically larger at later epochs. This trend is visible using both our size binning schemes. We conclude that progenitor bias is a major and possibly the dominant component of the observed evolution in the average sizes of QGs at these masses. Above $10^{11}{\mathrm{M}}_\odot$, where dry mergers are expected to play a major role in imprinting the well-known ‘dissipationless’ features that are observed at $z=0$ in this ultra-massive population, there is indeed no clear trend between ages and sizes. 
Size growth of individual galaxies through dry mergers is the most likely channel for the observed growth of the average size of the QG population at this top-mass end. The confirmation of a ‘transition’ mass around $10^{11}{\mathrm{M}}_\odot$ for the size-age behaviour – and thus for the dominant role of progenitor bias at low masses and dry mergers at high masses in driving the observed average size growth of QGs with time – highlights the fundamental importance of sample selection and of tuning the interpretation of the data to the specific sample selection. For example, @zanella2016 report that the size-age relation in their sample of $\mathrm{M} >4.5\times10^{10}{\mathrm{M}}_\odot$ QGs at $z \sim 1.5$ supports the merger interpretation. A quick inspection of their analysis shows, however, that $\sim80\%$ of their galaxies actually have masses above $10^{11} \mathrm{M}_\odot$. Their result is therefore better described as holding for this top-mass-end population, which puts it in agreement with our work. Interestingly, the $\alpha$-to-iron abundance ratio of the stellar populations of QGs at all masses within the $10^{10.5-11.5} \mathrm{M}_\odot$ window is rather constant since $z=0.6$. This ratio should reflect the formation timescales for the stellar populations in these systems. The constancy of the measured $[\alpha/\mathrm{Fe}]$ ratio thus suggests similar timescales, independent of galaxy size, across the whole $10^{10.5-11.5} \mathrm{M}_\odot$ mass range for the galaxy population that has already quenched by our lowest redshift bin at $z \sim 0.3$, consistent with the idea that the most massive galaxies above $10^{11} \mathrm{M}_\odot$ are formed by mergers of lower mass galaxies. We acknowledge support from the Swiss National Science Foundation.
--- abstract: 'Nonlocal patch-based methods, in particular the Bayes’ approach of Lebrun, Buades and Morel [@LBM13], are considered state-of-the-art methods for denoising (color) images corrupted by white Gaussian noise of moderate variance. This paper is the first attempt to generalize this technique to manifold-valued images. Such images, for example images with phase or directional entries or with values in the manifold of symmetric positive definite matrices, are frequently encountered in real-world applications. Generalizing the normal law to manifolds is not canonical and different attempts have been considered. Here we focus on a straightforward intrinsic model and discuss the relation to other approaches for specific manifolds. We reinterpret the Bayesian approach of Lebrun et al. [@LBM13] in terms of minimum mean squared error estimation, which motivates our definition of a corresponding estimator on the manifold. With this estimator at hand we present a nonlocal patch-based method for the restoration of manifold-valued images. Various proof-of-concept examples demonstrate the potential of the proposed algorithm.' author: - 'Friederike Laus[^1], Mila Nikolova[^2], Johannes Persch, and Gabriele Steidl' bibliography: - 'DR-ref.bib' subtitle: Extended Version title: 'A Nonlocal Denoising Algorithm for Manifold-Valued Images Using Second Order Statistics' --- Introduction {#sec:intro} ============ In many situations where measurements are taken the obtained data are corrupted by noise, and typically one uses a stochastic model to describe the recorded data. If there are several independent factors that may have an influence on the data acquisition, the central limit theorem suggests modeling the noise as additive white Gaussian noise. This is also the standard noise model one encounters in image analysis, see, e.g., [@GW2008]. 
One might of course wonder whether this noise modeling is realistic and in fact, in many situations the image formation process already suggests a non-Gaussian model, e.g. Poisson noise in the case where images are obtained based on photon counting with a CCD device. But also in these cases, in order to benefit from the rich knowledge and all the appealing properties of the normal distribution, one often tries to transform the image in such a way that the assumption of Gaussian white noise is at least approximately fulfilled. For instance, for the Poisson noise this can be achieved by the so called Anscombe transform [@An48].\ Much effort has been spent on the denoising of images corrupted with white Gaussian noise and a huge amount of methods have been proposed in the literature. Among others we mention variational models with total variation regularizers [@ROF92] and many extensions thereof, denoising based on sparse representations over learned dictionaries [@EA06], nonlocal means [@GO08; @S2010; @YSM2012; @WPCMB07] and their generalizations [@CM2012; @DDT2009; @Kerv2014; @S2010], the piecewise linear estimator from Gaussian mixture models (PLE, E-PLE) [@YSM2012; @Wan13] and SURE guided Gaussian mixture models [@WM2013], patch-ordering based wavelet methods [@REC2014], the expected patch log-likelihood (EPLL) algorithm [@ZW2011] or better its multiscale variant [@PE2016], BM3D [@DFKE08] and BM3D-SAPCA [@DFKE2009], and the nonlocal Bayes’ algorithm of Lebrun et al. [@LBM13b; @LBM13]. The latter can be viewed as an optimized reinterpretation of the two step image denoising method (TSID) [@SDD2011; @YEA2001] in a Bayesian framework. For a recent review of the denoising problem and the different denoising principles we refer to [@LCBM2012]. Currently, nonlocal patch-based methods achieve the best results and the quality of the denoised images has become excellent for moderate noise levels. 
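The Anscombe transform mentioned above has a simple closed form; a minimal NumPy sketch (the function name and the test intensity are our own choices):

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson-distributed data.

    For Poisson(lam) inputs with lam not too small, the output is
    approximately Gaussian with variance close to one.
    """
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# Sanity check on simulated Poisson data (lam = 50 is an arbitrary choice).
rng = np.random.default_rng(0)
stabilized = anscombe(rng.poisson(50.0, size=200_000).astype(float))
```

After the transform the noise is approximately unit-variance Gaussian, which is what makes Gaussian denoisers applicable to such data.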
Even more, based on experiments with a set of 20,000 images containing about $10^{10}$ patches, the authors of [@LN11] conjecture that for natural images, the recent patch-based denoising methods might already be close to optimality. Their conjecture points in the same direction as the paper of Chatterjee et al. [@CM10], who raised the question “Is denoising dead?”.\ The situation described above completely changes when dealing with manifold-valued images instead of real-valued ones, a situation which is frequently encountered in applications. For instance, images with values on the circle (periodic data) appear in interferometric synthetic aperture radar [@BKAE08; @DDT11], in applications involving the phase of Fourier transformed data [@BLSW14], or when working with the hue-component of an image in the HSV color space. Spherical data play a role when dealing with 3D directional information [@KS2002; @VO2002] or in the representation of a color image in the chromaticity-brightness (CB) color space [@CKS01]. SO(3)-valued data appear in electron backscattered tomography [@BaHiSc11; @BCHPS15]. Finally, to mention only a few examples, images with symmetric positive definite matrices as values are handled in DT-MRI [@chefd2004regularizing; @FJ2004; @VBK13; @WFWBB06; @WPCMB07] or when covariance matrices are associated to image pixels [@TPM08]. Recently, some methods for the denoising of manifold-valued images have been suggested, among them variational approaches using embeddings in higher dimensional spaces [@rosman2012group] or based on (generalized) TV-regularization [@BBSW2015; @BW15b; @LSKC13; @CS13; @WDS2014].\ In this paper, we aim at generalizing the nonlocal patch-based denoising of Lebrun et al. [@LBM13b; @LBM13] to manifold-valued images. However, for general manifolds, already the question of how to define Gaussian white noise (or, more generally, a normal distribution) is not canonically solved. 
Different approaches have been proposed in the literature, either by making use of characterizing properties of the real-valued normal distribution as for instance in [@OC95; @Pen06] or by restricting to particular manifolds such as spheres, see, e.g. [@MJ2000], the simplex [@MPE2013], or symmetric positive definite matrices [@SBBM15]. In this paper, we adopt a simple model for a normal distribution and in particular for Gaussian white noise on a manifold and discuss its relationship to existing models. We review the minimum mean squared error estimator in the Euclidean setting, which coincides with that of the Bayesian approach in [@LBM13] under the normal distribution assumption. This motivates our definition of a corresponding estimator on the manifold and gives rise to a nonlocal patch-based method for the restoration of manifold-valued images.\ The outline of this paper is as follows: in Section \[sec:model\_real\] we reinterpret the nonlocal Bayes algorithm of Lebrun et al. [@LBM13b; @LBM13] in a minimum mean square error estimation setting. This review in the Euclidean setting is necessary to understand its generalization to manifold-valued images. In Section \[sec:model\_random\] we introduce the notation on manifolds. Then, in Section \[sec:model\_manifold\] we detail the nonlocal patch-based denoising algorithm for manifold-valued images. This requires making precise what we mean by the normal law on the manifolds of interest. We discuss the relation between this model and other existing ones. In Section \[sec:numerics\] we provide several numerical examples to demonstrate that our denoising approach is indeed computationally manageable. The examples, though still academic, for cyclic and directional data, and for images with values in the manifold of symmetric positive definite matrices, show the potential of nonlocal techniques for manifold-valued images. Specific real-world applications are not within the scope of this paper. 
Finally, we draw conclusions and initiate further directions of research in Section \[sec:conclusions\]. Nonlocal Patch-Based Denoising of Real-Valued Images {#sec:model_real} ==================================================== In this section we consider the nonlocal Bayesian image denoising method of Lebrun et al. [@LBM13b; @LBM13]. In contrast to these authors we prefer to motivate the method by a minimum mean square estimation approach. One reason is that the best *linear* unbiased estimator has a similar form to the MMSE, but does not rely on the assumption that the random variables are jointly normally distributed. This leaves potential for future work, e.g. when extending the model to distributions other than the normal distribution. Minimum Mean-Square Estimator {#Sec:MMSE} ----------------------------- Let $(\Omega,{\mathcal{A}},{\mathbb{P}})$ be a probability space and $X\colon\Omega\to {\mathbb{R}}^n$ and $Y\colon \Omega\to {\mathbb{R}}^n$ two random vectors. We wish to estimate $X$ given $Y$, i.e., we seek an estimator $T\colon {\mathbb{R}}^n\to {\mathbb{R}}^n$ such that $\hat{X} = T(Y)$ approximates $X$. A common quality measure for this task is the *mean square error* ${\mathbb{E}}{\left\| X-T(Y) \right\|_{2}}^2$, which gives rise to the definition of the *minimum mean square estimator* (MMSE) $$\begin{aligned} T_{\text{MMSE}} (Y) &= \operatorname*{arg\,min}_{T}{\mathbb{E}}{\left\| X-T(Y) \right\|_{2}}^2\\ & = \operatorname*{arg\,min}_{Z\in \sigma(Y)} {\mathbb{E}}{\left\| X-Z \right\|_{2}}^2, \end{aligned}$$ where $\sigma(Y)$ denotes the $\sigma$-algebra generated by $Y$ and $Z\in \sigma(Y)$ stands for all $\sigma(Y)$-measurable random variables $Z$, see, e.g., [@LC06]. 
Under weak additional regularity assumptions on the estimator $T$, the Lehmann-Scheffé theorem [@LS50; @LS55] states that the general solution of the minimization problem is determined by $$T_{\text{MMSE}}(Y)= {\mathbb{E}}(X|Y).$$ In general it is not possible to give an analytical expression of the MMSE. One exception is the normal distribution. Recall that a random vector $X$ is normally distributed with mean $\mu \in {\mathbb{R}}^n $ and covariance matrix $\Sigma\in {\mathbb{R}}^{n \times n}$, $X\sim {\mathcal{N}}(\mu,\Sigma)$, if and only if there exist a random vector $Z\in {\mathbb{R}}^l$, whose components are independent real-valued standard normally distributed random variables, and an $n\times l$ matrix $A$, such that $X = AZ + \mu$, where $l$ is the rank of the covariance matrix $\Sigma = AA^{\mathrm{T}}$. If $\Sigma$ has full rank, then the probability density function (pdf) of $X \sim {\mathcal{N}}(\mu,\Sigma)$ with respect to the Lebesgue measure is given by $$\label{gaussian_density} p_X(x|\mu,\Sigma) = \frac{1}{(2\pi)^{\frac{n}{2}}}\frac{1}{{\lvert\Sigma\rvert}^{\frac{1}{2}}} {{\mathrm{e}}}^{-\frac{1}{2}(x-\mu)^{{\mathrm{T}}}\Sigma^{-1}(x-\mu)},$$ where $\lvert\Sigma\rvert$ denotes the determinant of $\Sigma$. In view of the next section it is useful to recall some properties of the normal distribution. [(Properties of Gaussian distribution on $\mathbb R^n$)]{} \[rem:prop\_gaussian\] 1. The Gaussian density function maximizes the [entropy]{}\ $ H(X) \coloneqq \mathbb E \left[ -\log \left( p_X (X|\mu,\Sigma) \right) \right] $ over all density functions on $\mathbb R^n$ with fixed mean $\mu$ and covariance matrix $\Sigma$. 2. Let $x_1,\ldots,x_K \in \mathbb R^n$, $K\in {\mathbb{N}}$, be i.i.d. realizations of an absolutely continuous distribution having first and second moments, denoted by $\mu$ and $\Sigma$. 
Then the likelihood function reads as $L(\mu,\Sigma|x_1,\ldots,x_K) = \prod_{k=1}^K p_X(x_k|\mu,\Sigma)$ and the [maximum likelihood (ML) estimator]{} is defined as $$\hat \mu \coloneqq\operatorname*{arg\,max}_{\mu} L(\mu,\Sigma |x_1,\ldots,x_K).$$ It holds that $$\label{sample_mean} \hat \mu = \frac{1}{K} \sum_{k=1}^K x_k = \operatorname*{arg\,min}_{x \in \mathbb R^n} \sum_{k=1}^K \lVert x-x_k\rVert_2^2$$ if and only if the density function is of the Gaussian form, see, e.g., [[@DLS14; @Sta93]]{}. For the normal distribution the ML estimator of the covariance matrix reads as $$\label{covar} \hat{\Sigma} = \frac{1}{K}\sum_{k=1}^K (x_k-\hat{\mu}){(x_k-\hat{\mu})^{\mathrm{T}}}.$$ 3. The density function of the normal distribution ${\mathcal{N}}(0,\sigma^2I_n)$ with the $n \times n$ identity matrix $I_n$ is the kernel of the heat equation. In order to compute the MMSE estimator for Gaussian random variables we need to determine the conditional distribution of $X$ given $Y$. It is well known that, if $X \sim {\mathcal{N}}(\mu_X,\Sigma_X)$ and $Y\sim {\mathcal{N}}(\mu_Y,\Sigma_Y)$ are jointly normally distributed, i.e., $$\begin{pmatrix} X\\Y \end{pmatrix}\sim {\mathcal{N}}\Biggl(\begin{pmatrix} \mu_X\\ \mu_Y\end{pmatrix},\begin{pmatrix} \Sigma_X & \Sigma_{XY}\\ \Sigma_{YX} & \Sigma_Y \end{pmatrix}\Biggr),$$ then the conditional distribution of $X$ given $Y=a$ is normally distributed as well and reads as $$(X|Y=a) \sim {\mathcal{N}}\bigl(\mu_{X|Y},\Sigma_{X|Y} \bigr),$$ where $$\mu_{X|Y} = \mu_X + \Sigma_{XY} \Sigma^{-1}_Y (a-\mu_Y),\qquad \Sigma_{X|Y}= \Sigma_X-\Sigma_{XY}\Sigma^{-1}_Y\Sigma_{YX}.$$ As a consequence we obtain for normally distributed random vectors the MMSE estimator $$\label{mmse_est} T_{\mathrm{MMSE}}(Y)= {\mathbb{E}}(X|Y) = \mu_X + \Sigma_{XY} \Sigma^{-1}_Y (Y-\mu_Y) .$$ Our situation (denoising) fits into the above framework if we set $$\label{gaussian_setting} Y = X + \eta, \qquad X\sim {\mathcal{N}}(\mu_X,\Sigma_X), \quad \eta\sim 
{\mathcal{N}}(0,\sigma^2I_n),$$ where we assume that $X$ and $\eta$ are independent and $\sigma^2>0$ is known. Then $\mu_X = \mu_Y$, and by the independence of $X$ and $\eta$ further $\Sigma_{XY} = \Sigma_X$ and $$\label{hidden} \Sigma_Y = {\mathbb{E}}\left( (X+ \eta -\mu_X) (X+\eta - \mu_X)^{\mathrm{T}}\right) = \Sigma_X + \sigma^2 I_n.$$ Now, the MMSE of $X$ given $Y$ becomes $$\begin{aligned} T_{\text{MMSE}}(Y) &= \mu_X + \Sigma_X (\Sigma_X + \sigma^2 I_n)^{-1} (Y - \mu_X) \\ &= \mu_Y + (\Sigma_Y - \sigma^2 I_n) \Sigma_Y^{-1}(Y-\mu_Y).\label{mmse_next} \end{aligned}$$ Two remarks may be useful to see the relation to other estimators. [(Relation between MMSE and BLUE)]{}\ The estimator of the general form $$\label{blue_est} T_{\mathrm{BLUE}}(Y) = {\mathbb{E}}(X) + \Sigma_{XY} \Sigma^{-1}_Y \bigl(Y-{\mathbb{E}}(Y)\bigr)$$ also makes sense for more general distributions. It is known as the *best linear unbiased estimator* [(BLUE)]{}, as it is an unbiased estimator which has minimum mean-square error among all affine estimators. For jointly normally distributed $X$ and $Y$ it coincides with $T_{\text{\rm MMSE}}$. [(Relation between MMSE and MAP)]{}\ The MMSE can also be derived in a Bayesian framework under a Gaussian prior (see, e.g. [[@Fes16]]{}), which is detailed in the following. Let $Y = X + \eta$, $X\sim {\mathcal{N}}(\mu_X,\Sigma_X)$, $\eta\sim {\mathcal{N}}(0,\sigma^2 I_d)$, where $X$ and $\eta$ are independent. 
This implies $Y\sim {\mathcal{N}}(\mu_X,\Sigma_X + \sigma^2 I_d)$ and $(Y|X=x)\sim {\mathcal{N}}(x,\sigma^2 I_d)$, so that the respective densities are given by $$p_Y(y|X=x) = \frac{1}{(2\pi \sigma^2)^{\frac{d}{2}}} {{\mathrm{e}}}^{-\frac{1}{2\sigma^2}\lVert y-x\rVert_2^2}$$ and $$p_X(x) = \frac{1}{(2\pi)^{\frac{d}{2}}} \frac{1}{|\Sigma_X|^{\frac{1}{2}}}{{\mathrm{e}}}^{-\frac{1}{2}(x-\mu_X)^{{\mathrm{T}}}\Sigma_X^{-1}(x-\mu_X)}.$$ By Bayes’ formula we have $$p_X(x|Y=y) = \frac{p_Y(y|X=x) p_X(x)}{p_Y(y)}\propto p_Y(y|X=x) p_X(x),$$ and therewith, the maximum a posteriori (MAP) estimate reads as $$\begin{aligned} \hat x &= \operatorname*{arg\,max}_{x} \{p_X(x|Y=y)\} = \operatorname*{arg\,max}_{x} \{ p_Y(Y|X=x) p_X(x)\}\\ & = \operatorname*{arg\,max}_{x} \big\{ \log(p_Y(Y|X=x)) + \log( p_X(x)) \big\}\\ & = \operatorname*{arg\,min}_{x} \left\{ \frac{1}{2\sigma^2}\lVert x-y\rVert_2^2 + \frac{1}{2}(x-\mu_X)^{{\mathrm{T}}}\Sigma_X^{-1}(x-\mu_X) \right\}. \end{aligned}$$ Setting the gradient to zero results in $$\begin{aligned} (I_d + \sigma^2 \Sigma_X^{-1}) \hat x &= \sigma^2 \Sigma_X^{-1} \mu_X + y. \end{aligned}$$ Observing that $I_d + \sigma^2 \Sigma_X^{-1} = \Sigma_X^{-1}(\Sigma_X + \sigma^2 I_d)$ and $ \sigma^2 (\Sigma_X + \sigma^2 I_d)^{-1}=I_d - \Sigma_X (\Sigma_X + \sigma^2 I_d)^{-1} $, we finally obtain $$\begin{aligned} \hat{x} & = \sigma^2 (\Sigma_X + \sigma^2 I_d)^{-1}\mu_X +(\Sigma_X + \sigma^2 I_d)^{-1} \Sigma_X y \\ & = \mu_X + \Sigma_X(\Sigma_X + \sigma^2 I_d)^{-1} (y-\mu_X). \end{aligned}$$ In practical applications, the parameters $\mu_Y$ and $\Sigma_Y$ are unknown and need to be estimated using realizations (observations) $y_1,\ldots,y_K$ of $Y$. Here we use the ML estimators $\hat{\mu}$ and $\hat{\Sigma}$ given above. Note that the ML estimator for the covariance matrix is slightly biased. Instead we could also use an unbiased estimator by replacing the averaging factor $\frac{1}{K}$ by $\frac{1}{K-1}$. However, the numerical difference is negligible for large $K$. 
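The algebraic identity of the MAP normal equation and the closed MMSE form can be checked numerically; a small NumPy sketch with illustrative parameter choices of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma2 = 5, 0.3

# Illustrative prior parameters (our own choices, not from the paper).
A = rng.standard_normal((d, d))
Sigma_X = A @ A.T + 0.1 * np.eye(d)   # positive definite prior covariance
mu_X = rng.standard_normal(d)
I = np.eye(d)
y = rng.standard_normal(d)            # one observed vector

# MAP: solve (I + sigma^2 Sigma_X^{-1}) x = sigma^2 Sigma_X^{-1} mu_X + y
S_inv = np.linalg.inv(Sigma_X)
x_map = np.linalg.solve(I + sigma2 * S_inv, sigma2 * S_inv @ mu_X + y)

# MMSE closed form: mu_X + Sigma_X (Sigma_X + sigma^2 I)^{-1} (y - mu_X)
x_mmse = mu_X + Sigma_X @ np.linalg.solve(Sigma_X + sigma2 * I, y - mu_X)

# Monte-Carlo check that the estimator reduces the mean squared error
# compared with the trivial estimator T(Y) = Y.
K = 10_000
X = rng.multivariate_normal(mu_X, Sigma_X, size=K)
Y = X + np.sqrt(sigma2) * rng.standard_normal((K, d))
M = Sigma_X @ np.linalg.inv(Sigma_X + sigma2 * I)
X_hat = mu_X + (Y - mu_X) @ M.T       # row-wise application of the estimator
mse_est = np.mean(np.sum((X_hat - X) ** 2, axis=1))
mse_noisy = np.mean(np.sum((Y - X) ** 2, axis=1))
```

With the prior parameters known exactly, the two estimates agree to machine precision, and the Monte-Carlo risk of the estimator is below that of the noisy observation itself.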
Summarizing our findings, we obtain the following empirical estimator $$\label{wichtig} \hat T_{\text{MMSE}}(y) = \hat \mu_Y +( \hat \Sigma_Y - \sigma^2 I_n ) \hat \Sigma_Y^{-1} (y- \hat \mu_Y).$$ \[neg\_cov\] This empirical estimator contains, via the relation $\Sigma_Y = \Sigma_X + \sigma^2 I_n$, the hidden assumption that $ \hat{\Sigma}_X = \hat{\Sigma}_Y - \sigma^2 I_n. $ However, based on the empirical covariance matrix $\hat{\Sigma}_Y$ it is not necessarily ensured that $\hat{\Sigma}_Y - \sigma^2 I_n$ is positive semi-definite and thus a valid covariance matrix. There are different ways to overcome this problem, e.g. replacing negative eigenvalues by a small positive value as for instance proposed in [@RJ11], compare also the discussion in [@LBM13b Section 3.5] or [@Yar12 page 406]. In our numerical experiments we did not observe that this issue had a negative impact on the results. Denoising Using the MMSE Approach {#subsec:denoise_real} --------------------------------- Next we describe how the results of the previous section can be used for image denoising. To this aim, let ${x\colon {\mathcal{G}}\to {\mathbb{R}}}$ be a discrete gray-value image, defined on a grid ${\mathcal{G}}= \{1,\ldots,N_1\}\times \{1,\ldots,N_2\}$. By a slight abuse of notation we also write $x\in {\mathbb{R}}^{N}$, where $N = N_1N_2$, for the columnwise reshaped version of the image. It will always be clear from the context to which notation we refer. We assume that the image is corrupted with white Gaussian noise, i.e., $$y = x + \eta,$$ where $\eta$ is now a realization of $ {\mathcal{N}}(0,\sigma^2 I_{N})$. Based on $y$ we wish to reconstruct the original image $x$.\ We use the fact that natural images are to some extent self-similar, i.e., small similar patches may be found several times in the image, and that for these patches locally a normality assumption holds approximately true, see, e.g. [@ZW2011]. 
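One of the repair strategies mentioned above, clipping negative eigenvalues of $\hat{\Sigma}_Y - \sigma^2 I_n$ to a small positive value, can be sketched as follows (the function name and the floor value are our own choices):

```python
import numpy as np

def clip_to_psd(Sigma_Y_hat, sigma2, floor=1e-10):
    """Estimate Sigma_X from Sigma_Y_hat - sigma^2 I, replacing negative
    eigenvalues by a small positive floor so that the result is a valid
    (positive semi-definite, symmetric) covariance matrix."""
    n = Sigma_Y_hat.shape[0]
    w, V = np.linalg.eigh(Sigma_Y_hat - sigma2 * np.eye(n))
    return (V * np.maximum(w, floor)) @ V.T

# Example: one empirical eigenvalue falls below sigma^2 and would turn negative.
Sigma_X_hat = clip_to_psd(np.diag([2.0, 0.05]), 0.1)
```

The large eigenvalue is left untouched (here $2.0 - 0.1 = 1.9$), while the negative one is replaced by the floor.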
To formalize this idea, consider an $s \times s$ neighborhood (patch) $y_i$ centered at $i = (i_1,i_2)\in \mathcal{G}$, where $s = 2\kappa+1$, $\kappa\in{\mathbb{N}}$. After vectorization this corresponds to a realization of a normally distributed random vector $Y_i\sim {\mathcal{N}}(\mu_i,\Sigma_i)$, where $n = s^2$. This patch is referred to as a reference patch in the following. Similar patches are interpreted as other realizations of ${\mathcal{N}}(\mu_i,\Sigma_i)$. There are several strategies to define similar patches. Take, for example, for a fixed $K\in {\mathbb{N}}$, the $K$ nearest patches with respect to the Euclidean distance in a $w \times w$ search window around $i$, where $w = 2\nu + 1 \gg s$, $\nu \in{\mathbb{N}}$. Let ${\mathcal{S}}(i)$ denote the set of centers of patches similar to $y_i$. Then the estimates of the expectation value and the covariance become $$\hat{\mu}_i = \frac{1}{K} \sum_{k\in{\mathcal{S}}(i)} y_k \qquad {\rm and} \qquad \hat{\Sigma}_i = \frac{1}{K}\sum_{k\in {\mathcal{S}}(i)} (y_k-\hat{\mu}_i)(y_k-\hat{\mu}_i)^{\mathrm{T}}.$$ The obtained estimates are then used to restore the reference patch and all its similar patches with the empirical MMSE estimator as: $$\label{3a} \hat{y}_j = \hat \mu_i + (\hat \Sigma_i-\sigma^2 I_n) \hat \Sigma_i^{-1}(y_j - \hat \mu_i),\qquad j\in{\mathcal{S}}(i).$$ Proceeding as above for all pixels $i\in {\mathcal G}$ yields a variable number of estimates for each pixel. Therewith, the final estimate at pixel $i$ is obtained as an average over all patches containing the pixel $i$ (aggregation). There are some fine-tuning steps that were partly also considered in [@LBM13b; @LBM13]. This is summarized in the following remark. [(Fine tuning steps)]{} \[details\] 1. *Boundaries*: Special attention has to be paid to patches at the boundaries of an image. There are at least two possibilities: either one extends the image, e.g. 
by mirroring, or one considers only patches lying completely inside the image together with appropriately smaller search windows. In our opinion the second strategy is preferable since it does not introduce artificial information. However, it leads to fewer estimates at the boundaries of the image, but we observed that this does not yield visible artifacts in practice. 2. *Flat areas*: Flat areas, where differences between patches are only caused by noise, require special consideration, as it is very likely that the estimated covariance matrix will not have full rank. In this case, the patches are better denoised by only using their mean. Flat areas might be detected using the empirical variance of the patches, which is then close to $\sigma^2$. 3. *Second step*: The similarity of patches and the covariance structure of the patches can be better estimated using the first-step denoised image as an oracle image for a second step. 4. *Acceleration*: To speed up the denoising procedure, each patch that has been used (and therefore denoised at least once) in a group of similar patches is not considered as a reference patch anymore. Nevertheless, it may be denoised several times by being potentially chosen in other groups. The whole denoising procedure is given in Algorithm \[Alg:NL\_real\]. 
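The restoration of one group of similar patches, i.e., the empirical mean, the ML covariance, and the estimator $\hat{y}_j = \hat\mu_i + (\hat\Sigma_i - \sigma^2 I_n)\hat\Sigma_i^{-1}(y_j - \hat\mu_i)$, can be sketched as follows (a NumPy sketch with our own function name; patch search and aggregation are omitted, and a pseudo-inverse stands in for the rank-deficiency handling discussed above):

```python
import numpy as np

def denoise_group(patches, sigma2):
    """Restore a stack of K similar vectorized patches (shape (K, n), n = s^2)
    with the empirical MMSE estimator; a pseudo-inverse guards against a
    rank-deficient empirical covariance (e.g. when K < n)."""
    K, n = patches.shape
    mu = patches.mean(axis=0)                  # empirical mean patch
    D = patches - mu
    Sigma = D.T @ D / K                        # ML covariance estimate
    M = (Sigma - sigma2 * np.eye(n)) @ np.linalg.pinv(Sigma)
    return mu + D @ M.T                        # rows: restored patches

# Toy group: one 3x3 gradient patch observed 60 times under noise.
rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 9)
noisy = truth + 0.2 * rng.standard_normal((60, 9))
restored = denoise_group(noisy, 0.2 ** 2)
```

For this toy group the true covariance of the underlying patches is zero, so the estimator shrinks every patch almost entirely toward the group mean, and the restoration error drops well below the noise level.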
We would like to point out the differences between the two steps, which look at first glance very similar: Step 2 uses the denoised image from Step 1 in order to find similar patches and to estimate the covariance matrix, but reuses the original noisy image for the other computations, i.e, for the mean patch and the restored image.\ **Input:** noisy image $y\in {\mathbb{R}}^{N,d}$, variance $\sigma^2$ of the noise **Output:** first step denoised image $\hat{y}$ and final image $\tilde{y}$ **Parameters:** $s_1,s_2$ sizes of patches, $K_1,K_2$ numbers of similar patches, $\gamma$ homogeneous area parameter, $w_1,w_2$ sizes of search areas **Step 1:** Determine the set ${\mathcal{S}}_1(i)$ of centers of $K_1$ patches similar to $y_i$ in a $w_1\times w_1$ window around $i$ Compute the empirical mean patch, $\hat{\mu}_i = (\hat{\mu}_{i,j})_{j=1}^{s_1^2}$, $$\hat{\mu}_i =\frac{1}{K_1} \sum_{k\in{\mathcal{S}}_1(i)} y_k $$ **Homogeneous area test:** Compute the mean value $\hat{m}_i =\frac{1}{s_1^2} \sum_{j=1}^{s_1^2} \hat{\mu}_{i,j}$ and the empirical variance of the patches $$\hat{\sigma}^2_i =\frac{1}{d K_1 s_1^2} \sum_{k\in{\mathcal{S}}_1(i)} \bigl(y_k- {\mathbf 1}_{s_1^2} \otimes\hat{m}_i\bigr)^{\mathrm{T}}\bigl(y_k- {\mathbf 1}_{s_1^2}\otimes\hat{m}_i\bigr)$$ Compute the restored patches $\hat{y}_k = {\mathbf 1}_{s_1^2}\otimes\hat{m}_i$, $k\in {\mathcal{S}}_1(i)$ Compute the empirical covariance matrix $$\hat{\Sigma}_i =\frac{1}{K_1}\sum_{k\in {\mathcal{S}}_1(i)} (y_k -\hat{\mu}_i)(y_k -\hat{\mu}_i)^{\mathrm{T}}$$ Compute the restored patches $\hat{y}_j = \hat{\mu}_i+ (\hat{\Sigma}_i - \sigma^2 I_{s_1^2}) \hat{\Sigma}_i^{-1} (y_j- \hat{\mu}_i)$, $j\in {\mathcal{S}}_1(i)$ **Aggregation:** Obtain the first estimate $\hat{y}$ at each pixel by computing the average over all restored patches containing the pixel **Step 2:** Determine in a $w_2\times w_2$ window around $i$ the set ${{\mathcal{S}}_2}(i)$ of centers of $K_2$ patches which are similar to patch $\hat{y}_i$ of 
the denoised image in Step 1. Compute the empirical mean patch, $\tilde{\mu}_i = (\tilde{\mu}_{i,j})_{j=1}^{s_2^2}$, $$\tilde{\mu}_i =\frac{1}{K_2} \sum_{k\in{\mathcal{S}}_2(i)} y_k $$ **Homogeneous area test:** Compute the mean value by $\tilde{m}_i =\frac{1}{s_2^2} \sum_{j=1}^{s_2^2} \tilde{\mu}_{i,j}$ and the empirical variance of the patches $$\tilde{\sigma}^2_i =\frac{1}{d K_2 s_2^2} \sum_{k\in{\mathcal{S}}_2(i)} \bigl(y_k- {\mathbf 1}_{s_2^2}\otimes\tilde{m}_i\bigr)^{\mathrm{T}}\bigl(y_k- {\mathbf 1}_{s_2^2}\otimes\tilde{m}_i\bigr)$$ Compute the restored patches $\tilde{y}_j = {\mathbf 1}_{s_2^2} \otimes\tilde{m}_i$, $j\in {\mathcal{S}}_2(i)$ Compute the empirical covariance matrix $$\widetilde{\Sigma}_i =\frac{1}{K_2}\sum_{k\in {\mathcal{S}}_2(i)}(\hat{y}_k -\tilde{\mu}_i)(\hat{y}_k -\tilde{\mu}_i)^{\mathrm{T}}+ \sigma^2 I_{s_2^2}$$ Compute the restored patches $\tilde{y}_j = \tilde{\mu}_i+(\widetilde{\Sigma}_i- \sigma^2 I_{s_2^2})\widetilde{\Sigma}_i^{-1} (y_j- \tilde{\mu}_i)$, $j\in {\mathcal{S}}_2(i)$ **Aggregation:** Obtain the final estimate $\tilde{y}$ at each pixel by computing the average over all restored patches containing the pixel The overall approach can be generalized to images with values in ${\mathbb{R}}^d$, $d > 1$, in a straightforward way, dealing now with $n$-dimensional random vectors, where $n = s^2 d$. In particular, RGB-color images ($d=3$) can be denoised in this way. At this point, considering the three color channels independently usually does not yield good results, as there is a significant correlation between the red, the green, and the blue color channels. This correlation is correctly taken into account in the three-variate setting. As an alternative, Lebrun et al. [[@LBM13]]{} suggested to work in the so-called $Y_o U_o V_o$ color space [[@OKS1980]]{}, which is a variant of the YUV space where the transform from the RGB space is orthogonal and thus does not change the noise statistics. 
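The homogeneous-area test used in both steps can be sketched as follows (the threshold factor plays the role of the homogeneous-area parameter $\gamma$ from the algorithm; its value here is our own illustrative choice):

```python
import numpy as np

def is_flat_area(patches, sigma2, gamma=1.5):
    """Declare a group of vectorized patches (shape (K, n)) homogeneous when
    the empirical variance around the scalar mean is explained by noise
    alone, i.e., stays within a factor gamma of sigma^2."""
    m = patches.mean()
    return np.mean((patches - m) ** 2) < gamma * sigma2

# A flat group (constant patch + noise) versus a structured group (edge + noise).
rng = np.random.default_rng(0)
flat = 0.5 + 0.1 * rng.standard_normal((40, 9))
edge = np.tile(np.repeat([0.0, 1.0], [4, 5]), (40, 1)) \
       + 0.1 * rng.standard_normal((40, 9))
```

For the flat group the empirical variance stays near $\sigma^2 = 0.01$ and the test fires, so the patches would be restored by their mean; the edge group carries genuine signal variance and is passed on to the covariance-based estimator.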
This color system separates geometric from chromatic information and thereby decorrelates the color channels, so that treating them independently does not create noticeable color artifacts as it would be the case in the RGB space. Random Points on Manifolds {#sec:model_random} ========================== Instead of $\mathbb R^d$-valued images we are now interested in images having values in a $d$-dimensional manifold $M$. We start by introducing the necessary notation in Riemannian manifolds. In our numerical examples we will deal with images having components on the $d$-sphere $\mathbb S^d$ equipped with the Euclidean metric of the embedding spaces ${\mathbb{R}}^{d+1}$, $d=1,2$, and the manifold of positive definite $r \times r$-matrices ${\operatorname{SPD}}(r)$, $r=2,3$, with the affine invariant metric. For these manifolds the specific expressions of the following quantities are given in Appendix \[app:ex\_mani\]. Further, we will consider the open probability simplex $\Delta_{d} \subset {\mathbb{R}}_{>0}^{d+1}$, $d=1$, with the Rao-Fisher metric obtained from the categorical distribution, and the hyperbolic manifold $\mathbb H^d$, $d=2$, equipped with the Minkowski metric. Besides many textbooks on differential geometry, the reader may have a look into Pennec’s paper [@Pen06] to get an overview. We adapted our notation to this paper. #### Manifolds If not stated otherwise, let ${\mathcal{M}}$ be a complete, connected $n$-dimensional Riemannian manifold. All of the previously mentioned manifolds are complete, except for the probability simplex. Observe that we will work with $s \times s$ patches of $d$-dimensional manifolds $M$, such that we finally deal with product manifolds ${\mathcal{M}}= M^{s^2}$ of dimension $n = s^2 d$ with the usual product metric. 
By $T_{\bm x} {\mathcal{M}}$ we denote the tangent space of ${\mathcal{M}}$ at $\bm x \in {\mathcal{M}}$ and by $\langle \cdot,\cdot \rangle_{\bm x} \colon T_{\bm x} {\mathcal{M}}\times T_{\bm x} {\mathcal{M}}\rightarrow \mathbb R$ the Riemannian metric. Let $\gamma_{{\bm x},v}(t)$, ${\bm x} \in\mathcal M$, $v\in T_{\bm x} {\mathcal{M}}$, be the geodesic starting from $\gamma_{{\bm x},v}(0) = {\bm x}$ with $\dot\gamma_{{\bm x},v} (0) = v$. Since ${\mathcal{M}}$ is complete, the exponential map $\exp_{\bm x}\colon T_{\bm x} {\mathcal{M}}\rightarrow \mathcal M$ with $$\exp_{\bm x}(v) \coloneqq \gamma_{\bm x,v}(1)$$ is well-defined for every $\bm x \in {\mathcal{M}}$. The exponential map realizes a local diffeomorphism (exponential chart) from a “sufficiently small neighborhood” of the origin $0_{\bm x}$ of $T_{\bm x}{\mathcal{M}}$ into a neighborhood of ${\bm x} \in {\mathcal{M}}$. To make precise how large this “small neighborhood” can be chosen, we follow the geodesic $\gamma_{{\bm x},v}$ from $t=0$ to infinity. It is either minimizing all along, or it is minimizing up to a finite time $t_0$ and no longer afterwards. In the latter case, $\gamma_{{\bm x},v}(t_0)$ is called *cut point* and the corresponding tangent vector $t_0v$ is called *tangential cut point*. The set of all cut points of all geodesics starting from $\bm x$ is the *cut locus* $\mathcal{C}({\bm x})$ and the set of corresponding vectors $\mathcal{C}_T(0_{\bm x})$ the *tangential cut locus*. Then the open domain $\mathcal{D}_T(0_{\bm x})$ around $0_{\bm x}$ bounded by the tangential cut locus is the maximal domain for which the exponential chart at $\bm x$ is injective. It is connected and star-shaped with respect to $0_{\bm x}$ and $ \exp_{\bm x} \mathcal{D}_T (0_{\bm x}) = {\mathcal{M}}\backslash {\cal C} (\bm x) $.
This allows us to define the inverse exponential map as $$\log_{\bm x} \coloneqq \exp_{\bm x}^{-1}\colon {\mathcal{M}}\backslash {\cal C} (\bm x) \to T_{\bm x}\mathcal M.$$ For the $d$-sphere $\mathbb S^d$ the cut locus of ${\bm x}$ is just its antipodal point $-\bm x$. Thus the maximal domain $\mathcal{D}_T(0_{\bm x})$ is the open ball of radius $\pi$ around $0_{\bm x}$, and the tangential cut locus $\mathcal{C}_T(0_{\bm x})$ is its boundary. For Hadamard manifolds, which are complete, simply connected manifolds with non-positive sectional curvature [@B2014], such as ${\operatorname{SPD}}(r)$ or $\mathbb H^d$, we have $\mathcal{D}_T(0_{\bm x}) = T_{\bm x} {\mathcal{M}}$. The Riemannian metric yields a distance function $\operatorname{dist}_{{\mathcal{M}}}\colon {\mathcal{M}}\times {\mathcal{M}}\rightarrow \mathbb R_{\ge 0}$ on the manifold by $ \operatorname{dist}_{\cal M}(\bm x,\bm y) = \langle \log_{\bm x} (\bm y), \log_{\bm x} (\bm y) \rangle_{\bm x}^{\frac12} $ and a measure ${\rm d}_{{\mathcal{M}}} (\bm x)$ written in local coordinates $x = (x^1,\ldots,x^n)$ by ${\rm d}_{{\mathcal{M}}} (\bm x) = \big| G(x) \big|^{\frac12} \, {{\,\mathrm{d}}}x$, where $ G(x) \coloneqq \big( \big\langle \frac{\partial}{\partial x ^i}, \frac{\partial}{\partial x^j} \big\rangle_{\bm x} \big)_{i,j = 1}^n $ and ${{\,\mathrm{d}}}x \coloneqq {{\,\mathrm{d}}}x^1 \ldots {{\,\mathrm{d}}}x^n$.

#### Random Points

Let $(\Omega, {\mathcal{A}}, \mathbb P)$ be a probability space and ${\cal B} ({\mathcal{M}})$ the Borel $\sigma$-algebra on ${\mathcal{M}}$ (with respect to $\operatorname{dist}_{{\mathcal{M}}}$). A measurable map $\bm X\colon \Omega \rightarrow {\mathcal{M}}$ is called a *random point* on ${\mathcal{M}}$. We consider only absolutely continuous random points $\bm X$ with probability density $p_{\bm X}$, i.e., $\mathbb P(\bm X \in B) = \int_B p_{\bm X} ({\bm x}) \, {\rm d}_{{\mathcal{M}}}({\bm x})$ for all $B \in {\cal B} ({\mathcal{M}})$ and $\mathbb P (\bm X \in {\mathcal{M}}) = 1$.
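For concreteness, the maps just introduced can be written down explicitly for $\mathbb S^2 \subset \mathbb R^3$. The following is a minimal Python sketch using the standard sphere formulas (the specific expressions for the manifolds in our experiments are collected in Appendix \[app:ex\_mani\]):

```python
import math

def exp_sphere(x, v):
    # exponential map on S^2: follow the great circle from x in direction v
    t = math.sqrt(sum(c * c for c in v))   # |v| = geodesic distance travelled
    if t < 1e-15:
        return list(x)
    return [math.cos(t) * xc + math.sin(t) * vc / t for xc, vc in zip(x, v)]

def dist_sphere(x, y):
    # geodesic distance = angle between the unit vectors x and y
    c = max(-1.0, min(1.0, sum(a * b for a, b in zip(x, y))))
    return math.acos(c)

def log_sphere(x, y):
    # inverse exponential map, defined for y outside the cut locus {-x}
    c = max(-1.0, min(1.0, sum(a * b for a, b in zip(x, y))))
    theta = math.acos(c)
    if theta < 1e-15:
        return [0.0, 0.0, 0.0]
    s = theta / math.sin(theta)
    return [s * (yc - c * xc) for xc, yc in zip(x, y)]
```

By construction $\exp_{\bm x}(\log_{\bm x}(\bm y)) = \bm y$ and $\lVert\log_{\bm x}(\bm y)\rVert = \operatorname{dist}_{\mathcal M}(\bm x,\bm y)$, which is easily verified numerically.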
The *variance* of $\bm X$ with respect to a given point $\bm y$ is defined as $$\label{variance} \sigma^2_{\bm X} (\bm y) \coloneqq {\mathbb{E}}\bigl( \operatorname{dist}_{{\mathcal{M}}}(\bm X,\bm y)^2\bigr) = \int_{{\mathcal{M}}} \operatorname{dist}_{{\mathcal{M}}} (\bm x, \bm y)^2 \, p_{\bm X}(\bm x) {{\,\mathrm{d}}}_{{\mathcal{M}}} (\bm x),$$ and local minimizers of $\bm y \mapsto \sigma^2_{\bm X} (\bm y)$ are called *Riemannian centers of mass* [@Karcher1977]. For a discussion of the existence and uniqueness of global minimizers, known as *Fréchet expectation* or *means* ${\mathbb{E}}(\bm X)$ of $\bm X$, see, e.g., [@AR11; @Karcher1977; @Kendall1990]. For Hadamard manifolds with curvature bounded from below the Riemannian center of mass exists and is unique. For the spheres ${\mathbb S}^d$, if the support of $p_{\bm X}$ is contained in a geodesic ball of radius $r < \pi/2$, then the Riemannian center of mass is unique within this ball and it is the global minimizer of the variance. In the following we assume that the variance is finite and that the cut locus of any point $\bm y \in {\mathcal{M}}$ has probability measure zero. Then a necessary condition for $\bm \mu$ to be a Riemannian center of mass is $$\label{karcher_mean} \int_{{\mathcal{M}}} \, \log_{\bm \mu} (\bm x) \, p_{\bm X}(\bm x) \, {{\,\mathrm{d}}}_{{\mathcal{M}}} (\bm x) = 0.$$ For Hadamard manifolds with curvature bounded from below this condition is also sufficient. Assuming that the mean $\bm \mu = {\mathbb{E}}(\bm X)$ is known, we define the covariance matrix $\Sigma$ of $\bm X$ (with respect to $\bm \mu$) by $$\label{cov} \Sigma = {\mathbb{E}}\bigl(\log_{\bm \mu} (\bm X) \log_{\bm \mu} (\bm X)^{\mathrm{T}}\bigr) = \int_{{\mathcal{M}}} \log_{\bm \mu} (\bm x) \log_{\bm \mu} (\bm x)^{\mathrm{T}}\, p_{\bm X}(\bm x) \, {{\,\mathrm{d}}}_{{\mathcal{M}}} (\bm x).$$ In practice, typically ${\mathbb{E}}(\bm X)$ and $\Sigma$ are unknown and need to be estimated.
Given observations $\bm x_1,\ldots, \bm x_K \in {\mathcal{M}}$ of a random point $\bm X$, we estimate the mean point by $$\label{karcher_disc} \hat{\bm \mu} \in \operatorname*{arg\,min}_{\bm x \in {\mathcal{M}}} \frac{1}{K} \sum_{k=1}^K \operatorname{dist}_{{\mathcal{M}}}(\bm x,\bm x_k)^2,$$ which is, according to [@BP03], a consistent estimator of $\mathbb E(\bm X)$ and can be computed by a gradient descent algorithm, see, e.g., [@ATV13]. An estimator for the covariance matrix reads as $$\label{cov_empir} \hat{\Sigma} =\frac{1}{K} \sum_{k=1}^K \log_{\hat {\bm \mu}} (\bm x_k) \log_{\hat {\bm \mu}}(\bm x_k)^{\mathrm{T}}.$$

Nonlocal Patch-Based Denoising of Manifold-Valued Images {#sec:model_manifold}
========================================================

In this section we propose an NL-MMSE denoising algorithm for manifold-valued images. To this end, we have to specify what we mean by “normally distributed” random points on manifolds. In contrast to the vector space setting, there does not exist a canonical definition of a normally distributed random vector on a manifold, since various properties characterizing the normal distribution on ${\mathbb{R}}^n$, such as those in Remark \[rem:prop\_gaussian\], cannot be generalized to the manifold setting in a straightforward way. Here we rely on a simple approach which transfers normally distributed zero-mean random vectors on tangent spaces via the exponential map to the manifold. Based on this definition we will see how the NL-MMSE from Section \[sec:model\_real\] carries over to manifold-valued images.

Gaussian Random Points {#gaussian_random_points}
----------------------

In the following, we describe the Gaussian model used in this paper for Hadamard manifolds and spheres and discuss its relation to other models for small variances.
For each tangent space $T_{\bm x} {\mathcal{M}}$ with fixed orthonormal basis $\{e_{{\bm x},i}\}_{i=1}^{n}$ we can identify the element $ \sum_{i=1}^n x^i e_{{\bm x},i} \in T_{\bm x}{\mathcal{M}}$ with the local coordinate vector $x = (x^i)_{i=1}^n \in {\mathbb{R}}^n$, which establishes an isometry between $T_{\bm x} {\mathcal{M}}$ and ${\mathbb{R}}^n$. Note that the expressions in  and  are also basis dependent, but assuming a fixed basis, this dependence is skipped for simplicity of notation. Now, let $\bm \mu \in {\mathcal{M}}$ and let $h\colon \mathbb R^n \rightarrow T_{\bm \mu} {\mathcal{M}}$ be the linear isometric mapping $$\label{def_h} h(x) \coloneqq \sum_{i=1}^n x^i e_{\bm \mu,i} \in T_{\bm \mu}{\mathcal{M}}.$$ Let ${\cal D}_{\bm \mu} \coloneqq h^{-1} \left( {\cal D}_T ( 0_{\bm \mu} ) \right) \subseteq {\mathbb{R}}^n$. Since $\exp_{\bm \mu}$ is continuous, we have for any $B \in {\cal B} ( {\mathcal{M}})$ that $B_n \coloneqq h^{-1} (\log_{\bm \mu} (B)) \subseteq {\cal D}_{\bm \mu}$ is a Borel set and for any integrable function $F$ it holds $$\begin{aligned} \label{rueck} \int_{B} F(\bm x) {{\,\mathrm{d}}}_{{\mathcal{M}}} (\bm x ) &= \int_{B_n} F \left( \exp_{\bm \mu} ( h (x)) \right) \, \big| G(x) \big|^\frac12 \, {{\,\mathrm{d}}}x, \end{aligned}$$ where $ G(x) = \big( \langle {\rm d} (\exp_{\bm \mu})_{h(x)} [e_{\bm \mu,i}], {\rm d} (\exp_{\bm \mu})_{h(x)} [e_{\bm \mu,j}] \rangle \big)_{i,j=1}^n $. Conversely, for any Borel set $B_n \subseteq {\cal D}_{\bm \mu}$ and any integrable function $f$ we see that $B \coloneqq \exp_{\bm \mu}(h(B_n)) \in {\cal B}({\mathcal{M}})$ and $$\begin{aligned} \label{transform} \int_{B_n} f(x) {{\,\mathrm{d}}}x &= \int_B f \big( h^{-1} (\log_{\bm \mu} (\bm x)) \big) \, \big| \tilde G(\bm x) \big|^\frac12 {{\,\mathrm{d}}}_{{\mathcal{M}}} \, (\bm x), \end{aligned}$$ where $ \tilde G(\bm x) = \big( \langle {\rm d} (\log_{\bm \mu})_{\bm x} [e_{\bm x,i}], {\rm d} (\log_{\bm \mu})_{\bm x} [e_{\bm x,j}] \rangle_{\bm \mu} \big)_{i,j=1}^n.
$ If $Z \sim {\cal N}(0,I_n)$ is standard normally distributed on ${\mathbb{R}}^n$ with pdf $p_Z$, then $$\label{definition} \bm Z \coloneqq \exp_{\bm \mu} (h(Z))$$ is a random point on ${\mathcal{M}}$. For Hadamard manifolds, we have ${\cal D}_{\bm \mu} = {\mathbb{R}}^n$ so that for $\bm x \coloneqq \exp_{\bm \mu}(h(x))$, $$\label{radial} \|x\|_2^2 = \langle h(x), h(x) \rangle_{\bm \mu} = \langle \log_{\bm \mu} (\bm x), \log_{\bm \mu} (\bm x) \rangle_{\bm \mu} = {\rm dist}_{{\mathcal{M}}} (\bm \mu,\bm x)^2.$$ Thus, $\bm Z$ has the pdf $$\label{stand_normal} \begin{split} p_{\bm Z} (\bm z) &= p_Z \left( h^{-1} (\log_{\bm \mu} (\bm z)) \right) \lvert\tilde G (\bm z)\rvert^{\frac{1}{2}}\\ &= \frac{1}{(2 \pi)^{n/2}} {\rm e}^{-\frac12 {\rm dist}_{{\mathcal{M}}} (\bm \mu, \bm z)^2 } \lvert\tilde G (\bm z)\rvert^{\frac{1}{2}}. \end{split}$$ Note that by incorporating the factor $\lvert\tilde G (\bm z)\rvert^{\frac{1}{2}}$ into the density function we avoid problems as discussed in [@Jer05]. By construction, it follows directly that the mean of $\bm Z$ is $\bm \mu$ and the covariance is $I_n$. We consider $\bm Z$ as normally distributed on ${\mathcal{M}}$ and write $\bm Z \sim {\mathcal{N}}_{{\mathcal{M}}}(\bm \mu,I_n)$. In other words, $\bm Z$ is normally distributed on ${\mathcal{M}}$ with mean $\bm \mu$ and covariance $I_n$ if $Z \coloneqq h^{-1} (\log_{\bm \mu} (\bm Z))$ is standard normally distributed on $\mathbb R^n$. If ${\cal D}_{\bm \mu} \not = {\mathbb{R}}^n$, as is the case for $d$-spheres, we assume that, up to a set of Lebesgue measure zero, ${\mathbb{R}}^n = {\dot \bigcup}_{j \in \mathcal{J}} {\cal D}_{\bm \mu,j}$, where $\mathcal{J} \subseteq \mathbb Z$ is an index set and ${\cal D}_{\bm \mu,0} \coloneqq {\cal D}_{\bm \mu}$.
Further, we suppose that there are diffeomorphisms $\varphi_j\colon {\cal D}_{\bm \mu,j} \rightarrow {\cal D}_{\bm \mu}$ such that for $x \in {\cal D}_{\bm \mu,j}$ it holds $\exp_{\bm \mu} \left( h (x) \right) = \exp_{\bm \mu} \left( (h \circ \varphi_j) (x) \right)$. Then, in order to obtain the pdf of $\bm Z$ in , we have to replace $p_Z$ in  by the wrapped function $$\begin{aligned} \label{stand_normal_sphere} \tilde p_Z(z) \coloneqq \frac{1}{(2 \pi)^{n/2}} \sum_{j \in {\cal J}} {\rm e}^{-\frac12 \|\varphi_j^{-1} (z)\|_2^2 } \, |{\rm d} \varphi_j^{-1} (z)|, \qquad z \in {\cal D}_{\bm \mu}. \end{aligned}$$ Now, we follow the same lines as in the Euclidean setting and agree that $\bm X$ is normally distributed with mean $\bm \mu$ and positive definite covariance $\Sigma = A A^{\mathrm{T}}$ if $X = h^{-1} (\log_{\bm \mu}(\bm X)) = A Z \sim {\cal N} (0, \Sigma)$, respectively, $$\label{definition_1} \bm X \coloneqq \exp_{\bm \mu} (h(X)), \qquad X \sim {\cal N} (0, \Sigma),$$ and write $\bm X \sim {\cal N}_{{\mathcal{M}}} (\bm \mu, \Sigma)$. The following proposition shows how the pdf of a normally distributed random point looks for various one-dimensional manifolds. \[1d\_manifolds\] The pdf of a random point $\bm X \sim {\cal N}_{\mathcal{M}}(\bm \mu,\sigma^2 I_n)$ is given by 1. the *log-normal distribution* for ${\mathcal{M}}= {\mathbb{R}}_{>0} = \operatorname{SPD}(1)$, $$p_{\bm X}(\bm x) =\frac{1}{\sqrt{2\pi \sigma^2}} {{\mathrm{e}}}^{-\frac{1}{2\sigma^2} ( \ln(\bm x)-\ln(\bm \mu))^2}$$ with respect to the measure ${{\,\mathrm{d}}}_{{\mathbb{R}}_{>0}}(\bm x) = \tfrac{1}{ \bm x} {{\,\mathrm{d}}}\bm x$ on ${\mathbb{R}}_{>0}$; 2.
the $2\pi$-*wrapped Gaussian distribution* for ${\mathcal{M}}= {\mathbb{S}}^1$, $$\begin{aligned} \label{circle} p_{\bm X}(\bm x(t)) &= \frac{1}{\sqrt{2\pi \sigma^2}}\sum_{j\in {\mathbb{Z}}} {{\mathrm{e}}}^{-\frac{1}{2\sigma^2} (t - t_\mu + 2j\pi)^2} \end{aligned}$$ with respect to the parameterization $\bm x(t) = (\cos (t), \sin (t))^{\mathrm{T}}$, $\bm \mu \coloneqq (\cos (t_\mu), \sin (t_\mu))^{\mathrm{T}}$ and the Lebesgue measure ${{\,\mathrm{d}}}t$; 3. the $2\pi$-*wrapped, even shifted Gaussian distribution* for ${\mathcal{M}}= \Delta_{1}$, $$\label{simplex} p_{\bm X}(\bm x(t)) =\frac{1}{\sqrt{2\pi \sigma^2}} \sum_{j \in \mathbb Z} \Big( {{\mathrm{e}}}^{-\frac{1}{2\sigma^2} (t + t_\mu + 2j\pi)^2} + {{\mathrm{e}}}^{-\frac{1}{2\sigma^2} (t - t_\mu + 2j\pi)^2} \Big)$$ with respect to the parameterization $ \bm x(t) = \frac12 ( 1 + \cos (t), 1 - \cos (t) )^{\mathrm{T}}$, $ t \in (0,\pi)$, $ \bm \mu = \frac12 ( 1 + \cos (t_\mu), 1 - \cos (t_\mu) )^{\mathrm{T}}$ and the Lebesgue measure ${{\,\mathrm{d}}}t$. The proof of the proposition is given in Appendix \[app:prop\]. The above definition of normally distributed random points has the advantage that it adopts the affine invariance of the Gaussian distribution known from the Euclidean setting via the tangent space. Moreover, it is easy to sample from the distribution. \[sampling\] Sampling of a ${\cal N}_{\mathcal{M}}(\bm \mu, \sigma^2 I_n)$ distributed random variable can be performed as follows: i) sample from ${\cal N}(0,\sigma^2 I_n)$ in $\mathbb R^n$, ii) apply $h$, which requires only the knowledge of an orthonormal basis in $T_{\bm \mu} {\mathcal{M}}$, and iii) map the result by $\exp_{\bm \mu}$ to ${\mathcal{M}}$. For the one-dimensional manifolds in Proposition \[1d\_manifolds\], the pdfs 1.–3. are the kernels of the heat equations with the corresponding Laplace-Beltrami operators. This is in general not true for higher dimensions [@Grig2009].
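The sampling steps i)–iii) of the remark can be sketched for ${\mathcal{M}}= \mathbb S^2$ with $\bm \mu$ the north pole, so that $e_{\bm \mu,1}=(1,0,0)^{\mathrm{T}}$ and $e_{\bm \mu,2}=(0,1,0)^{\mathrm{T}}$ form an orthonormal basis of $T_{\bm \mu}{\mathcal{M}}$ (a minimal Python sketch):

```python
import math
import random

def sample_gaussian_s2(sigma, rng):
    # i) sample z ~ N(0, sigma^2 I_2) in R^2
    z1, z2 = rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
    # ii) h maps z to the tangent vector z1*e_1 + z2*e_2 at the north pole;
    # iii) push it to the sphere with exp_mu, mu = (0, 0, 1):
    t = math.hypot(z1, z2)        # |v|, the geodesic distance to mu
    if t < 1e-15:
        return (0.0, 0.0, 1.0)
    return (math.sin(t) * z1 / t, math.sin(t) * z2 / t, math.cos(t))
```

For small $\sigma$ the samples concentrate in a geodesic ball around $\bm \mu$, as expected from the wrapped density above.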
However, numerical experiments show that samples from the Gauss-Weierstrass kernel on $\mathbb S^2$ [@FGS1998 p. 112] and from the heat kernel on ${\rm SPD}(r)$ [@terras1988 p. 107] are very similar to samples from our model. For kernel density estimations on special Hadamard spaces we refer also to [@CBA2015], and for kernels in connection with dithering on the sphere we refer to [@GPS2012]. Neither the entropy maximization nor the ML estimation property from Remark \[rem:prop\_gaussian\] generalizes to the above setting. In [@Pen06] Pennec showed that under certain conditions on ${\cal D}_T (0_{\bm \mu})$ the pdf of a random point on ${\mathcal{M}}$ that maximizes the entropy given prescribed mean value $\bm \mu$ and covariance $\Sigma$ is of the form $\frac{1}{\psi} {{\mathrm{e}}}^{- \frac12 (\log_{\bm \mu} (\bm x))^{\mathrm{T}}\tilde{\Sigma} (\log_{\bm \mu} (\bm x))}$ with normalization constant $\psi$. Said and co-workers made use of the ML-estimator property in order to generalize the (isotropic) normal distribution to ${\mathcal{M}}= {\rm SPD}(r)$ in [@SBBM15] and to symmetric spaces of non-compact type in [@SHBV16]. They proposed the following density function for a normal distribution with mean $\bm \mu$ and covariance $\sigma^2 I_n$: $$\label{entropy_min} p_{\bm X}(\bm x) = \frac{1}{\psi(\sigma)}{{\mathrm{e}}}^{-\frac{1}{2\sigma^2}\operatorname{dist}_{{\mathcal{M}}}(\bm \mu, \bm x)^2}.$$ For the concrete definition of $\psi$ and the noise simulation according to this model in the case of ${\mathcal{M}}= \operatorname{SPD}(r)$ see [@SBBM15] and Appendix \[sec:said\]. For $n=1$ the models coincide by Proposition \[1d\_manifolds\]. For this distribution it is clear that the Riemannian center of mass is given by $\bm \mu$ and that the ML estimator for $\bm \mu$ is the empirical Karcher mean.
Figure \[fig:spd\_noise\], left, shows a $100 \times 100$ image of realizations of normally distributed noise on ${\operatorname{SPD}}(2)$ by our model ${\mathcal{N}}_{{\mathcal{M}}}(I_2, \sigma^2 I_3)$, where $\sigma = 0.5$. For comparison, Figure \[fig:spd\_noise\], right, shows the same for realizations of ${\mathcal{N}}_{\mathrm{Said}}(I_2,\sigma^2 I_3)$, also for $\sigma = 0.5$. The noise looks visually very similar, which is also confirmed by Table \[tab:noise\_parameters\]. The two rows of Table \[tab:noise\_parameters\] present the estimated mean $\bm \mu$ based on , the covariance matrix $\Sigma$ based on , and the estimated standard deviation $\sigma$ for the noise model of Said et al. and for ours.

|  | $\bm \mu$ | $\sigma$ | $\Sigma$ |
|:---|:---:|:---:|:---:|
| ${\mathcal{N}}_{\mathrm{Said}}$ | $\begin{pmatrix} 0.9960 & 0.0044\\ 0.0044 & 0.9944 \end{pmatrix}$ | $0.5051$ | $\begin{pmatrix*}[r] 0.2513 & 0.0025 & -0.0044 \\ 0.0025 & 0.2533 & -0.0017 \\ -0.0044 & -0.0017 & 0.2608 \end{pmatrix*}$ |
| ${\mathcal{N}}$ | $\begin{pmatrix} 0.9980 & 0.0036\\ 0.0036 & 0.9943 \end{pmatrix}$ | $0.5026$ | $\begin{pmatrix*}[r] \phantom{-}0.2567 & 0.0017 & 0.0001 \\ 0.0017 & 0.2487 & -0.0008 \\ 0.0001 & -0.0008 & 0.2523 \end{pmatrix*}$ |

Below we give an example for the above pdf  and those in  for the manifold ${\mathcal{M}}= {\mathbb H}^2$. Let ${\mathbb H}^2 \coloneqq \{\bm x \in \mathbb R^3: x_1^2 + x_2 ^2 - x_3^2 = -1, \; x_3 >0\}$ be the hyperbolic manifold equipped with the Minkowski metric $\langle \bm x, \bm y \rangle_{{\mathbb H}^2} \coloneqq x_1y_1 + x_2y_2 - x_3y_3$.
The distance reads as $ {\rm dist}_{{\mathbb H}^2} (\bm x, \bm y) = \operatorname{arcosh}\left(-\langle \bm x, \bm y \rangle_{{\mathbb H}^2} \right) $ and $$\begin{aligned} \exp_{\bm x} (v) &= \cosh \left( \sqrt{\langle v,v \rangle_{{\mathbb H}^2}} \right) \bm x + \sinh \left( \sqrt{\langle v,v \rangle_{{\mathbb H}^2}} \right) \frac{v}{\sqrt{\langle v,v \rangle_{{\mathbb H}^2}}},\\ \log_{\bm x} (\bm y) &= \frac{\operatorname{arcosh}\left(- \langle \bm x, \bm y \rangle_{{\mathbb H}^2} \right)} {\left(\langle \bm x, \bm y \rangle_{{\mathbb H}^2}^2 -1\right)^\frac12} \left(\bm y + \langle \bm x, \bm y \rangle_{{\mathbb H}^2} \bm x\right). \end{aligned}$$ We parametrize $\bm x\in {\mathbb H}^2$ as $$\bm x(\alpha, r) = \begin{pmatrix} \cos(\alpha)\sinh(r)\\ \sin(\alpha)\sinh(r)\\ \cosh(r) \end{pmatrix}, \quad \alpha \in [0,2\pi),\ r \in [0,\infty).$$ First, we compute the pdf of an ${\cal N}_{{\mathbb H}^2}(\bm \mu, \sigma^2 I_2)$ distributed random point, where $\bm \mu \coloneqq (0,0,1)^{\mathrm{T}}$. We obtain $ {\rm dist}_{{\mathbb H}^2} (\bm \mu, \bm x) = r $ and $\{ e_{\bm \mu,1} = (1,0,0)^{\mathrm{T}}, \, e_{\bm \mu,2} = (0,1,0)^{\mathrm{T}}\}$ and for the other points ($r \not = 0$) $$e_{\bm x,1} \coloneqq \frac{1}{\sinh(r)} \frac{\partial }{\partial \alpha} \bm x(\alpha,r) = \begin{pmatrix} -\sin(\alpha)\\ \cos(\alpha)\\ 0 \end{pmatrix}, \quad e_{\bm x,2} \coloneqq \frac{\partial }{\partial r} \bm x(\alpha,r) = \begin{pmatrix} \cos(\alpha) \cosh( r)\\ \sin(\alpha) \cosh( r)\\ \sinh (r) \end{pmatrix}.$$ Then the measure on ${\mathbb H}^2$ reads ${\rm d}_{{\mathbb H}^2} (\bm x) = \sinh( r ) {{\,\mathrm{d}}}\alpha {{\,\mathrm{d}}}r$.
Straightforward computation gives $${{\,\mathrm{d}}}(\log_{\bm \mu})_{\bm x} [e_{\bm x,1}] = \frac{r}{\sinh (r)} \begin{pmatrix} -\sin(\alpha) \\ \cos(\alpha) \\ 0 \end{pmatrix}, \quad {{\,\mathrm{d}}}(\log_{\bm \mu})_{\bm x} [e_{\bm x,2}] = \begin{pmatrix} \cos(\alpha) \\ \sin(\alpha)\\ 0 \end{pmatrix}$$ so that $ |\tilde G(\bm x)|^{\frac{1}{2}} = r/\sinh (r) $. Consequently, the density is $$p_{\bm X} (\bm x(\alpha,r)) = \frac{1}{2 \pi\sigma^2} {{\mathrm{e}}}^{-\frac{r^2}{2\sigma^2}} \, \frac{r}{\sinh (r)}.$$ In contrast, the entropy maximizing pdf is given by $$p_{\bm X} \bigl(\bm x(\alpha,r)\bigr) = \frac{1}{\psi} {{\mathrm{e}}}^{-\frac{r^2}{2 \sigma^2}}, \quad \psi \coloneqq 2\pi\int_0^\infty {{\mathrm{e}}}^{-\frac{r^2}{2\sigma^2}} \sinh(r) {{\,\mathrm{d}}}r\\ = 2 \pi \sigma {{\mathrm{e}}}^{\frac{\sigma^2}{2}} \int_0^{\sigma}{{\mathrm{e}}}^{-\frac{t^2}{2}} {{\,\mathrm{d}}}t.$$ Besides the kernels of the heat equation, the *von Mises-Fisher distribution* is frequently considered as “spherical normal distribution” on ${\mathbb{S}}^d$. We briefly comment on this distribution. \[relation\_FM\] For ${\mathbb{S}}^1$ it is well-known that the wrapped Gaussian distribution is closely related to the von Mises distribution $M(\bm \mu,\kappa)$ [@GGD1953; @langevin1905; @mises1918] whose density function reads as $$p_{{\mathrm{MF}}} (\bm x|\bm \mu,\kappa) = \frac{1}{2\pi I_0(\kappa)} \,{{\mathrm{e}}}^{\kappa \cos(\bm x- \pi -\bm\mu)}, \qquad \bm x \in [-\pi,\pi), \label{pdf_MF}$$ where $I_n$ denotes the modified Bessel function of the first kind and order $n$. The parameter $\bm\mu$ is referred to as *mean direction* and $\kappa > 0 $ is the *concentration parameter*. The von Mises distribution is the distribution that maximizes the entropy under the constraint that the real and imaginary parts of the first circular moment (or, equivalently, the circular mean and circular variance) are specified.
The maximum likelihood characterization is analogous to the one given in Remark \[rem:prop\_gaussian\], where the sample mean is replaced by the *sample mean direction*. A good matching between the pdfs of the wrapped Gaussian and those of the von Mises for high concentration (i.e., large $\kappa$ or, respectively, small $\sigma^2$) can be found by taking the same center $\bm\mu$ and $\sigma^2 =-2\log\bigl(A(\kappa)\bigr)$ with $A(\kappa) = \frac{I_1(\kappa)}{I_0(\kappa)}$, see [@MJ2000].

The von Mises distribution on ${\mathbb{S}}^1$ can be generalized to ${\mathbb{S}}^{d}$, leading to the *von Mises-Fisher distribution* given by $$p_{{\mathrm{MF}}} (\bm x|\bm \mu,\kappa) = \left(\frac{\kappa}{2}\right)^{\frac{d-1}{2}} \frac{1}{\Gamma\bigl(\tfrac{d+1}{2}\bigr) I_{\frac{d-1}{2}}(\kappa) } \,{{\mathrm{e}}}^{\kappa \bm\mu^{\mathrm{T}}\bm x},\label{pdf_vonMises}$$ where $\kappa >0$, $\lVert \bm\mu \rVert = 1$ and $\Gamma$ denotes the gamma function. For $d = 2$, the von Mises-Fisher distribution is also known as *Fisher distribution* and the pdf simplifies to $$p_{{\mathrm{MF}}} (\bm x|\bm \mu,\kappa)= \frac{\kappa }{\sinh(\kappa)}\,{{\mathrm{e}}}^{\kappa \bm\mu^{\mathrm{T}}\bm x}.$$ In Figure \[Fig:noise\_S2\] we compare samples of our Gaussian noise model ${\mathcal{N}}_{{\mathcal{M}}}(\bm\mu,\sigma^2 I_2)$ with the von Mises-Fisher distribution $M(\bm\mu,\kappa)$ on ${\mathbb{S}}^2$ for $\bm\mu = (0,0,1)^{\mathrm{T}}$ and $\sigma^2 = \frac{1}{\kappa}$.
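As a numerical sanity check of the closed form for the normalization constant $\psi$ in the $\mathbb H^2$ example above, both sides can be compared by simple trapezoidal quadrature (Python sketch; truncating the infinite integral at $r = 40$ is an assumption that is harmless for the tested values of $\sigma$):

```python
import math

def psi_quadrature(sigma, r_max=40.0, n=200000):
    # trapezoidal rule for 2*pi * int_0^inf exp(-r^2/(2 sigma^2)) sinh(r) dr
    h = r_max / n
    total = 0.0
    for k in range(n + 1):
        r = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(-r * r / (2.0 * sigma * sigma)) * math.sinh(r)
    return 2.0 * math.pi * h * total

def psi_closed_form(sigma, n=20000):
    # 2*pi*sigma*exp(sigma^2/2) * int_0^sigma exp(-t^2/2) dt
    h = sigma / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(-t * t / 2.0)
    return 2.0 * math.pi * sigma * math.exp(sigma * sigma / 2.0) * h * total
```

Both evaluations agree to high relative accuracy for moderate $\sigma$, confirming the completion-of-squares computation behind the closed form.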
NL-MMSE on Manifold-Valued Images
---------------------------------

Assume that $\bm Y = \exp_{\bm\mu}(Y)$ is a random point on ${\mathcal{M}}$ arising from a normally distributed random point $\bm X = \exp_{\bm\mu}(X) \sim {\cal N}_{\mathcal{M}}(\bm\mu,\Sigma_X)$ in the sense $$Y = X + \eta,\qquad X\sim {\mathcal{N}}(0,\Sigma_X), \quad \eta\sim {\mathcal{N}}(0,\sigma^2I_n),$$ where by slight abuse of notation we write $\exp_{\bm \mu}$ and $\log_{\bm \mu} $ instead of $ \exp_{\bm \mu} \circ h$ and $ h^{-1} \circ \log_{\bm \mu}$, respectively, as also done in Subsection \[gaussian\_random\_points\]. In the following we propose an estimator for $\bm X$ based on $\bm Y = \exp_{\bm\mu} (Y)$, which arises from a two-step estimation procedure and is motivated by the Euclidean MMSE described in Section \[Sec:MMSE\]. In order to avoid technical difficulties we restrict our attention to Hadamard manifolds ${\mathcal{M}}$, so that the exponential and logarithmic maps are globally defined and the Riemannian center of mass exists and is uniquely determined.

In the first step, we estimate the mean $\bm\mu = {\mathbb{E}}(\bm X)$ as $$\begin{aligned} \label{man_first} \bm \mu = \operatorname*{arg\,min}_{\bm Z\in \sigma(\{\emptyset,\Omega\})} {\mathbb{E}}\bigl[\operatorname{dist}_{\mathcal{M}}(\bm X,\bm Z)^2\bigr]. \end{aligned}$$ Note that this corresponds to the definition given in , since those random variables $\bm Z$ that are measurable with respect to the trivial $\sigma$-algebra $\{\emptyset,\Omega\}$ are exactly the constant random variables. By construction we have ${\mathbb{E}}(\bm X) = {\mathbb{E}}(\bm Y)$.
Once $\bm \mu$ is known, we estimate $X$ by the MMSE estimator $$\begin{aligned} \label{man_sec} T_{\text{MMSE}}(\bm Y)&= \operatorname*{arg\,min}_{\log_{\bm \mu} (\bm Z) \in \sigma ( \log_{\bm \mu} (\bm Y))} {\mathbb{E}}\bigl[\lVert \log_{\bm \mu}(\bm X) -\log_{\bm \mu}(\bm Z) \rVert_2^2\bigr]\\ & = \operatorname*{arg\,min}_{Z\in \sigma(Y)} {\mathbb{E}}\bigl[\lVert X-Z\rVert_2^2\bigr] = {\mathbb{E}}(X|Y). \end{aligned}$$ In our specific Gaussian noise setting we are now in the same situation as described after Remark \[rem:prop\_gaussian\], so that by combining both steps we finally arrive at the estimator $$T(\bm Y) = \exp_{{\bm \mu}} \bigl( ( \Sigma_Y-\sigma^2 I_n) \Sigma_Y^{-1} \, \log_{ {\bm \mu}}( \bm Y) \bigr).$$ Next, we describe how to estimate $\bm \mu$ and $\Sigma_Y$ based on samples. To this end, let $x\colon {\mathcal{G}}\to M$ be a discrete image defined on a grid ${\mathcal{G}}= \{1,\ldots,N_1\}\times \{1,\ldots,N_2\}$ with values in a $d$-dimensional manifold $M$. As for real-valued images, we consider small $s \times s$ image patches centered at $i = (i_1,i_2)\in {\cal G}$. We assume that the patch $\bm y_i$ corresponds to a realization of a normally distributed random point $\bm Y_i \sim {\mathcal{N}}_{{\mathcal{M}}}({\bm \mu}_i,\Sigma_i)$ on ${\mathcal{M}}$, where ${\mathcal{M}}= M^{s^2}$ is the product manifold of dimension $n = s^2 d$ equipped with the distance ${\rm dist}_{{\mathcal{M}}}^2 (\bm x, \bm y) = \sum_{j=1}^{s^2} \operatorname{dist}_M (\bm x_j,\bm y_j)^2$. We fix $K\in {\mathbb{N}}$ and take the $K$ nearest patches with respect to $\operatorname{dist}_{{\mathcal{M}}}$ in a $w \times w$ search window around $i$. These patches are interpreted as other realizations of the same random point. Let ${\mathcal{S}}(i)$ denote the set of centers of the patches similar to $\bm y_i$.
Then the empirical estimates for the mean and the covariance in  and , respectively, read as $$\begin{aligned} \hat{{\bm \mu}}_i \in \operatorname*{arg\,min}_{{\bm \mu} \in {\mathcal{M}}} \sum_{j \in {\mathcal{S}}(i)} \operatorname{dist}_{\mathcal{M}}({\bm \mu}, {\bm y}_j)^2, \qquad \hat{\Sigma}_i = \frac{1}{K}\sum_{j \in {\mathcal{S}}(i)} \log_{\hat{{\bm \mu}}_i}(\bm y_j) \log _{\hat{{\bm \mu}}_i}({\bm y}_j)^{\mathrm{T}}, \end{aligned}$$ and are used to restore the reference patch and all its similar patches by $$\label{MMSE_manifold} \hat{\bm y}_j = \exp_{\hat {\bm \mu}_i} \bigl( (\hat \Sigma_i-\sigma^2 I_n) \hat \Sigma_i^{-1}(\log_{\hat {\bm \mu}_i}(\bm y_j)) \bigr), \qquad j\in{\mathcal{S}}(i).$$ This can be considered as the manifold counterpart to . With slight modifications the fine-tuning details listed in Remark \[details\] can be generalized to manifolds. The treatment of patches at the boundaries, the acceleration, and the second step are analogous to the real-valued case; only for flat areas and in the aggregation step one has to replace the empirical variance and the mean, respectively, by their manifold counterparts. The two steps of the algorithm are summarized in Algorithm \[Alg:NL\_M\].
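The Karcher means required throughout are computed by gradient descent, iterating $\bm \mu \leftarrow \exp_{\bm \mu}\bigl(\frac{1}{K}\sum_{j\in{\mathcal{S}}(i)} \log_{\bm \mu}(\bm y_j)\bigr)$. A minimal self-contained Python sketch on $\mathbb S^2$ (for illustration only; our experiments use general manifold implementations):

```python
import math

def _inner(a, b):
    return sum(x * y for x, y in zip(a, b))

def exp_s2(x, v):
    # exponential map on the unit sphere S^2
    t = math.sqrt(_inner(v, v))
    if t < 1e-15:
        return list(x)
    return [math.cos(t) * a + math.sin(t) * b / t for a, b in zip(x, v)]

def log_s2(x, y):
    # inverse exponential map on S^2 (y outside the cut locus of x)
    c = max(-1.0, min(1.0, _inner(x, y)))
    th = math.acos(c)
    if th < 1e-15:
        return [0.0, 0.0, 0.0]
    s = th / math.sin(th)
    return [s * (b - c * a) for a, b in zip(x, y)]

def karcher_mean(points, iters=100):
    # gradient descent: mu <- exp_mu(average of log_mu(points))
    mu = list(points[0])
    for _ in range(iters):
        g = [sum(log_s2(mu, p)[i] for p in points) / len(points)
             for i in range(3)]
        mu = exp_s2(mu, g)
    return mu
```

At convergence the necessary optimality condition $\sum_{j} \log_{\bm\mu}(\bm y_j) = 0$ holds up to machine precision, mirroring the condition for the Riemannian center of mass.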
**Input:** noisy image $\bm y\in M^{N}$, variance $\sigma^2$ of the noise
**Output:** first-step denoised image $\hat{\bm y}$ and final image $\tilde{\bm y}$
**Parameters:** $s_1,s_2$ sizes of patches, $K_1,K_2$ numbers of similar patches, $\gamma$ homogeneous area parameter, $w_1,w_2$ sizes of search area
**Step 1:** Set $\mathcal{M} = M^{s_1^2}$
Determine the set ${\mathcal{S}}_1(i)$ of centers of $K_1$ patches similar to $\bm y_i$ in a $w_1\times w_1$ window around $i$ with respect to the distance measure on the manifold
Compute by a gradient descent algorithm the Karcher mean patch, $\hat{\bm \mu}_i = (\hat{\mu}_{i,j})_{j=1}^{s_1^2}$, $$\begin{aligned} \hat{\bm \mu}_i &\in \operatorname*{arg\,min}\limits_{\bm y\in {\mathcal{M}}} \biggl\{\frac{1}{K_1} \sum_{j\in{\mathcal{S}}_1(i)} \operatorname{dist}_{{\mathcal{M}}}(\bm y,\bm y_j)^2\biggr\} \end{aligned}$$
**Homogeneous area test:** Compute by a gradient descent algorithm the Karcher mean value $ \hat{\bm m}_i \in \operatorname*{arg\,min}\limits_{\bm y\in M} \biggl\{\frac{1}{K_1 s_1^2} \sum_{j\in{\mathcal{S}}_1(i)}\sum_{k=1}^{s_1^2} \operatorname{dist}_{M}(\bm y,\bm y_{j,k})^2\biggr\} $ and the empirical variance of the patches $$\hat{\sigma}^2_i = \frac{1}{d K_1 s_1^2}\sum_{j \in {\mathcal{S}}_1(i)} \sum_{k=1}^{s_1^2}\operatorname{dist}_{M}(\hat{\bm m}_i,\bm y_{j,k})^2$$
Compute the restored patches as $\hat{\bm y}_j = \mathbf{1}_{s_1^2}\otimes \hat{\bm m}_i$, $j\in {\mathcal{S}}_1(i)$
Compute the empirical covariance matrix $$\hat{\Sigma}_i =\frac{1}{K_1}\sum_{j\in {\mathcal{S}}_1(i)} \log_{\hat{\bm \mu}_i}(\bm y_j)\log_{\hat{\bm \mu}_i}(\bm y_j)^{\mathrm{T}}$$
Compute the restored patches $\hat{\bm y}_j = \exp_{\hat{\bm \mu}_i}\bigl( ( \hat{\Sigma}_i- \sigma^2 I_{s_1^2}) \hat{\Sigma}_i^{-1} \log_{\hat{\bm \mu}_i}(\bm y_j)\bigr)$, ${j\in {\mathcal{S}}_1(i)}$
**Aggregation:** Obtain the first estimate $\hat{\bm y}$ at each pixel by computing the Karcher mean over all restored patches containing the pixel
**Step 2:** Set
$\mathcal{M} = M^{s_2^2}$
Determine the set ${\mathcal{S}}_2(i)$ of centers of $K_2$ patches similar to the patch $\hat{\bm y}_i$ of the denoised image from the first step in a $w_2\times w_2$ window around $i$
Compute the Karcher mean patch, $\tilde{\bm \mu}_i = (\tilde{\mu}_{i,j})_{j=1}^{s_2^2}$, $$\begin{aligned} \tilde{\bm \mu}_i &\in \operatorname*{arg\,min}\limits_{\bm y\in {\mathcal{M}}} \biggl\{\frac{1}{K_2} \sum_{j\in{\mathcal{S}}_2(i)} \operatorname{dist}_{{\mathcal{M}}}(\bm y,\bm y_j)^2\biggr\} \end{aligned}$$
**Homogeneous area test:** Compute the Karcher mean value $ \tilde{\bm m}_i \in \operatorname*{arg\,min}\limits_{\bm y\in M} \biggl\{\frac{1}{K_2 s_2^2} \sum_{j\in{\mathcal{S}}_2(i)}\sum_{k=1}^{s_2^2} \operatorname{dist}_{M}(\bm y,\bm y_{j,k})^2\biggr\} $ and the empirical variance of the patches $$\tilde{\sigma}^2_i = \frac{1}{d K_2 s_2^2}\sum_{j \in {\mathcal{S}}_2(i)} \sum_{k=1}^{s_2^2}\operatorname{dist}_{M}(\tilde{\bm m}_i,\bm y_{j,k})^2$$
Compute the restored patches $\tilde{\bm y}_j = \mathbf{1}_{s_2^2}\otimes \tilde{\bm m}_i$, $j\in {\mathcal{S}}_2(i)$
Compute the empirical covariance matrix $$\widetilde{\Sigma}_i =\frac{1}{K_2}\sum_{j\in {\mathcal{S}}_2(i)} \log_{\tilde{\bm \mu}_i}(\hat{\bm y}_j)\log_{\tilde{\bm \mu}_i}(\hat{\bm y}_j)^{\mathrm{T}}+ \sigma^2 I_{s_2^2}$$
Compute the restored patches $\tilde{\bm y}_j = \exp_{\tilde{\bm \mu}_i}\bigl((\widetilde{\Sigma}_i- \sigma^2 I_{s_2^2}) \widetilde{\Sigma}_i^{-1} \log_{\tilde{\bm \mu}_i}(\bm y_j)\bigr)$, ${j\in {\mathcal{S}}_2(i)}$
**Aggregation:** Obtain the final estimate $\tilde{\bm y}$ at each pixel by computing the Karcher mean over all restored patches containing the pixel

Numerical Results {#sec:numerics}
=================

In this section we provide numerical examples to illustrate the good performance of the NL-MMSE Algorithm \[Alg:NL\_M\]. As manifolds we consider the circle ${\mathbb{S}}^1$, the sphere ${\mathbb{S}}^2$, and the positive definite matrices $\operatorname{SPD}(r)$ for $r=2,3$.
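For the $\operatorname{SPD}$ experiments, the affine-invariant primitives are given by the standard formulas $\exp_X(V) = X^{1/2}\operatorname{Expm}(X^{-1/2} V X^{-1/2})X^{1/2}$, $\log_X(Y) = X^{1/2}\operatorname{Logm}(X^{-1/2} Y X^{-1/2})X^{1/2}$ and $\operatorname{dist}(X,Y) = \lVert \operatorname{Logm}(X^{-1/2} Y X^{-1/2})\rVert_F$. For $r=2$ these can be sketched in plain Python via a closed-form symmetric eigendecomposition (a sketch for illustration; the experiments use optimized C++ implementations):

```python
import math

def sym_eig2(M):
    # eigendecomposition of a symmetric 2x2 matrix [[a, b], [b, c]]
    a, b, c = M[0][0], M[0][1], M[1][1]
    disc = math.hypot(a - c, 2.0 * b)
    l1, l2 = (a + c + disc) / 2.0, (a + c - disc) / 2.0
    if abs(b) < 1e-15:
        V = [[1.0, 0.0], [0.0, 1.0]] if a >= c else [[0.0, 1.0], [1.0, 0.0]]
    else:
        n = math.hypot(l1 - c, b)
        u = [(l1 - c) / n, b / n]          # unit eigenvector for l1
        V = [[u[0], -u[1]], [u[1], u[0]]]  # columns: eigenvectors for l1, l2
    return (l1, l2), V

def sym2_apply(f, M):
    # apply f to the eigenvalues: V diag(f(l1), f(l2)) V^T
    (l1, l2), V = sym_eig2(M)
    d = [f(l1), f(l2)]
    return [[sum(V[i][k] * d[k] * V[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def log_spd2(X, Y):
    Xh, Xih = sym2_apply(math.sqrt, X), sym2_apply(lambda t: t ** -0.5, X)
    L = sym2_apply(math.log, mul2(mul2(Xih, Y), Xih))
    return mul2(mul2(Xh, L), Xh)

def exp_spd2(X, V):
    Xh, Xih = sym2_apply(math.sqrt, X), sym2_apply(lambda t: t ** -0.5, X)
    E = sym2_apply(math.exp, mul2(mul2(Xih, V), Xih))
    return mul2(mul2(Xh, E), Xh)

def dist_spd2(X, Y):
    Xih = sym2_apply(lambda t: t ** -0.5, X)
    L = sym2_apply(math.log, mul2(mul2(Xih, Y), Xih))
    return math.sqrt(sum(L[i][j] ** 2 for i in range(2) for j in range(2)))
```

The affine invariance $\operatorname{dist}(X,Y)=\operatorname{dist}(AXA^{\mathrm{T}}, AYA^{\mathrm{T}})$ for invertible $A$ provides a convenient correctness check.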
While Algorithm \[Alg:NL\_M\] is implemented in <span style="font-variant:small-caps;">Matlab</span>, the basic manifold functions, like the logarithmic and exponential maps, as well as the distance function are implemented as C++ functions in the [Manifold-valued Image Restoration Toolbox (MVIRT)](http://www.mathematik.uni-kl.de/imagepro/members/bergmann/mvirt/)[^3] and imported into <span style="font-variant:small-caps;">Matlab</span> using `mex`-interfaces with the GCC 4.8.4 compiler. The experiments are carried out on a Dell Precision T1500 running Ubuntu 14.04 LTS, Core i7, 2.93 GHz, and 8 GB RAM, using <span style="font-variant:small-caps;">Matlab</span> 2014b. To compare different methods we used as performance measure the mean squared error $$\epsilon = \frac{1}{N}\sum_{i\in\mathcal{G}} \operatorname{dist}_M(\hat{\bm x}_i,\bm x_i)^2,$$ where $\bm x$ denotes the original image and $\hat{\bm x}$ the restored one. The parameters of all involved algorithms were optimized with respect to this error measure on the grids detailed below. We compared the following denoising methods: 1. **NL-MMSE**: we implemented Algorithm \[Alg:NL\_M\] with parameters from the following grid search (in <span style="font-variant:small-caps;">Matlab</span> notation): patch size $s\in\{3:2:11\}$, window size $w\in\{9:2:127\}$, number of neighbors $K\in\{1:1:1200\}$, and $\gamma\in\{0.1:0.1:2\}$. We would like to mention that we started on coarser grids and refined them during the parameter search. We briefly comment on general guidelines for the parameters in the following subsection. The final parameters for our experiments are listed in Table \[tab:algparam\]. Note that in the first three experiments we optimized only one set of parameters, i.e., they are the same in both steps, while the parameters for both steps are optimized separately for the last three examples. 2. **NL-means**: we implemented a generalization of the NL-means algorithm [@BCM2005; @S2010] for manifold-valued images.
Since this algorithm is not available in the general form required for our noisy images, we describe it in the next subsection. Concerning the grid search we used the same grids as in (i) for $s,w,K$. Further, $\delta$ is optimized on $\{0.5:0.5:50\}$, and $\tau$ on $\{0.1:0.1:1\}$.

3. [**TV approach**]{}: we applied the manifold version of the variational denoising approach with ${\rm dist}_{\mathcal{M}}^2$ as data fidelity term and an anisotropic discrete total variation (TV) term as proposed in [@WDS2014]. Furthermore, for cyclic data we also added a second regularization term to (iii), called $\operatorname{TV}_2$, which is a manifold version of second order differences. This method was proposed for the circle in [@BLSW14] and for more general symmetric spaces in [@BBSW2015], and we used the corresponding programs. Using the notation from [@BBSW2015], we did a grid search for the regularization parameter $\alpha$ of the $\operatorname{TV}$ term in $\{0.01:0.01:1\}$ and for the regularization parameter $\beta$ of the $\operatorname{TV}_2$ term in $\{0.1:0.1:5\}$. The main drawback of the variational methods is their long running time compared to (i) and (ii).

  Figure                                  $s_1$   $s_2$   $w_1$   $w_2$   $K_1$    $K_2$    $\gamma$
  --------------------------------------- ------- ------- ------- ------- -------- -------- ----------
  Figure \[fig:corals:hue\_denoised\]     $7$     $7$     $81$    $81$    $70$     $70$     $1$
  Figure \[fig:s2coral:denoisedchroma\]   $5$     $5$     $37$    $37$    $110$    $110$    $1$
  Figure \[fig:spd3\]                     $5$     $5$     $59$    $59$    $415$    $415$    $.8$
  Figure \[fig:spd2\]                     $9$     $9$     $115$   $115$   $1038$   $1038$   $1$
  Figure \[fig:arts1\]                    $9$     $7$     $119$   $123$   $186$    $86$     $1.1$
  Figure \[fig:arts2\]                    $3$     $5$     $127$   $127$   $65$     $54$     $0.8$

  : Parameters for the NL-MMSE Algorithm \[Alg:NL\_M\] in the examples.[]{data-label="tab:algparam"}

Parameter Selection {#subsec:parameters}
-------------------

Our algorithm requires several input parameters.
Besides the variance $\sigma^2$ of the noise (which is assumed to be known or otherwise may be estimated in constant areas), these are the size of the patches and of the search zone as well as the number of similar patches that are kept and a parameter for the homogeneous area criterion. Even though it is not possible to state parameter constellations that are valid for all manifolds, there are some general principles for choosing good parameters. Based on these principles we may obtain a first set of parameters, which may then be fine-tuned by varying one of them while keeping the rest fixed.

1. patch size $s$: In general, the considered patches are rather small ($s\in \{3,5,7\}$); the exact value depends on the amount of noise, measured in terms of the variance $\sigma^2$. The higher the noise level, the larger the patches should be, since patches carry less information when the noise level is high.

2. window size $w$: The size of the search zone depends on the one hand on the patch size and on the other hand on the number of similar patches, which itself depends on the dimension of the manifold. The larger those values are, the larger the search zone should be.

3. number of similar patches $K$: The number of similar patches has to be large enough to guarantee that the estimated covariance matrix is invertible with high probability, which depends on the patch size and on the dimension $d$ of the manifold. On the other hand, it should not be too large, as otherwise non-similar patches are chosen as well. As a rule of thumb we observed that $K = 3 s^2 d$ yields good results in practice.

4. homogeneous area parameter $\gamma$: This value should be close to $1$, and it should be larger the more constant areas an image contains.

Nonlocal Means on Manifolds {#subsec:nl_manifold}
---------------------------

In this section we briefly discuss how to generalize the NL-means approach introduced in [@BCM2005] to manifolds.
Apart from implementation details, the fundamental difference to the NL-MMSE is that the latter incorporates second order information. Let $y\colon\mathcal{G}\rightarrow M$ be a noisy manifold-valued image. Consider an $s \times s$ patch $y_i \in M^{s^2}$ centered at $i = (i_1,i_2)\in {\mathcal G}$. For each $i\in\mathcal{G}$ we denote by $\mathcal{S}(i)$ the set of centers of the $K$ patches most similar to $y_i$, selected in a $w\times w$ search window around $i$. Similar patches are found with respect to a weighted distance on the product manifold, i.e., $$\widetilde{\operatorname{dist}}_{M^{s^2}}(y_i,y_j)^2 \coloneqq \sum_{k_1=-\lfloor\frac{s-1}{2}\rfloor}^{\lfloor\frac{s-1}{2}\rfloor} \sum_{k_2=-\lfloor\frac{s-1}{2}\rfloor}^{\lfloor\frac{s-1}{2}\rfloor} {{\mathrm{e}}}^{-\frac{1}{2\delta^2}(k_1^2+k_2^2)} \operatorname{dist}_M(y_{i_1+k_1,i_2+k_2},y_{j_1+k_1,j_2+k_2})^2, \label{eq:nl:sim}$$ where $\delta > 0$, $y_i\in M^{s^2}$ denotes the whole patch and $y_{i_1,i_2}\in M$ denotes a pixel value.\
The aggregation step is done by averaging the patch centers, weighted by the distance of the patches, i.e., let $$\begin{aligned} \label{eq:nlweights} \omega_{i,j} = \begin{cases} {{\mathrm{e}}}^{-\frac{1}{2\tau^2}\widetilde{\operatorname{dist}}_{M^{s^2}}(y_i,y_j)^2}&\ i\neq j,\\ \max_{j\in\mathcal{S}(i),j\neq i}\{\omega_{i,j}\}&\ i = j, \end{cases}\quad\text{and}\qquad W_i = \sum_{j\in\mathcal{S}(i)} \omega_{i,j}. \end{aligned}$$ Then the restored pixel value is given by $$\hat{y}_{i} = \operatorname*{arg\,min}_{y\in M} \biggl\{\frac{1}{W_i}\sum_{j\in\mathcal{S}(i)}\omega_{i,j} \operatorname{dist}_M(y,y_{j})^2\biggr\}.\label{eq:nl:mean}$$ For the weight of the center patch we use the maximal weight approach, which was proposed in [@S2010] as the best choice without introducing an extra parameter. Let us briefly comment on two related approaches. An NL-means denoising algorithm for DT-MRI images was given in [@WPCMB07].
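As an illustration, the scheme just described can be condensed into a one-dimensional sketch on the circle $M = {\mathbb{S}}^1$ (angles in radians). The restriction to 1D signals, the simple gradient descent for the weighted Karcher mean, and all function names are our own simplifications, not the referenced implementations:

```python
import numpy as np

def wrap(a):
    """Wrap angles to the fundamental domain [-pi, pi)."""
    return np.mod(a + np.pi, 2 * np.pi) - np.pi

def karcher_mean_s1(angles, weights, iters=50):
    """Weighted Karcher mean on S^1 by intrinsic gradient descent:
    m <- exp_m( sum_j w_j log_m(y_j) / W ), with log_m(y) = wrap(y - m)."""
    m = angles[0]
    for _ in range(iters):
        m = m + np.average(wrap(angles - m), weights=weights)
    return wrap(m)

def nl_means_s1(y, s=3, tau=0.3, delta=1.0):
    """Minimal 1D NL-means for a circle-valued signal: patch similarity
    by the Gaussian-weighted geodesic distance, center-pixel restoration
    as the weighted Karcher mean of the candidate patch centers."""
    n, h = len(y), s // 2
    ks = np.arange(-h, h + 1)
    g = np.exp(-ks ** 2 / (2 * delta ** 2))      # spatial weights inside a patch
    out = y.copy()                               # border pixels stay untouched
    for i in range(h, n - h):
        d2 = np.array([np.sum(g * wrap(y[i + ks] - y[j + ks]) ** 2)
                       for j in range(h, n - h)])
        w = np.exp(-d2 / (2 * tau ** 2))
        c = i - h                                # index of the center patch
        w[c] = np.max(np.delete(w, c))           # maximal-weight rule for i = j
        out[i] = karcher_mean_s1(y[h:n - h], w)
    return out
```

For a constant signal all weights coincide and the signal is reproduced exactly, which is a convenient sanity check; the maximal-weight rule for the center avoids the extra parameter mentioned above.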
The authors use the affine invariant distance on $\operatorname{SPD}(r)$ as similarity measure and the log-Euclidean mean for computing the mean, while we perform both steps with the same affine invariant distance measure. The authors of [@PBDW08] introduce a semi-local method for denoising manifold-valued data motivated by the corresponding variational method for real-valued data. Their method performs an iterative averaging over circularly shaped neighborhoods with weights depending on the pixel similarity and distances between the pixels on the image grid. In contrast to [@PBDW08] we consider patches around the pixels for computing their similarity.

Noise on Color Channels {#subsec:color_channels}
-----------------------

Manifold-valued images naturally appear in various color image models different from the RGB model. In the following, we consider the hue-saturation-value (HSV) and the chromaticity-brightness (CB) color models. We added Gaussian noise to the hue and the chromaticity channels. We are aware that natural color images are in general not corrupted by Gaussian noise in only one color channel. However, we provide these academic examples as a proof of concept. How such single channel noise affects the whole image can be seen in Figure \[fig:coral\_together\].

Cyclic data appears in the hue component of the HSV color model. The hue component of the *sponge* image is considered in the first row of Figure \[fig:coral\]. The second column \[fig:corals:hue\_noisy\] shows the noisy hue corrupted by wrapped Gaussian noise of standard deviation $\sigma = 0.6$. Applying the TV denoising method with optimized parameters $\alpha = 0.45,\ \lambda=\tfrac{\pi}{2}$, resp. the NL-MMSE, leads to the results in Figure \[fig:corals:hue\_tv\], resp. Figure \[fig:corals:hue\_denoised\]. Despite the rather flat areas in the image, the NL-MMSE approach outperforms the variational TV method.
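The wrapped Gaussian noise used in this experiment can be generated by adding Euclidean Gaussian noise to the angles and wrapping back to the fundamental domain — a small helper of our own, not the code used for the figures:

```python
import numpy as np

def add_wrapped_gaussian_noise(hue, sigma, rng):
    """Corrupt an S^1-valued (angle) image by wrapped Gaussian noise:
    add Gaussian noise in the tangent space, then wrap to [-pi, pi)."""
    noisy = hue + rng.normal(scale=sigma, size=hue.shape)
    return np.mod(noisy + np.pi, 2 * np.pi) - np.pi
```

With $\sigma = 0.6$ as above, a noticeable fraction of pixels wraps around, which is exactly what makes cyclic data harder for standard real-valued denoisers.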
Spherical data occurs in the chromaticity component of the CB color space. Here, the chromaticity is defined as the direction of the RGB color vector and the brightness is given by its length. We deal with the chromaticity of the *sponge* image in the second row of Figure \[fig:coral\]. We corrupted it by Gaussian noise of standard deviation $\sigma = 0.2$, which yields the image shown in Figure \[fig:s2coral:noisychroma\]. Figure \[fig:s2coral:tvchroma\] gives the result of the TV method with $\alpha = 0.21,\ \lambda=\tfrac{\pi}{2}$. Denoising with the NL-MMSE results in Figure \[fig:s2coral:denoisedchroma\], which is again better than the previous one.

Matrix-Valued Data {#subsec:spd}
------------------

In this subsection, we provide two examples of images having values in $\operatorname{SPD}(r)$ for $r=2,3$. A matrix $\bm x\in\operatorname{SPD}(r),\ r = 2,3$, is depicted as an ellipse ($r = 2$) or an ellipsoid ($r=3$) whose principal axes are determined by the spectral decomposition of $\bm x$.

First we examine the effect of the acceleration used in the NL-MMSE approach. To this aim we consider the $64 \times 64$ image with $\operatorname{SPD}(3)$ values depicted in Figure \[fig:spd3:orig\]. The image is corrupted by Gaussian noise of standard deviation $\sigma = 0.125$, see Figure \[fig:spd3:noisy\]. Using the NL-MMSE yields Figure \[fig:spd3:pro\]. Taking all pixels as centers of a reference patch, i.e., skipping the acceleration, gives the result in Figure \[fig:spd3:all\]. Visually, there is nearly no difference between the two results, and also the errors are roughly the same. However, looking at the running time there is a large difference between the two approaches. The accelerated algorithm needs $245$ seconds and is more than one hundred times faster than the non-accelerated version, which needs $40691$ seconds.
This justifies the acceleration step.

Next consider the artificial image of size $65 \times 65$ consisting of $\operatorname{SPD}(2)$ matrices in Figure \[fig:spd2:orig\] and its corrupted version with Gaussian noise of standard deviation $\sigma = 0.15$ in Figure \[fig:spd2:noisy\]. In the denoising result of the TV method with parameters $\alpha = 0.25,\ \gamma = 1$ in Figure \[fig:spd2:tv\], the typical staircasing effect is visible. Figure \[fig:spd2:nl\] depicts the result of NL-means using the optimized parameters $s = 33, w = 9,\ \delta = 2,\ K = 81,\ \tau = 0.2$, which looks better than the previous one. However, the NL-MMSE with the same parameters for both steps yields a denoised image with error $\epsilon = 0.0049$, and changing the parameters of the second step to $s_2=7,\ K_2 = 193,\ w_2 = 41$ we finally obtain an error of $\epsilon = 0.0042$, compare Figure \[fig:spd2:denoised\]. This error is lower than those of the TV and NL-means methods. Moreover, this example shows that using different parameters in the two steps allows a further improvement of the algorithm. In the following examples we optimize the parameters of both steps separately.

Cyclic Data {#subsec:cyclic}
-----------

Next we compare the proposed NL-MMSE for the artificial image in Figure \[fig:arts1:orig\] and its noisy version corrupted by wrapped Gaussian noise of standard deviation $\sigma = 0.3$ in Figure \[fig:arts1:noisy\]. These images as well as their denoised versions via the $\operatorname{TV}$ approach and the $\operatorname{TV}$-$\operatorname{TV}_2$ method were taken from [@BLSW14]. The original image can be found in the toolbox [MVIRT](http://www.mathematik.uni-kl.de/imagepro/members/bergmann/mvirt/). The TV approach leads to the result in Figure \[fig:arts1:tv1\]. While the jumps between flat areas are preserved, the method suffers from staircasing.
The combined first and second order approach in Figure \[fig:arts1:tv12\] improves the results, but the edges between flat areas are smoothed. Not surprisingly, the result obtained with the NL-means approach in Figure \[fig:arts1:nlm\] with parameters $s = 11, w = 23,\ \delta = 46,\ K = 33,\ \tau = 0.2$ has the worst error, even if the reconstructions of the paraboloid in the bottom right and at the edges are pretty good. Here an extra fitting for constant regions, as incorporated in the fine tuning of the NL-MMSE, would be necessary. The best result is achieved with the NL-MMSE, see Figure \[fig:arts1:mmse\]. On the one hand, sharp edges are preserved, while on the other hand also constant and linear parts are well reconstructed.

![Comparison of denoising methods for an image with values in $\mathbb S^1$.[]{data-label="fig:arts1"}](arts1colormap "fig:") ![](arts1orig "fig:") ![](arts1_noisy "fig:") ![](arts1_TV1 "fig:") ![](arts1_TV12 "fig:") ![](arts1_nlm "fig:") ![](arts1_mmse "fig:")

Spherical Data {#subsec:spherical}
--------------

Finally we consider the artificial image with values distributed over the whole sphere shown in Figure \[fig:arts2:orig\].
The image consists of vortex-like structures of different sizes and directions and a smoothly varying background. It is affected by Gaussian noise with standard deviation $\sigma=0.3$, see Figure \[fig:arts2:noisy\]. We compare our method with the TV approach (parameters: $\alpha=0.24,\ \gamma=\tfrac{\pi}{2}$) in Figure \[fig:arts2:tv1\], TV-$\operatorname{TV}_2$ (parameters: $\alpha=0.18,\ \beta = 2.6,\ \gamma=\tfrac{\pi}{2}$) in Figure \[fig:arts2:tv12\], and NL-means (parameters: $s = 23, w = 127,\ \delta = 1.5,\ K = 104,\ \tau = 0.2$) in Figure \[fig:arts2:nl\_mean\]. Note that the running time of the NL-means is the same as that of the NL-MMSE in this example, i.e., 20 seconds, while the TV-$\operatorname{TV}_2$ method needs on the order of minutes. The first order TV suffers from staircasing, which is removed by the second order term, but the error is still the largest among all tested methods. Next we have a look at the oracle image after Step 1 of the NL-MMSE in Figure \[fig:arts2:oracle\]. We see that this image already has a slightly smaller error than both the TV and the NL-means approaches. However, it still contains some noise in the background, which is removed in the second step and leads to an improvement in the error, see Figure \[fig:arts2:final\]. Figure \[fig:arts2:patch\] shows the reconstruction with Algorithm \[Alg:NL\_M\] without the second-order update step, i.e., we perform only Step 1, where we replace $\hat{\bm y}_j = \exp_{\hat{\bm \mu}_i}\bigl(\hat{\Sigma}_i (\hat{\Sigma}_i + \sigma^2 I_{s_1^2})^{-1} \log_{\hat{\bm \mu}_i}(y_j)\bigr) $ with $ \hat{\bm y}_j = \hat{\bm \mu}_i $. The parameters are $s = 5,\ K = 6$ and $w,\gamma$ as before. In comparison to the oracle image there are small visible differences, but the error is worse. A disadvantage of this method is its running time. While the oracle image computation needs less than 10 seconds, about 30 seconds are required to get the result in Figure \[fig:arts2:patch\].
The time difference originates from the larger patch size which is needed to get a comparable result. This experiment further shows that Algorithm \[Alg:NL\_M\] is also able to handle data having values on the whole sphere, which is a manifold with positive curvature. Here, we implicitly assume that the computed Karcher means are unique, which is a reasonable assumption, since similar patches should be pointwise contained in regular balls. Note that this does not prevent the patches from covering the whole sphere.

Conclusion and Future Work {#sec:conclusions}
==========================

We proposed a counterpart of the nonlocal Bayes denoising approach of Lebrun et al. [@LBM13b; @LBM13] for manifold-valued images. The basic idea consists in translating the MMSE for similar image patches to the manifold-valued setting. To this aim, we used an intrinsic definition of a normal distribution and in particular of white noise on Riemannian manifolds. We demonstrated by various numerical experiments that our method performs very well when dealing with moderate noise variances. Up to now all our examples were artificial ones. In future work we want to apply our method to real-world data. In particular we intend to examine whether our noise model covers specific applications. The close relation between different models of Gaussian noise for small $\sigma$ should be specified for the manifolds of interest. Moreover, it is well known that in various applications the variance $\sigma^2$ is either not known or not constant over the whole image. Therefore noise estimation and the incorporation of spatially varying noise are interesting research topics. Another issue is related to Remark \[neg\_cov\]: even in the Euclidean setting the topic of negative eigenvalues in the estimation of $\Sigma_X$ requires further discussion. Other directions of future work include further image restoration tasks such as inpainting.
This needs additional information, e.g., based on hyperpriors as in [@AADGM2015] or a fixed number of Gaussian models (covariance matrices), see e.g. [@YSM2012].

Proof of Proposition \[1d\_manifolds\] {#app:prop}
=======================================

For one-dimensional manifolds we have $x = x^1$ and $|G(x)| = |\tilde G(\bm x)| =1$. In the following we set $e_{\bm \mu} \coloneqq e_{\bm \mu,1}$.

\(i) With Appendix \[app:ex\_mani\] we obtain $\operatorname{dist}_{{\rm SPD}(1)}({\bm \mu} , {\bm x}) = \left|\ln\left( \frac{\bm \mu}{\bm x} \right) \right|$, so that we obtain the pdf stated in (i). Further, we have $$\frac{1}{ \sqrt{2 \pi\sigma^2} } \int_{{\mathbb{R}}} {{\mathrm{e}}}^{-\frac{1}{2\sigma^2} x^2} \, {{\,\mathrm{d}}}x = \frac{1}{ \sqrt{2 \pi\sigma^2} } \int_{{\mathbb{R}}_{>0}} {{\mathrm{e}}}^{-\frac{1}{2\sigma^2} ( \ln (\bm x) - \ln(\bm \mu))^2} \, {{\,\mathrm{d}}}_{{\mathbb{R}}_{>0}} (\bm x),$$ which implies by the transformation theorem that ${{\,\mathrm{d}}}_{{\mathbb{R}}_{>0}} (\bm x) = \frac{1}{\bm x} \, {{\,\mathrm{d}}}\bm x$.\
(ii) In the given parameterization it holds that $e_{\bm \mu} = (-\sin (t_\mu), \cos (t_\mu))^{\mathrm{T}}$, $t_\mu \in [-\pi,\pi)$, and with Appendix \[app:ex\_mani\] further $$\exp_{\bm \mu} (h(x)) = \exp_{\bm \mu} ( x e_{\bm \mu} ) = \bm \mu \cos (|x|) + \frac{x}{|x|} \sin (|x|)e_{\bm \mu} = \begin{pmatrix} \cos (x+ t_\mu)\\ \sin (x+ t_\mu) \end{pmatrix}.$$ It holds that ${\cal D}_{\bm \mu} = (-\pi,\pi)$, and we can choose $\varphi_j\colon \left( (2j-1) \pi , (2j+1) \pi \right) \rightarrow (-\pi,\pi)$ as $\varphi_j(x) \coloneqq x - 2j\pi$, $j \in {\mathbb{Z}}$.
Plugging this in results in $$\begin{aligned} p_{\bm X} (\bm x (t)) &= \frac{1}{\sqrt{2 \pi \sigma^2}} \sum_{j \in {\mathbb{Z}}} {{\mathrm{e}}}^{-\frac{1}{2 \sigma^2} \left( h^{-1} (\log_{\bm \mu} (\bm x(t))) + 2j\pi \right)^2} = \frac{1}{\sqrt{2 \pi \sigma^2}} \sum_{j \in {\mathbb{Z}}} {{\mathrm{e}}}^{-\frac{1}{2 \sigma^2} \left( {\rm d}_{\mathbb S^1} (\bm \mu , \bm x(t)) + 2j \pi \right)^2 }\\ &= \frac{1}{\sqrt{2 \pi \sigma^2}} \sum_{j \in {\mathbb{Z}}} {{\mathrm{e}}}^{-\frac{1}{2 \sigma^2} \left( t - t_\mu + 2j\pi \right)^2}. \end{aligned}$$ \(iii) First note that $\Delta_1$ is not complete, but for any $\bm \mu$ the function $\exp_{\bm \mu}$ is defined a.e. on $T_{\bm \mu}\Delta_1$. More precisely, using Appendix \[app:ex\_mani\] we obtain $ e_{\bm \mu} =\tfrac{1}{2} \sin (t_\mu) (1,-1)^{\mathrm{T}}$ and $$\exp_{\bm \mu} (h(x)) = \frac12 \begin{pmatrix} 1\\1 \end{pmatrix} + \frac12 \begin{pmatrix} \cos(t_\mu)\\-\cos(t_\mu) \end{pmatrix} \cos (x) + \frac12 \begin{pmatrix} \sin(t_\mu)\\-\sin(t_\mu) \end{pmatrix} \sin (x) = \frac12 \begin{pmatrix} 1\\1 \end{pmatrix} + \frac12 \begin{pmatrix} \cos(x- t_\mu)\\ -\cos(x- t_\mu) \end{pmatrix},$$ which lies in $\Delta_1$ only if $ x \not \in \{t_\mu + j \pi: j \in \mathbb Z\}$. Here we have ${\cal D}_{\bm \mu} = (t_\mu-\pi,t_\mu)$, and setting $$\begin{aligned} \varphi_{2j} (x) &\coloneqq x - 2 j \pi \quad {\rm for} \quad x \in (t_\mu + (2j-1)\pi, t_\mu + 2j \pi),\\ \varphi_{2j + 1} (x) &\coloneqq 2t_\mu- (x - 2j \pi ) \quad {\rm for} \quad x \in (t_\mu + 2j \pi, t_\mu + (2j+1)\pi) \end{aligned}$$ we obtain that $$\tilde p_{X} (x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \sum_{j \in {\mathbb{Z}}} \left( {{\mathrm{e}}}^{-\frac{1}{2 \sigma^2} \left( x + 2j\pi \right)^2} + {{\mathrm{e}}}^{-\frac{1}{2 \sigma^2} \left( 2t_\mu - x + 2j\pi \right)^2} \right).$$ This proves the assertion.

Example Manifolds {#app:ex_mani}
=================

#### Sphere ${\mathbb{S}}^{d}$

Let ${\mathbb{S}}^{d} = \bigl \{ \bm x\in {\mathbb{R}}^{d+1}\colon {\left\| \bm x \right\|_{2}}=1\bigr\}$.
The geodesic distance is given by $$\operatorname{dist}_{{\mathbb{S}}^{d}}(\bm x, \bm y) = \arccos(\langle \bm x, \bm y\rangle),$$ where $\langle\cdot,\cdot\rangle$ is the standard scalar product in ${\mathbb{R}}^{d+1}$. The tangential space at $\bm x\in{\mathbb{S}}^{d}$ is given by $T_{\bm x} {\mathbb{S}}^{d}=\bigl\{v\in{\mathbb{R}}^{d+1}\vert \langle {\bm x} ,v\rangle=0\bigr\}$. The Riemannian metric is the metric from the embedding space, i.e., the Euclidean inner product. The exponential and logarithmic map read as $$\begin{aligned} \exp_{\bm x} (v) &= {\bm x} \cos\bigl(\lVert v\rVert\bigr)+\frac{v}{\lVert v\rVert}\sin\bigl(\lVert v\rVert\bigr),\\ \log_{\bm x} (\bm y) &= \operatorname{dist}_{{\mathbb{S}}^{d}}(\bm x, \bm y) \, \frac{\bm y-\langle {\bm x},{\bm y} \rangle \bm x}{\lVert \bm y-\langle \bm x, \bm y \rangle \bm x\rVert}, \quad \bm x \not = - \bm y. \end{aligned}$$ #### Positive Definite Matrices ${\operatorname{SPD}}(r)$ The dimension of ${\operatorname{SPD}}(r)$ is $d = \frac{r(r+1)}{2}$. We denote by $\operatorname{Exp}$ and $\operatorname{Log}$ the matrix exponential and logarithm defined by\ $ {\operatorname{Exp}(x) \coloneqq \sum_{k=0}^\infty \frac{1}{k!} x^k} $ and $\operatorname{Log}(x) \coloneqq -\sum_{k=1}^\infty \frac{1}{k} (I-x)^k$, $\rho(I-x) < 1$, where $\rho$ denotes the spectral radius. Then the affine invariant geodesic distance is given by $$\operatorname{dist}_{{\operatorname{SPD}}(r)}(\bm x,\bm y) = \bigl\lVert\operatorname{Log}(\bm x^{-\frac12} \bm y \bm x^{-\frac12} )\bigr\rVert_{\mathrm{F}},$$ where $\lVert \cdot \rVert_{\mathrm{F}}$ denotes the Frobenius norm of matrices. The tangential space at $\bm x \in {\mathcal{M}}$ is $T_{\bm x}{\mathcal{M}}= \{\bm x\} \times \operatorname{Sym}(r)$, where $\operatorname{Sym}$ denotes the space of symmetric $r \times r$ matrices. The Riemannian metric reads $\langle v_1,v_2 \rangle_{\bm x} = \operatorname{tr}(v_1 \bm x^{-1} v_2 \bm x^{-1})$. 
As orthogonal basis in $T_{\bm x} {\mathcal{M}}$ we use $e_{\bm x,ij} \coloneqq \bm x^\frac12 e_{ij} \bm x^\frac12$, $i,j \in \{1,\ldots,r\}, \, j\le i$, where $$e_{ij} = \left\{ \begin{array}{ll} e_i e_i^{\mathrm{T}}&\mathrm{if} \; i=j,\\ \frac{1}{\sqrt{2}} \bigl(e_i e_j^{\mathrm{T}}+e_j e_i^{\mathrm{T}}\bigr)&\mathrm{otherwise} \end{array} \right.$$ and $e_i \in \mathbb R^r$ are the $r$-dimensional unit vectors. Finally, the exponential and the logarithmic map read $$\begin{aligned} \label{exp_spd} \exp_{\bm x} (v) &= \bm x^{\frac12} \operatorname{Exp}\bigl( \bm x^{-\frac12} v \bm x^{-\frac12}\bigr) \bm x^{\frac12},\\ \log_{\bm x}(\bm y) &= \bm x^{\frac{1}{2}}\operatorname{Log}\bigl(\bm x^{-\frac{1}{2}} \, \bm y \, \bm x^{-\frac{1}{2}}\bigr) \bm x^{\frac{1}{2}}. \end{aligned}$$ For more information on the affine invariant metric and its relation to the log-Euclidean metric we refer, e.g., to [@AFPA2005; @pennec2006riemannian].

#### Probability Simplex $\Delta_d$

On the open probability simplex $\Delta_{d} \coloneqq \{\bm x\in {\mathbb{R}}_{> 0}^{d+1}: \sum_{i=1}^{d+1} x_i= 1\}$ equipped with the Fisher-Rao metric arising from the categorical distribution, $ \langle u,v\rangle_{\bm x} = \langle\tfrac{u}{\sqrt{\bm x}},\tfrac{v}{\sqrt{\bm x}}\rangle, $ the geodesic distance is given by $$\operatorname{dist}_{\Delta_d} (\bm x,\bm y) = 2\arccos\bigl(\langle\sqrt{\bm x},\sqrt{\bm y}\rangle\bigr),$$ where the square root is meant componentwise. Its tangential space is given by $ T_{\bm x} {\mathcal{M}}= \{y\in{\mathbb{R}}^{d+1}: \langle y,{\mathbf 1}\rangle = 0\} $. The exponential map reads $$\exp_{\bm x} (v) = \frac{1}{2}\Bigl(\bm x+\frac{v_x^2}{\lVert v_x\rVert_2^2}\Bigr)+ \frac{1}{2}\Bigl(\bm x-\frac{v_x^2}{\lVert v_x\rVert_2^2}\Bigr)\cos\bigl(\lVert v_x\rVert_2\bigr) +\frac{v}{\lVert v_x\rVert_2}\sin\bigl(\lVert v_x\rVert_2\bigr),$$ where $v_x \coloneqq \tfrac{v}{\sqrt{\bm x}}$ and vector multiplications are meant componentwise.
Since the above function maps onto the closure of $\Delta_d$, we have to restrict to the dense set in $T_{\bm x}\Delta_d$ with $\exp_{\bm x} (v) \in \Delta_d$. The logarithmic map is determined by $$\log_{\bm x} (\bm y )= \operatorname{dist}_{\Delta_d} (\bm x, \bm y) \, \frac{ \sqrt{\bm x \bm y}-\langle \sqrt{\bm x},\sqrt{\bm y}\rangle {\bm x} }{ \sqrt{1-\langle\sqrt{\bm x},\sqrt{\bm y}\rangle^2}}.$$ An orthonormal basis can be constructed by taking a basis of $T_{\bm x}{\mathcal{M}}$, e.g., $$\bigl\{(1,-1,0,0,\dots)^{\mathrm{T}},(1,1,-2,0,\dots)^{\mathrm{T}},\dots,(1,1,\dots,1,-d)^{\mathrm{T}}\bigr\}\subset{\mathbb{R}}^{d+1},$$ and applying the Gram-Schmidt orthonormalization process w.r.t. the inner product $\langle\cdot,\cdot\rangle_{\bm x}$.

Simulation of Gaussian Noise Model by Said et al. \[46\] {#sec:said}
========================================================

In the following we explain how to generate samples from the normal distribution ${\mathcal{N}}_{\text{Said}}(\bm \mu,\sigma^2 I_n)$ on ${\operatorname{SPD}}(r)$ ($n =\dim(\operatorname{SPD}(r)) = \frac{r(r+1)}{2}$), which was only sketched in [@SBBM15]. To do so, we parametrize $\bm x\in \operatorname{SPD}(r)$ by its eigenvalues and eigenvectors (spectral decomposition), given as $$\bm x(\rho,\bm u) = \bm u \operatorname{diag}({{\mathrm{e}}}^\rho){\bm u^{\mathrm{T}}},$$ where $\bm u\in \operatorname{O}(r)$ is an orthogonal matrix and $\operatorname{diag}({{\mathrm{e}}}^\rho)$ is the diagonal matrix with diagonal $({{\mathrm{e}}}^{\rho_1},\ldots,{{\mathrm{e}}}^{\rho_r})$.\
As shown in [@SBBM15], in order to sample from ${\mathcal{N}}_{\text{Said}}(\bm \mu,\sigma^2 I_n)$ it suffices to generate samples from ${\mathcal{N}}_{\text{Said}}(I_r,\sigma^2 I_n)$. Indeed, if $\bm x\sim {\mathcal{N}}_{\text{Said}}(I_r,\sigma^2 I_n)$, then $\bm \mu^{\frac{1}{2}}\bm x{\bigl(\bm \mu^{\frac{1}{2}}\bigr)^{\mathrm{T}}}\sim {\mathcal{N}}_{\text{Said}}(\bm \mu,\sigma^2 I_n)$.
Further, for sampling from ${\mathcal{N}}_{\text{Said}}(I_r,\sigma^2 I_n)$ it is enough to sample from the uniform distribution on $\operatorname{O}(r)$ to generate $\bm u$ and from the distribution with density $$\label{other_dist} p(\rho) \propto \exp\biggl\{-\frac{\rho_1^2+\ldots+\rho_r^2}{2\sigma^2}\biggr\}\prod_{i<j} \sinh\biggl(\frac{{\left| \rho_i-\rho_j \right|}}{2}\biggr)$$ to generate $\rho$. Once these are obtained, they can be plugged into the spectral decomposition $\bm x=\bm x(\rho,\bm u)$ to obtain $\bm x\sim {\mathcal{N}}(I_r,\sigma^2 I_n)$.\
Sampling from the uniform distribution on $\operatorname{O}(r)$ can be done using a matrix $\bm a$ whose components are i.i.d. standard normally distributed. Computing the QR-decomposition $\bm a=\bm u\bm r$ with $\bm u$ orthogonal and $\bm r$ upper triangular, $\bm u$ is uniformly distributed on $\operatorname{O}(r)$, see, e.g., [@Chi12].\
Sampling from the multivariate density can be achieved using the *acceptance-rejection* method, see, e.g., [@RC13]. As dominating density we choose the density of the Euclidean Gaussian distribution ${\mathcal{N}}(0,\tilde{\sigma}^2 I_r)$, where $\tilde{\sigma}^2 = \frac{2\sigma^2}{2-2(r-1)\sigma^2}$. As we need $\tilde{\sigma}^2>0$, this only allows sampling in the case $\sigma^2< \frac{1}{r-1}$. However, numerical experiments indicate that this completely suffices to generate realistic Gaussian noise matrices which might arise in applications. In order to use the acceptance-rejection method we have to show that $\frac{f(\rho)}{g(\rho)}\leq C$ for some constant $C>0$, where $f$ is proportional to the density we want to sample from and $g$ is proportional to the chosen dominating density.
In our situation, we choose $$g(\rho) = \exp\biggl\{- \frac{\rho_1^2+\ldots + \rho_r^2}{2\tilde{\sigma}^2}\biggr\}\\ \propto \frac{1}{(2\pi\tilde{\sigma}^2)^{\frac{r}{2}}} \exp\biggl\{- \frac{\rho_1^2+\ldots + \rho_r^2}{2\tilde{\sigma}^2}\biggr\}.$$ To show $\frac{f(\rho)}{g(\rho)}\leq C$, we first estimate $$\begin{aligned} \prod_{i<j} \sinh\Bigl(\tfrac{{\left| \rho_i-\rho_j \right|}}{2}\Bigr) & = \prod_{i\neq j}\biggl[ \sinh\Bigl(\tfrac{{\left| \rho_i-\rho_j \right|}}{2}\Bigr)\biggr]^{\frac{1}{2}} = \prod_{i\neq j}\biggl[ \frac{1}{2}\Bigl({{\mathrm{e}}}^{\frac{\lvert\rho_i-\rho_j\rvert}{2}}-{{\mathrm{e}}}^{\frac{-\lvert\rho_i-\rho_j\rvert}{2}}\Bigr)\biggr]^{\frac{1}{2}}\\ & \leq 2^{-\frac{r(r-1)}{2}}\prod_{i\neq j}{{\mathrm{e}}}^{\frac{\lvert\rho_i-\rho_j\rvert}{4}} = 2^{-\frac{r(r-1)}{2}}\exp\biggl\{\frac{1}{4}\sum_{i\neq j}\lvert\rho_i-\rho_j\rvert\biggr\}\\ & \leq 2^{-\frac{r(r-1)}{2}}\exp\biggl\{\frac{1}{4}\sum_{i\neq j}\bigl(\lvert\rho_i\rvert+\lvert\rho_j\rvert\bigr)\biggr\} = 2^{-\frac{r(r-1)}{2}}\exp\biggl\{\frac{r-1}{2}\sum_{i=1}^r\lvert\rho_i\rvert\biggr\}. \end{aligned}$$ Using $$\begin{aligned} \exp\biggl\{-\frac{1}{2\tilde{\sigma}^2}\sum_{i=1}^r \rho_i^2\biggr\} & = \exp\biggl\{-\frac{2-2(r-1)\sigma^2}{4\sigma^2}\sum_{i=1}^r \rho_i^2\biggr\}\\ &= \exp\biggl\{-\frac{1}{2\sigma^2}\sum_{i=1}^r \rho_i^2\biggr\}\exp\biggl\{\frac{r-1}{2}\sum_{i=1}^r \rho^2_i\biggr\} \end{aligned}$$ we finally obtain $$\begin{aligned} \frac{f(\rho)}{g(\rho)}& \leq \frac{2^{-\frac{r(r-1)}{2}}\exp\bigl\{-\frac{1}{2\sigma^2}\sum_{i=1}^r \rho_i^2\bigr\}\exp\bigl\{\frac{r-1}{2} \sum_{i=1}^r\lvert\rho_i\rvert\bigr\}}{\exp\bigl\{-\frac{1}{2\sigma^2}\sum_{i=1}^r \rho_i^2\bigr\}\exp\bigl\{\frac{r-1}{2}\sum_{i=1}^r \rho^2_i\bigr\}}\\ & =2^{-\frac{r(r-1)}{2}} \exp\biggl\{\frac{r-1}{2}\sum_{i=1}^r \underbrace{\bigl(\lvert\rho_i\rvert-\rho^2_i\bigr)}_{\leq \frac{1}{4}} \biggr\} \leq C, \end{aligned}$$ where $C = {{\mathrm{e}}}^{\frac{r(r-1)}{8}} \, 2^{-\frac{r(r-1)}{2}}>0$.
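The complete procedure — a Haar-uniform $\bm u$ via the QR decomposition, acceptance-rejection for $\rho$ with the constant $C$ just derived, and assembly through the spectral parametrization — can be sketched in a few lines of Python (our own transcription under the assumption $\sigma^2 < \frac{1}{r-1}$; the sign correction after the QR step is a standard fix that makes the factor exactly Haar-distributed):

```python
import numpy as np

def uniform_orthogonal(r, rng):
    """Haar-uniform sample from O(r): QR of a Gaussian matrix, with the
    columns of q rescaled by the signs of the diagonal of the R factor."""
    q, rr = np.linalg.qr(rng.normal(size=(r, r)))
    return q * np.sign(np.diag(rr))

def sample_rho(r, sigma, rng):
    """Acceptance-rejection sampling from the eigenvalue density p(rho),
    using the Gaussian dominating density N(0, tilde_sigma^2 I_r) and
    the constant C derived above; requires sigma^2 < 1/(r-1)."""
    s2 = sigma ** 2
    st2 = 2 * s2 / (2 - 2 * (r - 1) * s2)            # tilde sigma^2 > 0
    C = np.exp(r * (r - 1) / 8) * 2.0 ** (-r * (r - 1) / 2)
    while True:
        rho = rng.normal(scale=np.sqrt(st2), size=r)
        f = np.exp(-np.sum(rho ** 2) / (2 * s2))     # unnormalized target
        for i in range(r):
            for j in range(i + 1, r):
                f *= np.sinh(np.abs(rho[i] - rho[j]) / 2)
        g = np.exp(-np.sum(rho ** 2) / (2 * st2))    # unnormalized proposal
        if rng.uniform() * C * g <= f:
            return rho

def sample_said(mu, sigma, rng):
    """Draw x ~ N_Said(mu, sigma^2 I_n) on SPD(r): sample around the
    identity via the spectral parametrization and transport to mu."""
    r = mu.shape[0]
    u = uniform_orthogonal(r, rng)
    rho = sample_rho(r, sigma, rng)
    x = u @ np.diag(np.exp(rho)) @ u.T               # x ~ N_Said(I, sigma^2 I_n)
    vals, vecs = np.linalg.eigh(mu)
    mu_half = (vecs * np.sqrt(vals)) @ vecs.T        # mu^{1/2}
    return mu_half @ x @ mu_half.T
```

For $r=1$ the product over pairs is empty, $\tilde{\sigma}^2=\sigma^2$ and $C=1$, so every proposal is accepted.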
#### Acknowledgments We would like to thank R. Bergmann for fruitful discussions and for providing the test image in Figure \[fig:arts1\]. Funding by the German Research Foundation (DFG) within the project STE 571/13-1 is gratefully acknowledged. [^1]: Department of Mathematics, Technische Universität Kaiserslautern, Paul-Ehrlich-Str. 31, 67663 Kaiserslautern, Germany, {friederike.laus, persch, steidl}@mathematik.uni-kl.de. [^2]: CMLA – CNRS, ENS Cachan, 61 av. President Wilson, 94235 Cachan Cedex, France, nikolova@cmla.ens-cachan.fr [^3]: http://www.mathematik.uni-kl.de/imagepro/members/bergmann/mvirt/
---
author:
- 'Mitsuo Higaki[^1]'
- 'Yasunori Maekawa[^2]'
- 'Yuu Nakahara[^3]'
title: 'On the two-dimensional steady Navier-Stokes equations related to flows around a rotating obstacle'
---

Introduction
============

Let $\mathcal B$ be a rigid body immersed in a viscous incompressible fluid that fills the whole space. Assume that the body rotates with a constant angular velocity $a\in {\mathbb{R}}\setminus\{0\}$ and that the exterior of $\mathcal B(t)$ is described as $\Omega(t) \subset {\mathbb{R}}^2$. The time dependent domain $\Omega(t)$ is defined as $$\begin{aligned} \label{rota} \begin{split} \Omega(t) & \, = \, \big \{ y\in {\mathbb{R}}^2~|~ y = O (a t) x\,, x\in \Omega \big \}\,,\\ O (a t ) & \, = \, \begin{pmatrix} \cos a t & -\sin a t\\ \sin a t & \cos a t \end{pmatrix}\,, \end{split} \end{aligned}$$ where the given exterior domain $\Omega(0) = \Omega \subset{\mathbb{R}}^2$ has a smooth boundary $\partial \Omega$. The flow around the rotating body is described by the following Navier-Stokes equations: $$\label{NS} \left\{ \begin{aligned} \partial_t v -\Delta v + v\cdot \nabla v + \nabla q & \,=\, g \,, \qquad t>0\,,~y \in \Omega (t)\,, \\ {\rm div}\, v & \,=\, 0 \,, \qquad t>0\,,~ y \in \Omega (t)\,. \\ \end{aligned}\right.$$ Here $v=v(y,t) = (v_1(y,t), v_2 (y,t))^\top$ and $q=q(y,t)$ are the unknown velocity field and pressure field, respectively, and $g = g(y,t) = (g_1(y, t), g_2(y,t))^\top$ is a given external force. We use the standard notation for derivatives: $\partial_t = \frac{\partial}{\partial t}$, $\partial_j = \frac{\partial}{\partial x_j}$, $\Delta = \sum_{j=1}^2 \partial^2_j$, ${\rm div}\, v = \sum_{j=1}^2 \partial_j v_j$, $v\cdot \nabla v = \sum_{j=1}^2 v_j \partial_j v$, while $x^{\bot}=(-x_2,x_1)^\top$ denotes the vector perpendicular to $x =(x_1,x_2)^\top$.
To get rid of the difficulty due to the time-dependence of the domain we pass to the rotating reference frame by making the change of variables, for $t\geq 0$ and $x\in \Omega$, $$\begin{aligned} y \, = \, O (a t) x\,, \quad u (x,t) &\, = \, O (a t)^\top v (y,t)\,, \quad p (x,t) \, = \, q (y,t)\,, \\ f (x,t) &\, = \, O (a t)^\top g (y,t)\,. \end{aligned}$$ Then \eqref{NS} is equivalent to the equations: $$\label{NSNS} \left\{ \begin{aligned} \partial_t u -\Delta u - a ( x^\bot \cdot \nabla u - u^\bot ) + \nabla p & \,=\, -u\cdot \nabla u + f \,, \quad & t>0\,,~x \in \Omega \,, \\ {\rm div}\, u & \,=\, 0\,, & t>0\,,~ x \in \Omega \,. \\ \end{aligned}\right.$$ In order to understand the structure of solutions at spatial infinity it is important to study this system in ${\mathbb{R}}^2$. The effect of the boundary is expressed as a force in this case. Motivated by this observation, as a model problem, in this paper we study the above nonlinear system in ${\mathbb{R}}^2$ and in the steady case. Thus, assuming that $f$ is independent of $t$, we are interested in the following system: $$\tag{NS$_a$}\label{NS_a} \left\{ \begin{aligned} -\Delta u - a ( x^\bot \cdot \nabla u - u^\bot ) + \nabla p & \,=\, - u\cdot\nabla u + f \,, \quad\ &x \in {\mathbb{R}}^2\,,\\ {\rm div}\, u \,& \,=\, 0\,, &x \in {\mathbb{R}}^2\,. \\ \end{aligned} \right.$$ Our aim is to show the existence and the asymptotic behavior of the solution to \eqref{NS_a}. For this purpose we first consider the linearized problem $$\tag{S$_a$}\label{S_a} \left\{ \begin{aligned} -\Delta u - a ( x^\bot \cdot \nabla u - u^\bot ) + \nabla p & \,=\, f \,, \quad\ &x \in {\mathbb{R}}^2\,,\\ {\rm div}\, u \,& \,=\, 0\,, &x \in {\mathbb{R}}^2\,. \\ \end{aligned} \right.$$ We will show that there exists a unique solution to \eqref{S_a} such that the leading term of the asymptotic behavior of the flow at infinity is the rotational profile $c \frac{x^\bot}{|x|^2}$, whose coefficient $c$ is determined by the external force $f$. 
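The extra first-order terms $a(x^\bot\cdot\nabla u - u^\bot)$ produced by the change of variables come from the identity $O(at)^\top \frac{{\rm d}}{{\rm d}t}\big(O(at)x\big) = a\,x^\bot$, i.e. $O^\top \dot O = aJ$ with $Jx = x^\bot$. As a quick symbolic sanity check (a sympy sketch, not part of the original text):

```python
import sympy as sp

a, t, x1, x2 = sp.symbols('a t x1 x2', real=True)
O = sp.Matrix([[sp.cos(a*t), -sp.sin(a*t)],
               [sp.sin(a*t),  sp.cos(a*t)]])
x = sp.Matrix([x1, x2])
xperp = sp.Matrix([-x2, x1])  # x^perp = (-x_2, x_1)^T

# O(at)^T d/dt [O(at) x] = a x^perp: the source of the rotation
# terms a (x^perp . grad u - u^perp) in the fixed-frame equations.
lhs = O.T * sp.diff(O * x, t)
assert sp.simplify(lhs - a * xperp) == sp.zeros(2, 1)
```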
Before stating the main theorem, let us recall some known results on the mathematical analysis of flows around a rotating obstacle. So far the mathematical results on this topic have been obtained mainly for the three-dimensional problem, as listed below. For the nonstationary problem the existence of global weak solutions is proved by Borchers [@Bo], and the unique existence of time-local regular solutions is shown by Hishida [@H1] and Geissert, Heck, and Hieber [@GHH], while the global strong solutions for small data are obtained by Galdi and Silvestre [@GSi]. The spectrum of the linear operator related to this problem is studied by Farwig and Neustupa [@FN]; see also the linear analysis by Hishida [@H2]. The existence of stationary solutions to the associated system is proved in [@Bo], Silvestre [@Si], Galdi [@G1], and Farwig and Hishida [@FH0]. In particular, in [@G1] the stationary flows with the decay order $O(|x|^{-1})$ are obtained, while the work of [@FH0] is based on the weak $L^{3}$ framework, which is another natural scale-critical space for the three-dimensional Navier-Stokes equations. In the $3$D case the asymptotic profiles of these stationary flows at spatial infinity are studied by Farwig and Hishida [@FH1; @FH2] and Farwig, Galdi, and Kyed [@FGK], where it is proved that the asymptotic profiles are described by the Landau solutions, stationary self-similar solutions to the Navier-Stokes equations in ${\mathbb{R}}^3\setminus\{0\}$. It is worthwhile to mention that, also in the two-dimensional case, the asymptotic profile is given by the stationary self-similar solution $c \frac{x^\bot}{|x|^2}$. The stability of the above stationary solutions has been well studied in the three-dimensional case; the global $L^2$ stability is proved in [@GSi], and the local $L^3$ stability is obtained by Hishida and Shibata [@HShi]. 
All results mentioned above are considered in the three-dimensional case, while only a few results are known so far for the flow around a rotating obstacle in the two-dimensional case. Important progress has been made by Hishida [@H3], where the asymptotic behavior of the two-dimensional stationary Stokes flow around a rotating obstacle is investigated in detail. Recently, the nonlinear problem has been analyzed in [@HMN], and the existence of a unique solution decaying as $O(|x|^{-1})$ is proved for sufficiently small $a$ and $f$ when the external force $f$ is of divergence form $f={\rm div}\, F$ and $F$ has a scale critical decay. Moreover, the leading profile at spatial infinity is shown to be $C \frac{x^\bot}{|x|^2}$ under the additional decay condition on $F$ such as $F=O(|x|^{-2-r})$, $r>0$. Since we consider the problem in ${\mathbb{R}}^2$ in this paper, by virtue of the absence of the physical boundary, we can show the existence of solutions to \eqref{NS_a} without assuming the smallness of the angular velocity $a$. To state our result let us introduce a function space. For a fixed number $s\geq 0$ the weighted $L^\infty$ space $L^\infty_s ({\mathbb{R}}^2)$ is defined as $$\begin{aligned} \label{def.L^infty_s} L^\infty_s ({\mathbb{R}}^2) & \,=\, \big \{ f\in L^\infty ({\mathbb{R}}^2)~|~ (1+|x|)^s f \in L^\infty ({\mathbb{R}}^2) \big \}\,. \end{aligned}$$ This space is a Banach space equipped with the natural norm $$\|f\|_{L^\infty_s} \,=\, {\rm ess.sup}_{x\in{\mathbb{R}}^2} (1+|x|)^s |f(x)|\,.$$ The first result of this paper is stated as follows. \[thm.main\] Let $a \in {\mathbb{R}}\setminus \{ 0 \}$ and $r\in[0,1)$. Assume that $f \in L^\infty_{3+r}({\mathbb{R}}^2)^2$. Then there exists a unique $(u,p) \in L^\infty_1 ({\mathbb{R}}^2)^2 \times L^\infty ({\mathbb{R}}^2)$ such that: 1. The couple $(u,p)$ satisfies \eqref{S_a} in the sense of distributions. 2. 
The velocity $u$ belongs to $L^\infty_1({\mathbb{R}}^2)^2$ and satisfies $$\begin{aligned} \label{1.1} u(x) \,=\, \int_{|y|<\frac{|x|}2} y^\bot \cdot f(y) {\,{\rm d}}y \ \frac{ x^{\bot} }{4\pi |x|^2} + \mathcal{R}[f](x)\,, \end{aligned}$$ with $$\begin{aligned} \label{1.2} |\mathcal{R}[f](x)| \leq \frac{C}{(1+|x|)^{1+r}}\big (\frac{1}{1-r} + \frac{1}{|a|^\frac{1+r}{2}} \big ) ||f||_{L^\infty_{3+r}}\,,\end{aligned}$$ and in particular, it follows that $$\begin{aligned} \label{1.3} \| \mathcal{R}[f]\|_{L^\infty_{1+r}} & \leq C \big (\frac{1}{1-r}+\frac{1}{|a|^\frac{1+r}{2}} \big ) \| f\|_{L^\infty_{3+r}}\end{aligned}$$ with a numerical constant $C$. 3. The pressure $p$ is given by $$\begin{aligned} \label{bangou} p(x) \,=\, \frac{1}{2\pi} \int_{{\mathbb{R}}^2} \frac{x-y}{|x-y|^2} \cdot f(y) {\,{\rm d}}y\,. \end{aligned}$$ \(1\) The representation \eqref{bangou} leads to the regularity of the pressure, namely $p\in L^\infty_1 ({\mathbb{R}}^2)$. The solution $(u,p)\in L^\infty_1({\mathbb{R}}^2)^2 \times L^\infty_1({\mathbb{R}}^2)$ satisfying \eqref{S_a} in the sense of distributions is unique by virtue of the uniqueness result in Hishida [@H3 Lemma 3.5]. \(2\) In [@H3] the result of Theorem \[thm.main\] was first established under conditions on $f$ such as $f \in L^1({\mathbb{R}}^2)^2 \cap L^\infty({\mathbb{R}}^2)^2,\ x^\bot \cdot f \in L^1({\mathbb{R}}^2)^2$, and $f(x)=O(|x|^{-3}(\log{|x|})^{-1})$ as $|x|\rightarrow \infty$. Our result improves on this, and in particular, the critical case $f = O(|x|^{-3})$ is treated. We note that, in the case $r=0$, the integral in the right-hand side of \eqref{1.1} does not converge in general when $|x|\rightarrow \infty$. To study the nonlinear problem it is reasonable to consider the linear problem when the external force $f$ is given by $f={\rm div}\,F$ with $F(x) = O(|x|^{-2})$ in view of the structure of the nonlinear term $u\cdot \nabla u = {\rm div}\, (u \otimes u)$. Here the matrix $(u_i v_j)_{1\leq i,j \leq 2}$ is written as $u \otimes v$. 
The following result is essentially obtained in [@HMN]. \[lem.F\] Let $a \in {\mathbb{R}}\setminus \{ 0 \}$, $r\in[0,1)$, and $q\in (1,\infty)$. Assume that $f\in L^2({\mathbb{R}}^2)^2$ is of divergence form $f={\rm div}\, F=(\partial_1 F_{11} + \partial_2 F_{12}, \partial_1 F_{21} + \partial_2 F_{22})^\top$ with $F=(F_{ij})_{1\leq i,j\leq 2} \in L^\infty_{2+r}({\mathbb{R}}^2)^{2\times2} $. Then there exists a unique $(u,p) \in L_1^\infty({\mathbb{R}}^2)^2 \times L^q ({\mathbb{R}}^2)$ such that $(u,p)$ satisfies \eqref{S_a} in the sense of distributions, and $u$ satisfies $$\begin{aligned} \label{thelemma2} u(x) \,=\, \int_{|y|<\frac{|x|}2} (F_{12}(y) - F_{21}(y)) {\,{\rm d}}y \frac{x^\bot}{4\pi |x|^2} + \mathcal{R}[f](x)\,,\end{aligned}$$ and $\mathcal{R}[f]$ satisfies $$\begin{aligned} \label{thelemma3} \begin{split} |\mathcal{R}[f](x)| & \leq C \min \big\{ \frac{1}{|a| |x|^3}, \frac{1}{|x|} \big\} \int_{|y|\leq \frac{|x|}{2}} |F(y) | {\,{\rm d}}y \\ & \qquad + \frac{C}{(1+|x|)^{1+r}} \frac{1}{1-r} \|F\|_{L^\infty_{2+r}}\,. \end{split}\end{aligned}$$ Here $C$ is a numerical constant independent also of $a$ and $r$. In particular, it follows that $$\begin{aligned} \label{1.4} \| \mathcal{R}[f]\|_{L^\infty_{1+r}} \leq C \big (\frac{1}{1-r} + \frac{1 + \log |a|}{|a|^\frac{r}{2}} \big ) \| F \|_{L^\infty_{2+r}}\end{aligned}$$ with a numerical constant $C$. \(1\) In fact, the statement of [@HMN Theorem 3.1] is slightly different from Theorem \[lem.F\] above. So we give a sketch of the proof of Theorem \[lem.F\] in Section \[sec.pre\], based on the key pointwise asymptotic estimate of the fundamental solution, see Lemma \[lem.thm.linear.whole.1\] below, which is due to [@HMN Lemma 3.3]. \(2\) Estimate \eqref{1.4} is derived from \eqref{thelemma3}. Indeed, when $|x|\leq 1$ the first term in the right-hand side of \eqref{thelemma3} is estimated as $C|x|\|F\|_{L^\infty}$, while when $|x|\geq 1$ this term is estimated by dividing into two cases (i) $|a||x|^2\leq 1$ and (ii) $|a||x|^2\geq 1$. 
The factor $\log |a|$ in \eqref{1.4} is required only when $r$ is near $0$ in order to ensure that the constant $C$ is independent of $r$. The linear results of Theorems \[thm.main\] and \[lem.F\] are applied to the nonlinear problem \eqref{NS_a}. The result for the nonlinear problem is stated as follows. \[thm.main2\] Let $a \in {\mathbb{R}}\setminus \{ 0 \}$ and $r\in [0,1)$. Then there exists $\delta=\delta(a,r)>0$ such that, for any $f \in L^\infty_{3+r}({\mathbb{R}}^2)^2$ satisfying $x^\bot \cdot f\in L^1 ({\mathbb{R}}^2)$ and $$\begin{aligned} \label{delta} \| x^\bot \cdot f\|_{L^1} + \| f \|_{L^\infty_{3+r}} < \delta\,, \end{aligned}$$ there exists a unique solution $(u,p)$ to \eqref{NS_a} such that $$\begin{aligned} \begin{split} u(x) \,=\, \alpha U(x) + v(x)\,, \end{split}\end{aligned}$$ where $$\begin{aligned} \label{alpha} \alpha \,=\, \frac12 \int_{{\mathbb{R}}^2} y^{\bot} \cdot f(y)\ {\,{\rm d}}y\,, \qquad U(x) \,=\, \frac1{2\pi} \frac{x^\bot}{|x|^2}&(1-e^{-\frac{|x|^2}4})\,, \end{aligned}$$ and $$\begin{aligned} \label{1.5} \| v \|_{L^\infty_{1+r}} & \leq C_r \big (\frac{1}{1-r} + \frac{1}{|a|^\frac{1+r}{2}} \big) \big (\| x^\bot \cdot f \|_{L^1} + \| f \|_{L^\infty_{3+r}} \big )\,,\end{aligned}$$ and the pressure $p$ is given by $$\begin{aligned} \label{bangou2} p(x) \,=\, \nabla \cdot (-\Delta)^{-1} \nabla \cdot (u \otimes u) - \nabla \cdot (-\Delta)^{-1} f\,. \end{aligned}$$ Here the constant $C_r$ depends only on $r$. [In Theorem \[thm.main2\] the solution is constructed as the solution to the integral equation associated with \eqref{NS_a}, which is formulated based on the fundamental solution to the linearized problem \eqref{S_a}. The uniqueness is proved for this class of solutions. ]{} This paper is organized as follows. In Section \[sec.pre\] we collect the estimates which reflect the effect of the rotation. Most of them are the abstractions from [@H3; @HMN]. Theorem \[thm.main\] is proved in Section \[sec.thm.main\]. Finally, Theorem \[thm.main2\] is proved in Section \[sec.nonlinear\]. 
Preliminaries {#sec.pre} ============= Let us consider the linear problem in the whole plane for $a \in {\mathbb{R}}\setminus \{0\}$: $$\tag{S$_a$}\label{a} - \Delta u -a (x^\bot \cdot \nabla u-u^\bot)+ \nabla p \,=\, f\,, \qquad {\rm div}\ u \,=\, 0\,, \qquad \quad x \in {\mathbb{R}}^2\,.$$ Let $q\in [1,\infty]$. The couple $(u,p)\in L^\infty ({\mathbb{R}}^2)^2\times L^q({\mathbb{R}}^2)$ is said to be a weak solution to \eqref{a} if (i) ${\rm div}\, u=0$ in the sense of distributions, and (ii) $(u,p)$ satisfies $$\label{def.weak.whole} \int_{{\mathbb{R}}^2} u\cdot T_{-a} \phi {\,{\rm d}}x - \int_{{\mathbb{R}}^2} p \, {\rm div}\, \phi {\,{\rm d}}x \,=\, \int_{{\mathbb{R}}^2} f \cdot \phi {\,{\rm d}}x\,, \qquad {\rm for~all} \ \ \phi\in \mathcal{S}({\mathbb{R}}^2)^2\,,$$ where $$\begin{aligned} T_a u \,=\, -\Delta u -a(x^\bot \cdot \nabla u -u^\bot)\,.\end{aligned}$$ Let $\mathbb{I}=(\delta_{ij})_{1\leq i,j \leq 2}$ be the identity matrix. The velocity part of the fundamental solution to \eqref{a}, which plays a central role throughout this paper, is defined as $$\label{gamma} \Gamma_{a}(x,y) \,=\, \int_{0}^{\infty} O(a t)^{\top}K(O(a t)x-y,t) {\,{\rm d}}t\,,$$ where $$\begin{aligned} \label{kernel} K(x,t) \, = \, G(x,t) \mathbb{I} + H(x,t)\,, \qquad H(x,t) \, = \, \int_{t}^{\infty} \nabla^2 G(x,s) {\,{\rm d}}s\,, \end{aligned}$$ and $G(x,t)$ is the two-dimensional Gauss kernel $$G(x,t) \,=\, \frac{1}{4\pi t} e^{-\frac{|x|^2}{4t}}\,.$$ Similarly, the pressure part of the fundamental solution is defined as $$Q(x-y) \,=\, \frac1{2\pi}\log{|x-y|}\,,$$ since the following identity holds: $${\rm div}\, (x^\bot \cdot \nabla u-u^\bot )=x^\bot \cdot \nabla{\rm div}\, u \,=\, 0\,.$$ We can also write $H(x,t)$ in \eqref{kernel} as follows: $$\begin{aligned} H(x,t) \,=\, -\frac{(x \otimes x)}{|x|^2} G(x,t) + \bigg( \frac{x \otimes x}{|x|^2} - \frac{\mathbb I}{2} \bigg) \frac{1- e^{-\frac{|x|^2}{4t}}}{\pi |x|^2}\,.\end{aligned}$$ The next lemma is proved in [@H3; @HMN]. 
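Before turning to that lemma, the closed form of $H(x,t)$ above can be checked against its defining integral $\int_t^\infty \nabla^2 G(x,s)\,{\rm d}s$. The following numerical sketch (Python with numpy/scipy assumed; not part of the original) compares the two expressions entrywise:

```python
import numpy as np
from scipy.integrate import quad

def G(x, t):
    # 2D Gauss kernel: G(x,t) = exp(-|x|^2/(4t)) / (4 pi t)
    return np.exp(-np.dot(x, x) / (4 * t)) / (4 * np.pi * t)

def hess_G(x, t):
    # Hessian of G: (x_i x_j / (4 t^2) - delta_ij / (2 t)) G(x,t)
    return (np.outer(x, x) / (4 * t**2) - np.eye(2) / (2 * t)) * G(x, t)

def H_integral(x, t):
    # Defining formula: H(x,t) = int_t^infty (nabla^2 G)(x,s) ds
    return np.array([[quad(lambda s: hess_G(x, s)[i, j], t, np.inf)[0]
                      for j in range(2)] for i in range(2)])

def H_closed(x, t):
    # Closed form stated in the text
    r2 = np.dot(x, x)
    P = np.outer(x, x) / r2
    return -P * G(x, t) + (P - np.eye(2) / 2) * (1 - np.exp(-r2 / (4 * t))) / (np.pi * r2)

x, t = np.array([1.0, -0.5]), 0.3
assert np.allclose(H_integral(x, t), H_closed(x, t), atol=1e-7)
```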
\[lem.thm.linear.whole.1\] Set $$\label{def.Lxy} L(x,y) \,=\, \frac{x^\bot \otimes y^\bot}{4\pi |x|^2}\,.$$ Then for $m=0,1$ the kernel $\Gamma_{a}(x,y)$ satisfies $$\label{est.lem.thm.linear.whole.1.1} \begin{split} & | \nabla_y ^m \big ( \Gamma_{a}(x,y)- L(x,y) \big ) | \\ & \le C \bigg ( \delta_{0m} \min \big\{ \frac{1}{|a| |x|^2}, \frac{1}{|a|^{\frac{1}{2}} |x|} \big\} + |x|^{1-m} \min \big\{ \frac{1}{|a| |x|^3}, \frac{1}{|x|} \big\} + \frac{|y|^{2-m}} {|x|^2}\bigg )\,,\\ & \quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad \quad\quad \quad\quad\quad \quad\quad\quad\quad\quad {\rm for} \quad |x|> 2 |y|\,. \end{split}$$ Here $\delta_{0m}$ is the Kronecker delta and $C$ is independent of $x$, $y$, and $a$. \[rem.lem.thm.linear.whole.1\] \(1\) The asymptotic estimate \eqref{est.lem.thm.linear.whole.1.1} is proved in [@H3] when $m=0$; the dependence on $|a|$ is then improved in [@HMN], which is needed to solve the nonlinear problem. The detailed proof for the case $m=1$ of \eqref{est.lem.thm.linear.whole.1.1} is given in [@HMN]. \(2\) Note that, when $|y|>2|x|$, since $\Gamma_a(x,y)=\Gamma_{-a}(y,x)^\top $ and $(y^\bot \otimes x^\bot)^\top=x^\bot \otimes y^\bot$ we have a similar estimate: $$\begin{aligned} \label{reverce lem} \begin{split} & | \Gamma_{a}(x,y)- \frac{x^\bot \otimes y^\bot}{4\pi |y|^2} | \\ & \le C \bigg (\min \big\{ \frac{1}{|a| |y|^2}, \frac{1}{|a|^{\frac{1}{2}} |y|} \big\} + |y| \min \big\{ \frac{1}{|a| |y|^3}, \frac{1}{|y|} \big\} + \frac{|x|^{2}} {|y|^2}\bigg )\,,\\ & \quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad \quad\quad \quad\quad\quad \quad\quad\quad\quad\quad {\rm for} \quad |y|> 2|x|\,. \end{split}\end{aligned}$$ [*Proof of Theorem \[lem.F\]*]{}. Here we give a sketch of the proof. The unique solution $u$ to \eqref{a} decaying at spatial infinity is expressed as $u(x) = \int_{{\mathbb{R}}^2} \Gamma_a (x,y) f (y) {\,{\rm d}}y$, and we focus on the proof of \eqref{thelemma2} and \eqref{thelemma3}. 
Integrating by parts, we have $$\begin{aligned} \int_{{\mathbb{R}}^2} \Gamma_a (x,y) f (y) {\,{\rm d}}y & \,=\, -\int_{{\mathbb{R}}^2} \nabla_y \Gamma_a (x,y) F (y) {\,{\rm d}}y \\ & \,=\, - \bigg ( \int_{|y|<\frac{|x|}{2}} + \int_{\frac{|x|}{2}\leq |y|} \bigg ) \nabla_y \Gamma_a (x,y) F (y) {\,{\rm d}}y \\ & \,=\, I (x) + II(x)\,.\end{aligned}$$ The term $I$ is further decomposed as $$\begin{aligned} I (x) & \,=\, -\int_{|y| < \frac{|x|}{2}} \nabla_y L(x,y) F (y) {\,{\rm d}}y - \int_{|y| < \frac{|x|}{2}} \nabla_y \big ( \Gamma_a (x,y) - L(x,y) \big ) F (y) {\,{\rm d}}y\\ & \,=\, I_1 (x) + I_2 (x)\,.\end{aligned}$$ By the definition of $L(x,y)$ we have $-\big ( \nabla_y L(x,y)\big ) F = (F_{12}-F_{21}) \frac{x^\bot}{4\pi |x|^2}$, which implies $$\begin{aligned} I_1 (x) \,=\, \int_{|y| < \frac{|x|}{2}} \big (F_{12}(y) -F_{21}(y) \big ) {\,{\rm d}}y \frac{x^\bot}{4\pi |x|^2}\,. \label{est.I_1}\end{aligned}$$ As for $I_2$, when $|x| \geq 1$ we have from \eqref{est.lem.thm.linear.whole.1.1} with $m=1$, $$\begin{aligned} \label{est.I_2} |I_2 (x) | & \leq C \min \big\{ \frac{1}{|a| |x|^3}, \frac{1}{|x|} \big\} \int_{|y|\leq \frac{|x|}{2}} |F(y) | {\,{\rm d}}y + \frac{C}{|x|^2} \int_{|y|\leq \frac{|x|}{2}} |y| |F(y)| {\,{\rm d}}y \nonumber \\ & \leq C \min \big\{ \frac{1}{|a| |x|^3}, \frac{1}{|x|} \big\} \int_{|y|\leq \frac{|x|}{2}} |F(y) | {\,{\rm d}}y + \frac{C}{(1+|x|)^{1+r}} \frac{1}{1-r} \| F \|_{L^\infty_{2+r}}\,,\end{aligned}$$ where $C$ is a numerical constant independent also of $r$. 
Next, we have from a direct calculation $$|(\nabla_x K)(x,t)| \leq C \big( t^{-\frac{3}{2}} e^{-\frac{|x|^2}{16t}} + \int_{t}^{\infty} s^{-\frac{5}{2}} e^{- \frac{|x|^2}{16s}} {\,{\rm d}}s \big)\,,$$ which implies $$\begin{aligned} \int_0^\infty |(\nabla K) (O(a t) x,t )| {\,{\rm d}}t\leq \frac{C}{|x|}\,, \qquad ~x\ne 0\,.\end{aligned}$$ Then, by the change of variables $y = O(a t)z$, we have $$\begin{aligned} \label{proof.thm.linear.whole.5} |II(x)| & \leq \big| \int_{|y| \ge \frac{|x|}{2} } \nabla_{y} \Gamma_{a}(x,y) F(y) {\,{\rm d}}y \big| \nonumber \\ & \leq \int_{0}^{\infty} \int_{|y| \ge \frac{|x|}{2} } |(\nabla K)(O(a t)x - y, t)| |F(y)| {\,{\rm d}}y {\,{\rm d}}t \nonumber \\ & \leq C \| F \|_{L^\infty_{2+r}} \int_{|z| \ge \frac{|x|}{2} } \bigg( \int_{0}^{\infty} |(\nabla K)(O(a t)(x - z), t)| {\,{\rm d}}t \bigg) (1+|z|)^{-2-r} {\,{\rm d}}z \nonumber \\ & \leq C \| F \|_{L^\infty_{2+r}} \int_{|z| \geq \frac{|x|}{2}} |x-z|^{-1} (1+|z|)^{-2-r} {\,{\rm d}}z \nonumber \\ & \leq \frac{C}{(1+|x|)^{1+r}} \|F\|_{L^\infty_{2+r}}\,.\end{aligned}$$ Here $C$ is a numerical constant. From \eqref{est.I_1}, \eqref{est.I_2}, and \eqref{proof.thm.linear.whole.5}, we conclude \eqref{thelemma2} and \eqref{thelemma3} for $r\in [0,1)$. The proof of Theorem \[lem.F\] is complete. Proof of linear result {#sec.thm.main} ====================== In this section we prove Theorem \[thm.main\]. Set $$\begin{aligned} \label{th1} L [f](x) \,=\, \int_{\mathbb{R}^2} \Gamma_a (x,y) f(y) {\,{\rm d}}y\,,\end{aligned}$$ where $\Gamma_a (x,y)$ is given by \eqref{gamma}. It is known by [@H3 Lemma 3.5] that $u=L[f]$ together with $p$ defined by \eqref{bangou} is the unique weak solution to \eqref{a} decaying at spatial infinity. So we focus on the proof of the estimates \eqref{1.1} and \eqref{1.2} here. 
To apply Lemma \[lem.thm.linear.whole.1\] we first divide \eqref{th1} into three parts: $$\begin{aligned} L [f](x)&\,=\, U_1(x) +U_2(x) + U_3(x) \\ &\,=\, \bigg ( \int_{|y|<\frac{|x|}{2}} + \int_{\frac{|x|}{2}\leq |y|\leq 2 |x|} + \int_{2|x|<|y|} \bigg )\, \Gamma_a (x,y) f(y) {\,{\rm d}}y\,.\end{aligned}$$ By Lemma \[lem.thm.linear.whole.1\] we have $$\label{th2} U_1(x) \,=\, \int_{|y|<\frac{|x|}{2}} y^\bot \cdot f(y) {\,{\rm d}}y \frac{x^\bot}{4\pi |x|^2} + W_1(x)$$ with $$\begin{aligned} \label{th3} |W_1(x)| &\leq C \min \big\{ \frac{1}{|a| |x|^2}, \frac{1}{|a|^{\frac{1}{2}} |x|} \big\} \int_{|y|<\frac{|x|}{2}} |f(y)| {\,{\rm d}}y \nonumber \\ & \qquad + C \min \big\{ \frac{1}{|a| |x|^2}, 1 \big\} \int_{|y|<\frac{|x|}{2}} |f(y)| {\,{\rm d}}y + \frac{C}{|x|^{2}} \int_{|y|<\frac{|x|}{2}} |y|^2 |f(y)| {\,{\rm d}}y \nonumber \\ &\leq \begin{cases} & \displaystyle C \big ( \frac{1}{|a|^\frac12} + 1\big ) \| f \|_{L^\infty}\,, \qquad \qquad |x|\leq 1\,, \\ & \displaystyle C \big \{ \frac{1}{|a|^\frac{1+r}{2} |x|^{1+r}} + \frac{1}{(1-r)|x|^{1+r}} \big \} \| f \|_{L^\infty_{3+r}}\,, \qquad \qquad |x|\geq 1 \end{cases} \nonumber \\ & \leq \frac{C}{(1+|x|)^{1+r}} \big ( \frac{1}{|a|^\frac{1+r}{2}} + \frac{1}{1-r} \big )\| f \|_{L^\infty_{3+r}}\,.\end{aligned}$$ Similarly, by Remark \[rem.lem.thm.linear.whole.1\] we have $$\begin{aligned} \label{th4} |U_3(x)| &\leq C \int_{2|x|<|y|} \bigg ( \min \big\{ \frac{1}{|a| |y|^2}, \frac{1}{|a|^{\frac{1}{2}} |y|} \big\} + \min \big\{ \frac{1}{|a| |y|^2}, 1 \big\} + \frac{|x|}{|y|} \bigg ) |f(y)| {\,{\rm d}}y \nonumber \\ & \leq \begin{cases} & \displaystyle C \big ( \frac{1}{|a|^\frac12} + 1 \big ) \| f\|_{L^\infty_{3+r}}\,, \qquad \qquad |x|\leq 1\,, \\ & \displaystyle \frac{C}{|x|^{1+r}} \big ( \frac{1}{|a|^\frac{1}{2}} + 1 \big ) \| f\|_{L^\infty_{3+r}}\,, \qquad \qquad |x|\geq 1\,. 
\end{cases}\end{aligned}$$ From \eqref{th4} we have $$\begin{aligned} \label{th4''} |U_3 (x)| \leq \frac{C}{(1+|x|)^{1+r}} \big ( \frac{1}{|a|^\frac{1}{2}} + 1\big ) \| f \|_{L^\infty_{3+r}}\end{aligned}$$ with a numerical constant $C$. Finally, we decompose $U_2(x)$ as $$\label{th5} U_2(x) \,=\, U_{2,1}(x) + U_{2,2}(x)\,, \qquad U_{2,1} \,=\, U_{2,1,1} + U_{2,1,2}$$ with $$\begin{aligned} U_{2,1,1}(x) &\,=\, \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^l_0 O(at)^\top \frac{1}{8\pi t} e^{-\frac{|O(at)x-y|^2}{4t}} f(y) {\,{\rm d}}t {\,{\rm d}}y\,, \\ U_{2,1,2}(x) &\,=\, \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^\infty_l O(at)^\top \frac{1}{8\pi t} e^{-\frac{|O(at)x-y|^2}{4t}} f(y) {\,{\rm d}}t {\,{\rm d}}y\,, \\ U_{2,2}(x) &\,=\, \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^\infty_0 O(at)^\top \big ( K(O(at) x-y,t) - \frac{1}{8\pi t} e^{-\frac{|O(at)x-y|^2}{4t}} \mathbb{I} \big ) f(y) {\,{\rm d}}t {\,{\rm d}}y\,,\end{aligned}$$ where $l=l(a,|x|)>0$ will be chosen later. We start with the estimate of $U_{2,1,1}(x)$. By Fubini’s theorem and changing the variable as $z=O(at)x-y$ we obtain $$\begin{aligned} \label{th6.1} |U_{2,1,1}(x)| & \leq \frac{C}{(1+|x|)^{3+r}} \|f\|_{L^\infty_{3+r}} \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^l_0 t^{-1}e^{-\frac{|O(at)x-y|^2}{4t}} {\,{\rm d}}t {\,{\rm d}}y \nonumber \\ &\leq \frac{C}{(1+|x|)^{3+r}} \|f\|_{L^\infty_{3+r}} \int^l_0 \int_{{\mathbb{R}}^2} t^{-1} e^{-\frac{|O(at)x-y|^2}{4t}} {\,{\rm d}}y {\,{\rm d}}t \nonumber \\ &\leq \frac{C}{(1+|x|)^{3+r}} \|f\|_{L^\infty_{3+r}} \int_0^l \int_{{\mathbb{R}}^2} t^{-1} e^{-\frac{|z|^2}{4t}} {\,{\rm d}}z {\,{\rm d}}t \nonumber \\ &\leq \frac{C}{(1+|x|)^{3+r}} l \, \|f\|_{L^\infty_{3+r}} \,.\end{aligned}$$ Here $C$ is a numerical constant. Next we estimate $U_{2,1,2}$. 
Since $$\begin{aligned} O(at)^\top \,=\, -\frac1{a} \frac{{\,{\rm d}}}{{\,{\rm d}}t} \dot{O}(at)^\top\,,\end{aligned}$$ integration by parts yields $$\begin{aligned} \label{th7.1} U_{2,1,2}(x) &= -\frac1{2a} \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^\infty_l \big( \frac{{\,{\rm d}}}{{\,{\rm d}}t} \dot{O}(at)^\top \big ) G(O(at)x-y,t) f(y) {\,{\rm d}}t {\,{\rm d}}y \nonumber \\ &= \frac1{2a} \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^\infty_l \dot{O}(at)^\top \frac{{\,{\rm d}}}{{\,{\rm d}}t} \big ( G(O(at)x-y,t) \big ) f(y) {\,{\rm d}}t {\,{\rm d}}y + W_2(x)\,,\end{aligned}$$ and the remainder term $W_2$ is estimated as $$\begin{aligned} \label{th7.2} |W_2(x)| &\leq \frac{C}{|a|} \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} |G(O (a l) x-y, l) f(y)| {\,{\rm d}}y \nonumber \\ & \leq \frac{C}{(1+|x|)^{1+r}} \frac{1}{l |a|} \| f\|_{L^\infty_{3+r}} \,.\end{aligned}$$ To estimate the first term in the right-hand side of \eqref{th7.1} we use the following calculation: $$\begin{aligned} & \frac{{\,{\rm d}}}{{\,{\rm d}}t} G(O(at)x-y,t) \\ &\quad =\frac{e^{-\frac{|O(at)x-y|^2}{4t}}}{4\pi} \big\{ -t^{-2} + t^{-3} \frac{|O(at)x-y|^2}{4} - a t^{-2} \frac{(\dot O(at)x) \cdot (O(at)x - y)}{2} \big\} \,.\end{aligned}$$ Hence we have $$\begin{aligned} \big | \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^\infty_l \dot{O}(at)^\top \frac{{\,{\rm d}}}{{\,{\rm d}}t} \big ( G(O(at)x-y,t) \big ) f(y) {\,{\rm d}}t {\,{\rm d}}y \big | \hspace{-9cm} \\ &\leq C \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^\infty_l \big ( t^{-2} + t^{-3}|O(at)x-y|^2 \big ) e^{-\frac{|O(at)x-y|^2}{4t}} |f(y)| {\,{\rm d}}t {\,{\rm d}}y \\ &\quad + C \ |a| \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^\infty_l t^{-2} |x| |O(at)x-y| e^{-\frac{|O(at)x-y|^2}{4t}} |f(y)| {\,{\rm d}}t {\,{\rm d}}y \\ &\leq \frac{C}{(1+|x|)^{3+r}} \|f\|_{L^\infty_{3+r}} \big ( \frac{1}{l} |x|^2 +|a||x| \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int_0^\infty t^{-\frac32} e^{-\frac{|O(at)x-y|^2}{8t}} {\,{\rm d}}t {\,{\rm d}}y \big )\,,\end{aligned}$$ and then, by the 
change of variables as $z = O(at)^\top y$ we see $$\begin{aligned} \label{th7.3} &\leq \frac{C}{(1+|x|)^{3+r}} \|f\|_{L^\infty_{3+r}} \big (\frac1{l} |x|^2 + |a||x| \int_{\frac{|x|}{2}\leq |z|\leq 2|x|} \int_0^\infty t^{-\frac32}e^{-\frac{|x-z|^2}{8t}} {\,{\rm d}}t {\,{\rm d}}z \big ) \nonumber \\ &\leq \frac{C}{(1+|x|)^{3+r}} \|f\|_{L^\infty_{3+r}} \big (\frac1{l} |x|^2 + |a||x| \int_{|x-z|\leq3|x|} \frac{{\,{\rm d}}z}{|x-z|} \big ) \nonumber \\ &\leq \frac{C}{(1+|x|)^{1+r}} (\frac{1}{l} + |a|) \| f\|_{L^\infty_{3+r}}\,.\end{aligned}$$ Then \eqref{th7.1}, \eqref{th7.2}, and \eqref{th7.3} imply that $$\label{th7} |U_{2,1,2}(x)| \leq \frac{C}{(1+|x|)^{1+r}} (\frac{1}{l|a|} + 1) \|f\|_{L^\infty_{3+r}}\,.$$ Here $C$ is a numerical constant. On the other hand, the term $U_{2,2}$ converges absolutely without using the effect of rotation. Indeed, changing the variables $y = O(at) z$, we have $$\begin{aligned} \label{th8.1} \begin{split} \big | \int_{\frac{|x|}{2}\leq |y|\leq 2|x|} \int^\infty_0 O(at)^\top \big ( K(O(at)x-y,t) - \frac{1}{8\pi t} e^{-\frac{|O(at)x-y|^2}{4t}} \mathbb{I} \big ) f(y) {\,{\rm d}}t {\,{\rm d}}y \big | \\ \leq \frac{C}{(1+|x|)^{3+r}} \|f\|_{L^\infty_{3+r}} \int_{|z|\leq2|x|} \int^\infty_0 |B(x-z,t)| {\,{\rm d}}t {\,{\rm d}}z \,, \end{split}\end{aligned}$$ where $B(x,t)$ is given by $$\begin{aligned} B(x,t) \,=\, \bigg ( \frac{e^{-\frac{|x|^2}{4t}}}{8\pi t} - \frac{1-e^{-\frac{|x|^2}{4t}}}{2\pi |x|^2} \bigg ) \bigg ( \mathbb{I} - 2\frac{x\otimes x}{|x|^2} \bigg )\,.\end{aligned}$$ For any fixed $x-z$ we have from the change of variables as $s = \frac{|x-z|^2}{4t}$, $$\begin{aligned} \label{th8.2} \int_0^\infty |B(x-z,t) | {\,{\rm d}}t \leq \big | \mathbb{I} - 2\frac{(x-z)\otimes (x-z)}{|x-z|^2} \big | \int_0^\infty \frac{-s e^{-s} + 1-e^{-s}}{s^2} {\,{\rm d}}s \leq C\,,\end{aligned}$$ where $C$ is independent of $x-z$. 
Here we have used the identity $$\begin{aligned} \frac{{\,{\rm d}}}{{\,{\rm d}}s} \big ( \frac{e^{-s}-1}{s} \big ) \,=\, \frac{-s e^{-s} + 1-e^{-s}}{s^2} > 0\,.\end{aligned}$$ We combine \eqref{th8.1} with \eqref{th8.2} to conclude $$\begin{aligned} \label{th8} |U_{2,2}(x)| \leq \frac{C}{(1+|x|)^{3+r}} \|f\|_{L^\infty_{3+r}}\,.\end{aligned}$$ Here $C$ is a numerical constant. Collecting \eqref{th5}, \eqref{th6.1}, \eqref{th7}, and \eqref{th8}, we see $$\begin{aligned} |U_2(x)| \leq \frac{C}{(1+|x|)^{1+r}} \big\{ \frac{l}{(1+|x|)^2} + \frac{1}{l|a|} + 1 \big\} \| f\|_{L^\infty_{3+r}}\,,\end{aligned}$$ and thus, by taking $l = \frac{1+|x|}{|a|^{\frac12}}$, $$\begin{aligned} \label{th9} |U_2(x)| \leq \frac{C}{(1+|x|)^{1+r}} \big\{ \frac{1}{(1+|x|) |a|^\frac12} + 1 \big\} \| f\|_{L^\infty_{3+r}}\,.\end{aligned}$$ From \eqref{th2}, \eqref{th3}, \eqref{th4''}, and \eqref{th9}, we obtain \eqref{1.1} and \eqref{1.2}. The proof of Theorem \[thm.main\] is complete. Proof of nonlinear result {#sec.nonlinear} ========================= We are now in a position to give a proof of our main result. The unique existence and the asymptotic behavior of solutions to \eqref{NS_a} will be obtained by combining the results of Theorems \[lem.F\] and \[thm.main\] with the standard fixed point argument. For $r\in [0,1)$ and $\delta\in (0,1)$ we introduce the function space $X_{r,\delta}$ as follows. 
$$\begin{aligned} X_{r,\delta} \,=\, \{ v \in L^\infty_{1+r}({\mathbb{R}}^2)^2 ~|~ \ ||v||_{L^\infty_{1+r}}\leq\delta, \ \ {\rm div}\ v= 0 \}\,.\end{aligned}$$ We also set $$\begin{aligned} \label{the1} \begin{split} U(x) & \,=\, \frac1{2\pi} \frac{x^\bot}{|x|^2}(1-e^{-\frac{|x|^2}4})\,,\\ \alpha & \,=\, \frac{\int_{{\mathbb{R}}^2} y^{\bot} \cdot f(y)\ {\,{\rm d}}y}{\int_{{\mathbb{R}}^2} y^{\bot} \cdot \Delta U(y) \ {\,{\rm d}}y} \,=\, \frac12 \int_{{\mathbb{R}}^2} y^{\bot} \cdot f(y)\ {\,{\rm d}}y\,, \\ w(x) & \,=\, u(x) - \alpha U(x)\,. \end{split} \end{aligned}$$ Here we have used the fact $\int_{{\mathbb{R}}^2} x^\bot \cdot \Delta U {\,{\rm d}}x = 2$, which is derived from the identity $$\Delta U \,=\, (-\partial_2 G\,, \partial_1 G)^\top\,, \qquad G(x) \,=\, \frac{1}{4\pi} e^{-\frac{|x|^2}{4}}\,.$$ A direct computation shows the existence of a scalar function $P_U \in L^\infty ({\mathbb{R}}^2)$ such that $$\begin{aligned} -a\big (x^\bot \cdot \nabla \alpha U - \alpha U^\bot \big ) + \alpha^2 U \cdot \nabla U \,=\, \nabla P_U\,.\end{aligned}$$ Then $w$ satisfies the following equations in ${\mathbb{R}}^2$: $$\begin{aligned} \begin{cases} -\Delta w -a(x^\bot \cdot \nabla w-w^\bot) + \nabla\pi \\ \qquad \,=\, -\alpha \big (U\cdot \nabla w + w\cdot \nabla U \big ) - w\cdot \nabla w -\alpha \Delta U + f\,, \\ {\rm div}\, w \,=\, 0\,. \end{cases}\end{aligned}$$ Here $\pi = p - P_U$. Let us recall that for $f \in L^\infty_{3} ({\mathbb{R}}^2)^2$, the function $L[f](x) = \int_{{\mathbb{R}}^2} \Gamma_a (x,y) f(y) {\,{\rm d}}y$ defines the unique weak solution to \eqref{a} decaying at spatial infinity. Then we introduce the map $\Phi$ as $$\begin{aligned} \label{the4} \Phi[w](x) \,=\, L\big[-\alpha(U\cdot \nabla w + w\cdot \nabla U) - w\cdot \nabla w -\alpha \Delta U + f\big](x)\,.\end{aligned}$$ Here we consider the leading profile of \eqref{the4}. 
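The identity $\Delta U = (-\partial_2 G, \partial_1 G)^\top$ used above can be verified directly; $U$ is the Lamb-Oseen-type vortex, whose vorticity is exactly $G$. A symbolic spot-check with sympy (a sketch, not part of the original; the residuals are evaluated at sample points rather than relying on full symbolic simplification):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
r2 = x1**2 + x2**2
G = sp.exp(-r2/4) / (4*sp.pi)
# U = (1/(2 pi)) x^perp/|x|^2 (1 - e^{-|x|^2/4}), written componentwise
U1 = -x2 / (2*sp.pi*r2) * (1 - sp.exp(-r2/4))
U2 =  x1 / (2*sp.pi*r2) * (1 - sp.exp(-r2/4))

lap = lambda f: sp.diff(f, x1, 2) + sp.diff(f, x2, 2)
d1 = lap(U1) + sp.diff(G, x2)   # should vanish: Delta U_1 = -d_2 G
d2 = lap(U2) - sp.diff(G, x1)   # should vanish: Delta U_2 =  d_1 G
for p1, p2 in [(0.7, -1.3), (2.1, 0.4)]:
    assert abs(float(d1.subs({x1: p1, x2: p2}))) < 1e-12
    assert abs(float(d2.subs({x1: p1, x2: p2}))) < 1e-12
```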
Since $U\cdot \nabla w + w\cdot \nabla U = {\rm div}\, (U\otimes w +w\otimes U)$ and $w\cdot \nabla w= {\rm div}\, ( w\otimes w )$, we see that $$\begin{aligned} \label{the5} \begin{split} \frac1{4\pi}&\left\{ \int_{|y|<\frac{|x|}{2}} (U\otimes w +w\otimes U)_{2,1}-(U\otimes w +w\otimes U)_{1,2}\ {\,{\rm d}}y \right. \\ &\qquad \left. + \int_{|y|<\frac{|x|}{2}} (w\otimes w)_{2,1} - (w\otimes w)_{1,2}\ {\,{\rm d}}y \right\} \\ \,=\,0\,. \end{split}\end{aligned}$$ From \eqref{the4} and \eqref{the5}, Theorems \[thm.main\] and \[lem.F\] yield $$\begin{aligned} \label{phi} \begin{split} \Phi[w](x) \,=\, & \mathcal{R}[-\alpha (U\cdot \nabla w + w\cdot \nabla U) - w\cdot \nabla w -\alpha \Delta U + f](x) \\ &+ \bigg( \int_{|y|<\frac{|x|}{2}} y^\bot \cdot f {\,{\rm d}}y - \alpha \int_{|y|<\frac{|x|}{2}} y^\bot \cdot \Delta U {\,{\rm d}}y\bigg) \frac{x^\bot}{4\pi |x|^2}\,. \end{split}\end{aligned}$$ To estimate the last term of \eqref{phi}, we have from the definition of $\alpha$ in \eqref{the1} and $\int_{{\mathbb{R}}^2} x^\bot \cdot \Delta U {\,{\rm d}}x =2$, $$\begin{aligned} \label{yokei} & \big | \int_{|y|<\frac{|x|}{2}} y^\bot \cdot f {\,{\rm d}}y - \alpha \int_{|y|<\frac{|x|}{2}} y^\bot \cdot \Delta U {\,{\rm d}}y \big| \nonumber \\ & \,=\, \frac12 \big| \int_{{\mathbb{R}}^2} y^\bot \cdot \Delta U {\,{\rm d}}y \ \int_{|y|<\frac{|x|}{2}} y^\bot \cdot f {\,{\rm d}}y - \int_{|y|<\frac{|x|}{2}} y^\bot \cdot \Delta U {\,{\rm d}}y\ \int_{{\mathbb{R}}^2} y^\bot \cdot f {\,{\rm d}}y \big| \nonumber \\ & \,=\, \frac12 \big| \int_{{\mathbb{R}}^2}y^\bot \cdot \Delta U {\,{\rm d}}y\ \bigg( \int_{|y|<\frac{|x|}{2}} y^\bot \cdot f {\,{\rm d}}y - \int_{{\mathbb{R}}^2} y^\bot \cdot f {\,{\rm d}}y \bigg) \nonumber \\ &\qquad- \bigg( \int_{|y|<\frac{|x|}{2}} y^\bot \cdot \Delta U {\,{\rm d}}y - \int_{{\mathbb{R}}^2} y^\bot \cdot \Delta U {\,{\rm d}}y \bigg) \ \int_{{\mathbb{R}}^2} y^\bot \cdot f {\,{\rm d}}y \big| \nonumber \\ & \,=\, \frac12 \big|-\int_{{\mathbb{R}}^2} y^\bot \cdot \Delta U {\,{\rm d}}y \int_{\frac{|x|}2\leq|y|} 
y^\bot \cdot f {\,{\rm d}}y + \int_{\frac{|x|}2\leq|y|} y^\bot \cdot \Delta U {\,{\rm d}}y \int_{{\mathbb{R}}^2} y^\bot \cdot f {\,{\rm d}}y \big| \nonumber \\ &\leq \frac{C_r}{(1+|x|)^{r}} (\|y^\bot\cdot f\|_{L^1} +\|f\|_{L^\infty_{3+r}} )\,, \qquad {\rm for}~~ |x|>1\,.\end{aligned}$$ Here the constant $C_r$ depends only on $r$. Note that the boundedness of $\|y^\bot\cdot f\|_{L^1}$ is automatic when $r>0$. When $|x|\leq 1$ it is easy to see $$\begin{aligned} \label{yokei'} \big| \int_{|y|<\frac{|x|}{2}} y^\bot \cdot f {\,{\rm d}}y - \alpha \int_{|y|<\frac{|x|}{2}} y^\bot \cdot \Delta U {\,{\rm d}}y \big| \leq C (\| y^\bot\cdot f\|_{L^1} + \|f\|_{L^\infty} )\,.\end{aligned}$$ Estimates \eqref{yokei} and \eqref{yokei'} combined with \eqref{phi} imply that $$\begin{aligned} \label{nokori} \begin{split} \Phi[w](x) \,=\, & \mathcal{R}[-\alpha(U\cdot \nabla w + w\cdot \nabla U) - w\cdot \nabla w -\alpha \Delta U + f](x) \\ & + O\bigg (( \|y^\bot\cdot f\|_{L^1} +\|f\|_{L^\infty_{3+r}})(1+|x|)^{-1-r} \bigg )\,. \end{split}\end{aligned}$$ By \eqref{nokori} and Theorems \[thm.main\] and \[lem.F\], we can verify that $$\begin{aligned} \label{saigo} \|\Phi[w] \|_{L^\infty_{1+r}} &\leq C \big (\frac{1}{1-r} + \frac{1+\log |a|}{|a|^\frac{r}{2}} \big ) \| \alpha (U \otimes w + w \otimes U) + \frac12 w\otimes w \|_{L^\infty_{2+r}} \nonumber \\ & \quad + C \big (\frac{1}{1-r} + \frac{1}{|a|^\frac{1+r}{2}} \big ) \|-\alpha \Delta U + f \|_{L^\infty_{3+r}} \nonumber \\ & \qquad + C_r \big ( \|y^\bot\cdot f\|_{L^1} +\|f\|_{L^\infty_{3+r}} \big) \nonumber \\ \begin{split} &\leq C \big (\frac{1}{1-r} + \frac{1+\log |a|}{|a|^\frac{r}{2}} \big ) \big ( |\alpha| \|w\|_{L^\infty_{1 + r}}+ \|w\|_{L^\infty_{1 + r}}^2 \big ) \\ & \quad + C_r \big ( \frac{1}{1-r} + \frac{1}{|a|^\frac{1+r}{2}} \big ) \big ( \| f \|_{L^\infty_{3+r}} + \|y^\bot\cdot f\|_{L^1} \big )\,, \end{split} \end{aligned}$$ where $C_r$ depends only on $r$, while $C$ is a numerical constant. 
We may take $C$ and $C_r$ larger than $1$, and note that $|\alpha|\leq 2^{-1}\| y^\bot \cdot f \|_{L^1}$. If $$0<\delta \leq \frac{1}{3C (\frac{1}{1-r}+\frac{1+\log |a|}{|a|^\frac{r}{2}})}$$ and $$\lambda(f) \,=\, \| f\|_{L^\infty_{3+r}} + \|y^\bot\cdot f\|_{L^1} \leq \frac{\delta}{3C_r (\frac{1}{1-r}+\frac{1}{|a|^\frac{1+r}{2}})}\,,$$ then we see that $\Phi[w]$ becomes a mapping from $X_{r,\delta}$ into $X_{r,\delta}$. Moreover, from \eqref{nokori} there is a numerical constant $C'>0$ such that $$\begin{aligned} & \|\Phi[w_1] - \Phi[w_2]\|_{L^\infty_{1 + r}} \\ & \quad \,=\, \| \mathcal{R} \big[-\alpha \big\{ U\cdot \nabla (w_1 - w_2) + (w_1 - w_2)\cdot \nabla U \big\} - w_1\cdot \nabla w_1 +w_2\cdot \nabla w_2 \big] \|_{L^\infty_{1+ r}} \\ & \quad \leq C' \big (\frac{1}{1-r}+\frac{1+\log |a|}{|a|^\frac{r}{2}} \big ) \| -\alpha\big\{ (U\otimes (w_1 - w_2) + (w_1 - w_2)\otimes U) \big\} \\ & \qquad \qquad \qquad \qquad \qquad \qquad \quad - \frac12 (w_1 \otimes w_1 - w_2 \otimes w_2) \|_{L^\infty_{2+r}} \\ & \quad \leq C' \big (\frac{1}{1-r}+\frac{1+\log |a|}{|a|^\frac{r}{2}} \big ) (|\alpha| + \|w_1\|_{L^\infty_{1+ r}} + \|w_2\|_{L^\infty_{1+ r}}) \|w_1 - w_2\|_{L^\infty_{1 + r}} \\ & \quad \leq C' \big (\frac{1}{1-r} +\frac{1+\log |a|}{|a|^\frac{r}{2}} \big )\big(\frac{\lambda(f)}{2} +2\delta \big) \|w_1 - w_2\|_{L^\infty_{1 + r}} \\ & \quad \leq 3\delta C' \big (\frac{1}{1-r} +\frac{1+\log |a|}{|a|^\frac{r}{2}}\big ) \|w_1 - w_2\|_{L^\infty_{1 + r}} \\ & \quad \,=\, \tau \|w_1 - w_2\|_{L^\infty_{1 + r}}\,,\end{aligned}$$ for all $w_1,w_2 \in X_{r,\delta}$, where we have set $\tau = 3 \delta C'(\frac{1}{1-r}+\frac{1+\log |a|}{|a|^\frac{r}{2}})$. Hence, if $\delta$ (and thus also $\lambda(f)$) is sufficiently small so that $\tau\in (0,1)$, then we can conclude that $\Phi$ is a contraction on $X_{r,\delta}$.
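The closing step is the contraction mapping (Banach fixed-point) principle: once $\Phi$ maps $X_{r,\delta}$ into itself with Lipschitz constant $\tau<1$, iteration converges to the unique fixed point. A minimal numerical sketch of this principle on a toy scalar contraction (not the operator $\Phi$ itself, which acts on the weighted space $X_{r,\delta}$):

```python
# Illustrative sketch only: the Banach fixed-point principle that closes the
# proof, shown on a toy scalar contraction (NOT the operator Phi of the text).
import math

def fixed_point(T, w0, tol=1e-12, max_iter=1000):
    """Iterate w_{k+1} = T(w_k); for a contraction with constant tau < 1 the
    iterates converge geometrically to the unique fixed point."""
    w = w0
    for _ in range(max_iter):
        w_next = T(w)
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    raise RuntimeError("iteration did not converge")

T = lambda w: 0.5 * math.cos(w) + 0.1        # |T'(w)| <= 0.5, so tau = 0.5 < 1
v = fixed_point(T, 0.0)
assert abs(T(v) - v) < 1e-10                 # v is (numerically) a fixed point
assert abs(fixed_point(T, 5.0) - v) < 1e-8   # uniqueness: any start converges to v
```

The geometric convergence rate $\tau^k$ mirrors the role of the smallness conditions on $\delta$ and $\lambda(f)$ above.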
By the Banach fixed point theorem, there exists a fixed point $v$, which is unique in $X_{r,\delta}$, such that $$\begin{aligned} u(x) \,=\, \alpha U(x) + v(x)\,, \qquad v \in X_{r,\delta}\end{aligned}$$ is a unique solution to with the pressure $p$ defined by . Finally, the estimate follows from \eqref{saigo} for the fixed point $v$ of $\Phi$ by virtue of the smallness of $|\alpha|$ and $\delta$. The proof of Theorem \[thm.main2\] is complete. [^1]: Department of Mathematics, Kyoto University; `mhigaki@math.kyoto-u.ac.jp` [^2]: Department of Mathematics, Kyoto University; `maekawa@math.kyoto-u.ac.jp` [^3]: Mathematical Institute, Tohoku University; `yuu.nakahara.t3@dc.tohoku.ac.jp`
--- abstract: 'We study the deformation theory of nonsingular projective curves defined over algebraically closed fields of positive characteristic. We show that under some assumptions the local deformation problem for automorphisms of power series can be reduced to a deformation problem for matrix representations. We study both equicharacteristic and mixed deformations in the case of two dimensional representations.' address: 'Max-Planck-Institut für Mathematik Vivatsgasse 7 53111 Bonn, Germany and $\;\;\;$ Department of Mathematics, University of the Ægean, 83200 Karlovassi, Samos, Greece ' author: - 'A. Kontogeorgis' title: 'Deformation of Curves with automorphisms and representations on Riemann-Roch spaces. ' --- Introduction ============ Let $X$ be a nonsingular projective curve of genus $g\geq 2$ defined over an algebraically closed field of characteristic $p>0$. The automorphism group $G:={{ \rm Aut }}(X)$ is known to be a finite group. The appearance of wild ramification in the cover $X \rightarrow X/{{ \rm Aut }}(X)$ makes the theory of such covers more difficult than the corresponding theory in characteristic zero. For a point $P \in X $ the decomposition group $G(P)=\{\sigma\in G: \sigma(P)=P\}$ is known to be cyclic in characteristic zero, while in positive characteristic it is a non-abelian solvable group admitting a ramification filtration [@SeL]. In [@KontoMathZ] the author defined a faithful representation of the $p$-part of the decomposition group at a wild ramified point $P$: $$\label{fai-rep} \rho: G_1(P) \rightarrow GL( L(mP)),$$ where $L(mP)=\{f \in k(X): \mathrm{div}(f) +mP \geq 0\} \cup \{ 0\}$. In this paper we would like to study the relation between two deformation theories, namely the deformation theory of representations of finite groups and the deformation theory of curves with automorphisms. We will treat both mixed characteristic and equicharacteristic deformations.
For the mixed characteristic case we consider $\Lambda$ to be a complete Noetherian local ring with residue field $k$. Usually $\Lambda$ is an algebraic extension of the ring of Witt vectors $W(k)$. For the equicharacteristic case we take $\Lambda=k$. Let $\mathcal{C}$ denote the category of local Artin $\Lambda$-algebras with residue field $k$. Consider a subgroup $G$ of the group ${{ \rm Aut }}(X)$. A deformation of the couple $(X,G)$ over the local Artin ring $A$ is a proper, smooth family of curves $$\mathcal{X} \rightarrow {{\rm Spec}}(A)$$ parametrized by the base scheme ${{\rm Spec}}(A)$, together with a group homomorphism $G\rightarrow {{ \rm Aut }}_A(\mathcal{X})$, such that there is a $G$-equivariant isomorphism $\phi$ from the fibre over the closed point of $A$ to the original curve $X$: $$\phi: \mathcal{X}\otimes_{{{\rm Spec}}(A)} {{\rm Spec}}(k)\rightarrow X.$$ Two deformations $\mathcal{X}_1,\mathcal{X}_2$ are considered to be equivalent if there is a $G$-equivariant isomorphism $\psi$ that reduces to the identity in the special fibre and makes the following diagram commute: $$\xymatrix{ \mathcal{X}_1 \ar[rr]^{\psi} \ar[dr] & & \mathcal{X}_2 \ar[dl] \\ & {{\rm Spec}}A & }$$ The global deformation functor is defined by: $${{D_{\rm gl}}}: \mathcal{C} \rightarrow \rm{Sets}, A \mapsto \left\{ \mbox{ \begin{tabular}{l} Equivalence classes \\ of deformations of \\ couples $(X,G)$ over $A$ \end{tabular} } \right\}$$ By the local-global theorems of J. Bertin and A. Mézard [@Be-Me] and the formal patching theorems of D. Harbater, K.
Stevenson [@HarMSRI03], [@HarStevJA99], the study of the functor ${{D_{\rm gl}}}$ can be reduced to the study of the following deformation functors attached to each wild ramification point $P$ of the cover $X \rightarrow X/G$: $$\label{Bertin-Mezard-functor} D_P:\mathcal{C} \rightarrow {\rm Sets}, A \mapsto \left\{ \mbox{ \begin{tabular}{l} lifts $G(P)\rightarrow {{ \rm Aut }}(A[[t]])$ of $\rho$ \\ modulo conjugation with an element \\ of $\ker({{ \rm Aut }}A[[t]]\rightarrow {{ \rm Aut }}k[[t]] )$ \end{tabular} } \right\}$$ The theory of automorphisms of formal power series rings is not as well understood as the theory of automorphisms of finite dimensional vector spaces, i.e., the theory of general linear groups. For a $k$-algebra $A$ with maximal ideal $m_A$, consider the multiplicative group $L_n(A)<GL_n(A)$, of invertible lower triangular matrices with entries in $A$, and invertible elements $\lambda$ in the diagonal, such that $\lambda-1 \in m_A$. We consider the following functor from the category $\mathcal{C}$ of local Artin $k$-algebras to the category of sets $$\label{Fdeformation} F: A \in Ob(\mathcal{C}) \mapsto \left\{ \begin{array}{l} \mbox{liftings of } \rho: G(P) \rightarrow L_n(k) \\ \mbox{to } \rho_A: G(P) \rightarrow L_n(A) \mbox{ modulo} \\ \mbox{conjugation by an element }\\ \mbox{of } \ker(L_n(A)\rightarrow L_n(k)) \end{array} \right\}$$ It is known that among the curves $X$ with automorphism group $G={{ \rm Aut }}(X)$ of order divisible by the characteristic, the curves such that $G_2(P)=\{1\}$ at all ramified points are the simplest. We will call these curves [*weakly ramified*]{}. Many intractable problems for the theory of curves with general automorphism group are solved for weakly ramified curves. For example, the computation of the $G$-module structure of spaces of holomorphic differentials [@Koeck:04] or the computation of the deformation rings of curves with automorphisms [@CK].
In our representation perspective it seems that the simplest curves are those with two dimensional representations at all wild ramified points. Notice that if a two dimensional representation is attached at the wild point $P$, then the group $G_1(P)$ is elementary abelian and has conductor $m>1$ [@KontoMathZ example 3.]. In section \[localMAT\] we show how to attach a deformation of a matrix representation to every deformation of the couple $(X,G)$ over a complete local domain. Section \[MatrixDeformations\] is devoted to the deformations of matrix representations. We focus on the two dimensional case and we construct a hull for these deformations. The deformation theory of such representations is closely related to deformations of products of $\mathbb{G}_a$ group schemes in the equal characteristic case or to $\mathcal{G}^{(\lambda)}$ group schemes in mixed characteristic. In section \[SmallExt\] we try to analyze further the relation between the functors $F(\cdot)$ and $D(\cdot)$. A matrix representation allows us to express a deformation $\tilde{\rho}_\sigma$ given as a formal series $\tilde{\rho}_\sigma(t) \in A[[t]]$ in the form of a root of a rational function of $t$. For the case of two dimensional representations, where $V=G_1(P)$ is an elementary abelian group, we are able to compute the image of elements in $F(\cdot)$ in the tangent space $D(k[\epsilon]/\epsilon^2)= H^1(V,{{\mathcal{T}_\mathcal{O} }})$, see proposition \[5.1\]. By combining these results with the computation of $H^1(V,{{\mathcal{T}_\mathcal{O} }})$ given by the author in [@KontoANT prop. 2.8] we are able to compute the Krull dimension of the hulls attached to every wild ramified point. Finally, in section \[PRIES\] we restrict ourselves to the equicharacteristic case and we relate two dimensional matrix deformations to the deformation functor of R. Pries.
[**Acknowledgments**]{} The author would like to thank the participants of the conference in Leiden on [*Automorphisms of Curves*]{} for enlightening conversations and especially R. Pries and M. Matignon for their corrections and remarks. This paper was completed during the author’s visit at Max-Planck Institut für Mathematik in Bonn. The author would like to thank this institution for its support and hospitality. Branch locus and liftings of matrix representations. {#localMAT} ==================================================== In this section we will show how the problem of deforming the representations attached at the wild ramified points gives information on the problem of deformations of curves with automorphisms. Select a wild ramified point $P_i$ on every orbit of wild ramified points under the action of the group $G$. Define the functor $D_{loc}=\prod D_{P_i}$. J. Bertin and A. Mézard proved that there is a smooth morphism $\phi:D_{gl} \rightarrow D_{loc}$, and this morphism induces the following relation between the global deformation ring $R_{gl}$ and the deformation rings $R_i$ of the deformation functors $D_{P_i}$: $$R_{gl} =(R_1\hat{\otimes} R_2 \hat{\otimes} \cdots \hat{\otimes} R_r)[[U_1,\ldots,U_N]],$$ where $N=\dim_k H^1(X/G,\pi_*^G({{\mathcal{T}}}_X))$, and $R_i$ is the deformation ring of $D_{P_i}$. For more information concerning this construction we refer to [@Be-Me]. For an exact formula for $N$ we refer to [@KontoANT sec. 3]. In the approach of Schlessinger [@Sch] one wants to build deformations of $(X,G)$ over Artin algebras, especially over the algebras $k[\epsilon]/\epsilon^n$, and study whether a deformation over ${{\rm Spec}}k[\epsilon]/\epsilon^n$ can be lifted to a deformation over ${{\rm Spec}}k[\epsilon]/\epsilon^{n+1}$.
More generally, a small extension $A'$ of $A$ is given by a short exact sequence of local Artin algebras $$0\rightarrow \mathrm{ker}\,\pi \rightarrow A' \stackrel{\pi}{\rightarrow} A \rightarrow 0$$ such that $\mathrm{ker}\,\pi\cdot m_{A'}=0$, where $m_{A'}$ is the maximal ideal of $A'$. We would like to know if a deformation in $D(A)$ can be lifted to a deformation in $D(A')$. The obstructions of such liftings are elements in $H^2(G,{{\mathcal{T}_\mathcal{O} }})$. If there are no obstructions then we can construct a family over the formal scheme $\mathcal{X}\rightarrow \mathrm{Spf} R$ for some complete domain $R$. The scheme $\mathrm{Spf} R$ is a formal scheme and does not possess a generic fibre. J. Bertin and A. Mézard in [@Be-Me] observed that an algebraization theorem of Grothendieck [@GroFGA] gives that the formal scheme representing $D_{gl}$ is algebraizable, and it corresponds to the formal completion of a proper smooth curve over ${{\rm Spec}}R$. This means that every unobstructed deformation over a formal affine scheme can be extended to the generic fibre. Assume that $\mathcal{X} \rightarrow {{\rm Spec}}R$ is a relative curve that is a solution to our deformation problem, where $R$ is a complete local domain. Let $\sigma \in G_1(P)$, $\sigma \neq 1$, and let $\tilde{\sigma}$ be a lift of $\sigma$ in $\mathcal{X}$. The scheme $\mathcal{X}$ is regular at $P$, and the completion of $\mathcal{O}_{\mathcal{X},P}$ is isomorphic to the ring $R[[T]]$. The Weierstrass preparation theorem [@BourbakiComm prop. VII.6] implies that: $$\tilde{\sigma}(T)-T=g_{\tilde{\sigma}}(T) u_{\tilde{\sigma}}(T),$$ where $g_{\tilde{\sigma}}(T)$ is a distinguished Weierstrass polynomial of degree $m+1$ and $u_{\tilde{\sigma}}(T)$ is a unit in $R[[T]]$. The polynomial $g_{\tilde{\sigma}}(T)$ gives rise to a horizontal divisor that corresponds to the fixed points of $\tilde{\sigma}$. This horizontal divisor need not be irreducible.
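A toy illustration (an assumed example, not taken from the text) of how this Weierstrass factorization produces the horizontal fixed-point divisor: take $R=k[[\epsilon]]$ and $\tilde{\sigma}(T)=T+\epsilon T+T^{m+1}$. Then

```latex
\tilde{\sigma}(T)-T \;=\; T^{m+1}+\epsilon T
  \;=\; \underbrace{\left(T^{m+1}+\epsilon T\right)}_{g_{\tilde{\sigma}}(T)}
        \cdot \underbrace{1}_{u_{\tilde{\sigma}}(T)},
\qquad
g_{\tilde{\sigma}}(T) \;=\; T\left(T^{m}+\epsilon\right).
```

Here $g_{\tilde{\sigma}}$ is distinguished of degree $m+1$, since its lower coefficients lie in $m_R=(\epsilon)$. On the special fibre $\epsilon=0$ its zero locus collapses to the single point $T=0$ with multiplicity $m+1$, while on the generic fibre it splits into $m+1$ distinct fixed points, which is why the horizontal divisor need not be irreducible.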
The branch divisor corresponds to the union of the fixed points of any $\sigma \in G_1(P)$. The next lemma shows how to define a horizontal branch divisor for the relative curves $\mathcal{X} \rightarrow \mathcal{X}^G$ when $G$ is not a cyclic group. \[lemmaBRANCH\] Let $\mathcal{X} \rightarrow {{\rm Spec}}A$ be an $A$-curve, admitting a fibrewise action of the finite group $G$, where $A$ is a Noetherian local ring. Let $S={{\rm Spec}}A$, and $\Omega_{\mathcal{X}/S}$, $\Omega_{\mathcal{Y}/S}$ be the sheaves of relative differentials of $\mathcal{X}$ over $S$ and $\mathcal{Y}$ over $S$, respectively. Let $\pi:\mathcal{X} \rightarrow \mathcal{Y}$ be the quotient map. The sheaf $$\mathcal{L}(-D_{\mathcal{X}/\mathcal{Y}})= \Omega_{\mathcal{X}/S} ^{-1} \otimes_S \pi^* \Omega_{\mathcal{Y}/S}$$ is the ideal sheaf of the horizontal Cartier divisor $D_{\mathcal{X}/\mathcal{Y}}$. The intersection of $D_{\mathcal{X}/\mathcal{Y}}$ with the special and generic fibre of $\mathcal{X}$ gives the ordinary branch divisors for curves. We will first prove that the above-defined divisor $D_{\mathcal{X}/\mathcal{Y}}$ is indeed an effective Cartier divisor. According to [@KaMa Cor. 1.1.5.2] it is enough to prove that - $D_{\mathcal{X}/\mathcal{Y}}$ is a closed subscheme which is flat over $S$. - for all geometric points ${{\rm Spec}}k \rightarrow S$ of $S$, the closed subscheme $D_{\mathcal{X}/\mathcal{Y}}\otimes_S k$ of $\mathcal{X} \otimes_S k$ is a Cartier divisor in $\mathcal{X} \otimes _S k/k$. We are interested in deformations of nonsingular curves. Since the base is a local ring and the special fibre is nonsingular, the deformation $\mathcal{X} \rightarrow {{\rm Spec}}A$ is smooth. (See the remark after Definition 3.35, p. 142 in [@LiuBook].) The smoothness of the curves $\mathcal{X}\rightarrow S$, and $\mathcal{Y}\rightarrow S$, implies that the sheaves $\Omega_{\mathcal{X}/S}$ and $\Omega_{\mathcal{Y}/S}$ are $S$-flat, [@LiuBook cor. 2.6 p.222].
On the other hand the sheaf $\Omega_{\mathcal{Y}/{{\rm Spec}}A}$ is by [@KaMa Prop. 1.1.5.1] ${{\mathcal{O}}}_{\mathcal{Y}}$-flat. Thus, $\pi^*(\Omega_{\mathcal{Y}/{{\rm Spec}}A})$ is ${{\mathcal{O}}}_{\mathcal{X}}$-flat and therefore ${{\rm Spec}}A$-flat [@Hartshorne:77 Prop. 9.2]. Finally, observe that the intersection with the special and generic fibre is the ordinary branch divisor for curves according to [@Hartshorne:77 IV p.301]. [**Remark:**]{} Two horizontal branch divisors can collapse to the same point in the special fibre. For instance, this always happens if a deformation of curves from positive characteristic to characteristic zero with a wild ramification point is possible. For a curve $X$ and a branch point $P$ of $X$ we will denote by $i_{G,P}$ the order function of the filtration of $G$ at $P$. The Artin representation of the group $G$ is defined by $\mathrm{ar}_P(\sigma)=-f_P i_{G,P}(\sigma)$ for $\sigma\neq 1$ and $\mathrm{ar}_P(1)= f_P\sum_{\sigma\neq 1} i_{G,P}(\sigma)$ [@SeL VI.2]. We are going to use the Artin representation at both the special and generic fibre. In the special fibre we always have $f_P=1$ since the field $k$ is algebraically closed. The field of quotients of $A$ need not be algebraically closed, therefore a fixed point there might have $f_P > 1$. The integer $i_{G,P}(\sigma)$ is equal to the multiplicity of $P\times P$ in the intersection of $\Delta \cdot \Gamma_\sigma$ in the relative $A$-surface $\mathcal{X} \times_{{{\rm Spec}}A} \mathcal{X}$, where $\Delta$ is the diagonal and $\Gamma_\sigma$ is the graph of $\sigma$ [@SeL p. 105]. Since the diagonals $\Delta_0,\Delta_\eta$ and the graphs of $\sigma$ in the special and generic fibres respectively of $\mathcal{X}\times_{{{\rm Spec}}A} \mathcal{X}$ are algebraically equivalent divisors we have: \[bertin-gen\] Assume that $A$ is an integral domain, and let $\mathcal{X}\rightarrow {{\rm Spec}}A$ be a deformation of $X$.
Let $\bar{P}_i$, $i=1,\cdots,s$ be the horizontal branch divisors that intersect the special fibre at the point $P$, and let $P_{i}$ be the corresponding points on the generic fibre. For the Artin representations attached to the points $P,P_{i}$ we have: $$\mathrm{ar}_P(\sigma)=\sum_{i=1}^s \mathrm{ar}_{P_{i}}(\sigma).$$ This generalizes a result of J. Bertin [@BertinCRAS]. Moreover if we set $\sigma=1$ in the above formula we obtain a relation for the valuations of the differents in the special and the generic fibre, since the value of the Artin representation at $1$ is the valuation of the different [@SeL prop. 4.IV, prop. 4.VI]. This observation is equivalent to claim 3.2 in [@MatignonGreen98] and is one direction of a local criterion for good reduction theorem proved in [@MatignonGreen98 3.4], [@KatoDuke87 sec. 5]. \[artin-lift0\] Assume that $V=G_1(P)$ is an elementary abelian group with more than one ${{\mathbb{Z}}}/p{{\mathbb{Z}}}$ component. If $V$ can be lifted to characteristic zero, then $\frac{|V|}{p} \mid m+1$. The group $V$ acts on the generic fibre, where the possible stabilizers of points are cyclic groups. Since $V$ is not cyclic it cannot fix any point $P_i$ in the intersection of the branch locus with the generic fibre. Only a cyclic component of $V$ can fix a point $P_i$. Since $V$ acts on the set of points $P_i$, each orbit has $|V|/p$ elements. For any element $\sigma \in V$ the Artin representation satisfies $\mathrm{ar}_{P_i}(\sigma)=1$ (no wild ramification at the generic fibre). Therefore proposition \[bertin-gen\] gives us that the number of the $P_i$ is $m+1$ and the desired result follows. [**Remark:**]{} \[ex-cor\] Consider the case of equicharacteristic deformations of ordinary curves, together with a $p$-subgroup of the group of automorphisms. Then $|\mathrm{ar}_P(\sigma)|=2$ for all $\sigma \in G(P)=G_1(P), \sigma\neq 1$ [@Nak].
On the other hand the ramification at the points of the generic fibre is also wild and proposition \[bertin-gen\] implies that there is only one horizontal branch divisor extending every wild ramification point $P$. [**Remark:**]{} The author finds amusing the following similarity to the theory of dynamical systems: It is known that autonomous (ordinary) differential equations on a manifold $M$ induce an action of $\mathbb{R}$ on $M$. The fixed locus of this action, called [*equilibrium locus*]{} in the realm of differential equations, can split as the integrated vector fields depend on parameters. The study of this splitting is the object of [*bifurcation theory*]{} [@HaleKocak]. Notice also that $\mathbb{R}$ is not compact and the representation theory of $\mathbb{R}$ shares many difficulties with the corresponding representation theory of groups of order divisible by the characteristic, because of the absence of a Haar measure on them. \[main-free\] Let $R$ be a complete regular local integral domain. Let $\mathcal{X}\rightarrow {{\rm Spec}}R$ be a deformation of the couple $(X,G)$, and let $P$ be a wild ramified point of the special fibre $X$. Assume that there is a $2$-dimensional representation $\rho:G_1(P) \rightarrow \mathrm{GL}_k(H^0(X,\mathcal{L}(mP)))$ attached to $P$. Assume also that there is a $G$-invariant horizontal divisor that intersects the special fibre with multiplicity $m$. Then, there is a free $R$-module $M$ of rank $2$ generated by $1,\tilde{f}$ so that $M:=\langle 1, \tilde{f} \rangle_R \subset H^0(\mathcal{X},\mathcal{L}(\alpha D)),$ where $1\leq \alpha \in \mathbb{N}$ and $M\otimes_R k=H^0(X,\mathcal{L}(mP))$. Moreover, the representation $\rho$ can be lifted to a representation $$\tilde{\rho}:G_1(P) \rightarrow \mathrm{GL}_R( \langle 1, \tilde{f} \rangle_R).$$ The elements $\tilde{\rho}_\sigma$ are lower triangular matrices.
Moreover the basis element $\tilde{f}$ is of the form $$\label{F-form} \tilde{f}=\frac{1}{(T^m+a_{m-1} T^{m-1}+\cdots + a_1 T+a_0)}u(T),$$ where $a_0,\ldots,a_{m-1} \in m_R$ and $u(T)$ is a unit in $R[[T]]$ reducing to $1 {{\;\rm mod}}m_R$. Let us consider the sheaf $\mathcal{L}(D)$. The space of global sections $H^0(\mathcal{X},\mathcal{L}(D))$ has the structure of an $R$-module. For an arbitrary Cartier divisor $D$ on $\mathcal{X}$ and for all $i\geq 0$ there is a natural map [@Hartshorne:77 prop. III 12.5] $$\phi_i: H^i(\mathcal{X},\mathcal{L}(D)) \otimes _R k \rightarrow H^i(X_s,\mathcal{L}(D\otimes k)).$$ We are interested in global sections, [*i.e.*]{}, in the zeroth cohomology group, but in general $\phi_0$ can fail to be an isomorphism. Instead of looking at $D$ we will consider $a'D$, where $a'$ is a sufficiently large natural number. We will employ the Riemann-Roch theorem in both the special and the generic fibre and we can choose $a'$ sufficiently big so that the index of speciality at both the generic and the special fibre is zero. P. Deligne and D. Mumford observed [@DelMum 4. 78], [@EGAIII1 chap.3 sec.7] that since $$H^1(\mathcal{X}_s,\mathcal{L}(a'D \otimes k))= H^1(\mathcal{X}_\eta,\mathcal{L}(a'D \otimes K))=0$$ the $R$-module $ H^0(\mathcal{X},\mathcal{L}(a'D))$ is free. We can then select an element $\tilde{f} \in H^0(\mathcal{X},\mathcal{L}(a'D))$ so that $\tilde{f} \equiv f {{\;\rm mod}}m_R$. Consider the least $a$ such that $\langle 1,\tilde{f} \rangle_R \subseteq H^0(\mathcal{X}, \mathcal{L}(aD))$ for some $1\leq a \leq a'$. Since $D$ is $G_1(P)$-invariant the $R$-module $H^0(\mathcal{X}, \mathcal{L}(aD))$ is equipped with a $G_1(P)$-action. The module $M$ might not be the whole $H^0(\mathcal{X}, \mathcal{L}(aD))$ but it is the $R$-free part of it. Therefore $G_1(P)$ acts on $M$ as well and the representation can be lifted: $$\tilde{\rho}:G_1(P) \rightarrow \mathrm{GL}_R(M),$$ as required.
Since $\sigma\mid_R=\mathrm{Id}_R$ this representation is given by lower triangular matrices. The element $1/\tilde{f}$ is a holomorphic element in $R[[T]]$ reducing to $1/f=t^m$ modulo $m_R$. Thus, the reduced order of $1/\tilde{f}$ is $m$ and eq. (\[F-form\]) follows by the Weierstrass preparation theorem [@BourbakiComm prop. VII.6]. We will now try to give conditions for the existence of a $G_1(P)$-invariant divisor intersecting the special fibre at $P$ with degree $m+1$. Let $T=\{\bar{P}_i\}_{i=1,\ldots,s}$ be the set of horizontal branch divisors that restrict to $P$ in the special fibre of $X$. This space is acted on by $G_1(P)$, since the $\bar{P}_i$ are all components of the branch divisor. Each of the $\bar{P}_i$ is fixed by some element of $G$ but not necessarily by the whole group $G_1(P)$, unless of course $G_1(P)$ is isomorphic to ${{\mathbb{Z}}}/p{{\mathbb{Z}}}$. Let $O(T)$ be the set of orbits of $T$ under the action of the group $G_1(P)$. A horizontal divisor $D$ supported on $T$ is invariant under the action of $G_1(P)$ if and only if the divisor $D$ is of the form: $$\label{pp-oo13} D=\sum_{C\in O(T)} n_C \sum_{P\in C} P,$$ [ i.e.]{}, horizontal Cartier divisors that are in the same orbit of the action of $G_1(P)$ must appear with the same weight in $D$. If the semigroup $\sum_{C\in O(T)} n_C \# C$, $n_C \in \mathbb{N}$, contains the Weierstrass semigroup of the branch point $P$ of the special fibre, then we can select the desired $G_1(P)$-invariant divisor $D$ supported on $T$. If one orbit of $G_1(P)$ acting on $T$ is a singleton, i.e., there is a $\bar{P}_i$ fixed by the whole group $G_1(P)$, then the semigroup $$\sum_{C\in O(T)} n_C \# C, \;\;\; n_C \in \mathbb{N},$$ is the semigroup of natural numbers, and we are done. This is the case when the group $G_1(P)$ is cyclic. If $\#T \not\equiv 0 \;{\rm mod}\; p$ then there is at least one orbit that is a singleton.
Indeed, if all orbits have more than one element then all orbits must have cardinality divisible by $p$, and since the set $T$ is the disjoint union of orbits it must also have cardinality divisible by $p$. \[lem2.5\] If $m$ is the first pole number that is not divisible by the characteristic, and $p\nmid m+1$ then there is an orbit that consists of only one element. By proposition \[bertin-gen\] the Artin representation at the special fibre equals the sum of the Artin representations at the generic fibre. Let $\sigma \in G_1(P)$. The Artin representation of $\sigma$ at the special fibre equals $m+1$. All $\bar{P}_i$ that are not fixed by $\sigma$ do not contribute in the sum of the Artin representations at the generic fibre. An element $\tau$ sends $\bar{P}_i$ which is fixed by $H \subset G_1(P)$ to $\tau \bar{P}_i$ which is fixed by $ \tau H \tau^{-1}$. Since the representation attached to $P$ is two dimensional the group $G_1(P)$ is abelian, and $\tau \bar{P}_i$ is fixed by $H=\tau H \tau^{-1}$. If we now consider a point $P_i$ that is fixed by $\langle \sigma \rangle$ then the above argument shows that the orbit of $P_i$ under the action of the group $G_1(P)$ has $p^a$ elements, $a \geq 0$. If $a=0$ then $P_i$ is fixed by the whole group $G_1(P)$. If on the other hand for all $P_i$ fixed by $\sigma$ the corresponding orbits have more than one element then the set of $\bar{P}_i$ fixed by $\sigma$ has cardinality divisible by $p$. This implies that the sum of the Artin representations at the generic fibre is divisible by $p$, a contradiction. We have thus obtained the following easy-to-apply corollary. \[cor2.6\] If $G_1(P)$ is cyclic or $p\nmid m+1$, then there is a horizontal branch divisor $D$, fixed under the action of $G_1(P)$, that intersects the special fibre at $mP$. In particular, the assumption of proposition \[main-free\] is satisfied and the two dimensional representation can be lifted.
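The counting argument used repeatedly here (orbits of a $p$-group have $p$-power size, so a set of cardinality prime to $p$ must contain a singleton orbit, i.e. a fixed point) can be sketched on a toy action; the action below is a hypothetical example, not the set $T$ of branch divisors itself:

```python
# Sketch of the orbit-counting argument: all orbits of a p-group have p-power
# size, so if #T is not divisible by p some orbit must be a singleton.
# Toy example (assumed for illustration): V = Z/3 acting on a 4-element set.

def orbits(group_elems, act, T):
    """Partition the finite set T into orbits under the action (g, t) -> act(g, t)."""
    seen, orbs = set(), []
    for t in T:
        if t in seen:
            continue
        orb = {act(g, t) for g in group_elems}
        seen |= orb
        orbs.append(orb)
    return orbs

p = 3
# V = Z/3 acting on T = {0, 1, 2, '*'} by translation on {0,1,2}, fixing '*'.
T = list(range(p)) + ['*']
act = lambda g, t: t if t == '*' else (t + g) % p

orbs = orbits(range(p), act, T)
sizes = sorted(len(o) for o in orbs)
assert sizes == [1, p]        # every orbit size is a power of p
assert len(T) % p != 0        # #T = p + 1 is prime to p ...
assert 1 in sizes             # ... so a singleton orbit (fixed point) occurs
```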
\[cycONE\] In the mixed characteristic case, if the elementary abelian group $G_1(P)$ has more than one cyclic component, then there is no horizontal $G_1(P)$-invariant divisor $D$ contained in the branch locus and intersecting the special fibre at $P$ with multiplicity $m$. Since the stabilizers of elements in the generic fibre are cyclic groups of order $p$, all orbits have cardinality divisible by $p$. Therefore, a $G_1(P)$-invariant divisor should have degree divisible by $p$. This cannot happen since $(m,p)=1$. \[n1\] [*Lemma \[cycONE\] shows that our method cannot be used for lifting curves with elementary abelian action to characteristic zero. However, M. Matignon proved that such liftings exist [@MatManusc].* ]{} We have seen how to relate a deformation of the couple $(X,G)$ to a deformation of a matrix representation. Now we will see the effect of considering equivalent deformations of couples. \[equiv2matrix\] Let $\phi$ be a map $\mathcal{O}_{\mathcal{X},P} \rightarrow \mathcal{O}_{\mathcal{X},P}$ making the extensions $\tilde{\rho}_\sigma, \tilde{\rho}'_\sigma$ equivalent. The corresponding matrix representations are conjugate by a $2 \times 2$ matrix of the form $ \begin{pmatrix} 1 & 0 \\ \mu & \lambda \end{pmatrix} $ where $\lambda\equiv 1 {{\;\rm mod}}m_A$ and $\mu \equiv 0 {{\;\rm mod}}m_A$. Conversely, every such matrix gives rise to a map $\phi:\mathcal{O}_{\mathcal{X},P} \rightarrow \mathcal{O}_{\mathcal{X},P}$ that reduces to the identity modulo $m_A$. Assume that there is a map $\phi:\mathcal{O}_{\mathcal{X},P} \rightarrow \mathcal{O}_{\mathcal{X},P}$ making the extensions $\tilde{\rho}_\sigma, \tilde{\rho}'_\sigma$ equivalent. The local-global principle of J. Bertin and A. Mézard implies that this map can be extended to a map $\phi':\mathcal{X} \rightarrow \mathcal{X}$ that makes the corresponding global deformations equivalent. Let $\tilde{f}$ be the generator given in proposition \[main-free\].
Then $\phi'(\tilde{f}) \in H^0(\mathcal{X},\mathcal{L}(aD))$, therefore $\phi'(\tilde{f}) =\lambda \tilde{f}+\mu$. This means that $\phi$ gives rise to a base change in $H^0(\mathcal{X},\mathcal{L}(aD))$, and two elements in $F(\cdot)$ are equivalent if they are conjugate by a $2\times 2$ matrix of the desired form. Conversely, assume that we have two equivalent matrix representations that are conjugate by a matrix $Q$ of the form $ \begin{pmatrix} 1 & 0 \\ \mu & \lambda \end{pmatrix} $ where $\lambda\equiv 1 {{\;\rm mod}}m_A$ and $\mu \equiv 0 {{\;\rm mod}}m_A$. Then $Q$ sends $\tilde{f} \mapsto \lambda \tilde{f} + \mu$, i.e., $$\frac{1}{\phi(T)^m+\sum_{\nu=0}^{m-1} a_\nu \phi(T)^\nu }= \lambda \tilde{f}(T) + \mu.$$ A solution $\phi(T)$ of this polynomial equation exists by Hensel’s lemma. This solution gives rise to the desired map $\phi$. Deformations of Linear groups {#secmatdef} ============================= \[MatrixDeformations\] We would like to represent the functor $F$ defined in Eq. (\[Fdeformation\]). We will employ the construction of universal deformation rings for matrix representations, explained by B. de Smit and H. W. Lenstra in [@SMLE:97]. Let $H$ be a $p$-group with identity $e$ and let $\rho: H \rightarrow L_n(k)$ be a faithful representation of $H$. Let $\Lambda[H,n]$ be the commutative $\Lambda$-algebra generated by $X_{ij}^g$ for $g\in H, 1\leq j \leq i \leq n$, such that $$X_{ij}^e=\begin{cases} 1 & \mbox{ if } i=j \\ 0 & \mbox{ if } i \neq j \end{cases}$$ $$\label{ostru-123} X_{ij}^{gh}= \sum_{l=1}^n X_{il}^g X_{lj}^h \mbox{ for } g,h \in H \mbox{ and } 1 \leq i,j \leq n,$$ and $$X_{ij}^g=0 \mbox{ for } i<j \mbox{ and for all } g\in H.$$ We will focus on representations on $L_n(A)$.
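Relation (\[ostru-123\]) simply encodes multiplicativity $\rho(gh)=\rho(g)\rho(h)$ as matrix multiplication, and iterating it for $2\times 2$ lower-triangular matrices produces the order-$p$ conditions appearing below. A symbolic sketch (generic symbols standing in for the coordinates $X_{ij}^g$; illustrative only, with $p=3$ in the power computation):

```python
# Sketch: multiplicativity and p-th power of 2x2 lower-triangular matrices.
# The symbols x_g, y_g, ... stand in for the coordinates X_{21}^g, X_{22}^g of
# Lambda[H,2]; this is an illustration, not the universal ring itself.
import sympy as sp

xg, yg, xh, yh = sp.symbols('x_g y_g x_h y_h')
Rg = sp.Matrix([[1, 0], [xg, yg]])   # rho(g)
Rh = sp.Matrix([[1, 0], [xh, yh]])   # rho(h)

# rho(gh) = rho(g) rho(h): the entries give the defining relations of Lambda[H,2].
Rgh = Rg * Rh
assert sp.expand(Rgh[1, 0] - (xg + yg * xh)) == 0  # X_{21}^{gh} = X_{21}^g + X_{22}^g X_{21}^h
assert sp.expand(Rgh[1, 1] - yg * yh) == 0         # X_{22}^{gh} = X_{22}^g X_{22}^h

# Iterating: M^p = [[1, 0], [x*(1 + y + ... + y^{p-1}), y^p]], so M^p = 1
# amounts to y^p = 1 and x * sum_{nu < p} y^nu = 0.  Check for p = 3:
x, y = sp.symbols('x y')
p = 3
M = sp.Matrix([[1, 0], [x, y]])
Mp = (M ** p).applyfunc(sp.expand)
assert sp.expand(Mp[1, 1] - y ** p) == 0
assert sp.expand(Mp[1, 0] - x * sum(y ** n for n in range(p))) == 0
```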
For every $\Lambda$-algebra $A$ we have a canonical bijection $$\mathrm{Hom}_{\Lambda-\mathrm{Alg}} (\Lambda[H,n],A) \cong \mathrm{Hom} (H, L_n (A)),$$ where a $\Lambda$-algebra homomorphism $f:\Lambda[H,n]\rightarrow A$ corresponds to the group homomorphism $\rho_f$ that sends $g\in H$ to the matrix $(f(X_{ij}^g))$. The representation $\rho: H \rightarrow L_n(k)$ corresponds to a homomorphism $\Lambda[H,n]\rightarrow k$. Its kernel is a maximal ideal, which we denote by $m_\rho$. We take the completion $R(H)$ of $\Lambda[H,n]$ at $m_\rho$. The canonical map $\Lambda[H,n]\rightarrow R(H)$ gives rise to a map $\rho_{R(H)}:H \rightarrow L_n(R(H))$, such that the diagram: $$\xymatrix{ H \ar[r]^{\rho_{R(H)\;\;\;\;\;}} \ar[d]_{=} & L_n(R(H)) \ar[d] \\ H \ar[r]^{\rho} & L_n(k) }$$ is commutative. We have to distinguish two cases: $\bullet$ The case of equicharacteristic deformations, i.e., $R$ is a complete local domain so that $\mathrm{Quot}(R)$ is of characteristic $p$. Recall that in this case $\Lambda=k$. Since the generic fibre is of characteristic $p$ we have $X_{22}^g=1$ for all $g\in H$. Moreover, if we fix elements $g_i$ generating $H$ as a ${{\mathbb{Z}}}/p{{\mathbb{Z}}}$-vector space and monomials $x_i=X_{21}^{g_i}-c(g_i)$ for each $g_i$ we easily see that $R(H)=k[[x_1,\ldots,x_n]]$. $\bullet$ The case of liftings to characteristic zero, i.e., $R$ is a complete local domain so that $\mathrm{Quot}(R)$ is of characteristic $0$. Let us again fix elements $x_i,y_i$ for each generator $g_i$ of $H$, so that $x_i=X_{21}^{g_i}-c(g_i)$, and $y_i=X_{22}^{g_i}-1$. In this case we have the conditions: $$\label{e1} \left(X_{22}^g\right)^p=1,$$ $$\label{e2} X_{21}^g\sum_{\nu=0}^{p-1} \left(X_{22}^g\right)^\nu=0,$$ and the commuting relation: $(X_{21}^g-X_{21}^h + X_{22}^gX_{21}^h-X_{22}^hX_{21}^g)=0$. Observe that $X_{22}^g\neq 1$. Indeed, if $X_{22}^g=1$ then eq. (\[e2\]) would give us that $X_{21}^g=0$ and then the matrix is just the identity.
Therefore, equations (\[e1\]) and (\[e2\]) reduce to the single equation $\sum_{\nu=0}^{p-1} \left(X_{22}^g\right)^\nu=0$. These conditions imply that: $$R(H)=\Lambda[[x_1,\ldots,x_n,y_1,\ldots,y_n]]/I,$$ where $I$ is the ideal $$I:=\left\langle \sum_{\nu=0}^{p-1} (1+y_i)^{\nu}, y_j(c(g_i)+x_i)-y_i(c(g_j)+x_j)\right\rangle.$$ The ring $R(H)$ defined above does not represent the deformation functor $F$, since $A$-equivalent deformations may correspond to different maps in $\mathrm{Hom}(R(H),A)$. If $n=2$, i.e., in the case of a two dimensional representation, the conjugation action given by lemma \[equiv2matrix\] is easy to handle. Considering the quotient of $R(H)$ in positive characteristic for representations of dimension $\geq 3$ is a difficult problem, since the ``trace'' argument of characteristic zero does not work. (In modular representation theory, characters do not distinguish representations up to equivalence.) We focus now on the theory of two dimensional representations. This forces the group $H$ to be elementary abelian. We compute that $$\label{expconj} \begin{pmatrix} 1 & 0 \\ \mu & \lambda \end{pmatrix} \begin{pmatrix} 1 & 0 \\ x & y \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \mu & \lambda \end{pmatrix}^{-1}= \begin{pmatrix} 1 & 0 \\ \mu+\lambda x-y\mu & y \end{pmatrix}.$$ We will consider the effect of the conjugation action given in eq. (\[expconj\]). The elements $y_i$ remain invariant, while the elements $x_i \mapsto x_i+ \lambda_a c(g_i)+\lambda_ax_i-\mu y_i$, where $\lambda=1+ \lambda_a$, $\lambda_a,\mu \in m_A$. If $A=k[\epsilon]/\epsilon^2$, then $x_i\mapsto x_i+c(g_i)\lambda_a$, since $\lambda_a x_i,\mu y_i \in m_A^2=0$. Let $A$ be an object in $\mathcal{C}$. An element in the set $F(A)$ is determined by the conjugation equivalence class of a function $f:R(H)\rightarrow A$. Such a function should be defined on the generators $x_i,y_j$ of the ring $R(H)$.
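The conjugation formula in eq. (\[expconj\]) can be checked symbolically; a small sympy sketch (variable names are ours):

```python
# Symbolic check of the conjugation formula in eq. (expconj).
from sympy import Matrix, simplify, symbols

x, y, lam, mu = symbols('x y lambda mu')

Q = Matrix([[1, 0], [mu, lam]])  # an allowed conjugating matrix
M = Matrix([[1, 0], [x, y]])     # a generic element of the representation

C = (Q * M * Q.inv()).applyfunc(simplify)

# lower-left entry mu + lambda*x - y*mu, diagonal preserved
assert C == Matrix([[1, 0], [mu + lam*x - y*mu, y]])
```

In particular the diagonal entry $y$ is invariant under conjugation, which is why the elements $y_i$ are unaffected by the action.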
Since $f(x_i)$ is equivalent to $f(x_i)+\lambda_a c(g_i) {{\;\rm mod}}m_A^2$, if there is a ring representing the functor $F(\cdot)$ then this should be a subring of $R(H)$ and $f(x_i)=0$ for all generators $x_i$, as one sees by considering $\lambda_a=-f(x_i)/c(g_i)$. Therefore the ring representing $F(\cdot)$ is the subring of $R(H)$ generated by $y_1,\ldots,y_n$ in the mixed characteristic case and is the zero ring in the equicharacteristic case. This is in accordance with remark \[remd\]. According to remark \[n1\] the case $n=1$ is the only case we can handle using our approach in the mixed characteristic situation. \[nosmooth\] [*We consider the subring $R$ of $R(H)=R(\mathbb{Z}/p\mathbb{Z})$ generated by $y$. The ring $R$ is singular. Indeed, by the infinitesimal lifting property [@Hartshorne:77 II. exer. 8.6], [@HarDef sec. 1.4] it is enough to provide a small extension $A'\rightarrow A \rightarrow 0$ and a homomorphism $h\in \mathrm{Hom}(R,A)$ that does not lift to a homomorphism in $\mathrm{Hom}(R,A')$. Let $m_\Lambda$ be the maximal ideal of $\Lambda$. Consider the natural map $\pi:R \rightarrow R/m_\Lambda R=k[[y]]/\langle y^{p-1} \rangle=:A$. Consider also the ring $A'$ given by $k[[y]]/\langle y^p \rangle$. Then $A' \rightarrow A$ is a small extension and there is no map $R \rightarrow A'$ lifting $\pi$. Indeed, every such homomorphism $R \rightarrow A'$ should factor through $R/m_\Lambda R=A$. Therefore we would obtain a nontrivial homomorphism $A \rightarrow A'$, a contradiction.* ]{} [**Remark:**]{} In [@SOS91] T. Sekiguchi, F. Oort, and N. Suwa introduced the group schemes $\mathcal{G}^{(\lambda)}$ in order to deform the additive group scheme $\mathbb{G}_a$ to the multiplicative group scheme $\mathbb{G}_m$, and they were able to give a unified Artin-Schreier-Kummer theory [@SeSu94],[@SeSu95].
Many articles devoted to the deformations of automorphism groups from positive to zero characteristic are based on this theory; see for example [@MatignonGreen98]. Observe that if $H=\mathbb{Z}/p\mathbb{Z}$, i.e. we have an elementary abelian group with just one component, then the ring homomorphism $$R(\mathbb{Z}/p\mathbb{Z}) \rightarrow A[[u,1/(\epsilon u+1)]]$$ sending $y$ to $\epsilon u$ gives rise to an injection $\hat{\mathcal{G}}^{(\lambda)}\rightarrow {{\rm Spec}}R(\mathbb{Z}/p\mathbb{Z})$, where $\lambda=\epsilon$. Indeed, the diagonal elements $X_{22}^{g}=1+y=1+\epsilon u$ are multiplied as elements in ${\mathcal{G}}^{(\epsilon)}$. Relation to first order infinitesimal deformations {#SmallExt} ================================================== In this section we will relate the deformation functor of the two dimensional representations given in (\[Fdeformation\]) to the deformation functor of actions on formal power series rings in (\[Bertin-Mezard-functor\]). The advantage of this approach is that using the two dimensional representation we can contract the infinite power series representing the extended automorphism to a root of a rational function. Denote by $V$ the elementary abelian group $G_1(P)$. Assume that a two dimensional representation is attached to the wild ramification point $P$. By using the equation $$\sigma\left( \frac{1}{t^m}\right)=\frac{1}{t^m} +c(\sigma),$$ we can define the following representation of $V$ by automorphisms of formal power series rings: $$\rho:V \rightarrow {{ \rm Aut }}(k[[t]]),$$ $$\sigma \mapsto \rho_\sigma,$$ where $$\rho_\sigma(t)=\frac{t}{(1+c(\sigma)t^m)^{1/m}}= t\left( 1+ \sum_{\nu=1}^\infty \binom{-1/m}{\nu} c(\sigma)^\nu t^{\nu m} \right).$$ Let $$0\rightarrow \mathrm{ker}\pi \rightarrow A' \rightarrow A \rightarrow 0$$ be a small extension, i.e. $\mathrm{ker}\pi\cdot m_{A'}=0$, where $m_{A'},m_A$ are the maximal ideals of $A',A$ respectively.
Assume that we have the following data: a deformation of the two dimensional representation given by $C(\sigma)=c(\sigma)+ \delta(\sigma)$, $\lambda(\sigma)=1+ \lambda_1(\sigma)$, where $\delta(\sigma),\lambda_1(\sigma) \in m_{A'}$, and the element $\tilde{f}$ given in proposition \[main-free\] extending $f$. Write $\tilde{f}=f+ \Delta$, for some element $\Delta \in m_{A'}((t))$. Then we have: $$\tilde{\rho}_\sigma\left( f+ \Delta \right)= \lambda(\sigma)(f+\Delta) +c(\sigma)+ \delta(\sigma).$$ This implies that ($f=1/t^m$): $$\tilde{\rho}_\sigma \left(\frac{1}{t^m} \right)= \frac{\lambda(\sigma)}{t^m}+c(\sigma)+ \big( \delta(\sigma)+ \lambda(\sigma)\Delta-\tilde{\rho}_\sigma \Delta \big),$$ or equivalently: $$\label{induction} \tilde{\rho}_\sigma(t)= \rho_\sigma(t)+t \left( \sum_{\nu=0}^\infty \binom{-1/m}{\nu} \sum_{k=1}^\nu \binom{\nu}{k} E^k c(\sigma)^{\nu-k} t^{m\nu} \right),$$ where $$E=\delta(\sigma)+\lambda(\sigma)\Delta -\tilde{\rho}_\sigma \Delta + \frac{\lambda_1(\sigma)}{t^m} \in m_{A'}((t)).$$ Suppose that we can extend $\rho_\sigma(t)$ to a homomorphism $\tilde{\rho}_{\sigma,A} \in {{ \rm Aut }}A[[t]]$. A further extension of $\rho_\sigma$ over $A'$ is then given by $$\tilde{\rho}_{\sigma,A'}(t)=\tilde{\rho}_{\sigma,A}(t)+\rho'_\sigma(t),$$ where $\rho'_\sigma(t)\in \mathrm{ker}\pi[[t]]$. Since $\Delta \in m_{A'}((t))$ and since $\mathrm{ker}\pi\cdot m_{A'}=0$, $$\tilde{\rho}_{\sigma,A'}(\Delta)=\tilde{\rho}_{\sigma,A}(\Delta).$$ Thus, equation (\[induction\]) allows us to compute the value of $\tilde{\rho}_{\sigma,A'}(t)$ from the value of $\tilde{\rho}_{\sigma,A}(t)$. \[matreplift\] Let $\tilde{\rho}_{\sigma,A}=\{\tilde{\rho}_{\sigma,A}(t)\}_{\sigma \in V}$ be a representation $V \rightarrow {{ \rm Aut }}A[[t]]$, and consider the corresponding element in $F(A)$. If this element in $F(A)$ can be lifted to an element in $F(A')$ then $\tilde{\rho}_{\sigma,A}$ can be lifted to a representation $V \rightarrow {{ \rm Aut }}A'[[t]]$.
According to [@Be-Me 3.2] every obstruction in lifting a representation in $D(A)$ to $D(A')$ is group theoretic. Consider extensions of the homomorphisms $\tilde{\rho}_{\sigma,A'} \in {{ \rm Aut }}A'[[t]]$ for every $\sigma \in V$. The element $\tilde{\rho}_{\sigma,A'} \tilde{\rho}_{\tau,A'} \tilde{\rho}_{\sigma \tau,A'}^{-1}$ is a $2$-cocycle and gives rise to a cohomology class in $H^2(V,{{\mathcal{T}_\mathcal{O} }})$. In our case observe that if $\lambda_1(\sigma),\delta(\sigma)$ come from functions $R(V) \rightarrow A'$ and therefore satisfy the $2\times 2$ matrix multiplication relations, then there is no group theoretic obstruction in lifting $\tilde{\rho}_{\sigma,A}$ to $\tilde{\rho}_{\sigma,A'}$, since a simple computation shows that the lifts defined by eq. (\[induction\]) satisfy the relations $$\tilde{\rho}_{\sigma,A'} \circ \tilde{\rho}_{\tau,A'}= \tilde{\rho}_{\sigma\tau,A'}.$$ Therefore any obstruction to lifting $\{\tilde{\rho}_\sigma\}$ reduces to the corresponding obstruction of lifting the matrix representation in $F(A)$ to $F(A')$. Now we will focus on the small extension $k[\epsilon]/\epsilon^2 \rightarrow k$, and we will compute the image of matrix deformations in $H^1(V,{{\mathcal{T}_\mathcal{O} }})$. The general cocycle in $H^1(V,{{\mathcal{T}_\mathcal{O} }})$ is given by $d_1(t) \frac{d}{dt}$. In [@KontoANT] the author proved that the map $$\label{myiso} {{\mathcal{T}_\mathcal{O} }}\rightarrow \frac{1}{t^{m+1}}k[[t]]$$ $$f(t) \frac{d}{dt} \mapsto f(t)/t^{m+1}$$ is a $V$-equivariant isomorphism. \[5.1\] Assume that $P$ is a wildly ramified point of $X$ with a two dimensional representation attached to it. An extension $\tilde{\rho}_\sigma$ gives rise to the following cocycle in $H^1(V,\frac{1}{t^{m+1}}k[[t]])$: $$\alpha(\sigma)=\frac{1}{m} \left( \frac{\lambda_1(\sigma)}{t^m} +\lambda_1(\sigma)c(\sigma) -\delta(\sigma) +\sum_{\mu=0}^{m-1}\frac{2m-\mu}{m} \frac{a_{\mu,1}c(\sigma)}{t^{m-\mu}} \right),$$ modulo elements in $A[[t]]$.
We will compute the first order infinitesimal deformations of $\rho_\sigma$. We begin with $$\tilde{\rho}_{\sigma}(f)=\lambda(\sigma)f+ c(\sigma) + \delta(\sigma) + \lambda(\sigma)\Delta - \tilde{\rho}_\sigma\Delta.$$ Set $E_1=\frac{\delta(\sigma)}{\lambda(\sigma)}+ \Delta-\frac{\tilde{\rho}_\sigma \Delta}{\lambda(\sigma)}-c(\sigma) \lambda_1(\sigma)$. Then $$\tilde{\rho}_{\sigma}\left(\frac{1}{t^m}\right)=\lambda(\sigma) \frac{1+t^m c(\sigma) + t^m E_1}{t^m}.$$ We compute $$\begin{aligned} \tilde{\rho}_{\sigma}(t) & = &\frac{\lambda(\sigma)^{-\frac{1}{m}}t} { \big(1+t^m c(\sigma)\big)^{1/m} \big( 1+\frac{ E_1 t^m}{1+c(\sigma)t^m} \big)^{1/m} } \nonumber \\ &= & \frac{\lambda(\sigma)^{-\frac{1}{m}}\rho_\sigma(t) }{\big( 1+\frac{ E_1 t^m}{1+c(\sigma)t^m} \big)^{1/m}} \nonumber \\ &=& \lambda(\sigma)^{-\frac{1}{m}}\left(\rho_\sigma(t) -\frac{1}{m} E_1 \rho_\sigma^{m+1}(t) \right){{\;\rm mod}}\epsilon^2 \nonumber \\ &=& \rho_\sigma(t)-\frac{1}{m} E_1 \rho_\sigma^{m+1}(t) - \frac{1}{m}\lambda_1(\sigma) \rho_\sigma(t) {{\;\rm mod}}\epsilon^2 \label{12}.\end{aligned}$$ We compute that $$\tilde\rho_\sigma\circ \rho_\sigma^{-1}(t) =\frac{\tilde{\rho}_\sigma(t)}{(1-c(\sigma)\tilde{\rho}_\sigma(t)^m)^{\frac{1}{m}}}.$$ Since the derivative of the function $x\mapsto \frac{x}{(1+Ax^m)^{\frac{1}{m}}}$ is the function $x\mapsto (1+Ax^m)^{-\frac{m+1}{m}}$, we compute: $$\begin{aligned} \left. \frac{d}{d\epsilon} \tilde\rho_\sigma\circ \rho_\sigma^{-1} \right|_{\epsilon=0} &=& \frac{t^{m+1}}{\rho_\sigma(t)^{m+1}} \left. \frac{d}{d\epsilon} \tilde{\rho}_\sigma \right|_{\epsilon=0} \\ &= & -\frac{1}{m} t^{m+1} \left. E_1 \right|_{\epsilon=0} -\frac{1}{m} \lambda_1(\sigma) \frac{t^{m+1}}{\rho_\sigma(t)^m} \nonumber \\ &=& -\frac{1}{m} t^{m+1} \left. E_1 \right|_{\epsilon=0} -\frac{\lambda_1(\sigma)}{m} \left( t + t^{m+1} c(\sigma) \right). \label{der1com}\end{aligned}$$ We will now compute $(1-\lambda(\sigma)^{-1}\tilde{\rho}_\sigma)\Delta$.
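The derivative identity invoked in the computation above can be checked symbolically for a few sample values of $m$ (an illustrative sketch, not part of the proof):

```python
# Symbolic check of the derivative identity used above:
#   d/dx [ x * (1 + A*x**m)**(-1/m) ] = (1 + A*x**m)**(-(m+1)/m)
from sympy import Rational, diff, symbols

x, A = symbols('x A', positive=True)

for m in (2, 3, 5):
    lhs = diff(x * (1 + A * x**m) ** Rational(-1, m), x)
    rhs = (1 + A * x**m) ** Rational(-(m + 1), m)
    assert lhs.equals(rhs)  # numeric-backed equality check
```

The cancellation $(1+Ax^m)-Ax^m=1$ inside the derivative is exactly what produces the single power $-(m+1)/m$.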
Write $T=t+\epsilon g_{1}(t){{\;\rm mod}}\epsilon^{2}A[[t]]$. Write $\tilde{f}=(T^m+\sum_{\mu=0}^{m-1}a_\mu T^\mu)^{-1}u$, where $a_\mu=\sum_{\nu\geq 1} a_{\mu,\nu} \epsilon^\nu$. We compute: $$\begin{aligned} \Delta=\tilde{f}-\frac{1}{t^m} &= &\frac{1}{T^{m}(1+\sum_{\mu=0}^{m-1}a_\mu T^{\mu-m})}-\frac{1}{t^{m}} \\ &=&\frac{1}{T^{m}}\left( 1-\epsilon \sum_{\mu=0}^{m-1}a_{\mu,1} T^{\mu-m} \right)-\frac{1}{t^{m}} {{\;\rm mod}}\epsilon^{2}A[[T]]\\ &=& \frac{1-m\epsilon g_1(t)^{m-1}}{t^m} \left( 1-\epsilon \sum_{\mu=0}^{m-1}a_{\mu,1} T^{\mu-m} \right)-\frac{1}{t^{m}} {{\;\rm mod}}\epsilon^{2}A[[T]]\\ &=& \epsilon mg_{1}(t)^{m-1}/t^{m}-\epsilon \sum_{\mu=0}^{m-1}a_{\mu,1}t^{\mu-2m} {{\;\rm mod}}\epsilon^{2}A[[T]].\end{aligned}$$ Consider the automorphism $\sigma$ given by $\sigma(t)=t(1+c(\sigma)t^{m})^{-1/m}.$ Observe that $$\sigma\left(\frac{1}{t^{k}}\right)=\frac{(1+c(\sigma)t^{m})^{\frac{k}{m}}}{t^{k}}=\frac{1}{t^{k}}+\sum_{\nu\geq1}\binom{\frac{k}{m}}{\nu}c(\sigma)^{\nu}t^{m\nu-k},$$ therefore $$(1-\lambda_1(\sigma)\epsilon)\sigma\left(\frac{1}{t^{k}} \right)-\frac{1}{t^{k}}=\frac{k}{m}c(\sigma)t^{m-k}+\sum_{\nu\geq2}\binom{\frac{k}{m}}{\nu}c(\sigma)^{\nu}t^{m\nu-k}-\frac{\epsilon \lambda_1(\sigma)}{t^k}.$$ This means that for $k\leq m$ the quantity $(1-\lambda(\sigma)^{-1}\tilde{\rho}_\sigma)(\epsilon t^{-k})$ is holomorphic in $t$ modulo $\epsilon^2$. Thus $(1-\lambda(\sigma)^{-1}\tilde{\rho}_\sigma)\frac{g_{1}(t)^{m-1}}{t^{m}}\in A[[t]]$ and we arrive at: $$(\lambda(\sigma)^{-1}\sigma-1)\epsilon \Delta=\sum_{\mu=0}^{m-1}\frac{2m-\mu}{m} \frac{a_{\mu,1}c(\sigma)}{t^{m-\mu}} {{\;\rm mod}}\epsilon^2 +A[[t]].$$ This result combined with eq. (\[der1com\]) gives us $$\label{derfr2com} \left.
\frac{d}{d\epsilon} \tilde{\rho}_\sigma\circ \rho_\sigma^{-1} \right|_{\epsilon=0}= \frac{t^{m+1}}{m} \left( \frac{\lambda_1(\sigma)}{t^m} +\lambda_1(\sigma)c(\sigma) -\delta(\sigma) +\sum_{\mu=0}^{m-1}\frac{2m-\mu}{m} \frac{a_{\mu,1}c(\sigma)}{t^{m-\mu}} \right)$$ modulo elements in $A[[t]]$. The desired result follows by applying the map given in eq. (\[myiso\]). \[4.3\] Assume that $G_1(P)=\mathbb{Z}/p\mathbb{Z}$. The $k$-vector space $H^1(\mathbb{Z}/p\mathbb{Z},k[[t]]/t^{m+1})$ is generated by the elements $\{b_i/t^i: b\leq i \leq m+1,\ \binom{i/m}{p-1}=0\}$, where $b=1$ if $p\mid m+1$ and $b=2$ if $p\nmid m+1$, and $b_i\in \mathrm{Hom}(\mathbb{Z}/p\mathbb{Z},k)$. This is proposition 2.7 in [@KontoANT] for $a=-m-1$. Consider the elementary abelian group $V=\oplus_{i=1}^s V_i$ where $V_i\cong \mathbb{Z}/p\mathbb{Z}$. The computation of the cohomology group $H^1(V,{{\mathcal{T}_\mathcal{O} }})$ seems complicated in the general case. However, under some mild assumptions we can prove the following: \[cohomologySplit\] Let $m+1=\sum_{i\geq 0} b_i p^i$ be the $p$-adic expansion of $m+1$. If ${\left\lfloor}\frac{2 b_0}{p} {\right\rfloor}={\left\lfloor}\frac{b_0+b_{\nu-1}}{p} {\right\rfloor}$ for all $2\leq \nu \leq s$, then the map $$\label{ontomap} \Psi:H^1(V,{{\mathcal{T}_\mathcal{O} }}) \rightarrow \bigoplus_{\nu=1}^s H^1(V_\nu,{{\mathcal{T}_\mathcal{O} }}),$$ sending $v \mapsto \sum_{\nu=1}^s \mathrm{res}_{V \rightarrow V_\nu}v$ is an isomorphism. Moreover $$\label{eeqq} H^1(V,{{\mathcal{T}_\mathcal{O} }})\cong \bigoplus_{i=2, \binom{i/m}{p-1}=0}^{m+1} b_i \frac{1}{t^i},$$ where $b_i \in \mathrm{Hom}(V,k)$. Consider the maps $c_i\in \mathrm{Hom}(V_i,k)$ and extend them to maps $\bar{c}_i\in \mathrm{Hom}(V,k)$ by setting $\bar{c}_i(\sigma)=0$ if $\sigma \not\in V_i$.
The image of $\sum_{i=1}^s \bar{c}_i$ under the map $\Psi$ given in (\[ontomap\]) is $(c_1,\ldots,c_s)$; therefore the map $\Psi$ is onto and it is sufficient to prove that both spaces have the same dimension. For the dimension $h_1(V,{{\mathcal{T}_\mathcal{O} }})=\dim_k H^1(V,{{\mathcal{T}_\mathcal{O} }})$ the author has proved the following formula: $$\label{eq-el-ab} h_1(V,{{\mathcal{T}_\mathcal{O} }})=\sum_{i=1}^s \left( {\left\lfloor}\frac{(m+1)(p-1)+a_i}{p} {\right\rfloor}-{\left\lceil}\frac{a_i}{p} {\right\rceil}\right),$$ where $a_1=-(m+1)$, $a_i={\left\lceil}\frac{a_{i-1}}{p} {\right\rceil}$ [@KontoANT prop. 2.9]. Observe that $a_i=-{\left\lfloor}\frac{m+1}{p^{i-1}} {\right\rfloor}$. We compute that $$\label{pad1} \frac{m+1}{p^k}=\sum_{\nu=0}^{k-1} \frac{b_\nu}{p^{k-\nu}}+\sum_{\nu \geq k} b_\nu p^{\nu-k},$$ therefore $$\label{pad2} {\left\lfloor}\frac{m+1}{p^k} {\right\rfloor}= \sum_{\nu \geq k} b_\nu p^{\nu-k}.$$ Now we compute that $$\label{pad3} {\left\lfloor}\frac{m+1}{p}+\frac{1}{p} {\left\lfloor}\frac{m+1}{p^{i-1}} {\right\rfloor}{\right\rfloor}= {\left\lfloor}\frac{b_0+b_{i-1}}{p} {\right\rfloor}+ \sum_{\nu \geq 1} b_\nu p^{\nu-1}+\sum_{\nu\geq i} b_\nu p^{\nu-i}.$$ The desired result follows by plugging eq. (\[pad2\]),(\[pad3\]) into eq. (\[eq-el-ab\]). Equation (\[eeqq\]) follows from lemma \[4.3\]. [**Remark:**]{} Consider the curves defined by $$\sum_{\nu=0}^s a_\nu y^{p^\nu}=\sum_{\mu=0}^m b_\mu x^\mu,$$ so that $m\not\equiv 0 {{\;\rm mod}}p$, $a_s,a_0,b_0\neq 0$, $s\geq 1$, $m\geq 2$, studied by H. Stichtenoth in [@StiII]. The representation attached to the unique place $P_\infty$ above the place $p_\infty$ of the function field $k(x)$ is two dimensional if and only if $m< p^s$ [@KontoMathZ]. In this case the assumptions of proposition \[cohomologySplit\] hold.
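The digit identities (\[pad2\]) and (\[pad3\]) used in the proof admit a quick numerical sanity check (illustrative parameters of our own choosing):

```python
# Numerical sanity check of the digit identities (pad2)/(pad3):
# with m+1 = sum_nu b_nu p^nu, floor((m+1)/p^k) = sum_{nu>=k} b_nu p^(nu-k).
def digits(n, p):
    """p-adic digits of n, least significant first."""
    ds = []
    while n > 0:
        n, r = divmod(n, p)
        ds.append(r)
    return ds

for p in (2, 3, 5, 7):
    for m in range(1, 200):
        b = digits(m + 1, p)
        for k in range(1, len(b) + 1):  # identity (pad2)
            assert (m + 1) // p**k == sum(b[nu] * p**(nu - k)
                                          for nu in range(k, len(b)))
        for i in range(2, len(b) + 1):  # identity (pad3)
            inner = (m + 1) // p**(i - 1)
            lhs = ((m + 1) + inner) // p
            rhs = ((b[0] + b[i - 1]) // p
                   + sum(b[nu] * p**(nu - 1) for nu in range(1, len(b)))
                   + sum(b[nu] * p**(nu - i) for nu in range(i, len(b))))
            assert lhs == rhs
```

In (pad3) only the lowest digits $b_0$ and $b_{i-1}$ can produce a carry, which is where the hypothesis of proposition \[cohomologySplit\] enters.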
\[obdt\] If $G_1(P)=\mathbb{Z}/p\mathbb{Z}$ or if the assumptions of proposition \[cohomologySplit\] hold, then the tangent vector corresponding to $0\neq \frac{d}{dt} \in H^1(V,{{\mathcal{T}_\mathcal{O} }})$ is an obstructed deformation. The element $\frac{d}{dt}$ corresponds to $\frac{1}{t^{m+1}}\in H^1(V, \frac{1}{t^{m+1}}k[[t]])$. Using proposition \[cohomologySplit\] we see that it is impossible to obtain a vector in the direction of $\frac{1}{t^{m+1}}$ using a matrix representation, i.e. an element in $F(\cdot)$. Notice that since we have assumed that the representation attached to $P$ is two dimensional, we have $m>1$. \[obell-ab\] Assume that $p\nmid m+1$ and that the assumptions of proposition \[cohomologySplit\] hold. Assume also that $V$ is an elementary abelian group with more than one component. Using the notation of eq. (\[eeqq\]), unobstructed deformations should satisfy $b_i(\sigma)=\lambda_i c(\sigma)$ for some element $\lambda_i \in k$. The condition $p\nmid m+1$ implies that every deformation comes from a matrix representation (corollary \[cor2.6\]), and the condition follows by using proposition \[5.1\]. \[remd\] *We see that the data $\delta(\sigma)$ of the matrix representation deformation do not affect the corresponding element in $H^1(V,{{\mathcal{T}_\mathcal{O} }})$, since they appear as coefficients of $t^0$ in the cocycle expression of proposition \[5.1\] and are cohomologous to zero. What seem to affect the tangent elements are the coefficients of the distinguished Weierstrass polynomial of the function $\tilde{f}$ defined in eq. (\[F-form\]).* On the other hand, in the case of liftings from characteristic $p$ to characteristic zero the diagonal element $\lambda_1$ appears as the coefficient of the element $t\frac{d}{dt}$. This construction is similar to the one of J. Bertin and A. Mézard [@Be-Me lemme 4.2.2]. Following [@Be-Me th.
4.2.8] we can prove: \[generalizeBeMe\] If $R_P$ denotes the versal deformation ring at $P$, then there is a surjection $$\label{Rprime} R_P \rightarrow W(k)[[y]] \left/\left\langle \sum_{\nu=1}^p \binom{p}{\nu}y^{\nu-1} \right\rangle \right. :=R'.$$ The ring $R_P$ is not smooth. We are in the mixed characteristic case, so $V= \langle \sigma \rangle$. According to section \[secmatdef\], the ring $R'$ gives rise to a deformation of the two dimensional representation given by $$\tilde{\rho}_\sigma= \begin{pmatrix} 1 & 0 \\ 0 & 1+y \end{pmatrix},$$ which in turn gives rise to the deformation $$\tilde{\rho}_{\sigma}(t)=\frac{(1+y)^{-\frac{1}{m}}} { \left(1+\frac{Et^m}{1+c(\sigma)t^m}\right)^{1/m}} \rho_\sigma(t),$$ for a suitable element $E$. The map $\mathrm{Hom}(R_P,\cdot)\rightarrow D(\cdot)$ is smooth (in the sense of Schlessinger [@Sch def. 2.2], [@MazDef p. 278]), therefore there is a map $\phi:R_P \rightarrow R'$. In order to prove that $R_P$ is not a smooth ring we proceed as follows: Consider the natural reduction $\pi:R' \rightarrow R'/pR'=k[[y]]/\langle y^{p-1} \rangle:=A$. We obtain the map $$\pi\circ \phi :R_P \rightarrow A.$$ Consider the ring $A'=k[[y]]/\langle y^p \rangle$. Then $A' \rightarrow A$ is a small extension, and there is no map $R_P \rightarrow A'$ extending $R_P \rightarrow R' \stackrel{{{\;\rm mod}}p }{\longrightarrow} A$, by remark \[nosmooth\]. In this way we obtain an obstruction to the infinitesimal affine lifting for the affine scheme ${{\rm Spec}}R_P$, therefore $R_P$ is not smooth. Alternatively one can compute the obstruction as an element in $H^2(V,{{\mathcal{T}_\mathcal{O} }})$, following [@Be-Me lemme 4.2.3]. Assume that the hypotheses of proposition \[cohomologySplit\] hold. Consider the ring $R_1$ defined by $$R_1=\left\{ \begin{array}{ll} k & \mbox{ in the equicharacteristic case } \\ R' & \mbox{ in the mixed characteristic case (see eq. (\ref{Rprime}))} \end{array} \right.$$ Let $b=1$ if $p\mid m+1$ and $b=2$ if $p\nmid m+1$.
Let $\Sigma$ be the subset of numbers $b\leq i \leq m$ so that $\binom{\frac{i}{m}}{p-1}=0$. Consider the ring $\bar{R}:= R_1[[X_i:i\in \Sigma]]$ and the $k$-vector space $W \subset H^1(V,{{\mathcal{T}_\mathcal{O} }})/\langle d/dt \rangle$ generated by the elements $\lambda_i c(\sigma) t^{m+1-i}\frac{d}{dt}$. There is a surjection $R_P\rightarrow \bar{R}$ that induces an isomorphism $W\cong \mathrm{Hom}(\bar{R},k[\epsilon]/\epsilon^2)$. The Krull dimension of $R_P$ is equal to $\#\Sigma$. We have observed in corollary \[obdt\] that deformations in the direction of $d/dt$ are not coming from matrix representations. The elements $\frac{1}{t^i}$ for $i\in \Sigma$ are elements in $H^1\left(V, \frac{1}{t^{m+1}} k[[t]]\right)$ that give rise to elements $ t^{m+1-i} \frac{d}{dt} \in H^1(V,{{\mathcal{T}_\mathcal{O} }})$. Every deformation in these directions is unobstructed by lemma \[matreplift\]. Relation to Deformations of Artin-Schreier curves {#PRIES} ================================================= Let $P$ be a wildly ramified point of the cover $\pi:X\rightarrow Y=X/G$ so that the corresponding representation is two dimensional. In this section we will examine the dependence of the Artin-Schreier extension $X \rightarrow X/G_1(P)$ on the form of the matrix representation $\rho:G_1(P) \rightarrow GL_2(k)$. Then we will restrict to the germs $\mathcal{O}_{X,P} \rightarrow \mathcal{O}_{Y,\pi(P)}$, and we will study the relation to the deformation functor introduced in [@Pries:04] by R. Pries. The approach of R. Pries is to work with germs of curves and to deform the defining Artin-Schreier equation. Since the germs live in local rings, which have only one maximal ideal, the effect of splitting the branch locus cannot be studied. Therefore R. Pries considers only deformations that do not split the branch locus. According to proposition \[bertin-gen\] it is impossible to lift a wildly ramified action to characteristic zero without splitting the branch locus.
We will now restrict ourselves to the equicharacteristic deformation case. Let $X$ be a curve that has a $2$-dimensional representation attached at a wildly ramified point $P$. Denote by $\{1,f\}$ a basis of the 2-dimensional vector space $L(mP)$, where $m:=v_P(f)$ is the highest jump in the upper ramification filtration. We would like to write down an algebraic equation for the cover $X \rightarrow X/G_1(P)$. The representation $c=c_1:G_1(P)\rightarrow k$ is a faithful homomorphism of additive groups. We consider the action of $G_1(P)$ on $f$: Let $\Phi(Y)$ be the additive polynomial with set of roots $\{c_1(\sigma):\sigma \in G_1(P)\}$. The polynomial $\Phi(Y)$ can be computed as follows: The group $G_1(P)$ is by [@KontoMathZ sec. 3] elementary abelian, so we express $G_1(P)$ as an $\mathbb{F}_p$-vector space with basis $\{\sigma_i\}$ such that $G_1(P)=\bigoplus_{i=1}^s \sigma_i \mathbb{F}_p$. Let $\Delta(x_{1},\ldots,x_{n})$ denote the Moore determinant: $$\Delta(x_{1},\ldots,x_{n})=\det\left(\begin{array}{cccc} x_{1} & x_{2} & \cdots & x_{n}\\ x_{1}^{p} & x_{2}^{p} & \cdots & x_{n}^{p}\\ \vdots & & & \vdots\\ x_{1}^{p^{n-1}} & x_{2}^{p^{n-1}} & \cdots & x_{n}^{p^{n-1}}\end{array}\right).$$ The additive polynomial $\Phi$ can be expressed in terms of the Moore determinant: $$\Phi(Y)=\frac{\Delta(c(\sigma_1),\ldots,c(\sigma_s),Y)}{\Delta(c(\sigma_1),\ldots,c(\sigma_s))},$$ see [@GossBook lemma 1.3.6], [@Elkies99 eq. 3.6]. Thus, the cover $X \rightarrow X/G_1(P)$ is given in terms of the generalized Artin-Schreier equation $$\Phi(Y)=\prod_{\sigma \in G_1(P)} \sigma f=N_{G_1(P)}(f).$$ We would like to represent the curve as a fibre product of Artin-Schreier curves and then, using the Garcia-Stichtenoth normalization [@GarciaSticht1991], to write the curve in the form $y^{p^s}-y=u$, where $u$ is an element in the function field of the curve $X/G_1(P)$. There are elements $y_j\in k(X)$ so that $\sigma_i(y_j)=y_j + \delta_{ij}$.
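As a toy instance of the Moore-determinant construction (hypothetical parameters of our own choosing: $p=3$, $s=1$, $k=\mathbb{F}_9=\mathbb{F}_3[x]/(x^2+1)$ and $c(\sigma_1)=x$), the formula gives $\Phi(Y)=\Delta(c(\sigma_1),Y)/\Delta(c(\sigma_1))=Y^p-c(\sigma_1)^{p-1}Y$, which should vanish exactly on the $\mathbb{F}_p$-span of $c(\sigma_1)$:

```python
# Toy check of the Moore-determinant construction over k = F_9 = F_3[x]/(x^2+1):
# for p = 3, s = 1 and c = x the formula gives
#   Phi(Y) = det([[c, Y], [c^p, Y^p]]) / c = Y^p - c^(p-1) * Y,
# whose roots should be exactly the F_p-span {0, c, 2c} of c.
p = 3

def add(u, v):  # elements of F_9 encoded as pairs (a, b) = a + b*x
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def mul(u, v):  # uses x^2 = -1 in F_3[x]/(x^2 + 1)
    a, b = u
    e, d = v
    return ((a * e - b * d) % p, (a * d + b * e) % p)

def power(u, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

c = (0, 1)  # the element x, our choice of c(sigma_1)

def phi(y):  # Phi(Y) = Y^p - c^(p-1) * Y
    return add(power(y, p), mul((p - 1, 0), mul(power(c, p - 1), y)))

field = [(a, b) for a in range(p) for b in range(p)]
roots = {y for y in field if phi(y) == (0, 0)}
span = {mul((a, 0), c) for a in range(p)}  # F_p-span of c inside F_9

assert roots == span and len(roots) == p
```

For $s>1$ the same recipe applies with the full $s\times s$ Moore matrix, provided the $c(\sigma_i)$ are $\mathbb{F}_p$-linearly independent.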
Using this notation we can see that the function field $k(X)$ can be recovered as the function field of the fibre product of the curves $y_i^p-y_i=u_i$. The elements $u_i$ can be computed from the map $c:G_1(P) \rightarrow k$ as follows: Let $V_i=\bigoplus_{\nu=1,\nu\neq i}^s \sigma_\nu \mathbb{F}_p$. We compute an additive polynomial $\mathrm{ad}_i(Y)$ with set of roots the $\mathbb{F}_p$-vector space $c(V_i)$ using the Moore determinant: $$\mathrm{ad}_i(Y):=\frac{\Delta(c(\sigma_1),\ldots, \widehat{c(\sigma_i)},\ldots,c(\sigma_s),Y)}{\Delta(c(\sigma_1),\ldots, \widehat{c(\sigma_i)},\ldots,c(\sigma_s))}.$$ These polynomials are invariants of the curve and the map $c$. Moreover we compute that $y_i':=\prod_{\sigma\in V_i} \sigma f =\prod_{v\in c(V_i)} (f-v)=\mathrm{ad}_i(f)$. The element $y_i'$ is invariant under the action of $V_i$ and $\sigma_i(y_i')=y_i' +\mathrm{ad}_i(c(\sigma_i))$. We can normalize by setting $y_i=y_i'/\mathrm{ad}_i(c(\sigma_i))$. Then, $$\sigma_j(y_i)=y_i+\delta_{ij}.$$ Following [@GarciaSticht1991] we choose an ${{\mathbb{F}}}_p$-basis $\mu_1,\ldots,\mu_s$ of ${{\mathbb{F}}}_{p^s}$ and we set $y=\sum_{i=1}^s \mu_i y_i$. We observe that the function field can be recovered as the following extension of the field $k(X)^{G_1(P)}$: $$y^{p^s} -y = N_{G_1(P)} \left(\sum_{i=1}^s \frac{\mu_i \mathrm{ad}_i(f)}{\mathrm{ad}_i(c(\sigma_i))} \right)=:u.$$ The element $u\in k(X)^{G_1(P)}$ is an invariant of the action of $G_1(P)$ on $k(X)$. Observe that $$\label{trofod-det} \Delta\big(c(\sigma_1),\ldots,\widehat{c(\sigma_i)},\ldots,c(\sigma_s),c(\sigma_i)\big)=(-1)^{s-i}\Delta(c(\sigma_1),\ldots,c(\sigma_s)).$$ Let $D$ be the operator sending $x \mapsto x^{p^s}-x$. Since $\mu_i \in \mathbb{F}_{p^s}$ we have $D(\mu_i x)=\mu_i D(x)$.
The element $u$ can thus also be expressed by $$\label{u1} u=\sum_{i=1}^s \mu_i D \left(\frac{ \mathrm{ad}_i(f)}{\mathrm{ad}_i(c(\sigma_i))} \right)= \sum_{i=1}^s \mu_i (-1)^{s-i} D\left(\frac{ \Delta \big( c (\sigma_1),\ldots,\widehat{c (\sigma_i)},\ldots,c(\sigma_s),f \big)} { \Delta \big( c (\sigma_1),\ldots,c(\sigma_s) \big) } \right).$$ Equation (\[u1\]) allows us to express $u$ in terms of the following determinant: $$\label{u1det} u_1=\frac{1}{\Delta \big( c (\sigma_1),\ldots,c(\sigma_s) \big)} \det \begin{pmatrix} \mu_1 & \mu_2 & \cdots & \mu_s & 0 \\ c(\sigma_1) & c(\sigma_2)& \cdots & c(\sigma_s) & f \\ c(\sigma_1)^p & c(\sigma_2)^p& \cdots & c(\sigma_s)^p & f^p \\ \vdots & \vdots & & \vdots & \vdots \\ c(\sigma_1)^{p^{s-1}} & c(\sigma_2)^{p^{s-1}}& \cdots & c(\sigma_s)^{p^{s-1}} & f^{p^{s-1}} \end{pmatrix},$$ $$u=D(u_1).$$ Notice that $u_1$ is a polynomial in $f$ of the form $$u_1(f)=\sum_{\nu=1}^{s} o_\nu f^{p^{\nu-1}},$$ where the $o_\nu$ can be computed, in terms of the function $c$, as minor determinants of the above matrix. Then $u(f)$ is a polynomial in $f$ of the form $$u(f)=\sum_{\nu=1}^{2s} a_\nu f^{p^{\nu-1}},$$ where $a_{\nu+s}=-a_\nu^{p^s}$ for $1\leq \nu \leq s$. Now consider the relative situation: Consider the element $\tilde{f} \in A[[t]][t^{-1}]$ defined in proposition \[main-free\]. Given such an element $\tilde{f}$ and a deformation of the representation $\rho:G_1(P) \rightarrow GL_2( L(mP))$, we will construct a deformation of the germ $\mathcal{O}_{X,P}$ with Galois group $G_1(P)$.
We form again the additive polynomials: $$\mathrm{Ad}_i(Y):= \frac{\Delta(C(\sigma_1),\ldots, \widehat{C(\sigma_i)},\ldots, C(\sigma_s),Y) }{\Delta(C(\sigma_1),\ldots, \widehat{C(\sigma_i)},\ldots, C(\sigma_s))}.$$ Using the previous normalization procedure we arrive at the following deformed Artin-Schreier curve: $$\begin{aligned} y^{p^s} -y &=& \sum_{i=1}^s \mu_i D\left( \frac{ \mathrm{Ad}_i(\tilde{f})}{\mathrm{Ad}_i(C(\sigma_i))} \right) =\\ &=& \sum_{i=1}^s \mu_i (-1)^{s-i} D \left( \frac{ \Delta \big( C(\sigma_1),\ldots,\widehat{C (\sigma_i)},\ldots,C(\sigma_s),\tilde{f} \big)} { \Delta \big( C (\sigma_1),\ldots,C(\sigma_s) \big) }\right) := U.\end{aligned}$$ Notice that, in analogy with equation (\[u1det\]), we have: $$U_1=\frac{1}{\Delta \big( C (\sigma_1),\ldots,C(\sigma_s) \big)} \det \begin{pmatrix} \mu_1 & \mu_2 & \cdots & \mu_s & 0 \\ C(\sigma_1) & C(\sigma_2)& \cdots & C(\sigma_s) & \tilde{f} \\ C(\sigma_1)^p & C(\sigma_2)^p& \cdots & C(\sigma_s)^p & \tilde{f}^p \\ \vdots & \vdots & & \vdots & \vdots \\ C(\sigma_1)^{p^{s-1}} & C(\sigma_2)^{p^{s-1}}& \cdots & C(\sigma_s)^{p^{s-1}} & \tilde{f}^{p^{s-1}} \end{pmatrix},$$ $$U=D(U_1).$$ The element $U$ lies in $A[[t]][t^{-1}]$ and satisfies $U \equiv u {{\;\rm mod}}m_A$. Relation with equivalence classes of Artin-Schreier extensions ------------------------------------------------------------- In what follows we would like to consider isomorphism classes of Artin-Schreier curves. The following lemma identifies when two Artin-Schreier extensions of the ring $A[[x]][x^{-1}]$ are isomorphic, where $A$ is a $k$-algebra that gives rise to an irreducible affine scheme, i.e. $A/\mathrm{rad}(A)$ is an integral domain. \[RPlemma\] Consider the extensions $y_1^{p^s}-y_1 =g_1$ and $y_2^{p^s}-y_2=g_2$, where $g_1,g_2 \in A[[x]][x^{-1}]$. These extensions are isomorphic if and only if $g_1(x)=\zeta g_2(x)+ d^{p^s}-d$, for some $d \in A[[x]][x^{-1}]$ and $\zeta \in \mathbb{F}_{p^s}^*$. If $A$ is a field $k$ then this is a classical result due to Hasse [@Hasse34].
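One standard instance of lemma \[RPlemma\]: a holomorphic summand $g$ with $g(0)=0$ can be absorbed, since $d=\sum_{\nu\geq 0}g^{p^{s\nu}}$ satisfies $d^{p^s}-d=-g$ by telescoping (replacing $d$ by $-d$ absorbs $+g$). A truncated power-series check with toy parameters of our own choosing:

```python
# Truncated power-series check (toy parameters: p = 3, s = 1, precision x^20)
# that d = sum_{nu >= 0} g^(p^nu) satisfies d^p - d = -g for a holomorphic g
# with g(0) = 0, so such a summand does not change the Artin-Schreier class.
p, N = 3, 20  # N = x-adic truncation order

def mul(u, v):  # multiply coefficient lists mod p, truncated at x^N
    w = [0] * N
    for i, a in enumerate(u):
        if a:
            for j, b in enumerate(v):
                if i + j < N:
                    w[i + j] = (w[i + j] + a * b) % p
    return w

def add(u, v):
    return [(a + b) % p for a, b in zip(u, v)]

def power(u, n):
    r = [1] + [0] * (N - 1)
    for _ in range(n):
        r = mul(r, u)
    return r

g = [0, 1, 2] + [0] * (N - 3)  # g = x + 2x^2, a sample holomorphic series
d = [0] * N
q = 1
while q < N:  # d = g + g^p + g^(p^2) + ... is a finite sum modulo x^N
    d = add(d, power(g, q))
    q *= p

lhs = add(power(d, p), [(-a) % p for a in d])  # d^p - d
rhs = [(-a) % p for a in g]                    # -g
assert lhs == rhs
```

The sum defining $d$ converges $x$-adically because $g$ has positive valuation, so each truncation only involves finitely many terms.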
For the general case we refer to [@Pries:05 lemma 2.4]. The Artin-Schreier curve $y^{p^s}-y=f(x)$, where $f(x) \in A[[x]][x^{-1}]$, is isomorphic to $y^{p^s}-y=f(x)+g(x)$, where $g(x) \in A[[x]]$. Following [@Pries:05 sec. 3] we observe that $g(x)=d^{p^s}-d$, where $d=\sum_{\nu=0}^\infty g(x)^{p^{s\nu}}$. The desired result follows by using lemma \[RPlemma\]. Let $m$ be the conductor, i.e. the highest jump in the upper ramification filtration. Since the group $H$ is elementary abelian this is equal to the highest jump in the lower ramification filtration [@KontoANT lemma 1.8]. \[holnotalter\] Consider an Artin-Schreier cover of $A[[x]][x^{-1}]$ given by: $$y^{p^s}-y =\sum_{\nu=0}^\lambda r_\nu(1/x)^{p^\nu},$$ where $r_\nu(T)\in A[T]$ are polynomials of degree $d_\nu$, so that $\gcd(d_\nu,p)=1$. The conductor of the cover equals $\max_{\nu} d_\nu$. This is due to R. Pries [@Pries:05]. D. Harbater in [@Harbater:80] (see also [@Be-Me sec. 5.1]) gave a parametrization of the classes of cyclic $\mathbb{Z}_p$-covers of local fields branched above the maximal ideal. For the more general case of $\mathbb{F}_{p^s}$-covers, the space of classes of covers of $k((t'))$ is parametrized by the quotient: $$\label{classesHarbater} C=\frac{k((t'))}{k[[t']]+D(k((t')))},$$ where $D$ denotes the map $x \mapsto x^{p^s}-x$. Indeed, by lemma \[RPlemma\] adding $D(a)$ does not alter the equivalence class of the Artin-Schreier curve, and by lemma \[holnotalter\] the same holds for adding a holomorphic element. R. Pries gave a moduli interpretation of $p$-group covers of the projective line and she proposed two approaches: either transform (by extending the base ring $A$) an arbitrary Artin-Schreier extension of $A[[x]][x^{-1}]$ to a class in (\[classesHarbater\]), or define a fine moduli space by considering a category where all powers of the $q$-Frobenius maps are invertible elements. She introduced the following: Let $A_1,A_2$ be two $k$-algebras that give rise to irreducible affine schemes, i.e.
$A_i/\mathrm{rad}(A_i)$, $i=1,2$, are integral domains. Consider the Artin-Schreier relative curves $C_i:y_i^{p^s}-y_i=f_i(x)$, where $f_i(x) \in A_i[[x]][x^{-1}]$. The two curves are considered to be equivalent if and only if there is an algebra extension $A$ of both $A_i$, i.e. there are ring monomorphisms $A_i \hookrightarrow A$, so that the curves $C_i \times_{{{\rm Spec}}A_i} {{\rm Spec}}A$ are isomorphic covers of $A[[x]][x^{-1}]$. In general $U-u \in m_A [[t]][t^{-1}]$, and it need not be an element of $m_A [[x]][x^{-1}]$. If the deformation $\tilde{\rho}_\sigma$ does not split the branch locus, then $U-u \in m_A [[x]][x^{-1}]$. After cutting off the holomorphic part of $U-u$ and applying the transformation of lemma \[RPlemma\], we pass from an equivalence class of germs of Artin-Schreier curves, as given in eq. (\[classesHarbater\]), to an element of the deformation functor of Pries. Conversely, for every Laurent polynomial $\Delta \in m_A((x))$ such that $n_0=v_x(\Delta)$ satisfies $(n_0,p)<m$, we can consider the extension of $A((x))$ defined as $$A((x))[y]/(y^{p^s}-y=f+\Delta).$$ This gives rise to an infinitesimal extension of the germ of $X$ at $P$ in the sense of Pries, and according to the local-global theory developed by Harbater all these local deformations can be patched together to give a global deformation of the pair $(X,G)$. Jos[é]{} Bertin, *Obstructions locales au relèvement de revêtements galoisiens de courbes lisses*, C. R. Acad. Sci. Paris Sér. I Math. **326** (1998), no. 1, 55–58. Jos[é]{} Bertin and Ariane M[é]{}zard, *Déformations formelles des revêtements sauvagement ramifiés de courbes algébriques*, Invent. Math. **141** (2000), no. 1, 195–238. Nicolas Bourbaki, *Commutative algebra. [C]{}hapters 1–7*, Elements of Mathematics (Berlin), Springer-Verlag, Berlin, 1989, Translated from the French, Reprint of the 1972 edition.
Gunther Cornelissen and Fumiharu Kato, *Equivariant deformation of [M]{}umford curves and of ordinary curves in positive characteristic*, Duke Math. J. **116** (2003), no. 3, 431–470. Bart de Smit and Hendrik W. Lenstra, Jr., *Explicit construction of universal deformation rings*, Modular forms and Fermat’s last theorem (Boston, MA, 1995), Springer, New York, 1997, pp. 313–326. P. Deligne and D. Mumford, *The irreducibility of the space of curves of given genus*, Inst. Hautes Études Sci. Publ. Math. (1969), no. 36, 75–109. Noam D. Elkies, *Linearized algebra and finite groups of [L]{}ie type. [I]{}. [L]{}inear and symplectic groups*, Applications of curves over finite fields (Seattle, WA, 1997), Contemp. Math., vol. 245, Amer. Math. Soc., Providence, RI, 1999, pp. 77–107. Arnaldo Garc[í]{}a and Henning Stichtenoth, *Elementary abelian [$p$]{}-extensions of algebraic function fields*, Manuscripta Math. **72** (1991), no. 1, 67–79. David Goss, *Basic structures of function field arithmetic*, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) \[Results in Mathematics and Related Areas (3)\], vol. 35, Springer-Verlag, Berlin, 1996. Barry Green and Michel Matignon, *Liftings of [G]{}alois covers of smooth curves*, Compositio Math. **113** (1998), no. 3, 237–272. A. Grothendieck, *Éléments de géométrie algébrique. [III]{}. Étude cohomologique des faisceaux cohérents. [I]{}*, Inst. Hautes Études Sci. Publ. Math. (1961), no. 11, 167. Alexander Grothendieck, *Fondements de la géométrie algébrique. \[[E]{}xtraits du [S]{}éminaire [B]{}ourbaki, 1957–1962.\]*, Secrétariat mathématique, Paris, 1962. Jack K. Hale and H[ü]{}seyin Ko[ç]{}ak, *Dynamics and bifurcations*, Texts in Applied Mathematics, vol. 3, Springer-Verlag, New York, 1991. David Harbater, *Moduli of [$p$]{}-covers of curves*, Comm. Algebra **8** (1980), no. 12, 1095–1122. , *Patching and [G]{}alois theory*, Galois groups and fundamental groups, Math. Sci. Res. Inst. Publ., vol. 41, Cambridge Univ. 
Press, Cambridge, 2003, pp. 313–424. David Harbater and Katherine F. Stevenson, *Patching and thickening problems*, J. Algebra **212** (1999), no. 1, 272–304. Robin Hartshorne, *Algebraic geometry*, Springer-Verlag, New York, 1977, Graduate Texts in Mathematics, No. 52. , *Lectures on deformation theory*, available online on the author's webpage, 2004. Helmut Hasse, *Theorie der relativ-zyklischen algebraischen [F]{}unktionenkörper, insbesondere bei endlichem [K]{}onstantenkörper*, J. Reine Angew. Math. **172** (1934). Kazuya Kato, *Vanishing cycles, ramification of valuations, and class field theory*, Duke Math. J. **55** (1987), no. 3, 629–659. Nicholas M. Katz and Barry Mazur, *Arithmetic moduli of elliptic curves*, Princeton University Press, Princeton, NJ, 1985. Bernhard K[ö]{}ck, *Galois structure of [Z]{}ariski cohomology for weakly ramified covers of curves*, Amer. J. Math. **126** (2004), no. 5, 1085–1107. Aristides Kontogeorgis, *The ramification sequence for a fixed point of an automorphism of a curve and the [W]{}eierstrass gap sequence*, Mathematische Zeitschrift. , *On the tangent space of the deformation functor of curves with automorphisms*, Algebra Number Theory **1** (2007), no. 2, 119–161. Qing Liu, *Algebraic geometry and arithmetic curves*, Oxford Graduate Texts in Mathematics, vol. 6, Oxford University Press, Oxford, 2002, Translated from the French by Reinie Erné, Oxford Science Publications. Michel Matignon, *[$p$]{}-groupes abéliens de type [$(p,\cdots,p)$]{} et disques ouverts [$p$]{}-adiques*, Manuscripta Math. **99** (1999), no. 1, 93–109. B. Mazur, *Deformation theory of [G]{}alois representations*, in: [M]{}odular forms and [F]{}ermat’s last theorem (Boston, MA, 1995), Springer, New York, 1997. Sh[ō]{}ichi Nakajima, *$p$-ranks and automorphism groups of algebraic curves*, Trans. Amer. Math. Soc. **303** (1987), no. 2, 595–607.
Rachel J. Pries, *Deformation of wildly ramified actions on curves*, arXiv:math.AG/0403056 v2 (2004). , *Equiramified deformations of covers in positive characteristic*, arXiv:math.AG/0403056 v3 (2005). Michael Schlessinger, *Functors of [A]{}rtin rings*, Trans. Amer. Math. Soc. **130** (1968), 208–222. T. Sekiguchi, F. Oort, and N. Suwa, *On the deformation of [A]{}rtin-[S]{}chreier to [K]{}ummer*, Ann. Sci. École Norm. Sup. (4) **22** (1989), no. 3, 345–375. Tsutomu Sekiguchi and Noriyuki Suwa, *Théories de [K]{}ummer-[A]{}rtin-[S]{}chreier-[W]{}itt*, C. R. Acad. Sci. Paris Sér. I Math. **319** (1994), no. 2, 105–110. , *Théorie de [K]{}ummer-[A]{}rtin-[S]{}chreier et applications*, J. Théor. Nombres Bordeaux **7** (1995), no. 1, 177–189, Les Dix-huitièmes Journées Arithmétiques (Bordeaux, 1993). Jean-Pierre Serre, *Local fields*, Springer-Verlag, New York, 1979, Translated from the French by Marvin Jay Greenberg. Henning Stichtenoth, *Über die [A]{}utomorphismengruppe eines algebraischen [F]{}unktionenkörpers von [P]{}rimzahlcharakteristik. [II]{}. [E]{}in spezieller [T]{}yp von [F]{}unktionenkörpern*, Arch. Math. (Basel) **24** (1973), 615–631.
--- abstract: 'The standard model accommodates, but does not explain, three families of leptons and quarks, while various extensions suggest extra matter families. The oblique corrections from extra chiral families with relatively light (weak-scale) masses, $M_{f} \sim \langle H \rangle $, are analyzed and used to constrain the number of extra families and their spectrum. The analysis is motivated, in part, by recent $N = 2$ supersymmetry constructions, but is performed in a model-independent way. It is shown that the correlations among the contributions to the three oblique parameters, rather than the contribution to a particular one, provide the most significant bound. Nevertheless, a single extra chiral family with a constrained spectrum is found to be consistent with precision data without requiring any other new physics source. Models with three additional families may also be accommodated but only by invoking additional new physics, most notably, a two-Higgs-doublet extension. The interplay between the spectra of the extra fermions and the Higgs boson(s) is analyzed in the case of either one or two Higgs doublets, and its implications are explored. In particular, the precision bound on the SM-like Higgs boson mass is shown to be significantly relaxed in the presence of an extra relatively light chiral family.' address: - '$^{a}$ The University of Texas at Austin, Austin, Texas 78712' - ' $^{b}$ Massachusetts Institute of Technology, Cambridge, Massachusetts 02139' - '$^{c}$ California Institute of Technology, Pasadena, California 91125 ' author: - '[Hong-Jian He]{}$\,^{a}$,   [Nir Polonsky]{}$\,^{b}$   and   [Shufang Su]{}$\,^{c}$' title: ' Extra Families, Higgs Spectrum and Oblique Corrections ' ---
Introduction {#sec:intro} ============ The number of fermion generations is one of the unresolved puzzles within the Standard Model (SM) of electroweak and strong interactions. However, certain extensions of the standard model suggest particular family structures. $N=2$ supersymmetry constructions [@N2old; @N2new], for instance, enforce an even number of generations, which in practice implies three additional mirror families of chiral fermions (and sfermions) with fermion masses at the weak scale, $M_{f} \sim \langle H \rangle $, where $\langle H \rangle \simeq 174$GeV is the Higgs vacuum expectation value (VEV) responsible for the electroweak symmetry breaking. All fermion masses in $N=2$ supersymmetry originate at low energy from effective Yukawa couplings, as shown in Ref.[@N2new], and are chiral. (Although the matter fermions are vector-like in the $N=2$ limit, gauge invariant mass terms are forbidden by a $Z_{2}$ mirror parity [@N2old; @N2new].)
The mirror fermion spectrum is bounded from above by requiring perturbativity, and from below by direct collider searches. Hence, the natural mass range for the mirror fermions is roughly $$\begin{aligned} ~\dis\f{m_Z}{2} ~\lesssim~ M_f ~\lesssim~ \CO(\langle H \rangle ) ~, \label{eq:mirrorF-MR}\end{aligned}$$ where $m_Z \simeq 91.19$GeV is the mass of the weak gauge boson $Z^0$. Here, the generic lower bound is set by LEP limits on $Z$ decays to heavy neutrinos and other charged fermions. The current direct bound on charged heavy leptons is about $100$GeV, while extra SM-like quarks $(t',b')$ should be heavier than $\sim\!100-200$GeV, depending on detailed assumptions regarding their mixing with $(t,\,b)$ and their decay modes, e.g. $t'\to b+W$ and $b'\to b+Z$ [@data]. For simplicity, we assume hereafter no mixing of the extra fermions, either among themselves or with the SM fermions (as the latter is suppressed by the mirror parity in $N=2$), and in particular, that the mass range (\[eq:mirrorF-MR\]) applies. Eq. (\[eq:mirrorF-MR\]) provides a restrictive range which is quite different from the case of dynamical symmetry breaking scenarios, such as technicolor, where the strongly interacting techni-fermions are generally heavy, with masses around or above the TeV scale [@TC; @TCF; @STU]. The quantum oblique corrections, parameterized in terms of the $S$, $T$ and $U$ parameters [@STU], are extracted from the electroweak precision data[@data; @Osaka] and are known to exclude such extra heavy chiral-fermion generations[@EP]. For instance, one extra SM-like heavy family would contribute to the $S$-parameter by an amount of $$\begin{aligned} \Delta S&=& \dis\f{1}{3\pi}\sum_j N_{cj}[I_{3L}(j)-I_{3R}(j)]^2 =\f{2}{3\pi}\simeq 0.21 \,, \label{eq:DHF-S}\end{aligned}$$ in the degenerate limit[@STU; @EP], where $I_{3L,R}(j)$ is the third component of weak-isospin of the left (right) handed fermion $j$, and $N_{cj}= 3\,(1)$ denotes the color number of quarks (leptons).
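As a quick numerical check of eq. (\[eq:DHF-S\]) (a sketch of ours, not from the paper), summing $N_{cj}[I_{3L}(j)-I_{3R}(j)]^2$ over one degenerate SM-like family reproduces the quoted $\Delta S\simeq 0.21$:

```python
from math import pi

# Degenerate-limit Delta S for one SM-like family, eq. (eq:DHF-S):
# every member has I_3L = +/- 1/2 and I_3R = 0.
# Entries are (N_c, I_3L, I_3R) for u, d, nu, e:
family = [(3, 0.5, 0.0), (3, -0.5, 0.0), (1, 0.5, 0.0), (1, -0.5, 0.0)]
dS = sum(Nc*(I3L - I3R)**2 for Nc, I3L, I3R in family)/(3*pi)
assert abs(dS - 2/(3*pi)) < 1e-12   # = 0.2122..., the quoted ~0.21
```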
On the other hand, a nondegenerate heavy fermion doublet $(\psi_1,\,\psi_2)$ with masses $(M_1,M_2)$ can yield a sizable positive $T$ which, in the limit $|M_1-M_2|\ll M_{1,2}$, reads[@STU; @veltman] $$\begin{aligned} \Delta T&\simeq& \dis\f{N_{cj}}{12\pi s_W^2c_W^2}\(\f{M_1-M_2}{m_Z}\)^2 \,, \label{eq:NDHF-T}\end{aligned}$$ where $s_W=\sin\theta_W$ with $\theta_W$ being the weak angle. Such nondecoupling effects of heavy chiral fermions are due to the dependence of their masses on the Yukawa couplings, which necessarily violates the decoupling theorem[@Decouple]. The heavy (chiral) fermion corrections (\[eq:DHF-S\]) and (\[eq:NDHF-T\]) are inconsistent with electroweak data (when considered separately), and are often the basis for ruling out such heavy fermion scenarios[@EP]. (This is contrary to the case of vector-like fermions whose contributions to all oblique parameters decouple as $1/M^2$ and which play a crucial role, for instance, in the recent top-quark seesaw models with either vector singlet[@DH] or doublet[@He] heavy fermions.) One expects models with relatively light extra chiral fermions to also receive non-trivial constraints from the electroweak quantum corrections, though the nature of the constraints may be very different. In this work, we study the oblique corrections from such relatively light new fermions \[cf. eq.(\[eq:mirrorF-MR\])\], as well as from the Higgs sector which generates the chiral fermion masses. Since the extra fermions under consideration are relatively light, they can have a sizable mass-splitting, such as $|M_1-M_2|\sim m_Z \not\ll M_{1,2}$, without causing an unacceptably large $T$. At the same time, the $S$-parameter may receive additional negative corrections. Interestingly, a single relatively heavy SM Higgs boson leads to a sizable negative contribution to $T$, and thus allows for a larger isospin breaking in the fermion sector.
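The small-splitting estimate (\[eq:NDHF-T\]) can be compared against the exact one-loop result. A caveat on the sketch below (ours, not the paper's): the closed form taken for the isospin-breaking function is the standard $\theta_+$ of the $S,T,U$ literature, assumed here because the paper defines its own $F$ only in the appendix.

```python
from math import pi, log

# Check of the small-splitting estimate, eq. (eq:NDHF-T), against the exact
# one-loop isospin-breaking function theta_+ (standard form -- an assumption).
sW2 = 0.231
cW2 = 1.0 - sW2
mZ = 91.19  # GeV

def theta(x, y):
    """theta_+(x, y) = x + y - 2xy/(x-y) * ln(x/y), for x != y."""
    return x + y - 2.0*x*y/(x - y)*log(x/y)

def dT_exact(Nc, M1, M2):
    return Nc*theta(M1**2, M2**2)/(16*pi*sW2*cW2*mZ**2)

def dT_approx(Nc, M1, M2):  # eq. (eq:NDHF-T)
    return Nc/(12*pi*sW2*cW2)*((M1 - M2)/mZ)**2

# For a mildly split heavy quark doublet the two agree to better than 1%:
e, a = dT_exact(3, 500.0, 480.0), dT_approx(3, 500.0, 480.0)
assert abs(e - a)/a < 0.01
```

For a light doublet with $|M_1-M_2|\sim m_Z$, as considered in this paper, the quadratic estimate is no longer reliable and the full expressions of the next section are needed.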
For one extra fermion family with a proper spectrum, a SM Higgs boson as heavy as 500GeV is found to be consistent with the precision electroweak data. Such an interplay is nontrivial, and as we will show, in order to accommodate up to three new families, an extended Higgs sector with two Higgs doublets (and with a highly constrained spectrum) has to be considered. We begin, in Sec.\[sec:oblique1\], with a summary of the definitions of the oblique parameters $\STU$ and their current experimental bounds, and examine in detail the contributions in the extra lepton-quark sector and the two-Higgs-doublet sector. We study the interplay between the fermion and Higgs sectors in Sec.\[sec:N2\], where $\STU$ bounds are imposed to derive the allowed parameter space. This is done first in the simplest case of a single extra fermion family and one Higgs doublet, and then in the case of three extra fermion families and two Higgs doublets. Low energy $N=2$ supersymmetry, which provides an explicit theoretical framework in the latter case, is briefly reviewed as well. We conclude in Sec.\[sec:sum\]. The Appendix summarizes the complete formulae for the two-Higgs-doublet contributions to $\STU$. New Physics Corrections to Oblique Parameters {#sec:oblique1} ============================================= The Oblique Parameters and Current Bounds ----------------------------------------- The oblique $\STU$ parameters [@STU] can be defined as $$\begin{aligned} S&=&-16\pi\frac{\Pi_{3Y}(m_Z^2)-\Pi_{3Y}(0)}{m_Z^2} \,, \label{eq:S}\\[2mm] T&=&4\pi\frac{\Pi_{11}(0)-\Pi_{33}(0)}{s_W^2c_W^2m_Z^2} \,, \label{eq:T}\\[3mm] U&=&16\pi\frac{[\Pi_{11}(m_Z^2)-\Pi_{11}(0)] -[\Pi_{33}(m_Z^2)-\Pi_{33}(0)]} {m_Z^2} \,, \label{eq:U}\end{aligned}$$ where the weak-mixing angle $\theta_W$ is defined at the scale $\mu =m_Z$. In eqs.
(\[eq:S\])-(\[eq:U\]), $\Pi_{11}$ and $\Pi_{33}$ are the vacuum polarizations of isospin currents, and $\Pi_{3Y}$ is the mixed vacuum polarization of the isospin and hypercharge currents. The above definitions[^1] slightly differ from the original ones [@STU] for $(S,U)$ since we use the differences of $\Pi$-functions rather than their first derivatives (with higher powers of $q^2=m_Z^2$ truncated). Eqs.(\[eq:S\])-(\[eq:U\]) are more appropriate for our current analysis in which the scale of the relevant new fermions is relatively low. The new physics corrections to $\STU$ are defined relative to their SM reference point and are often denoted by $(S_{\rm new},\,T_{\rm new},\,U_{\rm new})$. To simplify the notation, we will omit these subscripts hereafter. In certain cases, three additional oblique parameters $(V,\,W,\,X)$[@VWX], which are generally less visible, may be further included in fitting the data. This more elaborate procedure is beyond the scope of the current work and is not expected to affect our main conclusions. \[The contributions of the new fermions to $(V,\,W,\,X)$ drop quickly as their masses increase beyond the $Z$-pole and become well below the dominant oblique corrections [@VWX].\] Also, the absence of mixing between new fermions and the SM fermions implies no extra flavor-dependent vertex corrections to the fermionic $Z$-decay width, which makes the oblique corrections sufficient for describing the new physics in our case.
The updated global fit of $\STU$ to the various precisely measured electroweak observables (such as the gauge boson masses $(m_Z,m_W)$, the $Z$-width $\Gamma_Z$, and the $Z$-pole asymmetries, etc) [@data; @Osaka] gives[^2]: $$\begin{aligned} S&=& -0.04\pm{0.11}\, (-0.09) \,, \nonumber \\ T&=& -0.03\pm{0.13}\, (+0.09) \,, \label{eq:STU-fit} \\ U&=&\ \ \, 0.18\pm{0.14}\, (+0.01) \,, \nonumber\end{aligned}$$ where the central values correspond to the SM Higgs mass reference point, $\mHsm=100$GeV, while the values given in the parentheses show the changes for $\mHsm=300$ GeV. The uncertainties in (\[eq:STU-fit\]) are from the inputs. The $S$ and $T$ parameters are strongly correlated as shown in the $95\%$C.L. contours of Fig.\[fig:stuexp\]. Variations in $U$ mainly shift the $S-T$ contour without affecting its shape and direction, and a larger positive $U$ tends to diminish the allowed regions of positive $(S,\,T)$. The “$\times$” symbols in Fig.\[fig:stuexp\] represent the SM Higgs contributions to $S$ and $T$ for different $\mHsm$ values relative to $\mHsm =100$ GeV. $U$ is insensitive to $\mHsm$ for $\mHsm\gtrsim 200$GeV. An important feature of the SM Higgs corrections is that as $\mHsm$ increases, $S$ becomes more positive while $T$ is driven to more negative values. As such, a SM Higgs with a mass $\mHsm \gtrsim 300$GeV is clearly outside the $95\%$C.L. $S-T$ contours for a wide range of $U$ values.[^3] However, including certain types of new physics contributions to $\STU$ may drastically relax the upper bound on the Higgs mass, as long as the new corrections either [ *(i) decrease $S$*]{}, or [*(ii) raise $T$,*]{} or [*(iii) achieve both*]{}. As we will show in the following sections, the extra fermions under consideration generally lead to a large positive $T$, and in many cases also to a sizable $S>0$. Hence, our analysis will mainly fall under Case [*(ii)*]{}.
Lepton and Quark Sector ----------------------- For generality, we consider two fermions $(\psi_1,\psi_2)$, with masses $(M_1, M_2)$ and the following SM charges, $$\begin{array}{lccc} {\rm Fermions:} & \displaystyle \psi_L = \left( \begin{array}{c} \psi_{1L}\\ \psi_{2L} \end{array} \right)\,,~~~~ & \psi_{1R}\,,~~~~ & \psi_{2R}\,;\\[3mm] {\rm Hypercharge:} & Y\,,~~~~ & \displaystyle Y+\frac{1}{2},~~~~ & \displaystyle Y-\frac{1}{2}; \end{array} \label{eq:charge}$$ where the electric charge is given by $Q_j = I_{3j}+Y_j$ with $I_{3j}$ and $Y_j$ being the third component of weak-isospin and the hypercharge of the fermion $j$, respectively. For SM fermions, one has $Y=\f{1}{6}\,\(-\f{1}{2}\)$ in eq. (\[eq:charge\]) for quarks (leptons). For mirror fermions in the Minimal $N=2$ Supersymmetric SM (MN2SSM) [@N2new], one has $Y=-\f{1}{6}\,\(\f{1}{2}\)$ in eq. (\[eq:charge\]) for mirror quarks (mirror leptons). (For a review on the MN2SSM, see Sec. \[subsec:MN2SSM\].) Hence, the correspondence with eq.(\[eq:charge\]) is, $(M_1, M_2)\leftrightarrow{(M_{\nu}, M_\ell )}$ for leptons and $(M_1, M_2)\leftrightarrow{(M_{\ell'}, M_{{{\nu}^{\prime}}})}$ for mirror leptons, and similarly for the quarks and mirror quarks. Using eqs.(\[eq:S\])-(\[eq:U\]), we can compute the one-loop fermionic contributions to the oblique $\STU$ parameters as below, $$\begin{aligned} S_f&= & \frac{N_c}{6\pi} \left\{ 2(4Y+3)x_1+2(-4Y+3)x_2-2Y\ln\frac{x_1}{x_2} \right.\nonumber \\[1mm] &&\left. +\left[\left(\frac{3}{2}+2Y\right)x_1+Y\right] G(x_1) +\left[\left(\frac{3}{2}-2Y\right)x_2-Y\right] G(x_2) \right\}\,, \label{eq:Sfermion}\\[3mm] T_f&=&\frac{N_c}{8\pi{s}_W^2c_W^2}F(x_1,x_2) \,, \label{eq:Tfermion}\\[3mm] U_f&=&-\frac{N_c}{2\pi}\left\{\frac{x_1+x_2}{2}-\frac{(x_1-x_2)^2}{3} +\left[\frac{(x_1-x_2)^3}{6}-\frac{1}{2}\,\frac{x_1^2+x_2^2}{x_1-x_2}\right] \ln\frac{x_1}{x_2}\right. 
\nonumber\\[1mm] &&+\left.\frac{x_1-1}{6}f(x_1,x_1)+\frac{x_2-1}{6}f(x_2,x_2)+ \left[\frac{1}{3}-\frac{x_1+x_2}{6}- \frac{(x_1-x_2)^2}{6}\right]f(x_1,x_2)\right\} , \label{eq:Ufermion}\end{aligned}$$ where $x_i=(M_i/m_Z)^2$ with $i=1,2$ and the color factor $N_{c} = 3\,(1)$ for quarks (leptons). The functions $G(x)$, $F(x_1,x_2)$, and $f(x_1,x_2)$ are defined by eqs.(\[eq:Gfun\]), (\[eq:Ffun\]) and (\[eq:ffun\]), in the Appendix. We observe that for a given $(M_1,\,M_2)$, eq.(\[eq:Sfermion\]) is invariant under the exchanges of $Y\leftrightarrow{-}Y$ and $M_1\leftrightarrow{M}_2$, so that the fermions $(\psi_1,\,\psi_2)$ and their mirrors $(\psi_2,\,\psi_1)$ have the same expression for $S$. Therefore, we will not distinguish hereafter between a fermion and its mirror, but simply use $(M_1,\,M_2)$ to denote $(M_N,\, M_E)$ in the (mirror) lepton sector and $(M_U, \,M_D)$ in the (mirror) quark sector. It is instructive to consider the limit $M_{1,2}^2\gg{m_Z^2}$, under which the $S$ parameter approximately reads, $$S_f=\frac{N_c}{6\pi}\left[1-2Y\ln\left(\frac{M_1}{M_2}\right)^2+ \frac{1+8Y}{20}\left(\frac{m_Z}{M_1}\right)^2+ \frac{1-8Y}{20}\left(\frac{m_Z}{M_2}\right)^2 +O\left(\frac{m_Z^4}{M_i^4}\right)\right] . \label{eq:Sapp}$$ If the mass splitting $|M_1 - M_2|/M_{1,2}$ is small, then all mass-dependent terms decouple and eq. (\[eq:Sapp\]) reduces to the positive constant term $N_c/6\pi$, which leads to the well-known result in eq.(\[eq:DHF-S\]). However, as long as $(M_1,\,M_2)$ are non-degenerate and not too large, additional negative corrections to the constant term $N_c/6\pi$ may arise, depending on the sign of the hypercharge $Y$. The contributions to $S$ from one generation of either ordinary or mirror leptons and quarks are shown in Fig.\[fig:stufermion\](a), where the solid curves are for leptons and dotted curves for quarks. The masses of the chiral fermions are chosen to be between 50 GeV and 300 GeV.
(We note that after adding experimental bounds on the charged extra fermions, the lower end of their mass range would be shifted somewhat above $50$GeV, depending on the details of each particular model.) The lepton contribution to $S$ grows with an increasing $M_1$ ($M_N$) and with a decreasing $M_2$ ($M_E$), while the quark contribution behaves in the opposite way. This is due to their different signs of $Y$. The quark contribution is enhanced by the color factor, but is suppressed by the smaller $|Y|$. For $M_{1,2}^2 \gg{m}_Z^2, (M_1-M_2)^2$, $S$ should approach its asymptotic value $1/6\pi$ for leptons and $1/2\pi$ for quarks. This may be understood from Fig.\[fig:stufermion\](a) by examining the solid (dotted) curve with $M_2=300$GeV which already closely approaches $\sim\!0.05~(0.16)$ for leptons (quarks) as $M_1$ increases to about $300$GeV. However, for quarks and leptons with masses $\sim\CO(m_Z)$, smaller and even negative values of $S$ can be obtained. Negative values of $S$ occur in the non-degenerate region of $M_E>M_N$ and $M_U>M_D$. For instance, $(M_N,\,M_E)=(50,\,300)$GeV gives $S_\ell =-0.18$. The contributions to $T$ and $U$ from chiral fermions are depicted in Fig.\[fig:stufermion\](b) and (c). The parameters $T$ and $U$ measure the weak-isospin violation in the ${\rm SU}(2)_L$ doublet and thus are nonvanishing only for $M_1\neq M_2$. The more $M_1$ and $M_2$ split, the larger their contributions to $(T_f,\,U_f)$ become. Furthermore, the $(T_f,\,U_f)$ formulae eqs.(\[eq:Tfermion\]) and (\[eq:Ufermion\]) are invariant under the exchange $M_1\leftrightarrow{M}_2$ and are always positive, unlike the contributions of the Higgs boson (cf., Fig.1). While ${U}_f$ is relatively small, ${T}_\ell$, for example, could be as large as 0.68 for $(M_N,\, M_E)=(100,300)$ GeV. Since $(T_f,\,U_f)$ depend only on isospin-breaking and are symmetric under $M_1\leftrightarrow M_2$, their $M_{1,2}$-dependence is the same for fermions and mirror fermions.
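The $S_f$ behavior described above (growth directions and asymptotic values) can be reproduced from the truncated expansion (\[eq:Sapp\]) alone. The following is a rough sketch of ours: the exact $S_f$ of eq. (\[eq:Sfermion\]) requires the appendix function $G(x)$ and is not attempted here.

```python
from math import pi, log

# Large-mass expansion eq. (eq:Sapp), truncated at O(mZ^2/M^2); an
# illustration only -- valid for M1, M2 well above mZ.
mZ = 91.19  # GeV

def Sf_asym(Nc, Y, M1, M2):
    return Nc/(6*pi)*(1 - 2*Y*log((M1/M2)**2)
                      + (1 + 8*Y)/20*(mZ/M1)**2
                      + (1 - 8*Y)/20*(mZ/M2)**2)

# Degenerate asymptotic values: 1/(6 pi) for leptons, 1/(2 pi) for quarks:
assert abs(Sf_asym(1, -0.5, 300, 300) - 1/(6*pi)) < 5e-3
assert abs(Sf_asym(3, 1/6, 300, 300) - 1/(2*pi)) < 5e-3
# S_lepton grows with M1 (= M_N) and shrinks with M2 (= M_E), as in the text:
assert Sf_asym(1, -0.5, 150, 300) < Sf_asym(1, -0.5, 300, 150)
```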
The quark contributions to $(T_f,\,U_f)$ are again enhanced by their color factor. In order to accommodate new fermion families, the up- and down-type (mirror) quarks have to be sufficiently degenerate to avoid too large a positive $T$. Unfortunately, this renders $S$ positive in most of the parameter space. A non-degenerate pair of (mirror) leptons could help to satisfy the $S$ constraint, but it also contributes positively to $T$ (though more moderately compared to quarks). A positive contribution to $U$ can better fit the data, but it is numerically less significant, as shown in Fig.\[fig:stufermion\](c). Clearly, the nontrivial correlations among lepton and quark contributions to all three oblique parameters (rather than to any particular one) provide the most significant constraints. In order to compare the theoretical predictions with the current experimental constraints shown in Fig.\[fig:stuexp\], it is very instructive to depict the above fermionic oblique corrections eqs. (\[eq:Sfermion\])-(\[eq:Ufermion\]) in the $S-T$ plane for given values of $U$. This corresponds to a set of “$U$-contours” in the theoretically allowed regions of the $S-T$ plane, which should be directly compared to the experimental bounds of Fig.\[fig:stuexp\]. In Figs.\[fig:stulepton\] and \[fig:stuquark\], we plot various $U$-contours in the $S-T$ plane for one family of leptons and of quarks, respectively. For leptons, $S_\ell$ can be negative in large regions of the parameter space. For quarks, $S_q>0$ in most of the parameter space, so as to avoid too large a contribution to $T_q$. Although a positive $U_{f}$ is consistent with the data, $T_{f}$ provides a very strong constraint when combined with $S_{f}$. Nevertheless, comparing with the $S-T$ fits in Fig.\[fig:stuexp\], one finds that one extra chiral family is viable, even without additional new physics contributions.
This is consistent with the recent study in Ref.[@previous], where a similar conclusion was reached. Ref.[@previous] used an unconventional formalism for analyzing the oblique corrections and a detailed comparison is difficult. Our analysis, based on the standard $\STU$ formalism[@STU], is transparent and can be readily applied to a given model. In what follows, we focus on the interplay between extra families and the Higgs sector. We aim at accommodating up to three chiral families (as theoretically motivated by our recent $N=2$ constructions [@N2new]), which requires extending the Higgs sector to two doublets. In this respect, our study substantially differs from Ref.[@previous]. Finally, we note that it should be straightforward to translate the above Figs.\[fig:stulepton\] and \[fig:stuquark\] to any number $N_{g}$ of extra generations, i.e., for $N_g>1$ the same curves represent the values $\STU/N_g$, provided the new generations are degenerate in mass with each other. However, it is extremely difficult to accommodate more than one extra generation with the data. We will return to this issue in Sec. \[sec:N2\]. Two Higgs Doublet Sector {#sec:2hdm} ------------------------ The exact corrections to $\STU$ in a general two-Higgs-doublet model (2HDM) have been computed in Ref. [@haber]. We will denote these contributions by $S_H$, $T_H$, and $U_H$, respectively. Their explicit formulae are lengthy and are summarized in the Appendix for completeness. For $N=1$ supersymmetry, and in particular the $N=1$ minimal supersymmetric extension of the SM (MSSM) (with high-scale supersymmetry breaking), the Higgs contributions are generally small due to the tree-level constraints among the masses of the light and heavy CP-even, the CP-odd, and the charged Higgs bosons, ($m_h,\, m_H,\, m_A,\, m_{H^{\pm}}$, respectively).
However, for a two-Higgs-doublet sector with a general Higgs mass spectrum, significant contributions can arise in large regions of the parameter space. Such a non-MSSM-like Higgs spectrum may be realized in an $N=1$ or $N=2$ supersymmetry scenario with a sufficiently low scale of supersymmetry breaking [@hard]. The contribution $T_H$ could be either positive or negative, depending on the spectrum of the Higgs masses and on the difference between the two rotation angles ($\beta-\alpha$), where $\tan\beta = \langle H_2\rangle/ \langle H_1\rangle$ \[with $H_{1}$ ($H_{2}$) being the Higgs doublet of negative (positive) hypercharge\] and $\alpha$ is the rotation angle for obtaining the CP-even mass-eigenstates $(h^0,\,H^0)$. The $T$-contours in the $(m_h,m_H)$ plane for $m_A=1000$GeV and $m_A=100$GeV are shown in Figs.\[fig:Thiggs1000\] and \[fig:Thiggs100\] for $\beta-\alpha=\pi$ (solid line), $\f{3\pi}{4}$ (dash-dotted line) and $\f{\pi}{2}$ (dotted line), where $m_{H^{\pm}}$ is chosen so as to minimize $T_H$. A negative contribution to $T_H$ can always be achieved with an appropriately chosen $m_{H^{\pm}}$. (This was also noted in Ref.[@grant].) For some values of $m_{H^{\pm}}$, $T_H$ could be positive and large; however, we will concentrate hereafter only on the more interesting regions with negative $T_H$. The regions which correspond to a sizable negative $T_H$ can be classified as follows: - [Large $m_A$: (Ia) $m_h\ll m_{H^{\pm}}\ll{m}_A$,  $\beta-\alpha\sim\pi$;\ (Ib) $m_h\sim{m}_H \ll m_{H^{\pm}}\ll{m}_A$;]{} - [Small $m_A$: (IIa) $m_A\ll{m}_{H^{\pm}}\ll m_H$,  $\dis\beta-\alpha\sim\f{\pi}{2}$;\ (IIb) $m_A\ll{m}_{H^{\pm}}\ll m_h\sim{m}_H$;]{} where the minimum value for $T_H$ is achieved for $m_{H^{\pm}}\simeq{0.6}\ m_{\rm heavy}$ and $m_{\rm heavy}=\max(m_H,\,m_A)$.
This can be understood by examining the approximate formula for $T_H$ in the limit $m_{\rm Higgs}^2\gg{m_Z}^2$ [@grant]: $$\begin{aligned} T_H&=&\frac{1}{16\pi{s}^2_Wm^2_W}\left\{\cos^2({\beta-\alpha}) \[F(m_{H^{\pm}}^2, m_h^2)+F(m_{H^{\pm}}^2, m_A^2)-F(m_{A}^2, m_h^2)\]\right. \nonumber \\ &&\hspace*{19.3mm}+\left.\sin^2({\beta-\alpha}) \[F(m_{H^{\pm}}^2, m_H^2)+F(m_{H^{\pm}}^2, m_A^2)-F(m_{A}^2, m_H^2)\]\right\}, \label{eq:thiggsapp}\end{aligned}$$ where $F(x_1,x_2)$ is defined in eq.(\[eq:Ffun\]). \[The approximate formulae for ($S_H,\,U_H$) are given in the Appendix for completeness.\] Terms inside the first (second) bracket are symmetric in $m_{h(H)}$ and $m_A$, and could obtain large negative values if there is a large split between $m_{h(H)}$ and $m_A$ and $m_{\rm light}\ll{m}_{H^{\pm}}\ll{m}_{\rm heavy}$. For $\beta-\alpha=\pi \,[{\pi\over 2}]$, we have $\sin^2({\alpha-\beta})=0 \,[\cos^2({\alpha-\beta})=0]$, so that only the first (second) bracket contributes, which is independent of $m_H$ ($m_h$). This is the case in regions (Ia) and (IIa). For general values of $\beta-\alpha$, $m_h$ and $m_H$ have to be sufficiently close in order for $T_{H}$ to be large and negative. This is the case in regions (Ib) and (IIb). We also notice that in Figs. \[fig:Thiggs1000\] and \[fig:Thiggs100\], each set of $T_H$-contours approaches the same point at the boundary of $m_h = m_H$. This is because the dependence on $\beta - \alpha$ disappears in this limit \[see eqs. (\[eq:Thiggs\]) and (\[eq:thiggsapp\])\]. We note that the parameter $T_H$ can be as negative as $-2.5$, and could cancel large positive contributions from the quark and lepton sector when more than one extra family is included. $S_H$ and $U_H$ are relatively small in these two regions, where one typically has a small positive $S_H<0.1$ and a negative $U_H$ with $|U_H|<0.02$. In case (Ia), a sizable positive $S_H\sim{0.16}$ and a slightly negative $U_H\sim{-0.05}$ are also possible.
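A rough numerical sketch (ours) of eq. (\[eq:thiggsapp\]) illustrates region (Ia). The closed form of $F(x,y)$ used below is the standard isospin-breaking function, assumed here since the paper's eq. (\[eq:Ffun\]) sits in the appendix; the numbers are therefore indicative only.

```python
from math import pi, log, sin, cos

# Rough evaluation of the large-mass approximation, eq. (eq:thiggsapp).
# F(x, y) below is the standard isospin-breaking function -- an assumption,
# since the paper's own definition is given in its appendix.
sW2 = 0.231
mW = 80.4  # GeV

def F(x, y):
    return (x + y)/2.0 - x*y/(x - y)*log(x/y) if x != y else 0.0

def T_H(mh, mH, mA, mHpm, bma):  # bma = beta - alpha
    c2, s2 = cos(bma)**2, sin(bma)**2
    pref = 1.0/(16*pi*sW2*mW**2)
    return pref*(c2*(F(mHpm**2, mh**2) + F(mHpm**2, mA**2) - F(mA**2, mh**2))
                 + s2*(F(mHpm**2, mH**2) + F(mHpm**2, mA**2) - F(mA**2, mH**2)))

# Region (Ia): m_h << m_H+- << m_A with beta - alpha = pi and
# m_H+- ~ 0.6 m_A gives a sizable negative T_H, as described in the text:
assert T_H(120.0, 500.0, 1000.0, 600.0, pi) < -1.0
```

Note how the hierarchy $m_h\ll m_{H^{\pm}}\ll m_A$ makes the first two $F$ terms small relative to $F(m_A^2,m_h^2)$, which enters with a minus sign and drives $T_H$ negative.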
Clearly, the Higgs spectrum in these two regions is very different from that of the conventional $N = 1$ MSSM. Even in the case of a more general supersymmetry-breaking scenario[@hard], realizing such a spectrum requires some fine-tuning of the mass parameters and the quartic couplings. In principle, such relations are easier to realize in models with more than two Higgs doublets (such as $N=2$ supersymmetry), where more Higgs states can exist at the scale $m_{\rm heavy}$ or above, thus considerably expanding the parameter space. The correlations between the spectra of the minimal one- or two-Higgs-doublet sector and the additional chiral families via the precision $\STU$ constraints will be systematically analyzed in the next section.

Other Super and Mirror Particles
--------------------------------

The contributions of the $N=1$ sparticles, with a typical mass scale $M_{\rm SUSY}$, to the oblique parameters are generally small in the decoupling region $M_{\rm SUSY}\gg{m}_Z, m_t$, which we will assume in our analysis for simplicity. In practice, this only requires $M_{\rm SUSY} \gtrsim 300$ GeV, as shown in Refs. [@haber; @stop; @sparticle]. Aside from sfermions and mirror sfermions, there could also be visible contributions from Majorana fermions, such as gauginos, Higgsinos, and, in $N=2$, mirror gauginos and Higgsinos. In general, contributions from Majorana fermions to $S$ could have either sign [@previous; @majorana]. In our current study we concentrate on the contributions of the Higgs bosons and of (mirror) quarks and leptons. For simplicity, the effects from sfermions and Majorana fermions are assumed to be negligible; this is indeed the case in the decoupling regime $M_{\rm SUSY}\gg{m}_{Z},m_t$ under consideration. Clearly, an arbitrary spectrum of sparticles and/or mirror gauginos would add more degrees of freedom to fit the data and thus further relax the correlations derived in the next section. A more elaborate analysis including these complications is left for future work.
Spectra of Extra Fermions and Higgs Bosons:\ The Interplay {#sec:N2}
============================================

Interplay of Extra Fermions and One-Higgs-Doublet Sector
--------------------------------------------------------

We begin by considering the simplest case with one extra (mirror) family and one (SM) Higgs doublet. We display in Figs.\[fig:oneg\](a) and (b) the $M_1-M_2$ plane, where each point represents an experimentally viable four-generation model, and dots and circles represent leptons and quarks, respectively. The initial sample consists of 10000 models. We choose, for illustration, a light SM Higgs with mass $\mHsm=100$GeV \[cf., Fig.\[fig:oneg\](a)\] and a heavy SM Higgs with mass $\mHsm=500$GeV \[cf., Fig.\[fig:oneg\](b)\]. Large regions of the parameter space are allowed, where the preferred regions are given by $M_2>M_1$ for leptons and $M_1>M_2$ for quarks. For a heavy Higgs boson, $\mHsm=500$GeV, the leptons and quarks occupy different mass regions, while in the case of a very light Higgs boson they largely overlap. Future discoveries of light extra lepton/quark spectra can provide important information about the Higgs boson mass range, and vice versa. Figs.\[fig:oneg\](c) and (d) display the corresponding points in the $S-T$ plane with the $\95CL$ experimental bounds superimposed for $\mHsm=100$GeV and 500GeV, respectively. From Fig.\[fig:oneg\](d), we see that for one extra chiral family, a heavy SM Higgs with $\mHsm=500$GeV can be accommodated via the scenario of large fermionic $T_f>0$. We note in passing that, after the completion of this work, Ref. [@PeskinNew] analyzed the limits on a heavy SM Higgs boson in the case of TeV-scale heavy technifermions, which generate a large positive contribution to $T$. Our study has solely focused on relatively light extra chiral families with masses significantly below $\CO ({\rm TeV})$, as motivated by $N=2$ supersymmetry constructions[@N2old; @N2new].
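The role of the fermionic isospin splitting invoked here can be made concrete with the standard one-loop expression for the $T$ contribution of a single chiral doublet (see, e.g., Peskin and Takeuchi), $T_f = N_c\,[m_1^2+m_2^2-2m_1^2m_2^2\ln(m_1^2/m_2^2)/(m_1^2-m_2^2)]/(16\pi s_W^2 m_W^2) = 2N_c F(m_1^2,m_2^2)/(16\pi s_W^2 m_W^2)$ in terms of the function $F$ of eq.(\[eq:Ffun\]). The numerical inputs below ($s_W^2=0.231$, $m_W=80.4$GeV, and the sample masses) are illustrative assumptions, not values prescribed by the text.

```python
import math

def F(x1, x2):
    # F(x1, x2) of eq. (eq:Ffun); vanishes for degenerate masses
    if x1 == x2:
        return 0.0
    return (x1 + x2) / 2 - x1 * x2 / (x1 - x2) * math.log(x1 / x2)

def T_doublet(m1, m2, Nc, sW2=0.231, mW=80.4):
    """Standard one-loop T from one chiral fermion doublet (masses in GeV)."""
    return 2 * Nc * F(m1**2, m2**2) / (16 * math.pi * sW2 * mW**2)

T_split = T_doublet(60, 250, Nc=1)   # strongly split extra-lepton doublet
T_deg   = T_doublet(200, 200, Nc=3)  # degenerate extra-quark doublet
```

A single strongly split lepton doublet already yields $T_f\approx 0.6$, while a degenerate doublet contributes nothing, illustrating why split spectra can compensate the shift in $T$ induced by a heavy Higgs boson.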
The relaxation of the Higgs mass limits derived from precision electroweak data could be significant in either case.

Minimal $N=2$ Supersymmetric SM and Mirror Families {#subsec:MN2SSM}
---------------------------------------------------

Before proceeding to discuss the case with three extra chiral families and two Higgs doublets, a review of the theoretical framework which motivates this scenario is in order. As mentioned earlier, this spectrum arises in constructions of low-energy $N=2$ supersymmetry. Low-energy realizations of $N=2$ supersymmetry and the related phenomenology were recently investigated in Ref.[@N2new]. In the minimal $N=2$ supersymmetric SM, for each ordinary quark (lepton) and its squark (slepton) superpartner of the $N=1$ extension, there is also a conjugate [*mirror*]{} quark ([*mirror*]{} lepton) and its mirror squark (mirror slepton) superpartner. For each gauge boson and gaugino, there is also a [*mirror*]{} gauge boson and a [*mirror*]{} gaugino. The Higgs and Higgsino are also accompanied by their mirrors. In particular, three additional mirror generations of chiral fermions are predicted in the MN2SSM. The mirror quarks and leptons do not obtain gauge-invariant vectorial mass terms (which would mix the mirror and ordinary sectors) due to a $Z_{2}$ mirror parity [@N2new]. Instead, their masses arise from effective Yukawa interactions and are thus proportional to the relevant Higgs VEVs of electroweak symmetry breaking (EWSB). As such, their mass range is constrained to be at the weak scale \[cf., eq.(\[eq:mirrorF-MR\])\]. In order to realize $\CO(1)$ effective Yukawa couplings at low energies, supersymmetry itself must be broken at a low scale. The large Yukawa couplings also imply that mirror fermion/sfermion loops can significantly modify the CP-even Higgs spectrum at one loop. (This is similar to the usual top/stop sector, but now all three mirror families may contribute.)
The MN2SSM Higgs sector is less constrained than that of the MSSM or other $N=1$ frameworks. In particular, any one of the four Higgs doublets which appear in the MN2SSM [@N2new] could participate in EWSB. Even when assuming for simplicity an MSSM-like Higgs structure with two doublets participating in the EWSB, the $N=2$ two-Higgs-doublet spectrum could be quite different from that of the MSSM. This is because the tree-level Higgs quartic couplings $\lambda$ arise not only from supersymmetric terms $\lambda \sim g^{2}$, for $g$ being the gauge coupling, as in the MSSM, but also from hard supersymmetry-breaking operators (whose generation goes hand in hand with that of the effective Yukawa couplings), $\lambda_i \sim g^{2} + \kappa_i$ [@hard], where $\kappa_i$ is the contribution from higher-order operators in the Kähler potential. Therefore, the usual MSSM relations among the Higgs mass eigenvalues, $m_h\ll{m}_H\sim{m}_A\sim{m}_{H^\pm}$ (assuming $m_A\gg{m}_Z$), no longer hold, and the physical Higgs mass spectrum is somewhat arbitrary. This observation is generic to any theory with low-energy supersymmetry breaking where $\kappa \sim {\cal{O}}(1)$ is realized [@hard]. We note in passing that models with extra dimensions often lead, after compactification, to an effective $N=2$ structure in four dimensions. Therefore, our analysis of $N=2$ models and of the associated mirror families may be applied in certain cases to theories with large extra dimensions.

Interplay of Extra Fermions and Two-Higgs-Doublet Sector
--------------------------------------------------------

It was shown above that one extra chiral generation ($N_g = 1$) can be accommodated by the precision data with the SM Higgs mass up to about $500$GeV. This is not the case for $N_{g} = 2$ and $N_{g} = 3$. In fact, the $N_g =3$ case, as predicted in the MN2SSM, requires additional new physics contributions (beyond that of a single Higgs doublet) to the oblique parameters.
The minimal version of such an extension is to invoke a two-Higgs-doublet sector. For generality (and consistency with the $N=2$ framework described above), we will consider a general 2HDM. Thus, our analysis is valid for any given model which contains two Higgs doublets together with extra families, and our constraints on the parameter space can be readily applied to any such model. The two-Higgs-doublet sector can lead to a large negative $T_H$ (cf. Sec.\[sec:2hdm\]) which can cancel in large part the three-family fermionic $T_f$, and render the sum $T=T_f+T_H$ consistent with the experimental bounds over certain regions of the parameter space. For simplicity, we will assume the second and third families to have the same mass spectrum as the first family. This interplay is explored in Fig.\[fig:threeg\], which is based on an initial sample of 50000 models. Allowed models are determined by imposing the $\95CL$ bounds on $\STU$. Figs.\[fig:threeg\](a) and (b) display the extra-fermion and Higgs-boson spectra, respectively. We choose, for illustration, a typical set of Higgs inputs $(m_h,\,m_H,\,m_A,\,m_{H^\pm}) = (115,\,120,\,1000,\,580)$GeV and $\beta-\alpha=3\pi/4$ in (a), and a set of fermionic inputs $(M_N,\,M_E)=(60,\,250)$GeV, $(M_U,\,M_D)=(250,\,200)$GeV in (b), where three values of $\beta-\alpha$ are shown. Figs.\[fig:threeg\](c) and (d) display the allowed points, with the same inputs as (a) and (b), respectively, in the $S-T$ plane for comparison with the experimental bounds. Varying $m_h$ within the $\sim\! 100-200$GeV range does not change the results. Also, for clarity, only $\beta-\alpha =\pi$ is shown in (d), but similar results are obtained for $\beta-\alpha =\pi/2$ or $3\pi/4$.
The choice of the above Higgs inputs in Figs.\[fig:threeg\](a) and (c) corresponds to a small allowed region in the $M_1 - M_2$ plane, i.e., the mirror leptons (dots) are highly non-degenerate, while the mirror quarks (circles) exhibit much smaller isospin breaking. Similar results could be obtained for other choices of Higgs masses for which the Higgs contribution $T_H$ is sizable and negative. From Fig.\[fig:threeg\](b), one observes that the allowed regions are quite distinct for the three choices of $\beta -\alpha \in (\pi/2,\,3\pi/4,\,\pi)$. For $\beta -\alpha=\pi$, $m_H$ could vary in a wide range \[corresponding to case (Ia)\] for $m_{H^\pm}\sim 800$GeV. In all other cases, the heavier neutral Higgs $H^0$ generally has to be much lighter than 1TeV. It is interesting to note that for $m_{H^0} \lesssim 200$GeV (i.e., slightly heavier than $h^0$), the charged Higgs mass $m_{H^\pm}$ is confined to two very narrow regions, around either $350-450$GeV or $750-800$GeV, for a sizable range of $\beta-\alpha$. Finally, Figs.\[fig:threeg\](c) and (d) indicate that the relevant viable parameter space typically corresponds to $0\lesssim S \lesssim 0.2$ and $-0.1 \lesssim T \lesssim 0.2$. In comparison with the scenario of one generation and one Higgs doublet \[cf., Fig.\[fig:oneg\](c)-(d)\], the viable region in the $S-T$ plane of Fig.\[fig:threeg\](c)-(d) has a smaller $T=T_f+T_H$. This is due to the more negative $T_H$ values contributed by the two-Higgs-doublet sector. Clearly, there are strong correlations among the allowed Higgs and fermion mass ranges in the $N_g=3$ scenario. This renders the model highly restrictive in its parameter space, which is instructive and encouraging for the relevant experimental tests at the upcoming colliders, such as the Tevatron Run-II, the LHC and future lepton colliders. Collider signatures, however, merit a dedicated study and will not be discussed here.
Before concluding this subsection, we note that in the above we did not address explicitly the less difficult case of $N_g<3$. We expect that $N_g=1,\,2$ can be accommodated over larger regions of the 2HDM parameter space.

Conclusions {#sec:sum}
===========

In summary, we have demonstrated that one extra generation of relatively light non-degenerate chiral fermions in the mass range $m_Z/2 \lesssim M_f \lesssim \CO(\langle H \rangle)$ can be consistent with current precision electroweak data without requiring additional new physics sources. A sizable mass splitting between up- and down-type fermions can lead to a large positive $T$ without a significant increase in $S$. This can largely relax the upper bound from precision data on the mass of a SM-like Higgs boson, as shown in Fig.\[fig:oneg\]. The case of three extra chiral families was shown to be viable when invoking extra new physics, most notably a two-Higgs-doublet extension. In order to remain model-independent, we performed the analysis for three extra families with a general two-Higgs-doublet sector. We found, after imposing the oblique precision bounds, a highly restrictive mass spectrum for both the fermion and the Higgs sectors (cf., Fig.\[fig:threeg\]), which can lead to various distinct collider signatures. The importance of the two-Higgs-doublet sector is in providing a negative contribution to $T$, thus allowing for a large isospin violation in the three-family fermion sector. We have used weak-scale $N=2$ supersymmetry[@N2old; @N2new] as an explicit theoretical framework to motivate our study, to define the relevant mass range for the extra chiral families under consideration \[cf., (\[eq:mirrorF-MR\])\], and to define the Higgs sector. We note that such an effective four-dimensional $N=2$ structure can be a consequence of the compactification of certain extra-dimensional theories.
Possible extensions of our study may include: $(i)$ a more exhaustive parameter scan of the two-Higgs-doublet sector, allowing for flavor-dependent fermion masses and family mixings; $(ii)$ an extended Higgs sector with more than two doublets generating EWSB, which is possible in $N=2$ theories[@N2new]; $(iii)$ oblique corrections from relatively light sfermions (and mirror sfermions) and Majorana fermions such as gauginos, Higgsinos, and their mirrors; and $(iv)$ the consideration of $Z - Z^{\prime}$ mixing in extra $U(1)^{\prime}$ models[@extraZ]. Each of these extensions can affect, in principle, the constraints on $N_{g}$, the two-Higgs-doublet spectrum, and their correlations. However, these are highly model-dependent avenues which are left for future work. In addition, our study may be further extended to a six-parameter analysis including $(S,\,T,\,U,\,V,\,W,\,X)$ together [@VWX], which may be relevant for the region $M_f \lesssim m_{Z}$.

It is our pleasure to thank Jens Erler for various discussions on precision data and for his comments on the manuscript. We also thank Howard E. Haber for conversations on the oblique corrections in the two-Higgs-doublet model and Duane A. Dicus for discussions. H.J.H. is supported by the US Department of Energy (DOE) under grant DE-FG03-93ER40757; N.P. is supported by the DOE under cooperative research agreement No. DF–FC02–94ER40818; and S.S. is supported by the DOE under grant DE-FG03-92-ER-40701 and by the John A. McCone Fellowship.

Higgs Contributions to Oblique Parameters
=========================================

We consider a general 2HDM where the Higgs bosons $(h^0,\,H^0,\,A^0,\,H^\pm)$ have masses $(m_h,\,m_H,\,m_A,\,m_{H^\pm})$, respectively.
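As a numerical sanity check, the quoted $m_1=m_2$ limit of ${\cal B}_0$ in this appendix can be verified against the general expression. The sketch below transcribes the functions as given in the text (arguments are the dimensionless $x_i=m_i^2/q^2$); it is a consistency check we supply, not part of the original analysis, and the probe value $x=3.7$ is arbitrary.

```python
import math

def f(x1, x2):
    # f(x1, x2) of eq. (eq:ffun)
    d = 2 * (x1 + x2) - (x1 - x2) ** 2 - 1
    if d > 0:
        s = math.sqrt(d)
        return -2 * s * (math.atan((x1 - x2 + 1) / s)
                         - math.atan((x1 - x2 - 1) / s))
    if d == 0:
        return 0.0
    s = math.sqrt(-d)
    return s * math.log((x1 + x2 - 1 + s) / (x1 + x2 - 1 - s))

def B0(x1, x2):
    # finite part of cal-B_0(q^2; m1^2, m2^2), with x_i = m_i^2 / q^2
    return (1 + 0.5 * ((x1 + x2) / (x1 - x2) - (x1 - x2)) * math.log(x1 / x2)
            + 0.5 * f(x1, x2))

def B0_degenerate(x):
    # the quoted m_1 = m_2 limit: 2 - 2 sqrt(4x-1) arctan(1/sqrt(4x-1))
    r = math.sqrt(4 * x - 1)
    return 2 - 2 * r * math.atan(1 / r)

x = 3.7
general = B0(x, x * (1 + 1e-7))  # approach the degenerate point numerically
```

The general formula indeed approaches the quoted closed-form limit as the two masses become degenerate.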
After subtracting the SM Higgs corrections to $\STU$ with reference choice $(m_{H}^{\rm sm})_{\rm ref}=m_h$, the one-loop Higgs contributions to $\STU$ read[@haber], $$\begin{aligned} S_H&=&\frac{1}{\pi{m}_Z^2}\left\{\sin^2(\beta-\alpha) {\cal B}_{22}(m_Z^2; m^2_H,m^2_A)- {\cal B}_{22}(m_Z^2; m^2_{H^{\pm}},m^2_{H^{\pm}})\right.\nonumber \\ &&+\left.\cos^2(\beta-\alpha)\left[{\cal B}_{22}(m^2_Z; m^2_h,m^2_A)+ {\cal B}_{22}(m^2_Z;m^2_Z,m^2_H)-{\cal B}_{22}(m^2_Z;m^2_Z,m^2_h) \right.\right.\nonumber \\ &&-\left.\left.m^2_Z{\cal B}_0(m^2_Z;m^2_Z,m^2_H)+m^2_Z {\cal B}_0(m^2_Z;m^2_Z,m^2_h)\right]\right\}, \label{eq:Shiggs}\\ T_H&=&\frac{1}{16\pi{m}^2_Ws^2_W} \left\{F(m^2_{H^{\pm}},m^2_A)+\sin^2(\beta-\alpha)\left[ F(m^2_{H^{\pm}},m^2_H)-F(m^2_A,m^2_H)\right]\right.\nonumber\\ &&+ \cos^2(\beta-\alpha)\left[F(m^2_{H^{\pm}},m^2_h)-F(m^2_A,m^2_h) +F(m^2_W,m^2_H)-F(m^2_W,m^2_h)\right. \nonumber\\ && \left.\left. -F(m^2_Z,m^2_H)+F(m^2_Z,m^2_h) +4m^2_Z \ov{B}_0(m^2_Z,m^2_H,m_h^2) -4m^2_W \ov{B}_0(m^2_W,m^2_H,m_h^2) \right]\right\}, \label{eq:Thiggs}\\ U_H&=&-S_H+\frac{1}{\pi{m_Z}^2}\left\{{\cal B}_{22}(m_W^2;m^2_A,m^2_{H^{\pm}}) -2{\cal B}_{22}(m^2_W;m^2_{H^{\pm}},m^2_{H^{\pm}})\right.\nonumber \\ &&+\left.\sin^2(\beta-\alpha){\cal B}_{22}(m^2_W;m^2_H,m^2_{H^{\pm}}) +\cos^2(\beta-\alpha)\left[{\cal B}_{22}(m^2_W;m^2_h,m^2_{H^{\pm}}) \right.\right.\nonumber \\ &&+\left.\left.{\cal B}_{22}(m^2_W;m^2_W,m^2_H) -{\cal B}_{22}(m^2_W;m^2_W,m^2_h)\right.\right.\nonumber \\ &&-\left.\left.m^2_W{\cal B}_0(m^2_W;m^2_W,m^2_H) +m^2_W{\cal B}_0(m^2_W;m^2_W,m^2_h)\right]\right\}, \label{eq:Uhiggs}\end{aligned}$$ where we have explicitly worked out the finite part of the ${\cal B}$-functions: $$\begin{aligned} {\cal B}_{0}(q^2;m^2_1,m^2_2)&=&1+\frac{1}{2}\left[ \frac{x_1+x_2}{x_1-x_2}-(x_1-x_2)\right]\ln\frac{x_1}{x_2} +\frac{1}{2}f(x_1,x_2)\,, \\[1.5mm] &\dis\stackrel{m_1=m_2}{\Longrightarrow}& \dis 2 -2\sqrt{4x_1-1}\arctan\f{1}{\sqrt{4x_1-1}} \,, \\[2mm] \ov{B}_0(m_1^2,m_2^2,m_3^2) &\equiv&
B_{0}(0;m^2_1,m^2_2)-B_{0}(0;m^2_1,m^2_3) \nonumber\\[1.5mm] &=& \frac{m^2_1\ln{m^2_1}-m^2_3\ln{m^2_3}}{m^2_1-m^2_3}- \frac{m^2_1\ln{m^2_1}-m^2_2\ln{m^2_2}}{m^2_1-m^2_2} \,,\\[2mm] {\cal B}_{22}(q^2;m^2_1,m^2_2) &\equiv& B_{22}(q^2;m^2_1,m^2_2)-B_{22}(0;m^2_1,m^2_2) \nonumber\\ &=&\frac{q^2}{24}\left\{ 2\ln{q^2}+\ln(x_1x_2) +\left[(x_1-x_2)^3-3(x_1^2-x_2^2) \right.\right.\nonumber \\ &&+\left.\left.3(x_1-x_2)\right]\ln\frac{x_1}{x_2} -\left[2(x_1-x_2)^2-8(x_1+x_2)+\frac{10}{3}\right] \right.\nonumber \\ &&-\left.\left[(x_1-x_2)^2-2(x_1+x_2)+1\right]f(x_1,x_2) -6F(x_1,x_2)\right\},\\ &\dis\stackrel{m_1=m_2}{\Longrightarrow}& \frac{q^2}{24}\left[2\ln{q^2} + 2\ln{x_1} +\left(16x_1-\frac{10}{3}\right)+ \left(4x_1-1\right)G(x_1)\right],\end{aligned}$$ $$\begin{aligned} F(x_1,x_2)&=&\dis\frac{x_1+x_2}{2}-\frac{x_1x_2}{x_1-x_2}\ln\frac{x_1}{x_2} \,, \label{eq:Ffun} \\[2mm] G(x)&=&\dis -4\sqrt{4x-1}\,\arctan\frac{1}{\sqrt{4x-1}} \,, \label{eq:Gfun} \\[2mm] f(x_1,x_2)&=&\left\{ \begin{array}{ll} -2\sqrt{\Delta}\left[\dis\arctan\frac{x_1-x_2+1}{\sqrt{\Delta}} -\arctan\frac{x_1-x_2-1}{\sqrt{\Delta}}\right]\,, &~~(\Delta>0)\,, \\[1.5mm] 0\,, &~~(\Delta=0)\,,\\[1.5mm] \sqrt{-\Delta}\dis\ln\frac{x_1+x_2-1+\sqrt{-\Delta}}{x_1+x_2-1-\sqrt{-\Delta}}\,, &~~(\Delta<0)\,, \end{array} \right. \label{eq:ffun}\\[3mm] \Delta&=&2(x_1+x_2)-(x_1-x_2)^2-1 \,,\end{aligned}$$ with $\dis x_i \equiv \f{m_i^2}{q^2}$. The various expressions are simplified in the limit of $m_{\rm Higgs}^2\gg{m}_Z^2$. The approximate formula for $T_H$ in this limit has already been given in eq. (\[eq:thiggsapp\]). Similarly, eqs.
(\[eq:Shiggs\]) and (\[eq:Uhiggs\]) reduce in this limit to $$\begin{aligned} S_H&=&\frac{1}{12\pi}\left(\cos^2(\beta-\alpha)\[ \ln\frac{m_H^2}{m_h^2}+g(m_h^2,m_A^2)-\ln\frac{m_{H^{\pm}}^2}{m_hm_A}\] \right.\nonumber \\ &+&\left.\sin^2(\beta-\alpha)\[g(m_H^2,m_A^2)-\ln\frac{m_{H^{\pm}}^2}{m_Hm_A} \]\right),\\ U_H&=&\frac{1}{12\pi}\left(\cos^2(\beta-\alpha)\[g(m_h^2,m_{H^{\pm}}^2) +g(m_A^2,m_{H^{\pm}}^2)-g(m_h^2,m_A^2)\] \right.\nonumber \\ &+&\left.\sin^2(\beta-\alpha)\[g(m_H^2,m_{H^{\pm}}^2)+g(m_A^2,m_{H^{\pm}}^2) -g(m_H^2,m_A^2)\]\right),\end{aligned}$$ where $$g(x_1,x_2)=-\frac{5}{6}+\frac{2x_1x_2}{(x_1-x_2)^2}+ \frac{(x_1+x_2)(x_1^2-4x_1x_2+x_2^2)}{2(x_1-x_2)^3}\ln\frac{x_1}{x_2}.$$ F. Del Aguila, M. Dugan, B. Grinstein, L. Hall, G.G. Ross, and P. West, . N. Polonsky and S. Su, \[hep-ph/0006174\]. Particle Data Group, D. E. Groom [*et al.*]{}, European Physical Journal C[**15**]{}, 1 (2000), http://pdg.lbl.gov and references therein; LEP Electroweak Working Group, http://lepewwg.web.cern.ch; M. L. Swartz, talk given at [*XIX International Symposium on Lepton and Photon Interactions at High Energies,*]{} August 9-14, 1999 \[hep-ex/9912026\]. S. Weinberg, ; L. Susskind, ; S. Dimopoulos and L. Susskind, ; E. Eichten and K. Lane, ; E. Farhi and L. Susskind, Phys. Rep. [**74**]{}, 277 (1981). J. A. Bagger, A. F. Falk, and M. Swartz, , \[hep-ph/9908327\]. M. E. Peskin and T. Takeuchi, ; ; W. J. Marciano and J. L. Rosner, ; D. Kennedy and P. Langacker, ; ; B. Holdom and J. Terning, ; M. Golden and L. Randall, ; G. Altarelli and R. Barbieri, . G. Altarelli, R. Barbieri, and S. Jadach, . A. Gurtu, [*Precision Tests of the Electroweak Gauge Theory*]{}, presentation at XXXth International Conference on High Energy Physics, Osaka, Japan, July27 - August2, 2000. For recent reviews, see: J. Erler and P. Langacker, European Physical Journal C[**15**]{}, 1 (2000), pp.95; P. Langacker, talk given at [*LEP Fest 2000*]{}, October 2000, CERN \[hep-ph/0102085\]; J. 
Erler, talk given at the [*Symposium in Honor of Alberto Sirlin*]{}, October 2000, NYU, NY, \[hep-ph/0102143\]. M. Veltman, Acta Phys. Pol. B[**8**]{}, 475 (1977); . T. Appelquist and J. Carazzone, . B. A. Dobrescu and C. T. Hill, \[hep-ph/9712319\]; R. S. Chivukula, B. A. Dobrescu, H. Georgi, and C. T. Hill, \[hep-ph/9809470\]. H.-J. He, T. Tait, and C.-P. Yuan, (R), \[hep-ph/9911266\]; M. B. Popovic, hep-ph/0102027. I. Maksymyk, C. P. Burgess, and D. London, ; C. P. Burgess [*et al.*]{}, ; A. Kundu and P. Roy, Int. J. Mod. Phys. A[**12**]{}, 1511 (1997). J. Erler, contribution to [*Workshop of QCD and Weak Boson Physics,*]{} Batavia, Illinois, June 1999 \[hep-ph/0005084\]; http://www.physics.upenn.edu/$\sim$erler/electroweak/GAPP.html. E. Tournefier, [*Electroweak Results and Fit to the Standard Model*]{}, presentation at XXXVIth Rencontres de Moriond, Les Arcs, France, March, 2001. M. Maltoni, V. A. Novikov, L. B. Okun, A. N. Rozanov, and M. I. Vysotsky, . H. E. Haber, hep-ph/9306207, presented at the Theoretical Advanced Study Institute (TASI 92), Boulder, CO, June, 1992; H. E. Haber and H. E. Logan, , \[hep-ph/9909335\]. N. Polonsky and S. Su, MIT-CTP-3031 \[hep-ph/0010113\]. C. D. Froggatt, R. G. Moorhouse, and I. G. Knowles, ; L. Lavoura and L.-F. Li, ; A. K. Grant, . M. Drees and K. Hagiwara, ; A. Djouadi, P. Gambino, S. Heinemeyer, W. Hollik, C. Jünger, and G. Weiglein, ; . J. Erler and D. M. Pierce, \[hep-ph/9801238\]; G. C. Cho and K. Hagiwara, Nucl. Phys. B[**574**]{}, 623 (2000) \[hep-ph/9912260\]. H. Georgi, ; M. J. Dugan and L. Randall, ; E. Gates and J. Terning, . M. E. Peskin and J. D. Wells, hep-ph/0101342. E.g., J. Erler and P. Langacker, , and references therein. [^1]: The $\STU$ definitions used in Ref.[@EP] are equivalent to the above eqs.(\[eq:S\])-(\[eq:U\]), though the former are defined in terms of the gauge boson mass eigenstates instead of the weak eigenstates.
[^2]: Our global fit analysis is based on the GAPP package in Ref.[@erler1], including the data update reported in Ref.[@Osaka]. The newest update in Ref.[@Moriond] has no significant effect on our fit and thus does not affect our conclusions. [^3]: The best fit for a pure SM Higgs boson with $\STU=0$ gives a similar but somewhat stronger bound, $34\,{\rm GeV}\,\leq m_{H}^{\rm sm} \,\leq 202$GeV, at 95$\%$C.L.
--- author: - | Andreea Bobu\ UC Berkeley\ abobu@berkeley.edu Andrea Bajcsy\ UC Berkeley\ abajcsy@berkeley.edu Jaime F. Fisac\ UC Berkeley\ jfisac@berkeley.edu Anca D. Dragan\ UC Berkeley\ anca@berkeley.edu title: Learning under Misspecified Objective Spaces ---
--- abstract: 'We propose monolayer epitaxial graphene and hexagonal boron nitride ($h$-BN) as ultimate thickness covalent spacers for magnetoresistive junctions. Using a first-principles approach, we investigate the structural, magnetic and spin transport properties of such junctions based on structurally well defined interfaces with (111) fcc or (0001) hcp ferromagnetic transition metals. We find low resistance area products, strong exchange couplings across the interface, and magnetoresistance ratios exceeding 100% for certain chemical compositions. These properties can be fine tuned, making the proposed junctions attractive for nanoscale spintronics applications.' author: - 'Oleg V. Yazyev' - Alfredo Pasquarello title: Magnetoresistive junctions based on epitaxial graphene and hexagonal boron nitride --- INTRODUCTION ============ Graphene, a recently discovered two-dimensional form of carbon, has attracted unrivaled attention due to its unique physical properties and potential applications in electronics.[@Katsnelson07; @Geim07] This nanomaterial is particularly promising for the field of spintronics, which exploits both the spin and the charge of electrons.[@Son06; @Tombros07; @Yazyev08; @Munoz-Rojas09; @Yazyev08c; @Yazyev09; @Yazyev08b] One fundamental spintronic effect is the magnetoresistance, the change in electric resistance as a function of the relative orientation, either parallel or antiparallel, of the magnetization of two ferromagnetic layers separated by a nonmagnetic spacer layer.[@Heiliger06] Achieving [*high*]{} magnetoresistance ratios while keeping reasonably [*low*]{} electric resistance is crucial for many technological applications.[@Chappert07] However, reaching this goal is currently hindered by material-specific restrictions such as the inability of producing well-ordered ferromagnet/spacer interfaces.[@Yuasa04; @Heiliger06b] Semimetallic graphene and its insulating counterpart, isostructural hexagonal boron nitride ($h$-BN), are 
promising spacers as epitaxial monolayers of these materials can be grown by means of chemical vapor deposition (CVD) on a broad variety of metallic substrates.[@Oshima97; @Berner07; @Coraux08; @deParga08; @Sutter08; @Martoccia08] The quality of such epitaxial monolayers is very high, and the covalent bonding network of both graphene and $h$-BN is perfectly preserved upon bonding to the substrate. Moreover, the growth of graphene and $h$-BN on the fcc(111) and hcp(0001) surfaces of ferromagnetic Co and Ni results in commensurate epitaxial layers due to the closely matching lattice constants.[@Oshima97] This has led to a theoretical prediction of perfect spin filtering and, thus, of extremely high magnetoresistance ratios in such junctions based on multilayer graphene ($\ge$4 layers) and graphite.[@Karpan07] However, the CVD growth on crystalline surfaces is self-inhibiting, that is, only one epitaxial layer can be grown. The deposition of ferromagnetic nanoparticles on top of epitaxial $h$-BN has also been demonstrated.[@Auwarter02; @Zhang08] These interfaces further offer the opportunity of fine tuning their properties through the intercalation of other metals, such as Fe,[@Dedkov08] Cu[@Dedkov01] and Au.[@Varykhalov08] In this work, we suggest the use of monolayer graphene and $h$-BN as [*covalently*]{} bonded spacer layers of minimal thickness in magnetoresistive junctions. Through first-principles calculations we study the structural, magnetic and spin transport properties of such junctions based on first-row ferromagnetic transition metals: natural hcp and fcc Co, fcc Ni, as well as intercalated fcc Fe. We show that the proposed magnetoresistive junctions realize low electric resistances, strong interlayer exchange couplings, and magnetoresistance ratios exceeding 100% for certain chemical compositions. This paper is organized as follows. In Sec. \[sec2\] we describe our computational methodology, including the first-principles approach to electronic transport.
In Sec. \[sec3\] we report the atomic structure and electronic properties of the considered magnetoresistive junctions. Particular attention is devoted to the interlayer exchange couplings. The results of electronic transport calculations are discussed in Sec. \[sec4\]. Section \[sec5\] concludes our work. COMPUTATIONAL METHODS {#sec2} ===================== The electronic and atomic structure calculations were performed using the <span style="font-variant:small-caps;">pwscf</span> plane-wave pseudopotential code of the <span style="font-variant:small-caps;">quantum-espresso</span> distribution.[@QE] To achieve a good description of atomic structures, interlayer exchange couplings and spin transport properties, we chose the Perdew-Burke-Ernzerhof exchange-correlation density functional.[@Perdew96] Ultrasoft pseudopotentials were used to describe core-valence interactions.[@Vanderbilt90] The valence wave functions and the electron density were described by plane-wave basis sets with kinetic energy cutoffs of 25 Ry and 250 Ry, respectively.[@Pasquarello92] The atomic structure of the magnetoresistive junctions considered in our work is illustrated in Fig. \[fig1\](a). Our investigation is restricted to only symmetric junctions, i.e. with the same metal on both sides of the spacer layer. Each ferromagnetic layer consisted of six atomic planes. The solutions for parallel and antiparallel relative spin orientations of these two layers were obtained by specifying appropriate initial orientations of the magnetic moments. The lateral unit cell of the studied interfaces is shown in Fig. \[fig1\](b). We considered bound configurations and determined the lowest-energy structures through the relaxation of atomic positions. For these configurations, we performed quantum transport calculations in the current perpendicular to plane configuration using the <span style="font-variant:small-caps;">pwcond</span> code[@Smogunov04] of the same package. 
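For orientation, the computational settings quoted above can be collected into a <span style="font-variant:small-caps;">pwscf</span> (pw.x) input fragment. This is an illustrative sketch only: the plane-wave cutoffs (25 Ry and 250 Ry) and the spin polarization follow the text, while the structural parameters, smearing, and convergence settings are placeholders we supply, not values from the paper.

```
&CONTROL
  calculation = 'relax'              ! relax atomic positions, as in the text
/
&SYSTEM
  ibrav = 0, nat = 14, ntyp = 3,     ! placeholder: 2 x 6 Co planes + 2 C atoms;
                                     ! two Co species, one per electrode
  ecutwfc = 25.0,                    ! wave-function cutoff (Ry), from the text
  ecutrho = 250.0,                   ! charge-density cutoff (Ry), from the text
  nspin = 2,
  starting_magnetization(1) =  0.5,  ! opposite initial moments on the two
  starting_magnetization(2) = -0.5,  ! electrodes select the antiparallel state
  occupations = 'smearing', smearing = 'mv', degauss = 0.02  ! placeholders
/
&ELECTRONS
  conv_thr = 1.0d-8                  ! placeholder convergence threshold
/
```

Using a separate atomic species for each electrode (with opposite `starting_magnetization`) is one way to prepare the parallel and antiparallel solutions described in the text.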
The scattering region included the spacer monolayer and three adjacent monolayers of metal on both sides. We use the optimistic definition of the magnetoresistance ratio: $${\rm MR} = \frac{G_{\uparrow\uparrow}+G_{\downarrow\downarrow}-2G_{\uparrow\downarrow}}{2G_{\uparrow\downarrow}}\times 100\%.$$ The spin-resolved quantum conductances $G_\sigma$ for parallel ($\sigma = \uparrow\uparrow, \downarrow\downarrow$ for majority and minority spins, respectively) and antiparallel ($\sigma = \uparrow\downarrow$) configurations were calculated by integrating the corresponding ${\mathbf k}_{||}$-dependent transmission probabilities $T^\sigma_{{\mathbf k}||}$ evaluated on a uniform grid of 64$\times$64 ${\mathbf k}$-points in the two-dimensional Brillouin zone. ATOMIC AND ELECTRONIC STRUCTURE {#sec3} =============================== ![\[fig1\] (Color online) (a) Representation of the atomic structure of magnetoresistive junctions based on epitaxial monolayer graphene and $h$-BN. (b) Top-view of graphene on (111) surface of fcc Co or Ni. The two-dimensional unit cell is indicated by dotted lines and the principal atomic positions are labeled. (c)–(e) Side-views along the longest unit cell diagonal of the lowest-energy interfaces formed by Co (hcp and fcc) and Ni (fcc) in combination with either monolayer graphene or $h$-BN. ](fig1.eps){width="8.5cm"} To determine the lowest energy structures of the junctions, we carried out structural relaxations for all possible stacking orders of the atomic planes in the vicinity of the spacer layer. The corresponding structures are shown in Fig. \[fig1\] for Co and Ni based junctions and summarized in Table \[tab1\] for all investigated chemical compositions. We find that both graphene (GR) and $h$-BN bound to the transition metals (TM) display short metal-carbon and metal-nitrogen distances (2.19–2.45 Å) comparable to the sum of the corresponding covalent bond radii. 
The thickness of the spacer layer is thus comparable to that of a single atomic plane of the ferromagnetic metal. For the Ni$|$GR$|$Ni(fcc) junction we find a Ni–C distance of 2.19 Å, which is close to the 2.18 Å calculated for graphene chemisorbed on the Ni(111) surface (i.e., the Ni$|$GR system). The latter value is in good agreement with the experimental value of 2.16$\pm$0.07 Å.[@Oshima97] ![\[fig2\] (Color online) Spin-resolved projected density of states (PDOS) onto the atoms of the spacer layer, either (a) graphene or (b) $h$-BN, in the fcc Co junctions in their parallel (P) and antiparallel (AP) configurations. The majority and minority spin labels refer to the parallel configuration; in the antiparallel arrangement the spin channels are equivalent. ](fig2.eps){width="8.5cm"} We now turn to the electronic structure of these magnetoresistive junctions. Figure \[fig2\] shows the spin-resolved projected density of states (PDOS) onto the light atoms (B, C and N) of the fcc Co junctions. One can see that the characteristic “Dirac cone” density of states of free-standing graphene is not preserved upon the formation of the Co$|$GR$|$Co(fcc) interface \[Fig. \[fig2\](a)\]. This is consistent with the theoretically predicted[@Karpan07] and experimentally observed[@Gruneis08] strong hybridization between the electronic states of graphene and of the TM surface. Similarly, both B- and N-centered states fill the band gap of the insulating $h$-BN \[Fig. \[fig2\](b)\]. In both cases, we find significant contributions of the epitaxial layer states to the density of states at the Fermi level. In the parallel (antiparallel) configuration of the graphene based junction, the induced magnetic moments on the carbon atoms in the unit cell are $-$0.005$\mu_{\rm B}$ (0.081$\mu_{\rm B}$ and $-$0.081$\mu_{\rm B}$).
In the parallel configuration of the $h$-BN junction, the induced magnetic moments of the N and B atoms are 0.029$\mu_{\rm B}$ and $-$0.065$\mu_{\rm B}$, respectively. In the antiparallel configuration both vanish by symmetry.

  ------------------- --------------- ------------ ------------------------ ---------------------------- -------------------------- -----
  junction            stacking        $\Delta E$   $G_{\uparrow\uparrow}$   $G_{\downarrow\downarrow}$   $G_{\uparrow\downarrow}$   MR
                      order           (meV)        ($e^2/h$)                ($e^2/h$)                    ($e^2/h$)                  (%)
  Fe$|$GR$|$Fe(fcc)   $cba$$|$$bac$   79           0.334                    0.440                        0.240                      61
  Fe$|$BN$|$Fe(fcc)   $cba$$|$$abc$   63           0.256                    0.297                        0.111                      149
  Co$|$GR$|$Co(fcc)   $bca$$|$$bca$   91           0.317                    0.427                        0.232                      60
  Co$|$BN$|$Co(fcc)   $bca$$|$$acb$   46           0.263                    0.268                        0.210                      26
  Ni$|$GR$|$Ni(fcc)   $bca$$|$$bca$   $-$18        0.352                    0.587                        0.402                      17
  Ni$|$BN$|$Ni(fcc)   $bca$$|$$acb$   $-$3         0.207                    0.722                        0.299                      55
  Co$|$GR$|$Co(hcp)   $aca$$|$$aca$   29           0.241                    0.278                        0.140                      86
  Co$|$BN$|$Co(hcp)   $aca$$|$$aca$   44           0.222                    0.241                        0.140                      66
  ------------------- --------------- ------------ ------------------------ ---------------------------- -------------------------- -----

The interlayer exchange coupling, the difference $\Delta E = E_{\rm P}-E_{\rm AP}$ between the energies of the parallel and antiparallel configurations, is a manifestation of the superexchange mechanism. It achieves rather high values \[cf. Table \[tab1\]\] due to the ultimate thickness of the spacer layer. For Fe and Co, the antiparallel configuration is energetically favored. In contrast, the parallel configuration is preferred for Ni. This intriguing crossover provides an opportunity for fine tuning the interlayer exchange by varying the chemical composition of the ferromagnetic layers.
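The conductances and MR values of Table \[tab1\] can be cross-checked against the optimistic MR definition given earlier. The sketch below (values transcribed from the table) reproduces every published MR to within about one percentage point, the small residuals coming from the three-digit rounding of the conductances:

```python
# Columns: G_up_up, G_dn_dn, G_ap (e^2/h) and the published MR (%),
# transcribed from Table [tab1].
table = {
    "Fe|GR|Fe(fcc)": (0.334, 0.440, 0.240, 61),
    "Fe|BN|Fe(fcc)": (0.256, 0.297, 0.111, 149),
    "Co|GR|Co(fcc)": (0.317, 0.427, 0.232, 60),
    "Co|BN|Co(fcc)": (0.263, 0.268, 0.210, 26),
    "Ni|GR|Ni(fcc)": (0.352, 0.587, 0.402, 17),
    "Ni|BN|Ni(fcc)": (0.207, 0.722, 0.299, 55),
    "Co|GR|Co(hcp)": (0.241, 0.278, 0.140, 86),
    "Co|BN|Co(hcp)": (0.222, 0.241, 0.140, 66),
}

def mr_ratio(g_pp, g_mm, g_ap):
    """Optimistic magnetoresistance ratio, in percent."""
    return (g_pp + g_mm - 2.0 * g_ap) / (2.0 * g_ap) * 100.0

for name, (g_pp, g_mm, g_ap, mr_published) in table.items():
    # Three-digit conductances reproduce the published MR to ~1 point.
    assert abs(mr_ratio(g_pp, g_mm, g_ap) - mr_published) <= 1.0
```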
ELECTRONIC TRANSPORT {#sec4}
====================

Role of spacer material
-----------------------

![\[fig3\] (Color online) ${\mathbf k}_{||}$-Resolved conductance per unit cell (in units of $e^2/h$) through (a) bulk hcp Co along the (0001) direction, (b) Co$|$GR$|$Co(hcp) and (c) Co$|$BN$|$Co(hcp) junctions, and (d) a vacuum layer of equivalent thickness. The columns correspond to majority and minority spin channels of the parallel configuration, and to one of the equivalent spin channels of the antiparallel configuration, respectively. Labels indicate the total conductances per unit cell area. ](fig3.eps){width="8.5cm"}

To understand the calculated quantum conductances and the resulting magnetoresistance ratios \[cf. Table \[tab1\]\], we analyzed the ${\mathbf k}_{||}$-resolved transmission probabilities. First, we studied the effect of the spacer layer in hcp Co junctions, which have the same lowest-energy structure for both graphene and $h$-BN \[Fig. \[fig3\]\]. We found that both systems show strikingly similar ${\mathbf k}_{||}$-resolved transmission probability maps \[compare Figs. \[fig3\](b) and \[fig3\](c)\] and, consequently, quantum conductances. The $T^{\uparrow\uparrow}_{{\mathbf k}||}$ and $T^{\downarrow\downarrow}_{{\mathbf k}||}$ maps reveal the major features of the projected hcp Co Fermi surfaces for the free-electron-like majority spin and mostly $d$-symmetry minority spin electrons \[Fig. \[fig3\](a)\], which are relevant to the quantum conductances of bulk metals.[@Schep98; @Zwierzycki08] The total transmission probabilities of the junctions in the parallel configuration constitute $\sim$40% and $\sim$20% of the Sharvin conductance[@Sharvin65] of bulk hcp Co along the (0001) direction. The quantum conductances in the antiparallel configuration are mostly determined by the overlap of $T^{\uparrow\uparrow}_{{\mathbf k}||}$ and $T^{\downarrow\downarrow}_{{\mathbf k}||}$.
Their values are consequently lower ($G_{\uparrow\downarrow}=0.140$ $e^2/h$ per unit cell for both spacer materials). The resulting magnetoresistance ratios are 86% and 66% for the graphene and $h$-BN junctions, respectively. Thus, in the regime of ultimate thickness the transport properties are largely independent of the electronic structure differences of the two spacer materials. The role of a single layer of covalent spacer material consists in fixing a certain stacking order at the interface and in providing a medium for the abrupt change of magnetization in the antiparallel configuration. Due to the metallic nature of the spacer layers \[cf. Fig. \[fig2\]\], such junctions possess low resistance area products ($<$3$\times$10$^{-15}$ $\Omega$m$^2$), which makes them suitable for nanoscale spintronics applications such as magnetic random access memories and spin transfer nano-oscillators. We classify the present systems as [*giant magnetoresistance*]{} (GMR) junctions. This contrasts with the spin transport through a vacuum gap of the same thickness, which shows $T^\sigma_{{\mathbf k}||}$ decaying with $|{\mathbf k}_{||}|$ \[Fig. \[fig3\](d)\], a characteristic feature of tunneling.[@Belashchenko04] The magnetoresistance ratio is roughly half as large (38%) in the case of tunneling through a vacuum gap. This allows us to conclude that in the limit of ultimate thickness the GMR effect is more efficient than the tunneling magnetoresistance.

Role of ferromagnetic layers
----------------------------

![\[fig4\] (Color online) ${\mathbf k}_{||}$-Resolved conductance per unit cell (in units of $e^2/h$) through fcc (a) Fe, (b) Co and (c) Ni junctions based on monolayer $h$-BN. The columns correspond to majority and minority spin channels of the parallel configuration, and to one of the equivalent spin channels of the antiparallel configuration, respectively. Labels indicate the corresponding total conductances per unit cell area. 
](fig4.eps){width="8.5cm"}

Next, we studied the dependence of the transport properties on the ferromagnetic metal by considering fcc Fe, Co and Ni junctions in combination with $h$-BN. For all three metals, the majority spin transmission in the parallel configuration, $T^{\uparrow\uparrow}_{{\mathbf k}||}$, undergoes little change along the Fe-Co-Ni series \[Fig. \[fig4\]\]. This behavior stems from the similarity of the corresponding majority spin Fermi surfaces of the bulk metals, which are formed by partially filled $s$ bands. However, much larger differences are found for $T^{\downarrow\downarrow}_{{\mathbf k}||}$ involving the minority electrons. These reflect the drastically different Fermi surfaces resulting from the interplay between $s$ and $d$ states. The increase of $G_{\downarrow\downarrow}$ along the series can be attributed to the decrease of hybridization between $s$ and $d$ electrons upon the increase of $d$ band filling:[@Mazin99] in general, the free-electron-like $s$ states show higher transmission probabilities. The $G_{\uparrow\downarrow}$ values are again determined by the overlap of $T^{\uparrow\uparrow}_{{\mathbf k}||}$ and $T^{\downarrow\downarrow}_{{\mathbf k}||}$ and tend to increase along the series. For the Fe$|$BN$|$Fe(fcc) junction we find a magnetoresistance ratio of 150%, the largest value among the compositions studied. A further search for magnetoresistive junctions with improved characteristics may involve exploring asymmetric junctions and the intercalation of other chemical elements at the interfaces. Here we demonstrate the second possibility.
It has been suggested that the incorporation of submonolayer quantities of Cu at the TM$|$GR$_n$ interface would reduce the undesired hybridization between the states of graphene and of the metal surface, at the price of substantially decreasing the magnetoresistance ratios.[@Karpan07] However, we find that decoupling the spacer layer from the metal surface does not necessarily imply the loss of magnetoresistance. This can be achieved by intercalating metals from the middle of the transition metal series, e.g. Mn, which show reduced binding to carbon $\pi$ systems.[@Pandey01] Indeed, in the intercalated CoMn(1 ML)$|$GR$|$Mn(1 ML)Co(hcp) junction the Mn–C distance increases to 2.95 Å and the interlayer exchange coupling decreases to 10 meV (to be compared with 29 meV for Co$|$GR$|$Co(hcp), cf. Table \[tab1\]). Concurrently, the magnetoresistance ratio rises from 86% to 127%. The Mn layer is strongly spin polarized and antiferromagnetically coupled to hcp Co.

CONCLUSIONS {#sec5}
===========

In conclusion, we propose epitaxially grown monolayer graphene and $h$-BN as ultimate-thickness covalent spacers in transition metal based magnetoresistive junctions. Such junctions display well-ordered interfaces and can be produced through existing manufacturing processes. Their physical properties can be fine-tuned in a broad range by varying the chemical composition. These systems show low resistance area products and typical GMR behavior, with magnetoresistance ratios exceeding 100% for certain compositions. Both ferromagnetic and antiferromagnetic interlayer exchange couplings are found. These properties make the proposed junctions attractive for spintronics applications such as magnetic random access memories and spin transfer nano-oscillators.

ACKNOWLEDGMENT {#acknowledgment .unnumbered}
==============

We acknowledge fruitful discussions with H. Brune, P. J. Kelly and S. Rusponi. We would like to thank A.
Smogunov for his help with the <span style="font-variant:small-caps;">pwcond</span> code. The calculations were performed at the CSCS.

[33]{}

N. Tombros, S. Tanabe, A. Veligura, C. Jozsa, M. Popinciuc, H. T. Jonkman, and B. J. van Wees, Phys. Rev. Lett. [**101**]{}, 046601 (2008).

P. Giannozzi [*et al.*]{}, http://www.quantum-espresso.org

A. Pasquarello, K. Laasonen, R. Car, C. Lee, and D. Vanderbilt, Phys. Rev. Lett. [**69**]{}, 1982 (1992); K. Laasonen, A. Pasquarello, R. Car, C. Lee, and D. Vanderbilt, Phys. Rev. B [**47**]{}, 10142 (1993).

Yu. V. Sharvin, Zh. Eksp. Teor. Phys. [**48**]{}, 984 (1965); Sov. Phys. JETP [**21**]{}, 655 (1965).
---
abstract: 'In large-scale multi-agent systems, the large number of agents and complex game relationships cause great difficulty for policy learning. Therefore, simplifying the learning process is an important research issue. In many multi-agent systems, the interactions between agents often happen locally, which means that agents neither need to coordinate with all other agents nor need to coordinate with others all the time. Traditional methods attempt to use pre-defined rules to capture the interaction relationships between agents. However, these methods cannot be directly used in a large-scale environment due to the difficulty of transforming the complex interactions between agents into rules. In this paper, we model the relationships between agents by a complete graph and propose a novel game abstraction mechanism based on a two-stage attention network (G2ANet), which can indicate whether there is an interaction between two agents and the importance of the interaction. We integrate this detection mechanism into graph neural network-based multi-agent reinforcement learning for conducting game abstraction and propose two novel learning algorithms, GA-Comm and GA-AC. We conduct experiments in Traffic Junction and Predator-Prey. The results indicate that the proposed methods can simplify the learning process and meanwhile achieve better asymptotic performance compared with state-of-the-art algorithms.'
author:
- |
    Yong Liu, ^1^[^1] Weixun Wang, ^2^Yujing Hu, ^3^ Jianye Hao, ^2,4^Xingguo Chen, ^5^ Yang Gao^1^\
    ^1^[National Key Laboratory for Novel Software Technology, Nanjing University]{}\
    ^2^[Tianjin University]{}, ^3^[NetEase Fuxi AI Lab]{}, ^4^[Noah’s Ark Lab, Huawei]{}\
    ^5^[Jiangsu Key Laboratory of Big Data Security & Intelligent Processing,]{}\
    [Nanjing University of Posts and Telecommunications]{}\
    lucasliunju@gmail.com, {wxwang, jianye.hao}@tju.edu.cn\
    huyujing@corp.netease.com, chenxg@njupt.edu.cn, gaoy@nju.edu.cn\
bibliography:
- 'reference.bib'
title: 'Multi-Agent Game Abstraction via Graph Attention Neural Network'
---

Introduction
============

Multi-agent reinforcement learning (MARL) has shown great success in solving sequential decision-making problems with multiple agents. Recently, with the advance of deep reinforcement learning (DRL) [@mnih2016asynchronous; @schulman2017proximal], the combination of deep learning and multi-agent reinforcement learning has also been widely studied [@foerster2018counterfactual; @sunehag2018value; @rashid2018qmix]. Recent work has focused on multi-agent reinforcement learning in large-scale multi-agent systems [@yang2018mean; @chen2018factorized], in which the large number of agents and the complexity of interactions pose a significant challenge to the policy learning process. Therefore, simplifying the learning process is a crucial research area. Earlier work focuses on loosely coupled multi-agent systems and adopts techniques such as game abstraction and knowledge transfer to speed up multi-agent reinforcement learning [@GuestrinLP02; @KokV04; @de2010learning; @MeloV11; @hu2015learning; @yu2015multiagent; @yong2019ijcai]. However, in a large multi-agent environment, agents are often related to some other agents rather than independent, which makes the previously learnt single-agent knowledge of limited use.
Recent work focuses on achieving game abstraction through pre-defined rules (e.g., the distance between agents) [@yang2018mean; @jiang2018graph]. However, it is difficult to define the interaction relationships between agents through pre-defined rules in large-scale multi-agent systems. In this paper, we propose to automatically learn the interaction relationships between agents through an end-to-end model design, based on which game abstraction can be achieved.

![Game Abstraction based on two-stage attention mechanism and Graph Neural Network (GNN).[]{data-label="Fig: idea"}](paper-idea.pdf){width="3.3in"}

The key to game abstraction is learning the relationships between agents. Recent work uses the soft-attention mechanism to learn the importance distribution of the other agents for each agent [@jiang2018learning; @iqbal2018actor]. However, the final softmax output means that the importance weight of each agent still depends on the weights of the other agents. That is to say, these methods cannot truly learn the relationships between agents, nor can they ignore irrelevant agents to simplify policy learning. As shown in Figure \[Fig: idea\], we represent all agents as a complete graph and propose a novel multi-agent game abstraction algorithm based on a two-stage attention network (G2ANet), where hard-attention is used to cut the unrelated edges and soft-attention is used to learn the importance weights of the edges. In addition, we use a GNN to obtain the contribution from other agents, which includes the information of the other agents needed to achieve coordination, and apply the mechanism in several algorithms. We list the main contributions as follows:

- We propose a novel two-stage attention mechanism, G2ANet, for game abstraction, which can be combined with a graph neural network (GNN).

- By combining G2ANet with a policy network and a $Q$-value network respectively, we propose a communication-based MARL algorithm GA-Comm and an actor-critic (AC) based algorithm GA-AC.
- Experiments are conducted in Traffic Junction and Predator-Prey. The results show that our methods can simplify the learning process and meanwhile achieve better asymptotic performance compared with state-of-the-art algorithms.

Background
==========

We review some key concepts in multi-agent reinforcement learning and related work in this section.

Markov Game and Game Abstraction
--------------------------------

Markov game, also known as stochastic game, is widely adopted as the model of multi-agent reinforcement learning (MARL). It can be treated as the extension of the Markov decision process (MDP) to the multi-agent setting.

\[def: Markov Game\] An n-agent (n $\ge 2$) Markov game is a tuple $\langle N, S, \{A_{i} \}^{n}_{i=1}, \{R_{i}\}^{n}_{i=1}, T \rangle$, where $N$ is the set of agents, $S$ is the state space, and $A_{i}$ is the action space of agent $i$ ($i=1,\dots,n$). Let $A=A_{1} \times A_{2} \times \cdots \times A_{n}$ be the joint action space. $R_i: S \times A \rightarrow \mathfrak{R}$ is the reward function of agent $i$ and $T: S\times A \times S \rightarrow [0,1]$ is the transition function.

In a Markov game, each agent attempts to maximize its expected sum of discounted rewards, $E\{\sum_{k=0}^{\infty}\gamma^{k}r_{i,t+k} \}$, where $r_{i,t+k}$ is the reward received $k$ steps into the future by agent $i$ and $\gamma$ is the discount factor. Denote the policy of agent $i$ by $\pi_i: S \times A_i \rightarrow [0,1]$ and the joint policy of all agents by $\pi = (\pi_{1}, \dots , \pi_{n})$. The state-action value function of an agent $i$ under a joint policy $\pi$ can be defined as: $$Q_{i}^{\pi}(s,\vec a) = E_{\pi} \left \{\sum_{k=0}^{\infty}\gamma^{k}r_{i}^{t+k}|s_{t}=s, \vec a_{t} = \vec a \right \},$$ where $\vec a \in A$ represents a joint action and $r_{i}^{t+k}$ is the reward received by agent $i$ at time step $(t + k)$. However, since $Q_{i}^{\pi}$ depends on the actions of all agents, the concept of an optimal policy should be replaced with that of an optimal joint policy.
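For concreteness, the discounted return $\sum_{k}\gamma^{k}r_{t+k}$ that each agent maximizes can be computed for a sampled trajectory with a small helper (a hypothetical illustration, not part of any algorithm in this paper):

```python
def discounted_return(rewards, gamma):
    """Sum of discounted rewards sum_k gamma^k * r_{t+k} for one sampled
    trajectory of a single agent, accumulated backwards for stability."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For example, rewards `[1, 1, 1]` with `gamma = 0.5` give `1 + 0.5 + 0.25 = 1.75`.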
### Game Abstraction

The main idea of game abstraction is to simplify the problem model of multi-agent reinforcement learning (Markov game) to a smaller game, so as to reduce the complexity of solving (or learning) the game equilibrium policy.

Attention Mechanism
-------------------

Attention is widely used in many AI fields, including natural language processing [@bahdanau2014neural], computer vision [@wang2018non], and so on. Soft and hard attention are the two major types of attention mechanisms.

### Soft-Attention

Soft attention calculates an importance distribution over elements. Specifically, the soft attention mechanism is fully differentiable and thus can be easily trained by end-to-end back-propagation. The softmax function is a common activation function for this purpose. However, it usually assigns nonzero probabilities to unrelated elements, which weakens the attention given to the truly important elements.

### Hard-Attention

Hard attention selects a subset of the input elements, which forces a model to focus solely on the important elements, entirely discarding the others. However, the hard attention mechanism selects elements by sampling and is thus non-differentiable. Therefore, it cannot learn the attention weights directly through end-to-end back-propagation.

Deep Multi-Agent Reinforcement Learning
---------------------------------------

With the development of deep reinforcement learning, recent work in MARL has started moving from tabular methods to deep learning methods. In this paper, we select the communication-based algorithms CommNet [@sukhbaatar2016learning] and IC3Net [@singh2018learning], and the actor-critic based algorithms MADDPG [@lowe2017multi] and MAAC [@iqbal2018actor] as baselines.

### CommNet

CommNet allows communication between agents over a channel where an agent is provided with the average of the hidden state representations of the other agents as a communication signal.

### IC3Net

IC3Net can learn when to communicate based on a gating mechanism.
The gating mechanism allows agents to block their communication and can be treated as a simple form of hard-attention.

### MADDPG

MADDPG adopts the framework of centralized training with decentralized execution, which allows the policies to use extra information at training time. It is a simple extension of actor-critic policy gradient methods where the critic is augmented with extra information about the other agents, while the actor only has access to local information.

### MAAC

MAAC learns a centralized critic with a soft-attention mechanism. The mechanism is able to dynamically select which agents to attend to at each time step.

Our Method
==========

In this section, we propose a novel game abstraction approach based on a two-stage attention mechanism (G2ANet). Based on this mechanism, we propose two novel MARL algorithms (GA-Comm and GA-AC).

G2ANet: Game Abstraction Based on Two-Stage Attention
-----------------------------------------------------

We construct the relationship between agents as a graph, where each node represents a single agent, and all nodes are connected in pairs by default. We define the graph as an Agent-Coordination Graph.

\[Def:ACG\] (Agent-Coordination Graph) The relationship between agents is defined as an undirected graph $G=(N,E)$, consisting of the set $N$ of nodes and the set $E$ of edges, which are unordered pairs of elements of $N$. Each node represents an agent entity, and each edge represents the relationship between the two adjacent agents.

In large-scale multi-agent systems, the number of agents is large, and not all agents need to interact with each other. In this paper, we try to identify unrelated agents by learning the relationships between agents, and perform game abstraction according to the learnt relationships. The simplest way of performing game abstraction is to design artificial rules.
@yang2018mean proposed a mean-field based multi-agent learning algorithm, where each agent has its own vision and only needs to interact with the agents within its vision [@yang2018mean]. However, such a mean-field MARL algorithm requires strong prior knowledge of the environment and may not be suitable for application in complex environments. In a large-scale MAS, the interactions between agents are more complicated, pre-defined rules are difficult to obtain, and they cannot be adjusted dynamically based on state transitions. Inspired by the attention mechanism [@bahdanau2014neural; @ba2014multiple; @mnih2014recurrent; @xu2015show; @vaswani2017attention], we first propose the two-stage attention game abstraction algorithm called G2ANet, which learns the interaction relationships between agents through hard-attention and soft-attention mechanisms. Recent work has tried to combine MARL with the attention mechanism [@jiang2018learning; @iqbal2018actor]. However, the main idea is to use the soft-attention mechanism to learn the importance distribution of all other agents to the current agent through the softmax function: $$w_{k}=\frac{exp(f(T,e_{k}))}{\sum_{i=1}^{K}exp(f(T,e_{i}))},$$ where $e_{k}$ is the feature vector of agent $k$, $T$ is the current agent's feature vector, and $w_{k}$ is the importance weight for agent $k$. However, the output of the softmax function is a relative value and cannot truly model the relationship between agents. In addition, this method cannot directly reduce the number of agents that need to interact, since the unrelated agents will also obtain an importance weight. Moreover, the softmax function usually assigns small but nonzero probabilities to trivial agents, which weakens the attention given to the few truly significant agents. In this paper, we propose a novel attention mechanism based on two stages of attention (G2ANet) to solve the above problems.
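The limitation discussed above is easy to see numerically: a softmax over matching scores $f(T,e_k)$ always assigns a strictly positive weight to every agent, including plainly irrelevant ones. A small sketch (the scores and the helper name are illustrative, not from the paper's model):

```python
import math

def soft_attention_weights(scores):
    """Softmax over matching scores f(T, e_k): every agent receives a
    strictly positive weight, even when its score is very low."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# One highly relevant agent and three irrelevant ones (illustrative scores).
w = soft_attention_weights([5.0, -3.0, -3.0, -3.0])
# w[0] dominates, yet w[1..3] remain nonzero -- the irrelevant agents
# can never be pruned by the softmax alone.
```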
![Two-Stage Attention Neural Network.[]{data-label="Fig: Two-Stage-Attention"}](two-stage-attention.pdf){width="3.3in"}

We consider a partially observable environment, where at each time-step $t$, each agent $i$ receives a local observation $o_i^t$, which is the property of agent $i$ in the agent-coordination graph $G$. The local observation $o_i^t$ is encoded into a feature vector $h_i^t$ by an MLP. Then, we use the feature vector $h_i^t$ to learn the relationships between agents by an attention mechanism. The hard attention model outputs a one-hot vector. That is, we can learn whether the edge between nodes $i$ and $j$ exists in the graph $G$, and hence which agents each agent needs to interact with. In this way, the policy learning problem is simplified into several smaller problems and a preliminary game abstraction can be achieved. In addition, we find that each agent plays a different role for a specific agent. That is, the weight of each edge in the graph $G$ is different. Inspired by [@vaswani2017attention], we train a soft-attention model to learn the weight of each edge. In this way, we can get a sub-graph $G_{i}$ for agent $i$, where agent $i$ is only connected with the agents it needs to interact with, and the weight on each edge describes the importance of the relationship. For the sub-graph $G_{i}$, we can use a Graph Neural Network (GNN) to obtain a vector representation, which represents the contribution from the other agents to agent $i$. Moreover, G2ANet has good generality and can be combined with communication-based algorithms [@sukhbaatar2016learning; @singh2018learning] and AC-based algorithms [@lowe2017multi; @iqbal2018actor]. We will discuss this in the next subsection. The two-stage attention mechanism is shown in Figure \[Fig: Two-Stage-Attention\]. First, we use the hard-attention mechanism to learn the hard weight $W_{h}^{i,j}$, which determines whether there is an interaction relationship between agents $i$ and $j$.
In this paper, we use an LSTM network to achieve this, where each time-step outputs a weight (0 or 1) for the agent pair $(i,j)$, where $j \in \{1,...,n\}$ and $i \neq j$. For agent $i$, we merge the embedding vectors of agents $i$ and $j$ into a feature $(h_{i},h_{j})$ and input the feature into the LSTM model: $$h_{i,j} = f(LSTM(h_{i},h_{j})),$$ where $f(\cdot)$ is a fully connected layer for embedding. However, the output of a traditional LSTM network depends only on the inputs of the current and previous time-steps and ignores the input information of later time-steps. That is to say, the order of the inputs (agents) plays an important role in the process, and the output weight cannot take advantage of all agents’ information. We regard this as short-sighted, and therefore select a Bi-LSTM model instead. For example, the relationship weight between agent $i$ and agent $j$ may also depend on the information of agent $k$ in the environment, where $k \in \{1,...,n\}$ and $k \notin \{i,j\}$. In addition, hard-attention is often unable to achieve back-propagation of gradients due to the sampling process. We use the gumbel-softmax [@gumbel] function to solve this: $$W_{h}^{i,j} = gum(f(LSTM(h_{i},h_{j}))),$$ where $gum(\cdot)$ represents the gumbel-softmax function. By the hard-attention mechanism, we can get a sub-graph $G_{i}$ for agent $i$, where agent $i$ is connected only with the agents it needs to coordinate with. Then we use soft-attention to learn the weight of each edge in $G_{i}$. As shown in Figure \[Fig: Two-Stage-Attention\], the soft-attention weight $W_{s}^{i,j}$ compares the embedding $e_{j}$ with $e_{i}$ using the query-key system (key-value pair) and passes the matching value between these two embeddings into a softmax function: $$W_{s}^{i,j} \propto exp(e_{j}^{T}W_{k}^{T}W_{q}e_{i}W_{h}^{i,j}),$$ where $W_{k}$ transforms $e_{j}$ into a key, $W_{q}$ transforms $e_{i}$ into a query, and $W_{h}^{i,j}$ is the hard-attention value.
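A forward-pass sketch of gumbel-softmax sampling may help make the hard-attention step concrete. This NumPy-only version shows the sampling and the discrete connect/ignore decision; the straight-through gradient trick that makes it trainable requires an autograd framework and is not shown here, and the function name and shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0, hard=True):
    """Sample from a categorical distribution via the gumbel-softmax trick.
    With hard=True the forward pass returns a one-hot vector (the discrete
    connect/ignore decision for one edge); with hard=False it returns the
    relaxed, differentiable sample."""
    # Gumbel(0, 1) noise added to the logits.
    g = -np.log(-np.log(rng.uniform(size=len(logits))))
    y = np.exp((np.asarray(logits, dtype=float) + g) / tau)
    y = y / y.sum()                      # relaxed (soft) sample
    if hard:
        one_hot = np.zeros_like(y)
        one_hot[np.argmax(y)] = 1.0      # discretize in the forward pass
        return one_hot
    return y
```

In a real implementation the hard one-hot output would be combined with the straight-through estimator so that gradients flow through the soft sample during back-propagation.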
Finally, the soft-attention weight $W_{s}^{i,j}$ is taken as the final weight of the edge, which we define as $W^{i,j}$.

Learning Algorithm Based on Game Abstraction
--------------------------------------------

Through the two-stage attention model, we obtain a reduced graph in which each agent (node) is connected only to the agents (nodes) it needs to interact with. For example, in Figure \[Fig: idea\], we can get a sub-graph $G_i$ for agent $i$, where the center node is agent $i$ (node $i$). GNNs have a powerful encoding ability. If each node represents the agent’s encoding in the sub-graph $G_i$, we can use a GNN to get a joint encoding for agent $i$, which defines the contribution of all other agents to the current agent $i$. With this joint vector encoding, our method can make better decisions. As mentioned earlier, our two-stage attention-based game abstraction method is a general mechanism. In this paper, we combine G2ANet with a policy network and a $Q$-value network respectively, and propose two learning algorithms: (1) **Policy network in communication model (GA-Comm)**: Each agent considers the communication vectors of all other agents when making decisions; (2) **Critic network in actor-critic model (GA-AC)**: The critic network of each agent considers the state and action information of all other agents when calculating its $Q$-value in AC-based methods.

![Communication model based on Game Abstraction[]{data-label="Fig: GA-Comm"}](GA-IC3Net.pdf){width="3.5in"}

### Policy Network Based on Game Abstraction {#GA-Comm}

Much related work focuses on learning multi-agent communication [@sukhbaatar2016learning; @singh2018learning], most of which achieves communication through an aggregation function (e.g., an average or maximum function) that combines all other agents’ communication vectors into one vector and passes it to each agent. In this way, each agent can receive all agents’ information and achieve communication.
However, there is no need for each agent to communicate with all other agents in most environments. Frequent communication causes high computational cost and increases the difficulty of policy learning. In this paper, we combine the novel game abstraction mechanism G2ANet with a policy network and propose a novel communication-based MARL learning algorithm, GA-Comm. As shown in Figure \[Fig: GA-Comm\], $o_{i}$ represents the observation of agent $i$, and its policy takes the form: $$a_{i} = \pi(h_{i},x_{i}),$$ where $\pi$ is the action policy of an agent, $h_{i}$ is the observation feature of agent $i$, and $x_{i}$ is the contribution from the other agents to agent $i$. In this paper, we use an LSTM layer to extract the feature: $$h_{i},s_{i} = LSTM(e(o_{i}),h_{i}^{'},s_{i}^{'}),$$ where $o_{i}$ is the observation of agent $i$ at time-step $t$ and $e(\cdot)$ is an encoder function parameterized by a fully-connected neural network. Also, $h_{i}$ and $s_{i}$ are the hidden and cell states of the LSTM. As for the contribution to agent $i$ from the other agents, we first use the two-stage attention mechanism to select which agents agent $i$ needs to communicate with and obtain their importance: $$W_{h}^{i,j} = M_{hard}(h_{i},h_{j}), \\ W_{s}^{i,j} = M_{soft}(W_{h},h_{i},h_{j}),$$ where $W_{h}^{i,j}$ is the hard-attention value and $W_{s}^{i,j}$ is the soft-attention value calculated from the hidden states $h_{i}$ and $h_{j}$. $M_{hard}$ is the hard-attention model and $M_{soft}$ is the soft-attention model. In this way, we can get the contribution $x_{i}$ from the other agents by a GNN. We use a simple calculation, a weighted sum of the other agents’ contributions given by the two-stage attention mechanism: $$x_{i} = \sum_{j \ne i}w_{j}h_{j} = \sum_{j \ne i}W_{h}^{i,j}W_{s}^{i,j}h_{j}.$$ Finally, we get the action $a_{i}$ for agent $i$. During the training process, we train the policy $\pi$ with REINFORCE [@williams1992simple].
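A toy numerical example of the aggregation $x_{i} = \sum_{j \ne i}W_{h}^{i,j}W_{s}^{i,j}h_{j}$ (all numbers illustrative, not learned weights): the hard weights prune two of four agents, the soft weights renormalize over the surviving edges, and the contribution $x_i$ is the resulting weighted sum of hidden features:

```python
import numpy as np

# Hidden features h_j for n = 4 other agents (illustrative values).
h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 0.0]])

# For agent i: hard weights prune agents 0 and 3, keep agents 1 and 2.
W_hard = np.array([0.0, 1.0, 1.0, 0.0])
scores = np.array([0.3, 1.2, 0.7, 2.5])        # query-key matching scores

# Softmax restricted to the surviving edges (pruned edges get weight 0).
masked = np.where(W_hard > 0, np.exp(scores), 0.0)
W_soft = masked / masked.sum()

# x_i = sum_j W_h^{i,j} W_s^{i,j} h_j
x_i = (W_hard * W_soft) @ h
```

Note that the pruned agents contribute nothing to `x_i` regardless of their matching scores, which is exactly what the pure soft-attention approach cannot guarantee.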
### Actor-Critic Network Based on Game Abstraction {#GA-AC}

Inspired by MAAC [@iqbal2018actor], we propose a novel learning algorithm based on G2ANet. To calculate the $Q$-value $Q_i(o_{i},a_{i})$ for agent $i$, the critic network receives the observations $o=(o_{1},...,o_{N})$ and actions $a=(a_{1},...,a_{N})$ of all agents. $Q_{i}(o_{i},a_{i})$ is the value function for agent $i$: $$Q_{i}(o_{i},a_{i}) = f_{i}(g_{i}(o_{i},a_{i}),x_{i}),$$ where $f_{i}$ and $g_{i}$ are multi-layer perceptrons (MLPs) and $x_{i}$ is the contribution from the other agents, which is computed by a GNN. In this paper, we use a simple method, a weighted sum of each agent’s value based on our two-stage attention mechanism: $$x_{i} = \sum_{j \ne i}w_{j}v_{j} = \sum_{j \ne i}w_{j}h(Vg_{j}(o_{j},a_{j})),$$ where the value $v_j$ is an embedding of agent $j$, encoded with an embedding function and then transformed by a shared matrix $V$, and $h(\cdot)$ is an element-wise nonlinearity. The attention weight $w_{j}$ is computed by the two-stage attention mechanism, which compares the embedding $e_{j}$ with $e_{i} = g_{i}(o_{i},a_{i})$ and passes the relation value between these two embeddings into a softmax function: $$w_{j} = W_{h}^{i,j}W_{s}^{i,j} \propto exp(h(BiLSTM_{j}(e_{i},e_{j}))e_{j}^{T}W_{k}^{T}W_{q}e_{i}),$$ where $W_q$ transforms $e_i$ into a query and $W_k$ transforms $e_j$ into a key. In this way, we can obtain the attention weight $w_{j}$ and calculate the $Q$-value for each agent.

![Actor-Critic model based on Game Abstraction[]{data-label="Fig: GA-AC"}](GA-AC.pdf){width="3.7in"}

![image](Exp_Traffic_Junction.pdf){width="7in"}

Experiments
===========

In this section, we evaluate the performance of our game abstraction algorithms in two scenarios. The first is conducted in Traffic Junction [@singh2018learning], where we use the policy-based game abstraction algorithm GA-Comm and the baselines are CommNet and IC3Net.
The second is Predator-Prey in the Multi-Agent Particle Environment [@lowe2017multi], where we use the $Q$-value based game abstraction algorithm GA-AC; the baselines are MADDPG and MAAC. Traffic Junction ---------------- The simulated traffic junction environment from [@singh2018learning] consists of cars moving along pre-assigned, potentially intersecting routes on one or more road junctions. “Success” means that no collision occurs at a time-step, so the success rate can be computed from the number of time-steps and collisions (failures) in each episode. The total number of cars is fixed at $N_{max}$, and new cars are added to the environment with probability $p_{arrive}$ at every time-step. The task has three difficulty levels, which vary in the number of possible routes, entry points, and junctions. Following the setting of IC3Net [@singh2018learning], the number of agents in the easy, medium, and hard environments is 5, 10, and 20, respectively. We make the task harder by setting the vision to zero at all three difficulty levels, so each agent's local observation contains only its own position, and each agent must obtain other agents' information through the communication mechanism to achieve coordination. The action space for each car is gas and brake, and the reward consists of a linear time penalty $-0.01\tau$, where $\tau$ is the number of time-steps since the car became active, and a collision penalty $r_{collision} = -10$. Figure \[Fig: Exp\_TJ\_Result\] illustrates the success rate per episode attained by the various methods, where GA-Comm is our communication model based on G2ANet and IC3Net is a communication method based on one-stage hard attention. Table 1 reports the success rates on the three levels (easy, medium, and hard), averaged over 10 runs; the variance of the 10 repeated runs appears as the shaded area in Figure \[Fig: Exp\_TJ\_Result\].
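The per-step reward described above can be written out directly (a minimal sketch; the function name is ours):

```python
def tj_step_reward(tau, collided):
    """Per-step Traffic Junction reward: a linear time penalty -0.01 * tau
    (tau = time-steps since the car became active) plus a collision
    penalty r_collision = -10 whenever a collision occurs."""
    reward = -0.01 * tau
    if collided:
        reward += -10.0
    return reward
```

The time penalty pushes cars to clear the junction quickly, while the much larger collision penalty dominates whenever coordination fails.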
Our proposed game abstraction approach is competitive with the other methods. Following the setting of IC3Net [@singh2018learning], we train the model with curriculum learning, gradually increasing the number of agents in the environment to simplify learning. As shown in Figure \[Fig: Exp\_TJ\_Result\], GA-Comm performs better than all baseline methods in all modes. Our approach not only achieves a higher success rate but is also more stable. Moreover, as the environment becomes harder (more junctions) and the number of agents grows, the advantage becomes more pronounced: the success rate of our method is about 6%, 7%, and 11% higher than IC3Net on the easy, medium, and hard levels, respectively, which verifies that our method is increasingly effective as the difficulty of the environment grows. This further illustrates the applicability of our game abstraction mechanism to large-scale multi-agent systems. ![Agents with the same color represent a group, and each agent only needs to interact with the agents in its group.[]{data-label="Fig: Exp_TJ_Display"}](Game_Relation.pdf){width="3.7in"} ![image](Result_TJ.pdf){width="7in"} At different time-steps in an episode, the relationship between agents is constantly changing, and our method learns adaptive, dynamic attention values. To analyze the influence of the game abstraction mechanism on the learning results, the game relationship between agents is shown in Figure \[Fig: Exp\_TJ\_Display\](a), which depicts the attention values at one particular time-step. Each agent has its own color (e.g., green, blue, yellow, red, and purple), and agents of the same color form a group. We observe that each agent selects its partners and forms a group (purple marks an independent agent) while ignoring the unrelated agents.
For example, the agents are divided into four groups, each gathered near a junction. For agent $a$, the green agents are its teammates, concentrated at one junction, and it can ignore the other agents when making a decision. In addition, the agents within a group differ in importance. Figure \[Fig: Exp\_TJ\_Display\](b-c) shows the final attention value distribution for agent $a$ (left) and agent $k$. Agents $a$, $c$, $d$, and $e$ are in the same group, and for agent $a$ the importance of agents $c$ and $d$ is larger than that of agent $e$. Similarly, for agent $k$ the importance of agents $l$ and $m$ is larger than that of agents $n$ and $o$. We conclude that game abstraction first ignores the unrelated agents and then learns an importance distribution over a smaller number of agents. In this way, we avoid directly learning the importance distribution over all agents in a large-scale MAS, and the resulting attention values are more accurate.

  Algorithm      Easy        Medium      Hard
  ----------- ----------- ----------- -----------
  CommNet       93.5%       78.8%       6.5%
  IC3Net        93.2%       90.8%       70.9%
  GA-Comm     **99.7%**   **97.6%**   **82.3%**

  : Success Rate in the Traffic Junction \[Tab: Exp\_TJ\]

Multi-Agent Particle Environment -------------------------------- The second scenario is the Multi-Agent Particle Environment. As shown in Figure \[Fig: Exp\_PP\_Result\](a), we choose $predator-prey$ as the test environment, where the adversary agents (red) are slower and need to capture the good agents (green), while the good agents are faster and need to escape. We fix the policy (DQN) of the good agents. Following the setting of MADDPG, adversary agents receive a reward of +10 when they capture a good agent.
![Experimental result in Predator-Prey.[]{data-label="Fig: Exp_PP_Result"}](Exp_Result_PP_1.pdf){width="\linewidth"} We trained the model with $N_{a} = 5$ and $N_{g} = 2$ for 1500 episodes, where $N_{a}$ is the number of adversary agents and $N_{g}$ is the number of good agents. The adversary agents need to form multiple groups to capture all the good agents. Figure \[Fig: Exp\_PP\_Result\] shows the learning curves of each agent's average reward, where MADDPG is the algorithm proposed by @lowe2017multi and MAAC is the soft-attention based algorithm proposed by @iqbal2018actor. GA-AC outperforms all the baselines in terms of mean reward. Our method learns more slowly in the early stage than the soft-attention method MAAC, which we attribute to the more complex architecture of our two-stage attention network; its better final performance verifies the effectiveness of our game abstraction mechanism. ![Attention value distribution. (a) is the attention distribution for agent 1, (b) is the attention distribution for agent 4.[]{data-label="Fig: PP_Attention"}](Exp_Attention.pdf){width="\linewidth"} As shown in Figure \[Fig: Exp\_PP\_Result\], the five adversary agents are divided into two groups to chase the two good agents. Each agent only needs to interact with the agents in its own group, which effectively avoids interference from the unrelated agents. This also shows that our game abstraction based algorithm GA-AC has learned a reasonable grouping. Figure \[Fig: PP\_Attention\] gives the attention value distribution for agent $1$ (Figure \[Fig: PP\_Attention\](a)) and agent $4$ (Figure \[Fig: PP\_Attention\](b)). Agents $1$, $2$, and $3$ are in the same group, and for agent $1$ the importance of agents $2$ and $3$ is larger than that of agents $4$ and $5$. Similarly, for agent $4$ the importance of agent $5$ is larger than that of agents $1$, $2$, and $3$.
We conclude that the game abstraction method proposed in this paper models the game relationship between agents well, avoids interference from unrelated agents, and accelerates policy learning. Conclusions =========== In this paper, we focus on simplifying policy learning in large-scale multi-agent systems. We learn the relationships between agents and achieve game abstraction through a novel attention mechanism. Although the relationships between agents change constantly over an episode, our method learns adaptive, dynamic attention values. Our major contributions are the novel two-stage attention mechanism G2ANet and the two game abstraction based learning algorithms GA-Comm and GA-AC. Experimental results in Traffic Junction and Predator-Prey show that, with the novel game abstraction mechanism, GA-Comm and GA-AC achieve better performance than state-of-the-art algorithms. Acknowledgments =============== This work is supported by the Science and Technology Innovation 2030 “New Generation Artificial Intelligence” Major Project (No. 2018AAA0100905), the National Natural Science Foundation of China (Nos. 61432008, 61702362, U1836214, 61403208), and the Collaborative Innovation Center of Novel Software Technology and Industrialization. [^1]: Equal contribution, corresponding author. This work was partially done while Weixun Wang was an intern at NetEase Fuxi AI Lab.
--- author: - | Joseph Don <span style="font-variant:small-caps;">Parker</span>$^{1}$, Masahide <span style="font-variant:small-caps;">Harada</span>$^{2}$, Hirotoshi <span style="font-variant:small-caps;">Hayashida</span>$^{1}$, Kosuke <span style="font-variant:small-caps;">Hiroi</span>$^{2}$,\ Tetsuya <span style="font-variant:small-caps;">Kai</span>$^{2}$, Yoshihiro <span style="font-variant:small-caps;">Matsumoto</span>$^{1}$, Takeshi <span style="font-variant:small-caps;">Nakatani</span>$^{2}$, Kenichi <span style="font-variant:small-caps;">Oikawa</span>$^{2}$,\ Mariko <span style="font-variant:small-caps;">Segawa</span>$^{2}$, Takenao <span style="font-variant:small-caps;">Shinohara</span>$^{2}$, Yuhua <span style="font-variant:small-caps;">Su</span>$^{2}$, Atsushi <span style="font-variant:small-caps;">Takada</span>$^{3}$, Taito <span style="font-variant:small-caps;">Takemura</span>$^{3}$,\ Tomoyuki <span style="font-variant:small-caps;">Taniguchi</span>$^{3}$, Toru <span style="font-variant:small-caps;">Tanimori</span>$^{3}$, and Yoshiaki <span style="font-variant:small-caps;">Kiyanagi</span>$^{4}$ title: 'Development of Energy-Resolved Neutron Imaging Detectors at RADEN' --- Introduction ============ The Energy-Resolved Neutron Imaging System, RADEN [@shinohara16], located at beam line BL22 of the Materials and Life Science Experimental Facility (MLF) at J-PARC in Japan, is designed to take full advantage of the high-intensity, pulsed neutron beam of the MLF to perform not only conventional radiography/tomography, but also more recently developed [*energy-resolved*]{} neutron imaging techniques. 
These energy-resolved techniques enable observation of the macroscopic distribution of microscopic properties within bulk materials [*in situ*]{}, including crystallographic structure and internal strain (Bragg-edge transmission [@sato10]), nuclide-specific density and temperature distributions (neutron resonance absorption [@sato09]), and internal/external magnetic fields (pulsed, polarized neutron imaging [@shinohara11]), by analysis of the energy-dependent neutron transmission point-by-point over a sample. Utilizing the low-divergence, pulsed neutron beam at RADEN, we combine advanced, two-dimensional neutron detectors featuring fine time resolution with the determination of neutron energy by the time-of-flight method to allow observation of the energy-dependent transmission simultaneously at all points in a single measurement. The quantitative nature of these techniques and potentially short measurement times make energy-resolved neutron imaging at intense, pulsed neutron sources very attractive for both scientific and industrial applications. Carrying out such measurements in the high-rate, high-background environment of a pulsed spallation neutron source such as the J-PARC MLF requires detectors with sub-$\mu$s time and sub-mm spatial resolutions, excellent background rejection, and high rate capability. At RADEN, we use cutting-edge detector systems, which have been developed in Japan, employing micro-pattern detectors or fast Li-glass scintillators coupled with high-speed, FPGA (Field Programmable Gate Array)-based data acquisition systems. As opposed to conventional CCD camera detectors, these [*event-type*]{} detectors measure each individual neutron event to provide the necessary time resolution and event-by-event background rejection.
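For reference, the time-of-flight determination of neutron energy amounts to $E = \frac{1}{2} m_n (L/t)^2$ for flight path $L$ and arrival time $t$. A minimal sketch (the 18 m flight path used in the check is an illustrative value, not the actual BL22 moderator-to-detector distance):

```python
# Neutron kinetic energy from time-of-flight: E = (1/2) m_n (L / t)^2.
NEUTRON_MASS = 1.674927e-27  # neutron mass [kg]
JOULE_PER_EV = 1.602177e-19  # conversion factor [J/eV]

def tof_to_energy_mev(flight_path_m, tof_s):
    """Neutron kinetic energy in meV for a flight path [m] and TOF [s]."""
    v = flight_path_m / tof_s
    return 0.5 * NEUTRON_MASS * v**2 / JOULE_PER_EV * 1e3
```

A thermal (25.3 meV) neutron travels at about 2200 m/s, so resolving meV-scale energy differences over a path of tens of metres requires exactly the sub-$\mu$s timing discussed in the text.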
Furthermore, the micro-pattern detectors, by virtue of sub-mm strip pitches, are able to operate at Mcps (mega-counts-per-second) rates and provide spatial resolutions on par with conventional CCD camera systems, while the fast decay time of about 100 ns for Li-glass scintillator potentially allows high overall count rates on the order of 100 Mcps. In this paper, we introduce the event-type neutron imaging detectors in use at RADEN and discuss our ongoing detector development activities, including results of tests carried out at RADEN. Event-Type Detectors at RADEN ============================= The event-type detector systems currently available at RADEN include two micro-pattern detectors, the $\mu$NID (Micro-pixel chamber based Neutron Imaging Detector) [@parker13a; @parker13b] and nGEM (boron-coated Gas Electron Multiplier) [@uno12] developed at Kyoto University and KEK, respectively, along with a pixelated Li-glass scintillator detector, the LiTA12 ($^6$Li Time Analyzer, model 2012) [@satoh15] from KEK. The main features of these detectors are listed in Table \[tab:dets\]. The micro-pattern detectors have a detection area of $10 \times 10$ cm$^2$ and are based on gaseous time projection chambers with charge amplification provided by a micro-pixel chamber ($\mu$PIC) [@ochi01] in the case of the $\mu$NID and multiple, thin-foil Gas Electron Multipliers (GEMs) [@gemref] for the nGEM. The $\mu$NID incorporates $^3$He in the gas mixture for 26% efficiency at a neutron energy of $E_n = 25.3$ meV. The nGEM, on the other hand, uses a 1.2-$\mu$m thick $^{10}$B coating ($>$98% purity) deposited on the aluminum drift cathode and both sides of one GEM foil to achieve 10% efficiency at $E_n = 25.3$ meV. The LiTA12 is comprised of a $16 \times 16$ array of $^6$Li-impregnated glass scintillator pixels (type GS20) matched to a Hamamatsu H9500 multi-anode photomultiplier with a 3 mm anode pitch and a total area of $4.9 \times 4.9$ cm$^2$. 
The Li-glass scintillator provides a thermal neutron efficiency of more than 48% per pixel at $E_n = 25.3$ meV, with an overall efficiency of 23% when including dead space between the pixels. All detectors feature fast, all-digital FPGA-based data acquisition with data transfer over Gigabit Ethernet to provide for the necessary time resolution and high-rate operation required at the intense pulsed neutron source of the MLF. In preliminary testing reported in Refs. [@parker15] and [@parker16], as well as in subsequent testing, the expected spatial resolution of each detector was confirmed at RADEN. The $\mu$NID and nGEM were evaluated using a Gd test target designed at RADEN [@segawa17], while that of the LiTA12 was confirmed using a simple shape made from Cd plate. The rate performance of each system was also studied using adjustable B$_4$C slits to vary the incident neutron intensity. To characterize the rate performance, we determined two quantities: [*peak count-rate capacity*]{}, which indicates the absolute maximum instantaneous neutron rate measured over the whole detector, and [*effective peak count rate*]{}, which indicates the instantaneous peak rate achievable over the whole detector with good linearity in count rate versus incident intensity (where [*good*]{} is defined here as less than 2% event loss). Both of these rates are what are referred to as [*global instantaneous peak rates*]{} [@stefanescu]. With the strongly-peaked neutron time-of-flight spectrum at the MLF, it is these global instantaneous rates which limit the performance of the detector systems at RADEN. The results of these studies are listed in Table \[tab:dets\]. 
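The effective peak count rate defined above can be extracted from a rate-linearity scan as follows (a hypothetical sketch with made-up rate pairs, not measured RADEN data):

```python
def effective_peak_rate(incident, measured, max_loss=0.02):
    """Return the largest incident instantaneous rate for which the relative
    event loss, 1 - measured/incident, stays within max_loss (2% here),
    i.e. the 'effective peak count rate' as defined in the text."""
    best = 0.0
    for r_in, r_meas in zip(incident, measured):
        if 1.0 - r_meas / r_in <= max_loss:
            best = max(best, r_in)
    return best
```

In practice, the incident rate is varied with the B$_4$C slits and the measured rate is taken from the reconstructed events, so the scan directly exposes where linearity breaks down.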
  Detector                           $\mu$NID                  nGEM                      LiTA12
  ---------------------------------- ------------------------- ------------------------- ---------------------------
  Type                               Micro-Pattern             Micro-Pattern             Pixelated Scintillator
  Neutron converter                  $^3$He                    $^{10}$B                  $^6$Li
  Area                               $10 \times 10$ cm$^2$     $10 \times 10$ cm$^2$     $4.9 \times 4.9$ cm$^2$
  Time resolution                    0.25 $\mu$s               15 ns                     40 ns
  Spatial resolution                 0.1 mm                    1 mm                      3 mm
  Efficiency (at $E_n = 25.3$ meV)   26%                       10%                       23%
  Peak count-rate capacity           8 Mcps                    4.6 Mcps                  8 Mcps
  Effective peak count rate          1 Mcps                    180 kcps                  6 Mcps

  : Features of event-type detectors available at RADEN. The values for the spatial resolution, peak count-rate capacity, and effective peak count rate were confirmed at RADEN. (The terms [*peak count-rate capacity*]{} and [*effective peak count rate*]{} are defined in the text.) []{data-label="tab:dets"}

We are also actively developing the LiTA12 and $\mu$NID at RADEN in order to optimize their characteristics, including spatial resolution, rate performance, and detection efficiency, for energy-resolved neutron imaging. (While the nGEM is extensively used at RADEN, it is not currently being developed by our group.) Specifically, the LiTA12 is being optimized for neutron resonance absorption measurements by replacing the scintillator pixels with a single, flat scintillator with increased thickness in order to increase detection efficiency for epithermal neutrons (i.e., those in the resonance energy region above $E_n \simeq 1$ eV) [@kai17]. The single scintillator also allows calculation of the centroid of multiple anodes for an improvement in spatial resolution to less than 1 mm [@kai17; @segawa17]. Additionally, for the $\mu$NID, we have upgraded the data acquisition hardware and optimized the gas mixture for improved spatial resolution and rate performance [@parker16], with further development underway. For the remainder of this paper, we will discuss ongoing development of the $\mu$NID in detail.
Development of the $\mu$NID at RADEN ==================================== The $\mu$NID, shown in Fig. \[fig:unid\](a), uses a time projection chamber with a drift length of 2.5 cm and a 10 cm $\times$ 10 cm readout plane consisting of a micro-pixel chamber ($\mu$PIC), coupled to a modular, FPGA-based data acquisition system [@mizumoto15]. The $\mu$PIC is a micro-pattern detector with a 400 $\mu$m pitch, two-dimensional strip readout, which through its unique microstructure, illustrated in Fig. \[fig:unid\](b), achieves both charge amplification and analog strip readout. To facilitate neutron detection, a CF$_4$–iC$_4$H$_{10}$–$^3$He gas mixture (mixing ratio 45:5:50) at 2 atm total pressure is used, providing a detection efficiency of 26% at $E_n = 25.3$ meV (on par with conventional CCD camera systems). In this gas mixture, the tracks of the reaction products are less than 5 mm. A drift field of 1,600 V/cm is used (50 $\mu$m/ns drift velocity, 0.5 $\mu$s maximum drift time), and the $\mu$PIC readout is operated at an anode voltage around 650 V for a gain factor of 100 to 150. Following a neutron-$^3$He interaction, the three-dimensional track and energy deposition (estimated via time-over-threshold) of the resultant proton-triton pair are recorded in the FPGA-based data encoder modules and sent to PC via Gigabit Ethernet. This detailed tracking information allows the $\mu$NID to achieve a fine spatial resolution of 0.1 mm and a low gamma sensitivity of less than 10$^{-12}$. The $\mu$NID also features a time resolution of 0.25 $\mu$s, a peak count-rate capacity of 8 Mcps, and an effective peak count rate of 1 Mcps. ![A photograph of a $\mu$NID system is shown in (a) with the aluminum pressure vessel, entrance window, and FPGA-based data encoder modules indicated. An illustration of the time-projection chamber showing the drift plane and structure of the $\mu$PIC readout is shown in (b) (drift plane–$\mu$PIC separation not to scale). 
\[fig:unid\]](uNID_detector.300dpi.eps){width="145mm"} In our initial development described in Ref. [@parker16], we performed an upgrade of the data acquisition hardware and replaced the original Ar–C$_2$H$_6$–$^3$He (63:7:30 at 2 atm) gas mixture [@parker13a] with the current CF$_4$-based mixture. For the hardware, the data output port of the FPGA-based data encoder modules was upgraded from 100BASE-T to Gigabit Ethernet, improving the throughput of the data acquisition hardware by roughly a factor of nine. The CF$_4$-based gas mixture provided a more than two times faster drift velocity for shorter charge evacuation times, nearly two times the stopping power for reduced event sizes, and a more than three-fold reduction in the electron diffusion for improved event localization as compared to the Ar-based mixture. Furthermore, the higher stopping power of CF$_4$ allowed us to increase the $^3$He fraction while maintaining smaller event sizes. Taken together, these detector improvements provided an increase in the peak count-rate capacity from 0.6 to 8 Mcps and an increase in the detection efficiency from 18% to the current 26%. The updated encoder modules and new gas mixture have been thoroughly tested at RADEN and are now part of the standard setup for our $\mu$NID system. Ongoing development efforts to improve the spatial resolution and rate performance are described below, including optimization of data analysis algorithms, development of a new $\mu$PIC readout plane with reduced pitch, and testing of a $\mu$NID with $^{10}$B-based neutron converter. Optimization of data analysis algorithms ---------------------------------------- The digital data produced by the $\mu$NID consists of a stream of hits comprised of strip number, hit time, and a flag indicating whether the analog signal was rising or falling when it crossed the discriminator threshold. 
It is the job of the offline data analysis to match rising and falling hits and calculate the time-over-threshold, group individual hits into neutron events (referred to as [*clustering*]{}), and determine the precise neutron interaction point (referred to as [*position reconstruction*]{}). By optimizing the clustering and position reconstruction algorithms, we have recently been able to improve the effective peak count rate and maximize the spatial resolution of the $\mu$NID. ### Clustering algorithm After the hardware upgrade of Ref. [@parker16], the $\mu$NID achieved a peak count-rate capacity of 8 Mcps with good linearity up to this maximum when considering only raw hits. The lower effective peak count rate arises mostly from the clustering of the offline analysis. The original clustering algorithm was based on a simple, single-linkage clustering with hits grouped solely by the distance between them (i.e., all hits whose inter-hit separation was within a specified cut-off were considered to come from the same event). While this simple algorithm worked well at low neutron rates, event pile-up was seen to become significant at global peak rates above 400 kcps. This is illustrated in Fig. \[fig:neff\](a), where the [*neutron reconstruction efficiency*]{}, defined as the ratio of reconstructed neutron events to the expected number of neutron events (as derived from the number of raw hits), is plotted as a function of neutron time-of-flight (TOF) for global peak rates up to 5.6 Mcps. The clear dip at the peak of the TOF distribution, shown by the dashed line in Fig. \[fig:neff\](a), indicates event loss due to pile-up, which increases with the peak rate. The observed event loss is about 2% at a global peak rate of 400 kcps. 
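The original single-linkage scheme described above can be sketched in one dimension as follows (a simplified illustration with hypothetical hit coordinates; the real clustering operates on strip/time hits in the detector):

```python
def single_linkage_1d(hits, cutoff):
    """Group sorted hit coordinates into events: any two consecutive hits
    separated by no more than `cutoff` fall in the same cluster. This is
    the simple scheme in the text; it merges piled-up events that overlap,
    which is why it fails at high instantaneous rates."""
    hits = sorted(hits)
    clusters, current = [], [hits[0]]
    for h in hits[1:]:
        if h - current[-1] <= cutoff:
            current.append(h)
        else:
            clusters.append(current)
            current = [h]
    clusters.append(current)
    return clusters
```

Two neutron events whose hit groups happen to fall within the cut-off of each other are fused into one cluster, so the reconstructed event count drops as the rate rises, producing the dip seen at the TOF peak.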
![Neutron reconstruction efficiency for the $\mu$NID versus neutron time-of-flight is shown for a range of incident neutron intensities for: a) the original single-linkage clustering algorithm, and b) the improved density-based clustering algorithm with explicit event pile-up resolution described in the text. For reference, a typical neutron TOF spectrum is shown as the dashed line in each plot (with arbitrary scale). \[fig:neff\]](recon_efficiency.bw.eps){width="140mm"} To address this poor performance, we are now developing a new algorithm employing density-based clustering (based on DBSCAN [@dbscan]), followed by explicit event pile-up resolution. For our initial study, clusters that overlap in time were grouped, and a mismatch in the number of clusters on each of the perpendicular strip planes was taken as a signal of event pile-up. Then, in the case that the number of clusters differs by one, the largest cluster was assumed to be a pile-up event and was allowed to pair with two clusters from the opposite strip orientation. Even with this simple method, the improvement in the neutron reconstruction efficiency is clearly visible in Fig. \[fig:neff\](b), where event loss was reduced to 2% at 1 Mcps global peak rate. The neutron reconstruction efficiency is expected to improve further as we increase the sophistication of the event pile-up resolution algorithm. We will also study the effect of the new clustering algorithm on the spatial resolution. ### Position reconstruction algorithm In the offline analysis, neutron position is determined event-by-event via a fit to the time-over-threshold (TOT) distributions for each strip orientation, as described in Ref. [@parker13b]. The fits are carried out using [*template*]{} distributions generated with a GEANT4 [@geant4_1; @geant4_2] simulation of the $\mu$NID system, where the templates are indexed by the proton-triton track unit vector. 
This fitting procedure allows the clean separation of the proton and triton (with $<$5% observable contamination from misidentified events), facilitating the fine spatial resolution achieved by this detector. In the original template selection algorithm, the unit vector was determined from the fully reconstructed three-dimensional track, requiring input of three adjustable parameters (i.e., [*x*]{}, [*y*]{}, and [*z*]{} offsets) and several calculation steps. We have recently developed a simplified template selection algorithm that uses the track projections, which are measured directly, and one adjustable parameter, namely, the average proton-triton track length. Images of the Gd test target taken at RADEN are shown in Fig. \[fig:sres\](a) and \[fig:sres\](b), reconstructed from the same data using the original and simplified template selection algorithms, respectively. Also, projections of the line pairs within the dashed regions in Figs. \[fig:sres\](a) and \[fig:sres\](b) are shown in Figs. \[fig:sres\](c) and \[fig:sres\](d), respectively. From Figs. \[fig:sres\](c) and \[fig:sres\](d), the spatial resolution was evaluated as 200 and 100 $\mu$m, respectively, at a Modulation Transfer Function (MTF) value of 10%. These results show clearly that the simplified template selection algorithm provides both improved spatial resolution and image uniformity, indicating better matching of templates to the measured TOT distributions. ![Images of a gadolinium test target taken with the $\mu$NID are shown as reconstructed from the same data using: a) the original template selection algorithm, and b) the improved template selection algorithm described in the text. The image area is $10 \times 10$ cm$^2$ with a bin size of $40 \times 40$ $\mu$m$^2$ for each. Projections of the line pairs within the dashed boxes in (a) and (b) are shown in (c) and (d), respectively. 
\[fig:sres\]](position_recon.300dpi.eps){width="120mm"} $\mu$PIC with reduced strip pitch --------------------------------- To provide a significant improvement in the spatial resolution, we are working with the manufacturer of the $\mu$PIC, DaiNippon Printing Co., Ltd., to develop a new $\mu$PIC readout element with reduced strip pitch. (Simulations indicate that the spatial resolution should scale roughly with the strip pitch.) The standard $\mu$PIC described above is manufactured using conventional printed circuit board techniques, which, due to poor tolerances, are not well suited to producing very fine structures below several tens of $\mu$m. By changing to a MEMS (Micro-Electro-Mechanical Systems)-based process, however, smaller structures (as small as 10 $\mu$m) can be created with very good uniformity. Using MEMS manufacturing techniques, a new $\mu$PIC readout element, referred to as a TSV (Through-Silicon-Via) $\mu$PIC, with a 215 $\mu$m pitch, or nearly half that of the standard, 400-$\mu$m pitch $\mu$PIC, has been successfully produced. For our initial study, a 215 $\mu$m pitch test piece, comprised of $64 \times 64$ strips for a detection area of $1.4 \times 1.4$ cm$^2$, was manufactured and testing was carried out at RADEN. In preliminary testing described in Ref. [@parker16], the TSV $\mu$PIC test piece provided sufficient gain for neutron detection (a gain of about 200), but showed poor gain stability under sustained neutron irradiation. This observed gain instability was thought to arise from charge build-up within the silicon substrate, which in the MEMS process is used in place of the insulating polyimide substrate of the standard $\mu$PIC. Based on the above assumption, we investigated the effect of electrically grounding the silicon substrate, which would be expected to allow the evacuation of any charge build-up, in a subsequent test of the TSV $\mu$PIC.
Figure \[fig:mems\](a) shows the relative gain measured over a 5-hour period of constant neutron irradiation with and without grounding the substrate, where the gain is represented by the peak TOT value averaged over all channels. These results show that by grounding the substrate, the gain stability of the TSV $\mu$PIC can be significantly improved. With the gain stabilized, we were then able to take the first test image with the fine-pitch $\mu$PIC as shown in Fig. \[fig:mems\](b). The statistics are low, but the Siemens star of our Gd test target is clearly visible. We are now preparing a new 215 $\mu$m pitch test piece with $256 \times 256$ strips and an area of 5.5 cm $\times$ 5.5 cm, and we will continue to study the gain stability and spatial resolution with this large-area TSV $\mu$PIC from this spring at RADEN. ![Results of TSV $\mu$PIC tests are shown for: a) gain stability with and without substrate grounding, and b) imaging of gadolinium test target (Siemens star). The image area is 1.4 cm $\times$ 1.4 cm (21.5 $\mu$m $\times$ 21.5 $\mu$m bin size). The dark areas at top and bottom and distortion in the lower third are due to damaged strips. \[fig:mems\]](mems_test.300dpi.eps){width="127mm"} $\mu$NID with boron-based neutron converter ------------------------------------------- To provide a significant improvement in the peak count-rate capacity, we are also developing a $\mu$NID with a $^{10}$B-based neutron converter. Use of a boron-based converter is expected to provide a three-fold increase in the peak count-rate capacity of the system (to over 20 Mcps) due to the fact that the alpha particle released in the neutron-$^{10}$B reaction travels a much shorter distance in the gas of the detector as compared to the lighter proton and triton in the $^3$He case, thereby creating fewer hits per event and allowing more events to be transmitted over the same system bandwidth. 
This small event size (of only 2 or 3 hit strips per readout direction), however, comes with a trade-off in spatial resolution as the limited information renders detailed reconstruction algorithms, such as the template-fitting method above and the $\mu$TPC method of Ref. [@pfeiffer], less effective. The change from $^3$He gas to a $^{10}$B-based converter also reduces the long-term maintenance costs of the detector. As a proof-of-principle demonstration, we installed an aluminum drift cathode with a 1.2-$\mu$m thick $^{10}$B coating ($>$98% purity) into one of our $\mu$NID systems, as shown in Figs. \[fig:boron\](a) and \[fig:boron\](b), and filled the vessel with a CF$_4$-iC$_4$H$_{10}$ (90:10) gas mixture at 1.6 atm. The CF$_4$-based gas mixture was chosen for its high stopping power to keep the alpha tracks short. Figure \[fig:boron\](c) is an image of the Gd test target produced at RADEN, showing a spatial resolution of around 500 $\mu$m, or slightly larger than the pitch of the $\mu$PIC strip readout. We also observed a 2.8 times reduction in event size compared to the $^3$He case, confirming, in principle, an expected increase in peak count-rate capacity of up to 22 Mcps. Due to the low efficiency of only 3 to 5% at $E_n = 25.3$ meV for the present converter and a limited neutron beam power of 150 kW at the time of the measurement, however, we were unable to directly measure the peak count-rate capacity. We are now considering new converter designs for increased efficiency, and we will measure the peak count-rate capacity in a future test at the MLF. ![ Shown here are: a) a simple diagram showing the basic structure of the $\mu$NID with boron converter, b) a photograph of the 1.2 $\mu$m $^{10}$B layer (dark rectangular area) deposited on one side of the aluminum drift cathode, and c) an image of the gadolinium test target taken with the $\mu$NID with boron converter at RADEN. 
The image area in (c) is 7.7 cm $\times$ 7.7 cm with a bin size of 400 $\mu$m $\times$ 400 $\mu$m. \[fig:boron\]](boron_unid_alt.300dpi.eps){width="125mm"}

Conclusion
==========

At the RADEN instrument of the J-PARC MLF, we use advanced event-type neutron imaging detectors, including the $\mu$NID and nGEM micro-pattern detectors and the LiTA12 scintillator pixel detector. The performance of these detectors has been verified at RADEN, and they have been used by both the RADEN instrument group and general users to carry out energy-resolved neutron imaging measurements since 2015. In order to fully utilize the intense pulsed neutron beam of the MLF and better meet the needs of users, we continue to develop these detectors for improved spatial resolution, higher efficiency, and better rate performance. Specifically, through the ongoing development of the $\mu$NID system described here, we have improved the spatial resolution from 200 to 100 $\mu$m at 10% MTF, increased the efficiency from 18 to 26% at $E_n = 25.3$ meV, and increased the effective peak count rate from 0.4 to 1 Mcps, with further improvement expected from optimization of the offline event analysis. Furthermore, a new 215 $\mu$m pitch $\mu$PIC is expected (from simulation) to provide a nearly two-fold improvement in spatial resolution, while a $\mu$NID with boron converter should provide a factor of three increase in peak count-rate capacity for more than 20 Mcps total throughput.

Acknowledgment {#acknowledgment .unnumbered}
==============

The development of the 215 $\mu$m pitch TSV $\mu$PIC readout element and the $\mu$NID with boron converter was partially supported by JST ERATO Grant No. JPMJER1403, Japan. Testing at RADEN was carried out under MLF Instrument Group Use Proposal No. 2017I0022, MLF General Use Proposal No. 2016B0161, and CROSS Development Use Proposal No. 2017C0004.

[20]{}

T. Shinohara *et al.*, J. Phys.: Conf. Series **746**, 012007 (2016).

H. Sato, O. Takada, K. Iwase, T. Kamiyama, and Y. Kiyanagi, J. Phys.: Conf. Series **251**, 012070 (2010).

H. Sato, T. Kamiyama, and Y. Kiyanagi, Nucl. Instr. and Meth. A **605**, 36 (2009).

T. Shinohara *et al.*, Nucl. Instr. and Meth. A **651**, 121 (2011).

J.D. Parker *et al.*, Nucl. Instr. and Meth. A **697**, 23 (2013).

J.D. Parker *et al.*, Nucl. Instr. and Meth. A **726**, 155 (2013).

S. Uno, T. Uchida, M. Sekimoto, T. Murakami, K. Miyama, M. Shoji, E. Nakano, and T. Koike, Phys. Proc. **37**, 600 (2012).

S. Satoh, JPS Conf. Proc. **8**, 051001 (2015).

A. Ochi, T. Nagayoshi, S. Koishi, T. Tanimori, T. Nagae, and M. Nakamura, Nucl. Instr. and Meth. A **471**, 264 (2001).

F. Sauli, Nucl. Instr. and Meth. A **386**, 531 (1997).

J.D. Parker *et al.*, 2015 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 1 (2016).

J.D. Parker *et al.*, 2016 IEEE Nuclear Science Symposium, Medical Imaging Conference and Room-Temperature Semiconductor Detector Workshop (NSS/MIC/RTSD), 1 (2017).

M. Segawa *et al.*, submitted to these proceedings.

I. Stefanescu *et al.*, J. Instrumentation **12**, P01019 (2017).

T. Kai *et al.*, Phys. B, in press.

T. Mizumoto *et al.*, Nucl. Instr. and Meth. A **800**, 40 (2015).

M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, Proc. 2nd Int. Conf. Knowledge Discovery and Data Mining (KDD-96), Portland, USA, 1996, p. 226.

S. Agostinelli *et al.*, Nucl. Instr. and Meth. A **506**, 250 (2003).

J. Allison *et al.*, IEEE Trans. Nucl. Sci. **53**, 270 (2006).

D. Pfeiffer *et al.*, J. Instrumentation **10**, P04004 (2015).
---
abstract: 'We examine the transformation of particle trajectories in models with deformations of Special Relativity that have an energy-dependent and observer-independent speed of light. These transformations necessarily imply that the notion of what constitutes the same space-time event becomes dependent on the observer’s inertial frame. To preserve observer-independence, the nonlocality that arises in this way must not conflict with our knowledge of particle interactions. This requirement allows us to derive strong bounds on deformations of Special Relativity and rule out a modification to first order in energy over the Planck mass.'
author:
- 'Sabine Hossenfelder [^1]\'
title: 'The Box-Problem in Deformed Special Relativity'
---

Introduction
============

It is generally believed that the Planck mass $m_{\rm Pl}$ is of special significance. As the scale where effects of the yet-to-be-found theory of quantum gravity are expected to become important, it has been argued that the energy associated with the Planck mass should have an observer-independent meaning. Lorentz-transformations, however, do not leave any finite energy invariant. Thus, the requirement of assigning an observer-independent meaning to the Planck mass seems to necessitate a modification of Special Relativity and a new sort of Lorentz-transformations. This modification of Special Relativity, which does not introduce a preferred frame but instead postulates the Planck mass as an observer-independent invariant, has become known as “Deformed Special Relativity” (DSR) [@AmelinoCamelia:2000ge; @KowalskiGlikman:2001gp; @AmelinoCamelia:2002wr; @Magueijo:2002am]. The deformed Lorentz-transformations that leave the Planck mass invariant under boosts can be explicitly constructed. There are infinitely many such deformations, and they generically result in a modified dispersion relation and an energy-dependent speed of light [@Hossenfelder:2005ed].
In the low energy limit, this energy-dependent speed of light coincides with the speed of light that we have measured. Depending on the sort of deformation, the speed of light can increase, decrease, or remain constant with energy. We will here examine the case where it is not constant. These deformations of Special Relativity have recently received increased attention since measurements of gamma ray bursts observed by the Fermi Space Telescope have now reached a precision high enough to test a modification of the speed of light to first order in the energy over the Planck mass [@Science; @Nature; @AmelinoCamelia:2009pg]. While such modifications could also be caused by an actual breaking of Lorentz-invariance that introduces a preferred frame, models that break Lorentz-invariance are already subject to many other constraints [@Maccione:2007yc]. This makes [DSR]{} the prime candidate for an energy-dependent speed of light. We will here argue that [DSR]{} necessitates violations of locality that put much stronger bounds on an energy-dependent speed of light than the recent measurements of gamma ray bursts do. This paper is organized as follows. In the next section we will study a thought-experiment that lays out the basic problem: an energy-dependent but observer-independent speed of light renders locality a frame-dependent notion. In section \[2.0\], we will transfer this thought-experiment into a realistic setting. We will show that, with a first order modification of the speed of light, the violations of locality would be within current measurement precision and thus cannot be dismissed on grounds of practical impossibility of detection. In section \[2.1\] we will consider a variant of the setup that covers the case in which there is an additional enhanced quantum mechanical uncertainty in [DSR]{}, and show that the problem still lies within current measurement precision.
This then requires us to put bounds on the energy dependence of the speed of light such that the previously studied effect is not in conflict with already existing measurements. This will be done in section \[bounds\]. In section \[disc\] we will consider some alternative options for evading these bounds, but will have to conclude that these are all implausible. We use the convention $c=\hbar=1$.

The Box-Problem, Version 1.0 {#1.0}
============================

In the cases of DSR that we will examine, the speed of light is a function of energy, $\tilde c(E)$, such that this function is the same for all observers. Thus, in a different restframe where $E$ was transformed into $E'$ under the deformed Lorentz-transformation, the speed of light would be $\tilde c'(E') = \tilde c(E')$. In ordinary Special Relativity it is only one speed, $\lim_{E\to 0} \tilde c(E) =1$, that is invariant under the Lorentz-transformations. This is a result of deriving Lorentz-transformations as the symmetry-group of Minkowski space and not an assumption for the derivation. It is thus puzzling how an energy-dependent speed of light that takes different values can also be observer-independent. The intuitive problem can be seen in the following scenario. Consider the case in which the speed of light was decreasing monotonically and finally reached zero when the energy equaled the Planck mass. Then, a photon with $E=m_{\rm Pl}$ would be at rest. We put this photon inside a box. The box represents a classical, macroscopic, low-energy object, one for which modifications of Special or General Relativity are absent or at least negligible. What does an observer moving relative to the box with velocity $v$ see? He sees the box move with $-v$ relative to him. The photon’s energy in his restframe is also the Planck mass, since it is an invariant of the deformed Lorentz-transformation. Consequently the photon is also at rest, and cannot remain inside the box.
Indeed, if the observer only waits long enough, the photon will be arbitrarily far outside the box. If we bring another particle into the game, for example an electron that in the restframe of the box interacts with the photon, then the moving observer will generically see the particles interact outside the box (except for the specifically timed case in which the electron meets the photon at the very moment when the photon is also inside the box). The different transformation behavior of the world-lines of the box and the photon thus results in an observer-dependent notion of what constitutes ‘the same’ spacetime event. In contrast to the observer-dependence of ‘the same’ moment in time that one also has in Special Relativity, this concerns the observer-dependence of what happens at the same time [*and*]{} the same place. Since two straight, non-parallel lines always meet in one point, an example requires at least three lines, resp. three objects moving with constant velocity; here the photon, the electron, and the box. In one reference frame they all meet in the same space-time point. In another reference frame they do not. This poses significant challenges if one wants to accommodate it in a local theory. While this setting exemplifies the box-problem, it can be criticized on the grounds that experimentalists do not have many reasons to worry about particles with energies of $10^{19}$ GeV. We will thus in the next section study an actually observable situation. This will be a more complicated setup, but the underlying cause of the problem remains the same: the requirement that the speed of photons changes with energy, but changes in an observer-independent way, forces the world-lines of particles to transform differently depending on the particle’s energy.
This then has the effect that the question of what constitutes ‘the same’ spacetime event becomes observer-dependent, which can run into conflict with observations that have confirmed the locality of particle interactions to high precision.

The Box-Problem, Version 2.0 {#2.0}
============================

Consider a gamma ray burst ([GRB]{}) at distance $L \approx 4$ Gpc that, for simplicity, has no motion relative to the Earth. This source emits a photon with $E_\gamma \approx 10$ GeV, such that it arrives in the Earth restframe at $(0,0)$ inside a detector. Together with the 10 GeV photon, a low-energetic reference photon is emitted. The energy of that photon can be as low as wanted. In the [DSR]{} scenario we are considering, the dispersion relation of photons is modified to $$E^2 = p^2 + 2 \alpha \frac{E^3}{m_{\rm Pl}} + {\cal O}\left(\frac{E^4}{m_{\rm Pl}^2}\right),$$ \[disp\] and the phase velocity depends on the photons’ energy. To first order $$\tilde c(E) \approx \left( 1 + \alpha \frac{E}{m_{\rm Pl}} \right) + {\cal O}\left(\frac{E^2}{m_{\rm Pl}^2}\right),$$ \[alpha\] \[cofe\] where we will neglect corrections of order higher than $E_\gamma/m_{\rm Pl}$ in the following, and set $\alpha = -1$, in which case the speed of light decreases with increasing energy. The important point is that Eqs. (\[disp\]) and (\[cofe\]) are supposed to be observer-independent, such that these relations have the same form in every reference frame. This then requires the non-linear, deformed Lorentz-transformations in momentum space. These transformations depend on the form of the modified dispersion relation. We will however work here in an approximation and only need to know that the Lorentz-transformations receive to lowest order a correction in $E/m_{\rm Pl}$. The higher energetic photon is slowed down and arrives later than the lower energetic one. For the difference $\Delta T$ between the arrival times of the high and low energetic photon one has $$\Delta T = L \left( \frac{1}{\tilde c(E_\gamma)} - 1 \right) = - \alpha L \frac{E_\gamma}{m_{\rm Pl}} + {\cal O}\left(\frac{E_\gamma^2}{m_{\rm Pl}^2}\right).$$ \[DeltaT\] With 4 Gpc $\approx 10^{26}$ m and $E_\gamma \approx 10^{-18} m_{{\rm Pl}}$, the delay is of the order 1 second, give or take an order of magnitude.
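The order of magnitude of this delay can be cross-checked with a few lines. This is a sketch in ordinary units; $L$ and $E_\gamma/m_{\rm Pl}$ carry the round values used in the text, and $|\alpha| = 1$ is assumed.

```python
# Order-of-magnitude check of the delay ΔT ≈ L * E_gamma / m_Pl
# (|alpha| = 1, with the factor of c restored to convert meters to seconds).

c = 2.998e8          # speed of light in m/s
L = 1e26             # distance to the GRB in meters (~4 Gpc)
E_over_mPl = 1e-18   # E_gamma ≈ 10 GeV over m_Pl ≈ 1e19 GeV

delta_T = L * E_over_mPl / c   # delay in seconds
# delta_T comes out at a few tenths of a second: "of the order 1 second,
# give or take an order of magnitude"
```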
Strictly speaking, this equation should take into account the cosmological redshift, since the photon propagates in a time-dependent background. However, for our purposes of estimating the effects it will suffice to consider a static background, since using the proper General Relativistic expression does not change the result by more than an order of magnitude [@Ellis:2002in; @Jacob:2008bw]. We further consider an electron at $E_e \approx 10$ MeV emitted from a source in the detector’s vicinity such that it arrives together with the high energetic photon at $(0,0)$ inside the detector. The source can be as close as wanted, but to make a realistic setup it should be at least of the order $1$ m away from the detection point. The low energetic photon leaves the GRB together with the high energetic photon at $(x_e, t_e) = (-L, - L/\tilde c)$. It arrives in the detector box at $(x_a, t_a)=(0,L(1-1/\tilde c))$, by $-t_a$ earlier than the electron. We have chosen the emission time such that $-t_a=\Delta T$, and the electron arrives with the same delay after the low energetic photon as the high energetic photon. With an energy of 10 MeV, the electron is already relativistic, but any possible energy-dependent [DSR]{} effect is at least 3 orders of magnitude smaller than that of the photon, and due to the electron’s nearby emission the effects cannot accumulate over a long distance. The electron’s velocity is $$v_e \approx \left( 1 - \frac{m_e^2}{2 E_e^2} \right) \approx \left( 1 - 10^{-3} \right) +{\cal O}(E^2_e/m_{\rm Pl}^2).$$ Inside the detector at $x=0$ the photon scatters off the electron. The photon changes the momentum of the electron, which triggers a bomb, and the lab blows up. That is of course completely irrelevant. It only matters that the elementary scattering process can cause an irreversible and macroscopic change. This setup is depicted in Fig. \[1\]. ![[Lab frame. The gamma ray burst (thick red line) is at rest with respect to the detector (grey shaded area).
It emits at the same time one low energetic photon (thin red line) and one high energetic photon (dotted purple line) that is slowed down due to the energy-dependent speed of light. From a source close to the detector, an electron is emitted (blue line) that meets the high energetic photon in the detector. The electron scatters on the photon, changes momentum, and triggers a bomb. A satellite flies by towards the gamma ray burst and crosses the detector just when the photon also meets the electron. The thin grey lines depict the light-cone in the low energy limit.]{}[]{data-label="1"}](dsrbox1c.eps){width="11.5cm"} Also in the picture is a satellite moving relative to the Earth restframe (thick grey line in Fig. \[1\]). From that satellite, a team of physicists observes and tries to describe the processes in the lab. The satellite crosses the lab just when the bomb goes off at $(0,0)$. That’s somewhat of a stretch, but let’s not overdo it with the realism. The typical speed of a satellite in Earth orbit is $v_S= -10$ km/s, or, in units of $c$, $v_S \approx - 3 \times 10^{-5}$, and the gamma factor is approximately $\gamma_S \approx 1+10^{-9}$ for the relative motion between lab and satellite. Of course the satellite is bound in the gravitational field of the Earth and not on a constant boost, but on the timescales that matter for the following this is not relevant. Alternatively, replace Earth by a space station with negligible gravitational field. Now let us look at the same scenario from the satellite restframe, shown in Fig. \[2\]. We will denote the coordinates of that restframe with $(x',t')$. The satellite is moving towards the [GRB]{}, thus the electron’s and photons’ energies are blueshifted. We have $$E'_\gamma = E_\gamma + {\cal O}\left(v_S E_\gamma\right),$$ and the energy of the very low energetic photon remains very low energetic. The low-energetic photon crosses the satellite at $(x,t) = (L(1/\tilde c(E_\gamma)-1)/(1-1/v_S),L(1/\tilde c(E_\gamma)-1)/(v_S-1))$.
In the satellite frame the time passing between the arrival of the low energetic reference photon and the electron at $x'=0$ is $$t'_a = \gamma_S \left(1+v_S\right) \Delta T.$$ (Note that this is not the Lorentz-transformation of $t_a$, as becomes clear from the figures.) The formulation of [DSR]{} in position space has been under debate. It has been argued that the space-time metric should become energy-dependent [@Magueijo:2002xx; @Kimberly:2003hp; @Galan:2004st; @Amelino-Camelia:2005ne], and in [@Hossenfelder:2006rr] it was shown that keeping the energy-dependent speed of light observer-independent forces one to accept that the transformations in position space also become dependent on an external parameter characterizing the particle (for example its energy), though the interpretation remains unclear. Thus, to keep track of the assumptions made, let us point out that we are talking here about an observation made on two low energetic particles from a very macroscopic, non-relativistic satellite. Even if there was a [DSR]{}-modification to the above transformation, it could enter here only through corrections of the order $E_e/m_{{\rm Pl}}$, and without this tiny contribution being able to add up over a long distance. ![[The same scenario as in Fig. \[1\] as seen from the satellite restframe. The gamma ray burst (thick red line) now moves to the right, and emits the low energetic photon (thin red line) and the high energetic photon (dotted purple line) at slightly blueshifted energies. The high energetic photon is slowed down even more, misses the electron, and the bomb is not triggered.]{}[]{data-label="2"}](dsrbox2c.eps){width="11.5cm"} With higher energy, the speed of the electron increases. The speed of the photon also changes but, and here is the problem, according to [DSR]{} the function $\tilde c$ is by assumption [*observer-independent*]{}.
In the satellite frame one then has $$\tilde c (E_\gamma') = 1 - \frac{E'_\gamma}{m_{\rm Pl}} = 1 - \frac{E_\gamma}{m_{\rm Pl}} + {\cal O}\left(\frac{v_S E_\gamma}{m_{\rm Pl}}\right),$$ and the distance the photons travel until they reach the satellite is $$L' = \gamma_S \left( 1 - \frac{v_S}{\tilde c(E_\gamma)} \right) L.$$ Thus, the time passing between the arrival of the reference photon and the high energetic photon at the satellite is $$\Delta T' = L' \left( \frac{1}{\tilde c(E'_\gamma)} - 1 \right) = \gamma_S^2 \left(1 - v_S\right)^2 \Delta T + {\cal O}\left(\frac{E_\gamma^2}{m_{\rm Pl}^2}\right).$$ \[dt\] Again, the question arises whether there could be some energy dependence in this transformation. Since we are talking about passive transformations here, this creates an interpretational mess, but nevertheless we will discuss this possibility later in section \[disc\]. With the above, in the satellite frame the high energetic photon thus arrives later than the electron by $$\Delta T' - t'_a = \left( \gamma_S^2 \left(1 - v_S\right)^2 - \gamma_S \left(1 + v_S\right) \right) \Delta T + {\cal O}\left(\frac{E_\gamma^2}{m_{\rm Pl}^2}\right).$$ Inserting $1/\gamma_S \approx 1 -1/2 v^2_S$ for $v_S\ll 1$, one finds $$\Delta T' - t'_a \approx -3 \Delta T \left(v_S - v_S^2 \right) \approx 10^{-5}~{\rm s}.$$ In the satellite frame, the high energetic photon thus misses the electron by $\approx 10^{-5}$ seconds. Possible additional [DSR]{} effects for the electron are negligible because of its low energy and short travel distance and thus cannot save the day. Now $10^{-5}$ seconds might not appear much, given that the typical time resolution for detection of such particles is at best of the order milliseconds. However, multiplied by the speed of light, the high energetic photon is still lagging behind as much as a kilometer when it arrives in the detector. It only catches up with the electron at $$x' = \frac{t'_a - \Delta T'}{1/v'_e - 1/\tilde c(E'_\gamma)} \approx 10^{5}~{\rm m},$$ and thus safely outside the detector. The photon then cannot scatter off the electron in the detector, and the electron cannot trigger the bomb to blow up the lab. The physicists in the satellite are puzzled.
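The bookkeeping of this section can be reproduced numerically. This is a sketch using the first-order expressions $t'_a = \gamma_S (1+v_S)\,\Delta T$ and $\Delta T' = \gamma_S^2 (1-v_S)^2\,\Delta T$; the value $\Delta T = 0.3$ s is the round number from the delay estimate, not a measured quantity.

```python
import math

# Satellite-frame bookkeeping to first order:
#   t'_a  = gamma_S * (1 + v_S) * ΔT        (reference photon vs. electron)
#   ΔT'   = gamma_S^2 * (1 - v_S)^2 * ΔT    (reference vs. high-energy photon)
# with v_S the satellite velocity in units of c (negative: towards the GRB).

c = 2.998e8
v_S = -3e-5                     # typical orbital speed, ~ -10 km/s
dT = 0.3                        # Earth-frame delay in seconds (assumed value)
gamma_S = 1.0 / math.sqrt(1.0 - v_S ** 2)

t_a_prime = gamma_S * (1.0 + v_S) * dT
dT_prime = gamma_S ** 2 * (1.0 - v_S) ** 2 * dT

mismatch = dT_prime - t_a_prime   # ≈ -3 v_S ΔT, of the order 1e-5 s
gap = mismatch * c                # the photon lags the electron by kilometers
```

The expansion $-3 v_S \Delta T$ reproduces the exact first-order difference to better than one part in $10^4$ here, and the kilometer-scale lag is what makes the missed interaction macroscopic rather than hidden in detector resolution.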
The Box-Problem, Version 2.1 {#2.1}
============================

An assumption we implicitly made in the previous section was that the quantum mechanical space- and time-uncertainties $\Delta t, \Delta x$ are not modified in [DSR]{}, such that the GeV photon can be considered peaked to a $\Delta t$ smaller than the distance to the electron at arrival. For a distance of 1 km, this is about 19 orders of magnitude larger than $1/E_\gamma$ and thus an unproblematic assumption. Whether or not [DSR]{} entails a modification of quantum mechanics is hard to say in the absence of a formulation of the model in position space, so let us just examine the possibilities: either there is a modification, or there is not. The previous section examined the case in which there is no modification. Here we will consider the case in which there is a modification of quantum mechanics. We will show that if the difference in arrival time in the Earth frame $\Delta T$ was of the order seconds, this would either be incompatible with experiment, or with observer-independence. Later, we can use the experimental limits to obtain a bound on the possible delay compatible with experiment. The question whether or not the wave function spreads in [DSR]{} depends on how one interprets the modified dispersion relation. It is supposed to describe the propagation of a particle in a background that displays quantum gravitational effects. Yet the question is whether this modification should be understood as one for a plane wave or for an already localized superposition of plane waves. In the first case a wave-packet would experience enhanced dispersion, in the latter case not. In the absence of a derivation, both interpretations seem plausible. Let us point out that we are here talking about the dispersion during propagation and the position uncertainty resulting from it, and not about a modification of the maximally possible localization itself.
[DSR]{} generically has not only an energy-dependence of the speed of light, but also an energy-dependence of Planck’s constant $\hbar$ [@Hossenfelder:2005ed]. This results in a generalized uncertainty principle which in particular has the effect that particles with momentum approaching the Planck scale have an increasing position uncertainty, as opposed to the limit on position uncertainty decreasing monotonically as with the ordinary Heisenberg relation. However, these DSR corrections to $\hbar$ also go with powers of $E/m_{\rm Pl}$. This means that the maximally possible localization of the 10 GeV photon at emission is affected, but to an extent that is negligible. The relevant contribution to the uncertainty would be the one stemming from the dispersion during propagation. In case there is a modification caused by a dispersion of the wave-packet, then the uncertainty of the slowed down, high energetic photon at arrival would be vastly larger than the maximal localization of the Heisenberg limit allows. If one starts with a Gaussian wave-packet localized to a width of $\sigma_0$ at emission and tracks its spread with the modified dispersion relation, one finds that to first order the now time-dependent width is $$\sigma(t) = \sigma_0 \sqrt{1 + \left( \frac{2\, t\, E_\gamma}{m_{\rm Pl}\, \sigma_0} \right)^2}.$$ If we start with a width of $\sigma_0 \approx 1/E_\gamma$, then for times $t \gg m_{\rm Pl} \sigma_0^2$ (which for the values we used amounts to $t \gg 10^{-6}$  seconds), one finds that the width is to first order $\sigma(t) \approx 2 t E_\gamma/m_{\rm Pl}$. Or, in other words, in the worst case the uncertainty of the wave-packet at arrival is about the same size as the time delay, $\Delta t \approx \Delta T$. In this case the photon at arrival would be smeared out over some hundred thousand kilometers.
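The spreading estimate can be checked numerically. The width formula below is the reconstructed Gaussian-packet expression used above; $\hbar$, $E_\gamma$, and $m_{\rm Pl}$ carry round values, so this is an order-of-magnitude sketch only.

```python
import math

# Gaussian-packet spreading estimate in seconds:
#   sigma(t) = sigma0 * sqrt(1 + (2 t E_gamma / (m_Pl sigma0))^2),
# which for t >> m_Pl sigma0^2 approaches 2 t E_gamma / m_Pl,
# independent of the initial width sigma0.

hbar = 6.58e-25          # GeV * s
E_gamma = 10.0           # GeV
m_Pl = 1.2e19            # GeV
sigma0 = hbar / E_gamma  # initial width ~ 1/E_gamma, expressed in seconds

def width(t):
    """Packet time-uncertainty after propagating for time t (seconds)."""
    x = 2.0 * t * (E_gamma / m_Pl) / sigma0
    return sigma0 * math.sqrt(1.0 + x * x)

t_cross = m_Pl * sigma0 ** 2 / hbar   # crossover time m_Pl sigma0^2 in seconds
t_prop = 1e26 / 2.998e8               # propagation time over ~4 Gpc in seconds
sigma_arrival = width(t_prop)         # ~ 2 t E/m_Pl: of the order of ΔT (~1 s)
```

With these round numbers the crossover time lands around $10^{-7}$ s, and the width at arrival comes out at a few tenths of a second, i.e. comparable to the delay $\Delta T$, as stated in the text.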
A delay of $\Delta T$ with an uncertainty of $\Delta T$ is hard to detect, and it would also be impossible to find out whether or not the center of the wave-packet had been displaced by an amount five orders of magnitude smaller than the width of the wave-packet. This is sketched in Fig. \[4\]. ![[Satellite frame, with increased quantum uncertainty. The same scenario as in Fig. \[2\] with added space and time uncertainty for the high energetic photon (purple area). The photon is smeared out all over the detector. It interacts with the electron and triggers the bomb without that interaction appearing nonlocal to the observer in the satellite.]{} []{data-label="4"}](dsrbox3c.eps){width="11.5cm"} We recall however that the box-problem was caused by the unusual transformation behavior of $\Delta T$. To entirely hide this behavior, the quantum mechanical uncertainty $\Delta t$ needs to be much larger than the delay $\Delta T - t_a$ in all restframes, such that it would be practically infeasible to ever detect a tiny difference in probability with the photons we can receive, say, in the lifetime of the universe. We run into a problem when the uncertainty of the slow photon is about equal to or even smaller than the delay between the electron and the slow photon. The two times $\Delta T$ and $t_a$ however transform differently, since the one is determined by the requirement of leaving the energy-dependent speed of light observer-independent, whereas the other is determined by the crossing of worldlines of particles for which all [DSR]{}-effects are negligible. As a consequence, the delay will in some reference frames be larger than or of the same order as the uncertainty. To see this, let us boost into a reference frame with $v=1-\epsilon$, such that $\gamma \approx 1/\sqrt{2 \epsilon}$. The inequality that needs to be fulfilled to hide the delay is then $$\begin{aligned} |\Delta T' - t_a'| &\ll& |\Delta t' |\,, \nonumber\\ \left| \frac{\epsilon}{2} - \sqrt{\frac{2}{\epsilon}} \right| &\ll& \epsilon\,, \end{aligned}$$ \[see\] which is clearly violated without even requiring extreme boosts.
To put in some numbers, consider an observer at rest with the electron, with $\epsilon = 10^{-3}$ and $\gamma \approx 20$. We then have $$|\Delta T' - t_a' | \approx 10^4\, \Delta t'. $$ Similarly, if we boost into the other direction, $v=-1+\epsilon$, the requirement to hide the delay takes the form $$\begin{aligned} |\Delta T' - t_a'| &\ll& |\Delta t'|\,, \nonumber\\ \left| \frac{2}{\epsilon} - \sqrt{\frac{\epsilon}{2}} \right| &\ll& \frac{2}{\epsilon}\,, \end{aligned}$$ which is also clearly violated. Though in this case the delay does not actually get much larger than the uncertainty, they both approach the same value. We would then be comparing the probability of interaction at the center of the wave-packet with one at a distance comparable to its width. In this case the probability of interaction, if we consider a Gaussian wave-packet, would have fallen by a factor of order one. Thus, in some reference frames the particles would be able to interact inside the box with some probability (depending on the cross-section), whereas in other frames they would only interact in a fraction of these cases, in conflict with observer-independence. This would require several photons to get proper statistics, but it is a difference in probability that is feasible to measure within the lifetime of the universe, and thus is still in conflict with observer-independence. The advantage of boosting to a velocity in the opposite direction to the photon is that the delay itself does not also decrease. Let us mention again that we have considered here a photon whose approximate uncertainty in momentum space is at emission comparable to the mean value, which is quite badly localized. If the photon’s momentum had instead an uncertainty of only $\approx 100$ MeV, then the mismatch in timescales would be larger by two orders of magnitude. We have here assumed that it is appropriate to use the normal Lorentz-boosts to calculate the time span $t'_a$, but to what precision do we know these?
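Before turning to that question, the two boost directions can be checked numerically, in units of $\Delta T$. This is a sketch based on the first-order expressions for delay and uncertainty written out in the inequalities above.

```python
import math

# Hiding condition |ΔT' - t'_a| << |Δt'| in units of ΔT, in the two
# boost directions considered in the text.

def hiding_ratio_towards(eps):
    """Boost v = 1 - eps, along the particles' direction of motion:
    delay |eps/2 - sqrt(2/eps)| versus uncertainty eps."""
    delay = abs(eps / 2.0 - math.sqrt(2.0 / eps))
    return delay / eps

def hiding_ratio_away(eps):
    """Boost v = -1 + eps, against the particles' direction of motion:
    delay |2/eps - sqrt(eps/2)| versus uncertainty 2/eps."""
    delay = abs(2.0 / eps - math.sqrt(eps / 2.0))
    return delay / (2.0 / eps)

r1 = hiding_ratio_towards(1e-3)  # ~4e4: delay exceeds uncertainty by ~10^4
r2 = hiding_ratio_away(1e-3)     # ~1: delay and uncertainty coincide
```

In one direction the delay overshoots the uncertainty by four orders of magnitude, in the other the two quantities converge to the same value; in neither case is the hiding condition satisfied.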
The transformation behavior under Lorentz-boosts has been tested to high precision in particle collisions, where boosts from the center of mass system to the laboratory restframe are constantly used. For the time-dilatation in particular, the decay-time of muons is known to transform as $\Delta t' = \gamma \Delta t$ up to a $\gamma$-factor of 30 to a precision of one per mille [@Bailey:1977de]. Note however that $\gamma =30$ is only marginally larger than in the example we have used. If the arising mismatch thus was a timescale smaller than the scattering process could test, then we would not have a problem. We will exploit this later to obtain a bound on the delay still compatible with experiment. To further distinguish possible options, let us notice that the latter argument actually referred to an active Lorentz-boost rather than a passive one. An active boost is needed to describe in our coordinate system properties of the same physical system at different relative velocities, such as the muons at different rapidity. A passive boost on the other hand is used to describe the same physical system as seen from two observers at different velocities, such as the box in the Earth frame and the satellite frame. In Special Relativity, both boosts are identical (resp. the one is the inverse of the other). Due to the human body commonly being in very slow motion compared to elementary particles, experimental tests of passive boosts are very limited. In the limit of small boosts where we can test both, they agree and confirm Special Relativity. Otherwise we would have to take great care about which boost we should be using to describe signals from [GPS]{} satellites or to read out spectra of atoms in motion [@Reinhardt:2007zz]. We are thus led to consider the option that the active boost describing the fast moving muon is not identical to the passive boost that would be needed to describe the muon/electron from a reference frame at such a high boost.
That would then mean that a muon at rest in our reference frame does not appear to a fast moving observer as the fast moving muon does to us. To be concrete, while the muon’s lifetime might be enhanced to $\Delta T' = \gamma_{\rm active} \Delta T$ for us when we accelerate it, the alien-observer at high $\gamma$ might see our muon at rest decaying with $\Delta T' = \gamma_{\rm passive} \Delta T$, where $\gamma_{\rm passive} \approx 1- v$, such that the box-problem caused by the different transformation behaviors would be avoided. That however is either in disagreement with observer-independence or with experiment, which can be seen as follows. Consider an ultra-high energetic proton that hits our detector. Are we supposed to describe it by applying an active boost to protons at rest on Earth, or are we supposed to describe it by a passive boost, assuming that we should instead transform our coordinate system to that of the proton? The only way to answer this question is to decide whether or not the proton has been “actively” boosted. But this boost would necessarily be a boost relative to something. We might for example be tempted to call the proton actively boosted because it moves fast relative to us or to the cosmic microwave background, but that notion depends on the presence of a preferred frame. In the case of our box-problem the question comes down to which reference frame is the right one to decide whether or not the electron interacts with the slow moving photon inside the box (with some probability), and why that particular frame is the right one to pick. Alternatively, we could try to find out whether the particle we aim to describe has ever been accelerated after its formation. Since acceleration is an absolute notion, the particle’s initial restframe could then serve as a reference frame to define further active boosts without singling out a globally preferred frame.
Leaving aside the problem of defining a restframe for massless particles, this would mean the boost we needed to describe a particle depended on the previous history of the particle. In particular this would mean properties of particles produced at high rapidity in a collision would have to be transformed into the lab frame by a passive boost. This boost would in high energy collisions have to differ by many orders of magnitude from the standard Lorentz-transformation, a modification we would long have seen. But in addition, this would mean that the muon-decay actually does probe passive rather than active boosts and thus provides the constraint we were using. To summarize this argument, we have seen that an increased quantum mechanical uncertainty $\Delta t$ that scales with the delay between the high- and low-energetic photon $\Delta T$ cannot in all reference frames bridge the distance the photon is lagging behind the electron when we use a normal Lorentz-boost, and this even though we have already used a photon that was very badly localized at emission. Active boosts have been tested up to the necessary precision such that a delay $\Delta T$ of the order of seconds would result in a conflict with observer-independence. If passive boosts were different from active boosts, this would necessitate the introduction of a preferred frame and thus disagree with our aim to preserve observer independence. Either way we turn it, quantum mechanics does not solve the box-problem. We will thus in the following section draw consequences. It is worthwhile to note however that adding quantum mechanical uncertainty does solve the box problem, version 1.0, discussed in section \[1.0\]. This is because, as previously noted, [DSR]{} generically also implies a modification of the maximally possible localization due to an energy dependence of Planck’s constant. Take for example the dispersion relation [@Magueijo:2002am]: $$\frac{E^2}{(1+E/m_{\rm Pl})^2} = p^2.$$ 
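The saturating momentum can be made explicit numerically. Assuming the dispersion relation takes the form $E^2/(1+E/m_{\rm Pl})^2 = p^2$, one has $p(E) = E/(1+E/m_{\rm Pl})$, which grows monotonically but never exceeds $m_{\rm Pl}$ (a sketch in units where $m_{\rm Pl}=1$; the functional forms below are reconstructed from the limiting behavior described in the text, not quoted):

```python
def momentum(E, m_pl=1.0):
    """p(E) = E / (1 + E/m_pl): grows with E but saturates at m_pl."""
    return E / (1.0 + E / m_pl)

def hbar_eff(E, hbar=1.0, m_pl=1.0):
    """Energy-dependent Planck constant, hbar(E) = hbar * (1 + E/m_pl)."""
    return hbar * (1.0 + E / m_pl)

energies = [0.1, 1.0, 10.0, 1000.0]
momenta = [momentum(E) for E in energies]
# The momenta approach, but never reach, the Planck momentum p = 1,
# while the effective hbar (and hence the minimal position
# uncertainty ~ hbar(E)/p) grows without bound.
```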
It has the property of setting a maximal possible value for the momentum, $p = m_{\rm Pl}$, which is only reached for $E \to \infty$. In this case the energy-dependent speed of light and Planck’s constant are [@Hossenfelder:2006rr]: $$\tilde c\,(E) = \frac{1}{(1+E/m_{\rm Pl})^2}, \qquad \hbar(E) = \hbar\,(1+E/m_{\rm Pl}).$$ Thus, while the speed of light goes to zero, Planck’s constant goes to infinity. For the photon at rest in the box this would result in an infinite position uncertainty, such that neither observer could plausibly say whether the particle is inside the box or not.

Bounds
======

What if we tried to live with the electron scattering off the photon 10 meters outside the detector? This would require the cross-section for Coulomb-scattering in the satellite frame to be dramatically different from what we have measured in the Earth frame. In the Earth frame, this scattering process probes a typical distance inverse to the center of mass energy of the scattering particles. In the satellite frame, the cross-section must be the same for the distance the photon is lagging behind the electron. This cross-section has indeed not been measured in any satellite, but that is unnecessary: if it were different from the one in our Earth frame, this would be incompatible with observer-independence. The logic of the argument presented here is as follows. If there was an energy-dependent speed of light that resulted in the 10 GeV photon arriving about 1 second later than the low-energetic photon, then the requirement of observer-independence implies violations of locality that are incompatible with previously made experiments. Note that it is not necessary to actually perform the experiment as in the setup explained in the previous sections since observer-independence means we can rely on cross-sections previously measured on Earth. In that sense, the experiment has already been done. 
The setup has only been added to make clear that the effect is not in practice undetectable and thus cannot be discarded as a philosophical speculation. To then resolve the disagreement, we either have to give up observer-independence, which would mean we are not talking about [DSR]{} any longer, or, if we want to stick with [DSR]{}, the violations of locality should be small enough not to be in conflict with any already made experiment. This means one can use the excellent knowledge of [QED]{} processes to constrain the possibility of there being such a [DSR]{} modification by requiring the resulting mismatch in arrival times not to result in any conflict with cross-sections we have measured. Let us first consider the case where there is no [DSR]{}-modification of the quantum mechanical uncertainty. The distance $L =$ some Gpc is as large as we can plausibly get in our universe, and $10$ GeV is the highest photon energy for which we have reliable observational data from particles traveling that far. The center of mass energy of the electron and the high energetic photon is $\sqrt{s} \approx 15$ MeV. The process thus probes distances of $\approx 10$ fm. If the photon and the electron were in the satellite frame already closer than the distance their scattering process probes, we would not have a problem. Requiring $|\Delta T' -t'_a| < 10$ fm leads to a bound on the delay between the low and high energetic photon of $$\Delta T < 10^{-17}~{\rm s},$$ in order for there not to be any conflict with known particle physics. If we reinsert the $\alpha$ that we set to one from Eq. (\[alpha\]), we can write the bound as $\alpha < 10^{-18}$. This is what we find from the requirement that there be no problem in the satellite frame in the case without an additional dispersion of the photon’s wave-packet. With such a dispersion, there is no problem in the satellite frame. However, according to our argument in the previous section we can trust Lorentz-boosts up to $\gamma \approx 30$. 
Using this boost increases the mismatch to $|\Delta T' - t_a'| \approx 80 \Delta T$, and the requirement that it be unobservable with presently tested [QED]{} precision amounts to $$\Delta T < 10^{-23}~{\rm s},$$ or $\alpha < 10^{-24}$. Note that this does take into account a possible [DSR]{}-modification of quantum mechanics already, and thus covers both cases, the one with and the one without spread of the wave-packet. However, since the ratio $E_\gamma/m_{\rm Pl}$ is approximately $10^{-18}$, present-day observations do already rule out any first order modification in the speed of light, and indeed come close to testing a second order modification. The analysis offered here, however, depends on the scaling in Eq. (\[DeltaT\]) and thus applies only for modifications linear in the energy. It is quite possible that the energies we have chosen and the setup we have used do not yield the tightest constraints possible. One could for example have used a photon scattering off another photon or more complicated scattering processes involving neutrinos or other light elementary particles, or have the electron be emitted from a different source such that the center of mass energy is higher. We will not examine all of these cases here, but it seems feasible to tighten the bound by another one or two orders of magnitude. Even stronger constraints might arise from considering high energetic scattering processes in the early universe.

Discussion {#disc}
==========

Let us now see whether there are other options to save [DSR]{} in face of the box problem. First we notice that the problem evidently stems from the transformation behavior of $\Delta T$ in Eq. (\[dt\]). This behavior is a direct consequence of requiring the energy-dependent speed of light $\tilde c$ to be observer-independent, together with applying a normal, passive, Lorentz-transformation to convert the distance $L$ into the satellite restframe. 
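As an aside, the orders of magnitude entering the bounds above follow from elementary unit conversions (a sketch; $\hbar c \approx 197.33$ MeV fm, $m_{\rm Pl} \approx 1.22\times 10^{19}$ GeV, and $c \approx 3\times 10^{8}$ m/s are assumed standard values):

```python
HBAR_C_MEV_FM = 197.327   # hbar * c in MeV * fm
C_M_PER_S = 2.998e8       # speed of light in m/s
M_PLANCK_GEV = 1.22e19    # Planck mass in GeV

# Distance probed by a scattering process at sqrt(s) = 15 MeV:
sqrt_s_mev = 15.0
probe_fm = HBAR_C_MEV_FM / sqrt_s_mev   # ~13 fm, i.e. of order 10 fm

# A mismatch of 10 fm expressed as a light travel time:
mismatch_s = 10.0e-15 / C_M_PER_S       # ~3e-23 s

# The first-order expansion parameter for a 10 GeV photon:
ratio = 10.0 / M_PLANCK_GEV             # ~1e-18
```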
Now if one used a modified Lorentz-transformation on the coordinates as well, a transformation depending on the energy of the photon, then $\Delta T$ could indeed transform properly and both particles would meet also in the satellite frame. This would require that the transformation on the distance $L$ was modified such that it converted the troublesome transformation behavior of $\Delta T$ back into a normal Lorentz-transformation. Then, all observers would agree on their observation. The consequence of that would be that the distance between any two objects would depend on the energy of a photon that happened to propagate between them, an idea that is hard to make sense of. But even if one wants to swallow this, the result would just be that the distance between the [GRB]{} and the detector was energy-dependent such that it got shortened by the right amount to allow the slower photon to arrive in time together with the electron. That, however, would of course mean that the speed of the photon does not depend on its energy. The confusion here stems from having defined a speed from the dispersion relation without that speed a priori having any meaning in position space. Thus, this possibility does indeed solve the box-problem, but just reaffirms that observer-independence requires the speed of light to be constant. Or, one might want to argue that maybe in the satellite frame both photons were not emitted at the same time, such that the electron could still arrive together with the high energetic photon. This however just pushes the bump around under the carpet by moving the mismatch in the timescales in the satellite frame away from the detector and towards the source. One could easily construct another example where the mismatch at the source had macroscopic consequences. This therefore does not help solve the problem. 
Another option would be to exploit that the problem arises from the same fact that made the time-delay of the photon observable in the first place: the long distance traveled. One could thus demand the cross-section to depend on the history of the photon, such that only the long-traveled photons required strong modifications of [QED]{} cross-sections. Basically, this would mean that any particle’s cross-section was dependent on the particle’s history. This is unappealing, but worse, cross-sections would then have to be modified for all ultra-high energetic particles that have travelled long distances, and there is so far no indication for that. In particular, since interstellar space is not actually empty, a large increase in the photon-photon cross-section would not allow the high energetic photons to arrive on Earth at all. Then, finally, one could try to accept that the electron just does not scatter off the photon. This would mean that the macroscopic history an observer sees depended on his relative velocity. This would certainly have made stays in space stations much more interesting. Let us point out that the box-problem does not exist in theories that break rather than deform Lorentz-invariance. The reason is that in the case where Lorentz-invariance is broken, the speed of the high energetic photon is not an observer-independent function of the energy. Instead, the relations (\[disp\]) and (\[cofe\]) only hold in one particular frame, and in all other frames they contain the velocity relative to that particular frame. There are however strong constraints on the breaking of Lorentz-invariance already from many other observations, see e.g. [@Maccione:2007yc] and references therein. We started with the motivation that the requirement of the Planck energy being observer-independent seems to necessitate a modification of Lorentz-invariance that can result in an energy-dependent speed of light. 
This energy-dependent speed of light has then led us to violations of locality that are hard to reconcile with experiment. That [DSR]{} implies a frame-dependent meaning of what is “near” was mentioned already in [@AmelinoCamelia:2002vy]. Serious conceptual problems arising from this were pointed out in [@Schutzhold:2003yp; @Hossenfelder:2006rr], and here we demonstrated a conflict with experiment to very high precision. It has however been argued in [@Hossenfelder:2006cw] that the requirement of the Planck scale being observer-independent does not necessitate it to be an invariant of Lorentz-boosts, since the result of such a boost does not itself constitute an observation. It is sufficient that experiments made are in agreement over that scale. In particular, if the Planck length plays the role of a fundamentally minimal length, no process should be able to resolve shorter distances. This does require a modification of interactions in quantum field theory at very high center-of-mass energies and small impact parameters, but it does not necessitate a modification of Lorentz-boosts for free particles. In this case, the speed of light remains constant and the box is not a problem.

Conclusion
==========

We have studied the consequences of requiring an energy-dependent and observer-independent speed of light in Deformed Special Relativity. We have shown it to result in an observer-dependent notion of what constitutes the same space-time event and thus were led to consider violations of locality arising from such a transformation behavior. Using the concrete example of a highly energetic photon emitted from a distant gamma ray burst, we have shown that these violations of locality would be in conflict with already measured elementary particle interactions if the energy dependence were of first order in the energy over the Planck mass. 
This in turn was used to derive a bound on the still possible modifications in the speed of light, which is 22 orders of magnitude stronger than previous bounds that were obtained from direct measurements of delays induced by the energy-dependence. This new bound rules out modifications to first order in the energy over the Planck mass.

Acknowledgements {#acknowledgements .unnumbered}
================

I want to thank Giovanni Amelino-Camelia, Stefan Scherer, and Lee Smolin for helpful comments.

[99]{} G. Amelino-Camelia, “[*Testable scenario for relativity with minimum-length,*]{}” Phys. Lett. B [**510**]{}, 255 (2001) \[arXiv:hep-th/0012238\]. J. Kowalski-Glikman, “[*Observer independent quantum of mass,*]{}” Phys. Lett. A [**286**]{}, 391 (2001) \[arXiv:hep-th/0102098\]. G. Amelino-Camelia, “[*Doubly special relativity,*]{}” Nature [**418**]{}, 34 (2002) \[arXiv:gr-qc/0207049\]. J. Magueijo and L. Smolin, “[*Generalized Lorentz invariance with an invariant energy scale,*]{}” Phys. Rev. D [**67**]{}, 044017 (2003) \[arXiv:gr-qc/0207085\]. S. Hossenfelder, “[*Self-consistency in theories with a minimal length,*]{}” Class. Quant. Grav. [**23**]{}, 1815 (2006) \[arXiv:hep-th/0510245\]. The Fermi LAT and Fermi GBM Collaborations, “[*Fermi Observations of High-Energy Gamma-Ray Emission from GRB 080916C,*]{}” Science [**323**]{} 5922 (2009) 1688. A. A. Abdo [*et al.*]{}, “[*A limit on the variation of the speed of light arising from quantum gravity effects,*]{}” Nature [**462**]{} (2009) 331. G. Amelino-Camelia and L. Smolin, “[*Prospects for constraining quantum gravity dispersion with near term observations,*]{}” Phys. Rev. D [**80**]{}, 084017 (2009) \[arXiv:0906.3731 \[astro-ph.HE\]\]. L. Maccione, S. Liberati, A. Celotti and J. G. Kirk, “[*New constraints on Planck-scale Lorentz Violation in QED from the Crab Nebula,*]{}” JCAP [**0710**]{}, 013 (2007) \[arXiv:0707.2673 \[astro-ph\]\]. J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and A. S. 
Sakharov, “[*Quantum-gravity analysis of gamma-ray bursts using wavelets,*]{}” Astron. Astrophys.  [**402**]{}, 409 (2003) \[arXiv:astro-ph/0210124\]. U. Jacob and T. Piran, “[*Lorentz-violation-induced arrival delays of cosmological particles,*]{}” JCAP [**0801**]{}, 031 (2008) \[arXiv:0712.2170 \[astro-ph\]\]. J. Bailey [*et al.*]{}, “[*Measurements Of Relativistic Time Dilatation For Positive And Negative Muons In A Circular Orbit,*]{}” Nature [**268**]{} (1977) 301. S. Reinhardt [*et al.*]{}, “[*Test of relativistic time dilation with fast optical atomic clocks at different velocities,*]{}” Nature Phys.  [**3**]{} (2007) 861. J. Magueijo and L. Smolin, “[*Gravity’s Rainbow,*]{}” Class. Quant. Grav. [**21**]{}, 1725 (2004) \[arXiv:gr-qc/0305055\]. D. Kimberly, J. Magueijo and J. Medeiros, “[*Non-Linear Relativity in Position Space,*]{}” Phys. Rev. D [**70**]{}, 084007 (2004) \[arXiv:gr-qc/0303067\]. P. Galan and G. A. Mena Marugan, “[*Quantum time uncertainty in a gravity’s rainbow formalism,*]{}” Phys. Rev. D [**70**]{}, 124003 (2004) \[arXiv:gr-qc/0411089\]. G. Amelino-Camelia, “[*Building a case for a Planck-scale-deformed boost action: The Planck-scale particle-localization limit,*]{}” Int. J. Mod. Phys. D [**14**]{}, 2167 (2005) \[arXiv:gr-qc/0506117\]. S. Hossenfelder, “[*Deformed Special Relativity in Position Space,*]{}” Phys. Lett.  B [**649**]{}, 310 (2007) \[arXiv:gr-qc/0612167\]. G. Amelino-Camelia, “[*Doubly-Special Relativity: First Results and Key Open Problems,*]{}” Int. J. Mod. Phys.  D [**11**]{}, 1643 (2002) \[arXiv:gr-qc/0210063\]. R. Schutzhold and W. G. Unruh, “[*Problems of doubly special relativity with variable speed of light,*]{}” JETP Lett.  [**78**]{}, 431 (2003) \[Pisma Zh. Eksp. Teor. Fiz.  [**78**]{}, 899 (2003)\] \[arXiv:gr-qc/0308049\]. S. Hossenfelder, “[*Interpretation of quantum field theories with a minimal length scale,*]{}” Phys. Rev.  D [**73**]{}, 105013 (2006) \[arXiv:hep-th/0603032\]. [^1]: hossi@nordita.org
---
author:
- |
    Atul Ingle[^1]\
    [ingle@uwalumni.com]{}
- |
    Andreas Velten[^2]\
    [velten@wisc.edu]{}
- |
    Mohit Gupta[^3]\
    [mohitg@cs.wisc.edu]{}
title: '[High Flux Passive Imaging with Single-Photon Sensors]{}'
---

[^1]: Department of Computer Sciences and Department of Biostatistics (corresponding author)

[^2]: Department of Biostatistics and Department of Electrical and Computer Engineering

[^3]: Department of Computer Sciences, University of Wisconsin-Madison
---
abstract: |
    We study the critical behavior of a driven interface in a medium with random pinning forces by analyzing spatial and temporal correlations in a lattice model recently proposed by Sneppen \[Phys. Rev. Lett. [**69**]{}, 3539 (1992)\]. The static and dynamic behavior of the model is related to the properties of directed percolation. We show that, due to the interplay of local and global growth rules, the usual method of dynamical scaling has to be modified. We separate the local from the global part of the dynamics by defining a train of causal growth events, or “avalanche”, which can be ascribed a well-defined dynamical exponent $z_{loc} = 1 + \zeta_c \simeq 1.63$ where $\zeta_c$ is the roughness exponent of the interface.
address:
- ' Theoretische Physik III, Ruhr-Universität Bochum, Universitätsstr. 150, D-44801 Bochum, Germany '
- ' Institut für Theoretische Physik, Universität zu Köln, Zülpicher Str. 77, D-50937 Köln, Germany '
author:
- Heiko Leschhorn
- 'Lei-Han Tang'
title: Avalanches and Correlations in Driven Interface Depinning
---

Introduction
============

The behavior of a driven interface subjected to quenched random forces plays an important role in the ordering kinetics of impure magnets and other domain growth phenomena [@review1]. The driving force $F$ can be realized by a magnetic field, pressure or chemical potential favoring the growth of one of the coexisting phases. If $F$ is weak, the interface is typically pinned in one of the many locally stable configurations. In this case growth is possible only through thermally activated hopping, which is an extremely slow process at low temperatures. However, when $F$ exceeds some critical value $F_c$, all metastable states disappear and the interface is then free to move even at zero temperature. The depinning of the interface at $F_c$ can be considered as a critical phenomenon where characteristic quantities show power-law behavior, e.g. 
the velocity of the interface is expected to scale as $v \sim (F-F_c)^\theta $, where $\theta $ is a critical exponent. It is known that charge-density waves pinned by impurities exhibit very similar behavior [@cdwfish]. The dynamical behavior associated with the disappearance of metastable states (i.e. avalanche) as $F$ is increased towards $F_c$ has been a subject of recent interest in the study of systems far from equilibrium [@soc; @earthquake; @rob; @cdwmidd]. A plausible continuum description of the interface dynamics is given by the following equation with a Kardar-Parisi-Zhang (KPZ) [@kpz] nonlinear term, $${\partial h \over \partial t} = \nu \nabla ^2 h + {\lambda \over 2} (\nabla h)^2 + F - \eta ({\bf x} , h), \eqno(1)$$ where $h({\bf x},t)$ is the height of the interface. Unlike the original KPZ-equation for Eden-type growth processes, here $ \eta ({\bf x},h)$ is a quenched random force with short range correlations. In the case $\lambda = 0 $, all critical exponents describing the depinning transition have been calculated recently in a functional renormalization group treatment close to four interface dimensions [@nstl]. For growth in an isotropic medium, it is plausible that the $\lambda$-term is not present when the interface moves with vanishing velocity. However, the term can be present for anisotropic growth. Most solid-on-solid type models are expected to be in the latter category. In 1+1 dimensions, there is a particularly simple class of lattice models which exhibit a critical depinning transition [@tl; @boston]. The mechanism for pinning is the directed percolation of cells with pinning forces $\eta$ greater than the driving force $F$. The threshold force $F_c$ needed to depin the interface is then simply related to the critical percolation density $\rho _c$ of such cells by $F_c = 1 - \rho _c$. The roughness exponent of the pinned interface is equal to that of the critical percolation cluster, $\zeta = \zeta _c \simeq 0.63 $. 
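The depinning transition described by eq.(1) can be illustrated in a crude simulation (not part of the original analysis). The sketch below integrates a forward-only Euler discretization of eq.(1) with $\lambda = 0$ and a quenched force table $\eta(x,h)$; all parameter values are illustrative choices. Well below threshold the interface gets stuck, well above threshold it moves with finite velocity.

```python
import numpy as np

def mean_velocity(F, L=128, H=4096, T=3000, dt=0.1, nu=1.0, seed=2):
    """Average interface velocity for eq.(1) with lambda = 0.

    The quenched pinning force eta(x, h) is realized as a lattice of
    uniform random numbers looked up at the integer part of the height;
    the update is forward-only, mimicking irreversible growth.
    """
    rng = np.random.default_rng(seed)
    eta = rng.random((L, H))   # quenched pinning forces on a lattice
    h = np.zeros(L)
    cols = np.arange(L)
    for _ in range(T):
        lap = np.roll(h, 1) + np.roll(h, -1) - 2.0 * h   # nu * laplacian term
        pin = eta[cols, np.floor(h).astype(int) % H]     # eta(x, h) lookup
        h += dt * np.maximum(nu * lap + F - pin, 0.0)    # forward-only Euler step
    return h.mean() / (T * dt)

v_pinned = mean_velocity(0.3)   # below threshold: interface gets stuck
v_moving = mean_velocity(0.9)   # above threshold: finite velocity
```

With $\eta$ uniform in $[0,1)$, a driving force of $0.3$ leaves a dense set of blocking cells and the interface pins after a short transient, while $F = 0.9$ yields steady motion.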
In a separate development, Sneppen introduced a simple model (model B of Ref. [@snep], hereafter referred to as the Sneppen model) to examine the interplay between local and global rules of growth in determining interface roughening and temporal correlations. He found numerically in (1+1) dimensions that the roughness of the interface also obeys the scaling of a string on a critical directed percolation cluster. As we explain below, the same roughness exponent for the two types of models is due to the fact that the same geometrical constraint (the “Kim-Kosterlitz” condition) is invoked in defining a locally stable configuration [@tlcom]. There are however differences in the way the interface is driven. Although the pinning and elastic forces on a given site in the Sneppen model are [*local*]{} as in eq.(1), each time the site on the interface with minimal $\eta=\eta_{\rm min}$ is selected [*globally*]{} and made to grow by one unit in height. Neighboring interface sites then adjust themselves to recover the geometrical constraint. In the language of a uniformly driven interface, such a rule corresponds to increasing $F$ just above $\eta_{\rm min}$ to make one site unstable, and then quickly setting $F$ to a much smaller value to prevent an avalanche taking place. The Sneppen rule allows one to sample a particular sequence of interface configurations, each of them being a metastable configuration at some value of $F$. Successive configurations in the sequence differ only by an infinitesimal amount, i.e. a few sites (about 4) which have moved when one site is made unstable. The situation here resembles that of an interface at a finite temperature and driven by a uniform force $F$ far below $F_c$: Local irreversible motions are made possible by thermal fluctuations, but the interface stays always close to some metastable state. Further numerical studies of the model by Sneppen and Jensen [@sj] revealed interesting spatial correlations of successive growth events. 
In addition, they have found that, in the saturated regime, the height advance at a given column exhibits complicated scaling with time, with exponents varying with the moment considered. The aim of the present paper is to analyze the spatial and temporal correlations in the Sneppen model and try to relate the observed scaling behavior to the properties of directed percolation. It turns out that some of these correlations depend on the value of $\eta _{\rm min}$ (or equivalently $F$) at a given moment. Since $ \eta _{\rm min}$ fluctuates in time, for these correlations the temporal translational invariance is lost. This is an important feature of the dynamics based on global rules. Another significant consequence of the global rules is that growth is no longer homogeneous in space. At a given moment, only a small part of the interface is moving. The growth events that follow may occur either close by or far away. Due to this property, the usual method to perform dynamical scaling, based on the presumption of a single growing correlation length, should be modified. In this connection we found it useful to distinguish growth events which are close by and hence bear strong correlations from those which are far apart. We observe that a train of growth events, started with some $\eta _{\rm min}^0$, propagates laterally with a well-defined dynamical exponent $z_{loc}$, which can be related to the roughness of the directed percolation cluster: $z_{loc} = 1 + \zeta _c \simeq 1.63$. In the context of a driven interface, this motion can be thought of as an avalanche. We found that the distribution of avalanche sizes obeys a power-law decay up to a size related to $\eta _{\rm min}^0$. The spatial-temporal correlations between successive growth events, on the other hand, include both local and global motion. The paper is organized as follows. 
In Section II we recall the definition of the Sneppen model and present a theorem which relates the stable (static) configurations of the Sneppen model to directed percolating strings. In Section III we define the avalanches (causal events) and determine their dynamical behavior as well as their size distribution. In Section IV the spatial-temporal correlations are investigated by considering the distribution of lateral distances between successive growth events. Section V contains conclusions and a summary.

Distribution of pinning forces and roughness
============================================

We first review the definition of the Sneppen model [@snep]. Each cell $(i,h)$ on a square lattice is assigned a random pinning force $\eta (i,h)$ uniformly distributed in the interval $[0,1)$. The interface is specified by a set of integer column heights $h_i ~(i=1,...,L)$ with the local slope constraint $|h_i-h_{i-1}|\leq 1$ for all $i$ (“Kim-Kosterlitz” condition). Growth $h_j\rightarrow h_j+1$ proceeds at the site $j$ where the pinning force $\eta(j, h_j)=\eta_{\rm min}$ is the minimum among all interface sites, followed by necessary adjustments at neighboring sites until the slope constraint is recovered. The growth rules are illustrated in Fig.1. Since we want to relate the behavior of the interface in the Sneppen model to directed percolation, we first recall some of the properties of directed percolation clusters [@percgen]. When the density $\rho$ of occupied sites is less than some threshold value $\rho _c$, a typical cluster of occupied sites connected horizontally or diagonally extends over a distance of the order of $\xi_\parallel$ in the parallel direction and a distance of the order of $\xi_\perp$ in the perpendicular direction. For $\rho >\rho_c$, there appears a directed percolating cluster which extends over the whole system. 
This cluster has a network structure of nodes and compartments, where each compartment has an anisotropic shape similar to the connected clusters below $\rho _c$, characterized by $\xi_\parallel$ and $\xi_\perp$. On both sides of the percolation transition, the two lengths have the power-law behavior $$\xi_\parallel\sim |\rho-\rho_c|^{-\nu_\parallel},\qquad \xi_\perp\sim |\rho-\rho_c|^{-\nu_\perp}.\eqno(2)$$ Series calculations give $\nu_\parallel=1.733\pm 0.001~$, $\nu_\perp=1.097\pm 0.001$ [@percexp] and $\rho_c = 0.5387 \pm 0.003 $ [@percfc]. The roughness of a percolating string scales as $\xi_\perp \sim \xi_\parallel ^{\nu_\perp / \nu_\parallel}$, i.e. the roughness exponent $\zeta _c = \nu_\perp / \nu_\parallel \simeq 0.63.$ In Ref.[@tlcom] we proposed to study the distribution $P_p (\eta) $ of pinning forces $\eta (i,h_i) $ at the interface and the probability distribution $P_m (\eta_{\rm min})$ during growth. When starting at $t=0$ with a flat interface $h_i \equiv 0$, the forces $\eta (i,h_i) $ are equally distributed. During the transient regime the interface becomes rough, and since it is always the smallest pinning force $\eta_{\rm min}$ that is updated, the sites with small $\eta (i,h_i) $ become rare. This in turn implies that the typical value of the selected $\eta _{\rm min}$ increases with time. The distributions $P_p (\eta) $ and $P_m (\eta_{\rm min})$ shown in Fig.2a were recorded in the transient regime in the time interval $L/4 \leq t < L/2$ for a system of size $L=8192$. The peak of $P_m (\eta_{\rm min})$ moves to the right with increasing time, thereby “eating up the store” of small $\eta (i,h_i) $ which were present at $t=0$ for the flat interface. As long as there has been no $\eta_{\rm min}$ larger than a value $\eta _u (t) $, the distribution $P_p (\eta)$ remains constant for $\eta > \eta _u (t) $. (In Fig.2a $\eta _u (t) \approx 0.3$.)
In the next paragraph we show that in the thermodynamic limit, there will never be an $\eta _{\rm min} $ larger than a critical value $ F_c = 1 - \rho_c \simeq 0.461$ and therefore the transient regime ends when $\eta _{\rm min}$ first comes close to $F_c$. When the peak of $P_m (\eta_{\rm min})$ approaches $F_c$, its height vanishes and the distribution becomes stationary, which we show in Fig.2b together with $P_p(\eta)$ in the saturated regime. Since $\eta _{\rm min} \leq F_c$, the stationary distribution $P_p ( \eta )$ is flat for $\eta > \eta_u = F_c$ in the limit $L \to \infty$. To see that the growing interface always has $\eta_{\rm min} \leq F_c$ in the thermodynamic limit we first note that every interface configuration satisfying the slope constraint is a path on a directed percolation cluster of sites with $\eta$ greater than or equal to $\eta_{\rm min}$. Such a path only exists if the density $1-\eta_{\rm min}$ of these cells on the lattice is greater than the critical percolation density $\rho_c$, i.e. $\eta_{\rm min}\leq 1-\rho_c $ for all interfaces. Paths on the infinite [*critical*]{} percolating cluster have the [*largest*]{} $\eta _{\rm min} = F_c = 1 - \rho_c \simeq 0.461 $. In a numerical simulation with a finite system, however, we see only a part of an infinite critical percolating cluster. Thus, an interface which traces out a critical path can have a value $\eta_{\rm min}$ slightly larger than $F_c$, and the distribution $P_m (\eta_{\rm min})$ in the saturated regime in Fig.2b is not exactly zero for $\eta _{\rm min} > F_c$. The motion of the flat interface at $t=0$ to the first critical path corresponds to the transient regime (see Fig.2a). In the following we show that an interface in the transient regime, as well as an interface which has already crossed a critical path, is driven to configurations with successively increasing $\eta _{\rm min}$, thereby approaching the next path on a critical percolating cluster.
We first introduce a few notations. Assuming that the random forces $\eta$ are real numbers, there will not be two interface configurations sharing the same $\eta _{\rm min}$ because an interface is always updated at the site with $\eta _{\rm min}$. Thus each interface configuration $\{ h_i \} $ can be characterized by its $\eta _{\rm min}$ and is denoted by $ H[\eta _{\rm min}]$. An order relation is defined by $ H[\eta _{\rm min}^A] > H[\eta _{\rm min}^B]$ if $h_i^A \geq h_i^B$ for all $i$ and $ H[\eta _{\rm min}^A] \neq H[\eta _{\rm min}^B]$. Consider an interface $ H[\eta _{\rm min}^0]$ at a time $t=t_0$ and choose a real number $c$ with $\eta _{\rm min}^0 \leq c \leq F_c$. We next show that the growing interface at times $t > t_0$ has to overlap completely with the closest percolating path which has $\eta _{\rm min} > c$. This closest path is defined by $ H[\eta _{\rm min}^c] \equiv {\rm min} \bigl \{ ~H[\eta _{\rm min}] $, such that $\eta _{\rm min} > c$ and $ H[\eta _{\rm min}] > H[\eta _{\rm min}^0] ~\bigr \}$. There can be many percolating paths which share the same $\eta _{\rm min}^c$, but only the lowest path $ H[\eta _{\rm min}^c] $ will be realized by the interface, as can be seen as follows. The growing interface configurations $ H[\eta _{\rm min}] $ at times $t \geq t_0$ which are below $ H[\eta _{\rm min}^c] $, $ H[\eta _{\rm min}^0] \leq H[\eta _{\rm min}] < H[\eta _{\rm min}^c] $, all have an $\eta _{\rm min}<\eta _{\rm min}^c$ by the definition of $H[\eta _{\rm min}^c] $. (This is also true for interfaces $ H[\eta _{\rm min}] $ which already overlap in part with $ H[\eta _{\rm min}^c] $, no matter whether the site with $\eta _{\rm min}^c $ is already on the interface $ H[\eta _{\rm min}] $.)
Therefore these $\eta_{\rm min}$ will all be updated [*before*]{} $\eta _{\rm min}^c$ and hence the interface will grow until $ H[\eta _{\rm min}] = H[\eta _{\rm min}^c]$ and then update the site with $\eta _{\rm min}^c$, because any interface site can advance by at most one unit in height upon each updating. The motion with $c=\eta_{\rm min}^0$ will later be called an “avalanche” (see Sec.III). If we choose however $c=F_c$, it follows that all paths which are the minimum on a [*critical* ]{} percolation cluster, i.e. $ H[\eta _{\rm min}^c] = H[F_c]$ in the thermodynamic limit, act as “checking points” through which the interface has to pass. There, a “snapshot” of the distribution $P_p (\eta)$ is exactly a step function. It can be seen from Fig.2b that the intermediate interface configurations between two checking points are also close to critical, in the sense that only a small percentage (which we expect to vanish in the thermodynamic limit) of sites on the interface have $\eta< F_c$. Hence the interface in the Sneppen model has approximately the roughness exponent $\zeta$ of the critical directed percolating path with $\zeta \simeq \zeta _c \simeq 0.63$. We have seen that the interface $ H[\eta _{\rm min}]$ is successively driven to configurations $ H[\eta _{\rm min}^c]$ with $\eta _{\rm min}^c > \eta _{\rm min}$, thereby eliminating $\eta (i,h_i) < \eta _{\rm min } ^c $ on the interface. This process (see Fig.2) may be perceived as the “self-organizing” part of the approach to criticality, where $\eta_{\rm min}^c = F_c$ for infinite systems. In comparison, the interface in the model of Ref. [@tl] is pinned by the critical percolating cluster when the driving force $F$ is tuned to its threshold value $F_c = 1 - \rho _c$.
The roughness exponent $ \zeta $ can be measured by the equal time height-height correlation function $C(r) = \langle \overline{[h(r+r',t) - h(r',t)]^2} \rangle \sim r^{2 \zeta} $ where the overbar and the angular brackets denote the spatial and the configurational average, respectively. For a system of size $L=8192$ we found a roughness exponent $\zeta = 0.655 \pm 0.005$, which is somewhat larger than the critical value 0.63. However, the measured exponent $\zeta$ varies systematically with the system size: For $L=900$ we get $\zeta = 0.665 \pm 0.005$, whereas for $L=65536$ we found $\zeta = 0.648 \pm 0.005 $. An explanation is that in a finite system the distribution $P_p (\eta)$ is not exactly a step function and the motion between two critical clusters yields an effective exponent larger than 0.63, similar to the moving interface of Ref. [@tl]. In our simulations, $ P_p (\eta) $ approaches the step function with increasing system size. Thus we expect that the measured discrepancy from the exponent of the percolating cluster $\zeta _c \simeq 0.63$ vanishes in the thermodynamic limit. This is also supported by a simulation where we measured $C(r)$ only when $\eta_{\rm min} $ is close to $F_c$. In this case $\zeta = 0.64 \pm 0.01$ which is consistent with the expected critical value. Causal events: avalanches ========================= After a transient regime, when $\eta _{\rm min}$ first comes close to $F_c$, the interface in the Sneppen model exhibits a steady-state critical behavior (saturated regime), which allows a convenient study of the dynamics at criticality. However, the behavior is complicated, caused by the interplay between the local adjustments due to the slope constraint and the rule that the growth site with $\eta = \eta _{\rm min}$ is chosen among all interface sites. Successive growth events can be far apart and the motion is therefore inhomogeneous in space.
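To make the measurement concrete, here is a minimal sketch of how $\zeta$ can be extracted from a set of interface snapshots (our own helper, not the authors' code; periodic boundaries and a simple least-squares fit are assumptions):

```python
import numpy as np

def roughness_exponent(profiles, rmax):
    """Estimate zeta from C(r) = <[h(r'+r) - h(r')]^2> ~ r^(2*zeta).
    The overbar of the text is the spatial average (taken periodically
    here), the brackets the average over the configurations in
    `profiles`; zeta is read off a log-log least-squares fit."""
    r = np.arange(1, rmax + 1)
    C = np.zeros(rmax)
    for h in profiles:
        hf = np.asarray(h, dtype=float)
        for k, rr in enumerate(r):
            C[k] += np.mean((np.roll(hf, -rr) - hf) ** 2)
    C /= len(profiles)
    slope, _ = np.polyfit(np.log(r), np.log(C), 1)
    return slope / 2.0          # C(r) ~ r^(2*zeta)
```

Applied to random-walk profiles ($\zeta = 1/2$) the estimate comes out near 0.5; applied to saturated Sneppen interfaces it should give values near the reported $0.63$–$0.655$.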
Hence, a single growing correlation length does not exist and the usual dynamical scaling has to be considered with care. The realized sequence of growth sites after a time $t=t_0$ depends on the globally chosen value $\eta_{\rm min} (t=t_0)$, which is responsible for the growth inhomogeneity in space. Thus we will separate the local part from the global part of the dynamics by defining an “avalanche”, which has the property that the sequence of growth sites [*inside*]{} the avalanche does [*not*]{} depend on $\eta_{\rm min} (t=t_0)$. An avalanche is defined by a sequence of growth events (including the necessary adjustments due to the slope constraint) started at any integer time $t=t_0$ where $ \eta_{\rm min} (t=t_0) $ is denoted by $ \eta_{\rm min} ^0 $. This avalanche is terminated at the first time $\tau$ when the successive $\eta_{\rm min} (\tau +1) $ is larger than $ \eta_{\rm min} ^0 $, i.e. for all times $t$ with $t_0 < t \leq \tau $ the growth events have $ \eta_{\rm min} (t) < \eta_{\rm min} ^0 $. We call these causal events because the avalanche consists of a train of growth events which are all induced by local adjustments in $h_i$ due to the slope constraint after $t=t_0$. To see in what sense the sequence of growth sites inside an avalanche is independent of $ \eta_{\rm min} ^0 $, we consider at $t=t_0$ two identical interface configurations $A$ and $B$ with the same random forces $\eta (i,h) $ above the interface but with different $\eta (i,h_i)$ at the interface such that $ \eta_{\rm min} ^0 [A] < \eta_{\rm min} ^0 [B] $ at the same site $j$. For the configuration $A$ there exist forces $\eta (i,h_i)$ with $ \eta_{\rm min} ^0 [A] < \eta (i,h_i) < \eta_{\rm min} ^0 [B] $ but for interface $B$ all $\eta (i,h_i) > \eta_{\rm min} ^0 [B] $.
Since the random forces above the interface are assumed to be the same, all $ \eta_{\rm min} $ of the growing interfaces $A$ and $B$ inside both avalanches are identical because they are all induced by identical local adjustments after $t=t_0$. When, however, at time $\tau ^A +1$, $ \eta_{\rm min} ^0 [A] < \eta_{\rm min} (\tau ^A +1) < \eta_{\rm min} ^0 [B] $, i.e. when avalanche $A$ is terminated, this new growth site of interface $A$ can be far away, but for the interface $B$ it still has to be induced by the local adjustments of avalanche $B$ because there were no $ \eta (i,h_i) < \eta_{\rm min} ^0 [B] $ at $t=t_0$. We have seen that although the random environment at (and below) the two interfaces $A$ and $B$ is different, the motion inside the avalanche is only influenced by the local adjustments after $t= t_0$. A size $s$ of the avalanche can be defined by the number of growth events including the necessary adjustments, $s= \sum_i \bigl( h_i(\tau ) - h_i (t_0) \bigr )$, and a width by ${\rm max} \{~i$, such that $h_i (\tau) > h_i (t_0)~\} ~-~ {\rm min} \{~i $, such that $h_i (\tau) > h_i (t_0)~\}$. From Sec.II we know that an interface with $\eta _{\rm min}^0$ is driven to configurations with $\eta _{\rm min}^c > \eta _{\rm min}^0$. At $t=t_0$ with $ \eta_{\rm min} (t=t_0) = \eta_{\rm min} ^0 $, a part of the interface starts to move which was “pinned” just before $t_0$ by a path with $\eta (i,h_i) \geq \eta _{\rm min}^0$. This part of the interface moves through a compartment of the percolation cluster to the next path which again pins the interface with $\eta (i,h_i) > c = \eta _{\rm min}^0$ (see Sec.II).
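Given the recorded sequence of $\eta_{\rm min}(t)$ values, the termination rule above is a few lines of bookkeeping. A sketch (our own helper, not the original code):

```python
def avalanche_duration(eta_min_series, t0):
    """Duration of the avalanche launched at integer time t0: growth
    events belong to it while eta_min(t) < eta_min(t0); the avalanche
    ends at the first time tau with eta_min(tau + 1) > eta_min(t0)."""
    e0 = eta_min_series[t0]
    t = t0 + 1
    while t < len(eta_min_series) and eta_min_series[t] < e0:
        t += 1
    return t - t0      # >= 1; censored if the recorded series ends first
```

Since an avalanche is started at every time step, scanning all `t0` and binning the resulting sizes yields the distribution $P_{av}$ studied below.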
Since the compartment has a height of the order of $\xi_\perp ( \eta_{\rm min} ^0) $ and a width of the order of $\xi _\parallel ( \eta_{\rm min} ^0)$, it is a natural conjecture that the width of the avalanche scales with $ \xi _\parallel ( \eta_{\rm min} ^0) \sim [F_c - \eta_{\rm min} ^0]^{-\nu _\parallel} $ and the size is at most $\xi _\parallel ( \eta_{\rm min} ^0) * \xi _\perp (\eta_{\rm min} ^0) \sim [F_c - \eta_{\rm min} ^0]^{-(\nu _\perp + \nu_\parallel)} $. Note that at every (integer) time an avalanche is started and big avalanches can contain smaller ones. Thus successive growth events inside a big avalanche can be quite far apart (jumps between small avalanches), but all events are inside the correlation length $ \xi_\parallel (\eta_{\rm min} ^0) \sim [F_c - \eta_{\rm min} ^0]^{-\nu _\parallel} $. In this sense an avalanche is “localized” and we call the corresponding motion “local dynamics”. The presumption of dynamical scaling, that there is only a single growing correlation length, is reestablished for the local dynamics and we can ascribe a well-defined local dynamical exponent $z_{loc}$ to the lateral propagation of growth inside the avalanche: $\tau \sim \xi _\parallel ^{z_{loc}}$. Since the size of an avalanche is proportional to the time $\tau$, one has from $\xi_\parallel ^{z_{loc}} \sim \tau \sim \xi_\perp \xi_\parallel $ $$z_{loc} \simeq 1+ \nu_\perp / \nu_\parallel = 1 + \zeta _c \simeq 1.63. \eqno (3)$$ In a simulation, this local dynamical exponent can be detected by considering the infinite moment of the height-height time correlation function in the saturated regime $$C_q (t) = \left \langle \overline {[ h_i (t+t') - h_i (t') - \overline {h_i (t+t') - h_i (t') } ]^q } ^{1/q} \right \rangle , \eqno (4)$$ which becomes for $q \to \infty $ $$C_\infty (t) \simeq \left \langle {\rm max}_i\{h_i (t+t') - h_i (t')\} \right \rangle . 
\eqno (5)$$ Since only the column with maximum height-advance $\Delta h_{\rm max}$ contributes to the infinite moment $C_\infty (t)$, most growth events during the time $t$ are involved in the motion through a compartment of a percolation cluster of height $\Delta h_{\rm max}$, i.e. $\Delta h_{\rm max} \sim \xi_\perp \sim t/\xi_\parallel \sim t^{1-1/z_{loc}}=t^{\zeta / z_{loc}}$. Thus $C_\infty (t) \sim t ^ {\beta _ \infty}$ with $\beta _\infty = \zeta / z_{loc} = \beta _{loc}$. Sneppen and Jensen observed a scaling of $C_\infty (t) $ with $\beta _\infty = 0.41 \pm 0.02 $, which is close to the exponent $\beta_{loc} = \zeta / z_{loc} = \nu_\perp /(\nu_\perp + \nu_\parallel ) \simeq 0.39 $. Our own simulations give $\beta _\infty = 0.40 \pm 0.01 $ which is in perfect agreement with $\beta_{loc}$ if we insert the measured $\zeta \simeq 0.655 $. The distribution $P_h(\Delta h,t)$ of height advances $ \Delta h(t) = h_i (t'+t) - h_i(t') $ is shown in Fig.3a for $2^6 \leq t \leq 2^{15}$ and $\Delta h>0$. $P_h$ has a large peak at $\Delta h=0$ which is not shown, i.e. most of the columns $i$ have not grown (over 99 percent for $t=2^6$ and 70 percent for $t=2^{15}$ with $L=8192$). Next we show that the distribution $P_h (\Delta h,t)$ for the moving columns can be roughly brought to a “local” scaling form when $\Delta h$ is scaled by $t^{\beta_{loc}}$. To normalize $P_h(\Delta h>0,t)$ we note that the portion $n_{mov}$ of columns which have moved scales as the correlation length divided by the system size, $n_{mov} \sim t^{1/z_{loc}} / L$. Thus one has $$P_h (\Delta h>0,t) = {t^{1/z_{loc}} \over L}~ {1 \over t^{\beta_{loc}}}~ \Gamma \left ( {\Delta h \over t^{\beta_{loc}}} \right )$$ $$\sim t^{1-2\beta _{loc}}~ \Gamma \left ( {\Delta h \over t^{\beta_{loc}}} \right ), \eqno (6)$$ where $\Gamma (y)$ is a scaling function (see Fig.3b).
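As a hedged illustration, eq.(5) amounts to averaging the maximum column advance over starting times; assuming interface snapshots are stored as rows of an array (our own convention):

```python
import numpy as np

def c_infinity(snapshots, t):
    """Eq.(5): C_inf(t) ~ < max_i { h_i(t' + t) - h_i(t') } >, averaging
    the maximum column advance over all starting times t' (t >= 1)."""
    H = np.asarray(snapshots)
    adv = H[t:] - H[:-t]               # advances over lag t, for every t'
    return float(np.mean(adv.max(axis=1)))
```

Fitting $C_\infty (t) \sim t^{\beta_\infty}$ to such measurements over a range of lags is how an exponent like the quoted $\beta_\infty = 0.40 \pm 0.01$ is obtained.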
For fast-growing columns with large $\Delta h/t^{\beta_{loc}}$ the above argument for $\Delta h _{\rm max} $ applies and the data collapse in Fig.3b is perfect. For smaller $\Delta h/t^{\beta_{loc}}$, however, there are significant deviations from the “local” scaling form. For the second moment $C_2 (t) $ [@difdef] Sneppen and Jensen observed a scaling with an exponent $\beta _2 = 0.69 \pm 0.02$ [@sj]. However, due to the inhomogeneity in growth, the application of dynamical scaling is questionable as mentioned above. The deviation of the effective exponent $\beta _2$ from the scaling of the local dynamics is caused by the fact that only a small part of the interface ($n_{mov} \sim t^{1/z_{loc}} / L$) has moved for short times $t$. The observed value for $\beta_2$ can be explained by using the fact that $\Delta h$ scales [*roughly*]{} with $t^{\beta_{loc}}$ for $\Delta h>0$ (see Fig.3b). For general integer $q$ we write $$C_q(t) = \left \langle \overline {\left ( \Delta h - \overline {\Delta h} \right ) ^q } ^ {1/q} \right \rangle$$ $$= \left \langle \left ( \overline {\Delta h ^q} -q \overline {\Delta h^{q-1} } ~ \overline {\Delta h} +~...~+ \overline {\Delta h}^q \right ) ^{1/q} \right \rangle$$ $$\simeq \left \langle \left ( {t^{1/z_{loc}} \over L} t^{q\beta_{loc}} - q {t^{1/z_{loc}} \over L} t^{(q-1)\beta_{loc}}{t\over L}+...+ {t^q \over L^q} \right ) ^ {1\over q} \right \rangle$$ ($\overline {\Delta h (t)} = t / L$). In the observed scaling regime $t \ll L^{z_{loc}}$ and therefore the main contribution comes from the first term. Thus we have $$C_q (t) \sim t^{\beta_{loc} + 1/qz_{loc}} \sim t^{1+(1-q)/ qz_{loc}}, \eqno (7)$$ i.e. $\beta _q \simeq 1+ (1-q)/q(1+\zeta)$ and $\beta_2 \simeq 0.70 $ which agrees with the simulations, while for $q \to \infty ~$, $ \beta _q \to \beta_{loc}$. Next the avalanche size is investigated.
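The closed form of eq.(7) for the effective exponents is easy to evaluate; a one-line helper (the default $\zeta = 0.63$ is an inserted assumption of this sketch):

```python
def beta_q(q, zeta=0.63):
    """Effective exponent of eq.(7): beta_q = 1 + (1 - q) / (q * (1 + zeta)).
    For q = 2 this gives roughly 0.70; as q -> infinity it tends to
    beta_loc = zeta / (1 + zeta), i.e. about 0.39."""
    return 1.0 + (1.0 - q) / (q * (1.0 + zeta))
```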
We observe that the distribution $ P_{av} ( s, \eta _{\rm min} ^0) $ of the avalanche size $s$ for a given $\eta _{\rm min} ^0 $ shows a power-law decay with an exponent $\kappa \simeq 1.25 \pm 0.05 $ up to a size $s_0 \sim \xi _\parallel ( \eta_{\rm min} ^0) * \xi _\perp (\eta_{\rm min} ^0)$ and then drops to zero for $s> s_0$ (see Fig.4a). Thus the avalanche size distribution obeys the scaling form $$P_{av}(s) = s ^{- \kappa} ~ \Phi \left ( {s \over s_0} \right ) \eqno (8)$$ with $\Phi (y) = const.$ for $y<1$ and a rapid decay for $y>1$. The good data collapse onto the scaling form eq.(8) in Fig.4b supports our picture that an avalanche corresponds to the motion through a compartment of a percolation cluster, which is characterized by the lengths $\xi_\perp $ and $\xi_\parallel$. Spatial-temporal correlations ============================= In this section we try to understand the spatial-temporal correlations between successive growth events. To this end we investigate the probability distribution $P_{co} (x, \Delta t) $, where $x$ is the distance parallel to the interface between growth events which occur after a time $\Delta t$. Sneppen and Jensen [@sj] observed that $P_{co} (x, \Delta t) $ is constant for sufficiently small $x$ and has a power-law decay with an exponent $\gamma = 2.25 \pm 0.05 $ above a value $x_c$ which increases with $\Delta t$ (Fig.5a). We next explain that the behavior $P_{co} (x, \Delta t) = const $ corresponds to the dynamics of causal growth events inside an avalanche. The local adjustments due to the slope constraint after the avalanche has started induce randomly distributed $ \eta (i,h_i) $. During the avalanche all $\eta _{\rm min} (t) < \eta _{\rm min} ^0 $ are taken from these newly appeared $\eta (i,h_i) $. Thus, the $\eta _{\rm min} (t) $ are randomly distributed in space, i.e. the distance between successive growth events is also equally distributed as long as $\Delta t$ is smaller than the duration of the avalanche $\tau$, i.e.
as long as $ x < \xi_\parallel (\eta_{\rm min} ^0) $. Therefore we can cast $P_{co} (x,\Delta t) $ into the scaling form $$P_{co} = { 1 \over \Delta t^{1/z_{loc}}} ~ \Psi \left ({ x \over \Delta t^{1/z_{loc}}} \right ) \eqno (9)$$ with $z_{loc}= 1 + \zeta$ and $\Psi (y) = const. $ for $y<1$ and $\Psi \sim y^{-\gamma} $ for $y>1$. The scaling form eq.(9) with a satisfactory data collapse is shown in Fig.5b. We see that the spatial-temporal correlations depend on the value $\eta _{\rm min} $. For $\eta _{\rm min} $ close to $F_c$, $x$ is equally distributed even for large $x$. For small $\eta _{\rm min} $ on the other hand, $P_{co} (x,\Delta t) $ obeys a power-law decay also for small $x$, i.e. it is more probable that successive growth events are close by. Thus, for these correlations the temporal translational invariance is destroyed. The value of the exponent $\gamma $ can be understood by considering the conditional distribution $P_{co}$ for fixed $\eta _{\rm min}$ of the first event, which we denote by $ \tilde P_{co} (\eta _{\rm min},x,\Delta t)$. We express $ P_{co} (x,\Delta t)$ by $$P_{co} (x,\Delta t) = \int d \eta _{\rm min}~ \tilde P_{co} (\eta _{\rm min},x,\Delta t)~ P_{\eta}(\eta _{\rm min}). \eqno (10)$$ From our simulation (see Fig.2b) we see that the probability distribution $ P_{\eta}(\eta _{\rm min}) \sim (F_c - \eta_{\rm min}) $ close to $F_c$. From the above discussion we know that $ \tilde P_{co} (\eta _{\rm min},x,\Delta t) \simeq 1/\xi_\parallel$ if $\xi_\parallel > x$. Thus the integral eq.(10) takes the form $$P_{co} (x,\Delta t) = \int _{\xi_\parallel > x} d \eta _{\rm min}~ {1 \over \xi_\parallel}~ (F_c - \eta_{\rm min})$$ from which one obtains $P_{co} (x,\Delta t) \sim x ^{- \gamma}$ with $$\gamma = 1 + (2 / \nu_\parallel) \simeq 2.16. \eqno (11)$$ This is quite close to our numerical result $\gamma = 2.20 \pm 0.05$. We have also directly measured the distribution $ \tilde P_{co} (\eta _{\rm min},x,\Delta t) $ to check our assumptions.
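The integral eq.(10) and the resulting exponent of eq.(11) can be checked numerically. A sketch with all non-universal amplitudes set to one (our own simplification):

```python
import numpy as np

def pco_tail(x, nu_par=1.733):
    """Evaluate the integral of eq.(10) for large x, with
    xi_par(eta) = (F_c - eta)^(-nu_par) and P_eta ~ (F_c - eta).
    Substituting u = F_c - eta, only u < x^(-1/nu_par) contributes
    (the condition xi_par > x), each with weight (1/xi_par) * u."""
    cut = x ** (-1.0 / nu_par)
    u = np.linspace(0.0, cut, 10001)
    f = u ** (nu_par + 1.0)              # (1/xi_par) * (F_c - eta) = u^(nu_par + 1)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(u)) / 2.0)   # trapezoid rule
```

A log-log fit over a range of $x$ recovers the slope $-(1 + 2/\nu_\parallel) \simeq -2.15$ of eq.(11).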
We found that $ \tilde P_{co} (\eta _{\rm min},x,\Delta t = 1) $ first decreases for small $x$ due to a high probability for choosing the next $\eta _{\rm min}$ from the newly appeared $\eta$ from the local adjustments. However, to explain the exponent $\gamma$ we are interested in large $x$ (and thus in large $\xi_\parallel$), for which we indeed observe a constant $ \tilde P_{co} (\eta _{\rm min},x,\Delta t) $ for $x < \xi_\parallel (\eta _{\rm min})$. Conclusions and summary ======================= We have analyzed a number of spatial and temporal correlations in the Sneppen model [@snep; @sj]. The imposed slope constraint $|h_i - h_{i-1}| \leq 1$ is the reason for the relation of the static and dynamic behavior to the properties of directed percolation. The roughness exponent of the interface in the Sneppen model and of the pinned interface in the model of Ref. [@tl] is equal to that of a percolating string, $\zeta _c \simeq 0.63$. The difference between the two models is, however, that in the model of Ref. [@tl] the interface is driven by a uniform force, whereas in the Sneppen model there is a self-tuned driving force which keeps the interface at the onset of steady-state motion, and therefore the interface shows critical behavior. This is achieved by the rule that the site with the weakest pinning force $\eta_{\rm min}$ among all sites of the interface grows. This induces a nonlocal part in the dynamics. As a consequence, the motion of the interface is inhomogeneous in space and the methods of dynamical scaling are not applicable in a direct way because there is no single growing correlation length. Thus we have separated the local from the global part of the motion by introducing an “avalanche”, and assigned a well-defined dynamical exponent $z_{loc} = 1 + \zeta_c$ to the lateral propagation of the growth inside an avalanche.
We found that the size distribution of the avalanches started with $\eta_{\rm min} ^0$ has a power-law decay with an exponent $\kappa \simeq 1.25$ up to a size $\xi_\parallel (\eta_{\rm min} ^0) * \xi_\perp (\eta_{\rm min} ^0) $. The spatial-temporal correlations were investigated by the probability distribution $P_{co}(x,\Delta t)$ which shows a crossover from a behavior determined by causal growth events ($P_{co}(x,\Delta t) = const.$) to a power-law decay with an exponent $\gamma \simeq 2.2$ which can also be related to exponents of directed percolation. We have seen that for the distribution $P_{co}$ the temporal translational invariance is lost, which is due to the global part of the dynamics. Upon completion of the paper we became aware of an independent work by Z. Olami, I. Procaccia, and R. Zeitak where ideas similar to ours have been developed. Acknowledgements {#acknowledgements .unnumbered} ================ We thank Y. C. Zhang for a useful discussion and C. Külske for helpful conversations and critical remarks on the manuscript. The work is supported in part by the Deutsche Forschungsgemeinschaft under SFB 166 and 341. [99]{} , edited by S. Komura and H. Furukawa (Plenum, New York, 1988); [*Phase transitions and Critical Phenomena*]{}, edited by C. Domb and J. L. Lebowitz, Vol.8 (Academic, New York, 1983). D. S. Fisher, Phys. Rev. Lett. [**50**]{}, 1486 (1983); O. Narayan and D. S. Fisher, Phys. Rev. Lett. [**68**]{}, 3615 (1992). P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. [**59**]{}, 381 (1987). J. M. Carlson and J. S. Langer, Phys. Rev. Lett. [**62**]{}, 2632 (1989); J. S. Langer and C. Tang, Phys. Rev. Lett. [**67**]{}, 1043 (1991). N. Martys, M. O. Robbins, and M. Cieplak, Phys. Rev. B [**44**]{}, 12294 (1991); H. Ji and M. O. Robbins, Phys. Rev. B [**46**]{}, 14519 (1992). D. S. Fisher and A. A. Middleton, Phys. Rev. B [**47**]{}, 3530 (1993); O. Narayan and A. A. Middleton, to be published. M. Kardar, G. Parisi, and Y.-C. Zhang, Phys.
Rev. Lett. [**56**]{}, 889 (1986). T. Nattermann, S. Stepanow, L.-H. Tang, and H. Leschhorn, J. Phys. II France [**2**]{}, 1483 (1992); O. Narayan and D. S. Fisher, to be published. For numerical results on eq.(1) with $\lambda = 0$ in lower dimensions see H. Leschhorn, Physica A [**195**]{}, 324 (1993). L.-H. Tang and H. Leschhorn, Phys. Rev. A [**45**]{}, R8309 (1992). S. V. Buldyrev, A.-L. Barabási, F. Caserta, S. Havlin, H. E. Stanley, and T. Vicsek, Phys. Rev. A [**45**]{}, R8313 (1992). K. Sneppen, Phys. Rev. Lett. [**69**]{}, 3539 (1992). L.-H. Tang and H. Leschhorn, Phys. Rev. Lett. [**70**]{}, 3832 (1992). K. Sneppen and M.H. Jensen, Phys. Rev. Lett. [**71**]{}, 101 (1993). D. Stauffer and A. Aharony, [*Introduction to Percolation Theory*]{}, 2nd edition (Taylor & Francis, London, 1992); W. Kinzel, in [*Percolation Structures and Processes*]{}, edited by G. Deutscher, R. Zallen, and J. Adler (A. Hilger, Bristol, 1983), p.425; S. Redner, Phys. Rev. B [**25**]{}, 5646 (1982); J. Kertész and D. E. Wolf, Phys. Rev. Lett. [**62**]{}, 2571 (1989). J. W. Essam, A. J. Guttmann, and K. De’Bell, J. Phys. [**A21**]{}, 3815 (1988); However, more recent transfer matrix calculations by D. ben-Avraham, R. Bidaux, and L. S. Schulman \[Phys. Rev. A [**43**]{}, 7093 (1991)\] gave $\nu_\perp/\nu_\parallel=0.630\pm 0.001$. J. A. M. S. Duarte, Physica A [**189**]{}, 43 (1992). Note that the directed percolation clusters in the model of Buldyrev et al., Ref. [@boston], have a somewhat different geometry yielding a different value $\rho_c$ but the same critical exponents $\nu_\perp$ and $\nu_\parallel$. The definition of $C_q(t)$ in Ref. [@sj] is slightly different from the one we used in eq.(4), which yields the expression in eq.(5) in the limit $q \to \infty $. Since in eq.(4) we first take the power $1/q$ and then average over time, we still average in eq.(5) over time (or over the disorder). Figure captions {#figure-captions .unnumbered} ===============
--- abstract: | Laser Induced Damage Thresholds (LIDTs) are a measure of the level of fluence an optical component may be expected to handle without observable damage. In this work we present an automated system for measurement of LIDTs for a wide range of components and include details of the data handling, image processing and code required. We then apply this system to LIDT measurements of a commercial HDP-1280-2 ’BlueJay’ ferroelectric display, finding LIDTs of $9.2~\si{\watt\per\centi\metre\squared} \diameter 27 \si{\micro\metre}$, $5.5~\si{\watt\per\centi\metre\squared} \diameter 150 \si{\micro\metre}$ and $3.2~\si{\watt\per\centi\metre\squared} \diameter 3.1 \si{\milli\metre}$ with wavelength $1090\pm5\si{\nano\meter}$. Finally, the quality of the results obtained is discussed and conclusions are drawn. author: - 'Peter J. Christopher' - Nadeem Gabbani - 'William O’Neill' - 'Timothy D. Wilkinson' bibliography: - 'references.bib' title: Automated laser induced damage threshold testing applied to a ferroelectric spatial light modulator --- Introduction ============ Laser Induced Damage Thresholds (LIDTs) are an internationally standardised way of quantifying the threshold laser fluence required to cause damage in optical elements. In this paper we present an automated system for LIDT testing in accordance with the ISO 11254 and ISO 21254 standards [@ISO11254; @ISO21254]. As part of ongoing research into the power handling capabilities of Spatial Light Modulators (SLMs) we present an automated system for measurement of damage threshold values. We demonstrate this for a commercial Liquid Crystal on Silicon (LCoS) device under Continuous Wave (CW) laser illumination. This substrate was chosen as a significant quantity of devices was available along with detailed accompanying power handling measurements for comparison. For CW power sources, the optic is exposed at $10$ locations to a laser of known beam diameter and power.
The result is then examined under a high magnification optical microscope for visible damage. The laser power is varied between measurements with the LIDT being taken as the highest laser power for which damage is not observed on any of the $10$ exposure sites. ’Damage’ is here defined according to the ISO definition as any detectable change in the substrate. As bulk heating is assumed to be the primary mechanism for damage under CW exposure, [@WOOD1998517] LIDTs are often quoted with the associated beam diameter. [@2014Ldio; @Bloembergen:73] We here give the LIDT in terms of power per area or $\si{\watt\per\centi\metre\squared}$ for a $\nicefrac{1}{e^2}$ beam diameter $\diameter$ given in $\si{\micro\metre}$ or $\si{\milli\metre}$. LIDTs may also be given in terms of the effective area equal to the ratio of laser power to maximum power density. [@ISO11254; @PhysRevLett.91.127402] The ISO standards do not require a specific beam profile and only maximum beam intensity and beam diameter are required. In the case discussed here detailed manufacturer specifications were available for the beam diameter. For Gaussian illumination, the peak power is approximately $2 \times$ that of an equivalent uniformly distributed beam. [@WOOD1998517] The primary motivation for this work is the automation of a task for improvement in speed and reduction in human error. We begin by presenting the experimental setup and automation arrangements with a focus on methodology. The system is then validated using an LCoS device as a test case. Finally, the measured response is discussed and conclusions are drawn. Experimental Setup ================== Manual LIDT measurement is straightforward, requiring only a known light source, a means of attenuation, a substrate and a microscope. A ’plug-and-play’ automated system requires little further work.
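The factor-of-two relation between a Gaussian beam's peak intensity and the mean intensity of a uniform beam of the same power and $1/e^2$ diameter can be made concrete (a small convenience helper of our own, not part of the ISO procedure):

```python
import math

def peak_intensity(power_w, diameter_m):
    """Peak intensity of a Gaussian beam of total power P and 1/e^2
    diameter d: I_peak = 2 P / (pi * w^2) with w = d / 2, i.e. twice
    the mean intensity of a uniform beam filling the same area."""
    w = diameter_m / 2.0
    return 2.0 * power_w / (math.pi * w * w)
```

For a given laser power, the peak fluence on the sample is thus roughly double the naive power-per-area figure for the same spot diameter.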
Some devices such as polarising filters require light of known polarisation and the damage thresholds for components can range significantly. Figure \[fig:schematic\] shows the schematic of the system designed for automating this process. The Computer Aided Design (CAD) model is shown in Figure \[fig:combo\] along with its real-world implementation. A ’plug-and-play’ approach is taken for the laser source, which can be any of a range of directly cage-mountable sources, including diodes as well as fibre-launched light sources. The laser beam - red in Figure \[fig:schematic\] - is passed vertically downwards through a window and adjustable linear polariser. Adjustment of the polariser relative to the fast axis of the polarising beam splitter (PBS) allows for intensity control in elliptical beams and ensures a known polarisation on the sample. A 90:10 or higher beam splitter extracts a portion of the power for power measurement and a switchable neutral density (ND) filter ensures compatibility with a wide range of intensities. Lens 4 acts as a telescope with the distance between the lens and the stage defining the incident beam spot. Integration with Zemax allows the control system to automate this process. Any reflected light is captured by the beam dump. The microscope system - blue in Figure \[fig:schematic\] - operates by passing a white light LED source through an objective and imaging the back-reflected light. This allows for real-time measurement of substrate degradation. In order to ensure maximum flexibility, all components in the system are designed to be modular and interchangeable. Automation, Control and Operation ================================= A control suite for the system was developed in C# and C/C++ based on the HoloGen framework [@hologen]. This is capable of automating the entire alignment, characterisation and metrology process with a minimum of initial user input.
Source Calibration
------------------

There are three automated calibration procedures for the source, measuring power, stability and ellipticity.

### Source Power Calibration

Calibration of the laser source power is straightforward provided the properties of the power sensor used are known. The waveplate and polarising beam splitter are aligned with parallel fast axes and the response curve of source driving voltage to measured power is taken. Aligning the fast axis of the laser at $45^{\circ}$ to the polarising beam splitter and repeating the measurement allows a second response curve to be measured. The combined response of the laser is equal to the sum of these measurements and allows us to determine laser power without removing the polarising beam splitter or half waveplate.

### Source Stability Calibration

The stability of the laser source can be determined simply by holding the source at constant driving voltage and recording the change in measured power over a period of time, in this case 8 hours. Taking sufficient measurements allows a least-squares fit to a Gaussian distribution in order to calculate the FWHM stability. In the application discussed below, stability was sufficiently high to be neglected in the LIDT calculations. As before, systems incorporating the waveplate and polarising beam splitter require two measurements, offset by $45^{\circ}$, in order to fully characterise stability behaviour.

[![Power incident on the power meter vs waveplate angle for the system shown in Figure \[fig:schematic\] and a 10 mW solid-state laser[]{data-label="fig:waveplate"}](WavePlateFigure.pdf "fig:"){width="\linewidth"}]{}

### Source Ellipticity Calibration

Slightly more involved is the source ellipticity calibration.
For an arbitrary elliptical polarisation $E=[A, B e^{i\delta}]$ passed through a polariser of variable orientation, the extremal transmitted amplitudes are given by $$\begin{aligned} \label{jones3} &E_{\psi} = \\ &E\sqrt{A^2\cos^2\psi + B^2\sin^2\psi + AB\cos\delta\sin{2\psi}} \nonumber\\ &E_{\psi \pm \frac{\pi}{2}} = \\ &E\sqrt{A^2\sin^2\psi + B^2\cos^2\psi - AB\cos\delta\sin{2\psi}} \nonumber \end{aligned}$$ where $A$, $B$, $E$ and $\delta$ are scalar constants, $\sqrt{A^2+B^2}=1$ and where $\psi$ is given by $$\label{jones2} \psi=\frac{1}{2}\tan^{-1}{\left(\frac{2AB\cos{\delta}}{A^2-B^2}\right)}$$ This gives a relationship for measured intensity $I$ of $$\label{jones7} I= \underbrace{C\frac{\left(A^2+B^2\right)}{2}}_\text{Constant Term} + \underbrace{C\sqrt{\frac{\left(A^2-B^2\right)^2+A^2B^2\cos^2{\delta}}{2}}}_\text{Amplitude Term}\times \underbrace{\sin\left(4\theta_0+4\theta+\tan^{-1}\left(\frac{\left(A^2-B^2\right)}{2AB\cos{\delta}} \right) \right)}_\text{Frequency Term}$$ where the constant $C$ incorporates the scaling and loss terms of the system and $\theta$ is the rotation angle of the waveplate. When the waveplate is initially mounted at a non-zero angle $\theta_{0}$ and the source is mounted with unknown orientation, the waveplate is rotated through $360^{\circ}$ and the incident powers recorded. An example is shown in Figure \[fig:waveplate\]. Linear regression then allows for determination of the source parameters, from which the ellipticity can be determined.

Computer Vision
---------------

The initial focus of the microscope sub-system is set by the user. A basic software autofocus implementation is used, with an integrated Zemax model of the objective lens system informing z-axis adjustments on the alignment stage. The power of the illumination LED is controlled to ensure good image white balance and contrast and to reduce post-processing. To automate the damage observation process, a control image is taken before the start of each test.
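Returning briefly to the ellipticity calibration: since the intensity relationship above is linear in its unknowns (a constant offset plus $\sin 4\theta$ and $\cos 4\theta$ components), the regression step reduces to ordinary least squares. A sketch of that step, our own illustration rather than the control suite's implementation:

```python
import math

def fit_waveplate_sweep(thetas, powers):
    """Least-squares fit of P(theta) = c0 + a*sin(4*theta) + b*cos(4*theta)
    to a waveplate power sweep (thetas in radians). The model is linear in
    (c0, a, b), so ordinary least squares suffices. Returns (offset,
    amplitude, phase), matching the constant, amplitude and frequency
    terms of the intensity relationship above."""
    # Build the normal equations (A^T A) x = A^T y for the 3-parameter model.
    rows = [(1.0, math.sin(4 * t), math.cos(4 * t)) for t in thetas]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * p for r, p in zip(rows, powers)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the 3x3 system.
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    c0, a, b = (m[i][3] / m[i][i] for i in range(3))
    # c0 + a*sin(4t) + b*cos(4t) = c0 + R*sin(4t + phi)
    return c0, math.hypot(a, b), math.atan2(b, a)
```

The fitted offset, amplitude and phase then map back to $A$, $B$ and $\delta$ as in the equations above.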
After each test the recorded image $I_{i}$ for measurement $i$ is compared to the control image $I_{0}$ using a normalised mean squared error $E_{MSE}$ where $$E_{MSE}(I_{0},I_{i}) = \frac{1}{N_x N_y}\sum_{x=0}^{N_x-1}\sum_{y=0}^{N_y-1} \left[k\abs{I_{0}(x,y)} - \abs{I_{i}(x,y)}\right]^2 \quad\textit{where} \quad k=\sum_{x=0}^{N_x-1}\sum_{y=0}^{N_y-1}\frac{{\abs{I_{i}(x,y)}}^2}{{\abs{I_{0}(x,y)}}^2}$$ and $N_x$ and $N_y$ are the respective $x$ and $y$ resolutions. A suitable cutoff value for $E_{MSE}$ can then be taken. All captured images are preserved to allow for manual confirmation if required.

[![Structure of a spatial light modulator[]{data-label="fig:slm"}](liquidcrystal.pdf "fig:"){width="\linewidth"}]{}

[![image](capture.pdf){width="\linewidth"}]{}

The system parameters are set by the user in a JSON format. These define the volume of operation for the stage as well as the testing area on the component and the initial power values for testing.

Validation
==========

In order to validate the system, we used NENIR30A ND filters from Thorlabs as they are low-cost and have well-defined damage thresholds under CW illumination. Thorlabs specify the NENIR30A as having an LIDT of $25~\si{\watt\per\centi\metre\squared} \diameter 62 \si{\micro\metre}$ at $1064$ nm, while we measured an LIDT of $29.6~\si{\watt\per\centi\metre\squared} \diameter 70 \si{\micro\metre}$ at $1090$ nm.

Demonstration {#results}
=============

The experimental rig discussed so far was designed as part of ongoing research into SLM power handling capabilities, and we demonstrate our system using a number of HDP-1280-2 'BlueJay' ferroelectric displays. These have a resolution of $1280\times 1280$ pixels and a package size of 11 by 25. As SLMs are multi-level devices (Figure \[fig:slm\]), we take the definition of 'damage' to include any visible change in the device rather than solely a visible change in the substrate.
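The $E_{MSE}$ comparison used for automated damage classification above can be sketched as follows. For simplicity the brightness factor $k$ is taken here as the least-squares optimal scaling of the control image onto the test image, one possible reading of the normalisation in the text rather than the control suite's exact expression:

```python
def normalised_mse(control, test):
    """Compare a test image against the control image, compensating for a
    global brightness change before computing the mean squared error.
    Images are same-size 2D lists of grey levels. The brightness factor k
    is taken here as the least-squares optimal scaling of the control onto
    the test image (an assumption, simplifying the expression in the text)."""
    c = [v for row in control for v in row]
    t = [v for row in test for v in row]
    k = sum(a * b for a, b in zip(c, t)) / sum(a * a for a in c)
    return sum((k * a - b) ** 2 for a, b in zip(c, t)) / len(c)

def damaged(control, test, cutoff):
    """Flag an exposure site as damaged when the error exceeds the cutoff."""
    return normalised_mse(control, test) > cutoff
```

With this choice of $k$, a uniformly brighter or dimmer but otherwise unchanged capture scores zero, so only structural change in the image drives the classification.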
As interest was in Near-Infrared (NIR) behaviour, a $200$ W $1090\pm5\si{\nano\meter}$ fibre laser source was used. This is delivered to the system through a multi-mode fibre.

Results and Discussion
======================

The automated system ran $\approx 350$ tests across a number of spot sizes. The operator time for testing was under $35$ minutes with a combined automated runtime of $6$ hours. It is estimated that an entire operator day would be required for an equivalent manual system. As can be expected, the measured LIDT was higher for smaller Gaussian spot sizes, with LIDTs of $9.2~\si{\watt\per\centi\metre\squared} \diameter 27 \si{\micro\metre}$, $5.5~\si{\watt\per\centi\metre\squared} \diameter 150 \si{\micro\metre}$ and $3.2~\si{\watt\per\centi\metre\squared} \diameter 3.1 \si{\milli\metre}$ being measured at $1090\pm5\si{\nano\meter}$. This is presumed to be due to bulk heating. A number of failure modes were observed, with some extremal cases shown in Figure \[fig:capture\]. Figure \[fig:capture\] (left) shows liquid crystal breakdown under prolonged exposure, Figure \[fig:capture\] (centre) shows delamination of the liquid crystal from the glass without substrate damage, and Figure \[fig:capture\] (right) shows direct substrate damage. The captured microscope images were manually inspected, with only one image classified differently by the human operator and the computer vision system. As might be expected, no difference in LIDT was observed between polarisations parallel and perpendicular to the SLM major axis.

Conclusion
==========

This work has presented a fully automated system for Laser Induced Damage Threshold testing of substrates using only commercial off-the-shelf components. The setup requires $<10\%$ of the operator time required by the equivalent manual system and reduces sources of manual error. The system was demonstrated by testing a Liquid Crystal on Silicon (LCoS) device.
LIDTs of $9.2~\si{\watt\per\centi\metre\squared} \diameter 27 \si{\micro\metre}$, $5.5~\si{\watt\per\centi\metre\squared} \diameter 150 \si{\micro\metre}$ and $3.2~\si{\watt\per\centi\metre\squared} \diameter 3.1 \si{\milli\metre}$ were found for the active device face with an excitation wavelength of $1090\pm5\si{\nano\meter}$. The authors would like to thank the Centre for Advanced Photonics and Electronics (CAPE) technical team for their tireless efforts in helping construct this system. In particular we would like to thank Mr Stephen Drewitt, Mr Ady Ginn, Mr Mark Barnett, Mr Joe Smith and Mr Dave Edwards. The authors would also like to thank the Centre for Doctoral Training in Ultra Precision Engineering (CDT-UP) and the Engineering and Physical Sciences Research Council (EPSRC) for financial support (EP/L016567/1).
---
abstract: |
    Distributive skew lattices satisfying $x\wedge (y\vee z)\wedge x = (x\wedge y\wedge x) \vee (x\wedge z\wedge x)$ and its dual are studied, along with the larger class of linearly distributive skew lattices, whose totally preordered subalgebras are distributive. Linear distributivity is characterized in terms of the behavior of the natural partial order between comparable $\DD$-classes. This leads to a second characterization in terms of strictly categorical skew lattices. Criteria are given for both types of skew lattices to be distributive.\
    Keywords: skew lattice, distributive, partial ordering, $\DD$-class Mathematics Subject Classification (2010): 06A11, 03G10, 06F05
address:
- 'University of Denver, USA'
- 'Westmont College, USA'
- 'Inštitut Jozef Štefan, Slovenia'
author:
- Michael Kinyon
- Jonathan Leech
- João Pita Costa
title: '**Distributivity in Skew Lattices**'
---

Introduction {#Introduction}
============

Recall that a lattice $(L; \wedge, \vee)$ is *distributive* if the identity $x\wedge (y \vee z) = (x\wedge y) \vee (x\wedge z)$ holds on $L$. One of the first results in lattice theory is the equivalence of this identity to its dual, $x\vee(y\wedge z) = (x\vee y) \wedge (x\vee z)$. Distributive lattices are also characterized as being *cancellative* in that $x\wedge y = x\wedge z$ and $x\vee y = x\vee z$ jointly imply $y = z$. A third characterization is that neither of the 5-element lattices below can be embedded in the given lattice.
$\begin{array}{ccccc} \mathbf{M}_{3} & \begin{tikzpicture}[scale=.7] \node (1) at (0,1){$1$} ; \node (a) at (-1,0){$\cdot$} ; \node (b) at (0,0){$\cdot$}; \node (c) at (1,0){$\cdot$} ; \node (0) at (0,-1){$0$} ; \draw (1) -- (c) -- (0) -- (a) -- (1) -- (b) -- (0); \end{tikzpicture} & & \mathbf{N}_{5} & \begin{tikzpicture}[scale=.7] \node (1) at (0,1){$1$} ; \node (a) at (-1,0){$\cdot$} ; \node (b) at (1,0.5) {$\cdot$} ; \node (c) at (1,-0.5) {$\cdot$} ; \node (0) at (0,-1) {$0$} ; \draw (1) -- (a) -- (0) -- (c) -- (b) -- (1) ; \end{tikzpicture}\\ \end{array}$ Distributivity also arises when studying *skew lattices*, that is, algebras with associative, idempotent binary operations $\vee $ and $\wedge $ that satisfy the absorption identities: $$\label{absidentities}\tag{1.1} x\wedge (x\vee y) = x = (y\vee x)\wedge x \text{ and } x\vee (x\wedge y) = x = (y\wedge x)\vee x.$$ Given that $\wedge$ and $\vee$ are associative and idempotent, (\[absidentities\]) is equivalent to the dualities: $$\label{absequivalences}\tag{1.2} x\wedge y=x \text{ iff } x\vee y=y \text{ and } x\wedge y=y \text{ iff } x\vee y=x.$$ For skew lattices, the distributive identities of greatest interest have been the dual pair: $$\label{GMD}\tag{1.3} x\wedge (y\vee z)\wedge x = (x\wedge y\wedge x)\vee (x\wedge z\wedge x);$$ $$\label{GJD}\tag{1.4} x\vee (y\wedge z)\vee x = (x\vee y\vee x)\wedge (x\vee z\vee x).$$ Indeed, a skew lattice is *distributive* if it satisfies both. Unlike the case of lattices, (\[GMD\]) and (\[GJD\]) are not equivalent. Spinks, however, obtained a computer proof in [@Sp00] (humanized later by Cvetko-Vah in [@Ka06]) of their equivalence for skew lattices that are *symmetric* in that: $$\label{sym}\tag{1.5} x\wedge y = y\wedge x \text{ iff } x\vee y = y\vee x.$$ (See [@Ka06], [@Sp00R], [@Sp00].) 
Also unlike lattices, distributive skew lattices need not be *cancellative* in that they need not satisfy: $$\label{canc}\tag{1.6} x\vee y=x\vee z \text{ and } x\wedge y=x\wedge z \text{ imply } y=z \text{, and } x\vee z=y\vee z \text{ and } x\wedge z=y\wedge z \text{ imply } x=y.$$ Conversely, cancellative skew lattices need not be distributive, but they are always symmetric, unlike distributive skew lattices. $\mathbf M_{3}$ and $\mathbf N_{5}$ are forbidden subalgebras of both types of algebras. Their absence is equivalent to the weaker condition of being *quasi-distributive* in that the skew lattice has a distributive maximal lattice image. (See [@Ka11c] Theorem 3.2.) Of course, many skew lattices are both distributive and cancellative. This is true for skew Boolean algebras ([@Ba11], [@BL], [@BS], [@Ku12], [@Le90], [@Le08], [@Sp00], [@Sp06]) and skew lattices of idempotents in rings ([@Ka05c], [@Ka05], [@Ka08], [@Ka12], [@Ka11], [@Le89], [@Le05]). Identities (\[GMD\]) and (\[GJD\]) also arise in studying broader types of noncommutative lattices. (See [@La02] Section 6.) Identities (\[GMD\]) and (\[GJD\]) ensure that the maps $x \mapsto a\wedge x\wedge a$ and $x \mapsto a\vee x\vee a$ are homomorphic retractions of $\mathbf S$ onto the respective subalgebras ${\{\,x \in S\mid a\wedge x=x=x\wedge a\,\}}$ and ${\{\,x \in S\mid a\vee x=a=x\vee a\,\}}$ for each element $a$ in the skew lattice $\mathbf S$. In this paper we study further effects of being distributive, as well as connections between distributive skew lattices and other varieties of algebras. A central concept in our study is *linear distributivity*, which requires that all subalgebras that are totally preordered under the natural preorder $\succeq$, as defined in (\[pre\]) below, are distributive. This is unlike the case for lattices, where totally ordered sublattices are automatically distributive. Like quasi-distributivity, linear distributivity is necessary but not sufficient for a skew lattice to be distributive.
We begin by reviewing some of the required background for this paper in Section \[Background\]. (For more thorough remarks, see [@Le96] or the introductory remarks in [@CAT].) In Section \[Linear Distributivity\] linear distributivity is introduced, with characterizing identities given in Theorem \[lindist\]. In the next section it is studied in terms of the natural partial order $\geq$ defined in (\[poset\]) below, with attention given to the behavior of $\geq$ on a skew chain of comparable $\DD$-classes, $A > B > C$. Distributive skew chains are characterized by the behavior of their *midpoint sets* given by $\mu(a, c) = {\{\,b\in B\mid a>b>c\,\}}$ for any pair $a>c$ with $a\in A$ and $c\in C$. While these sets often contain many midpoints, (\[GMD\]) and (\[GJD\]) minimize their size. The details are given in Section \[Midpoints and Distributive Skew Chains\], whose main result, Theorem \[disteq\], characterizes distributive skew chains (and by extension, linearly distributive skew lattices) not only in terms of midpoints but also in terms of strictly categorical skew lattices (first studied in [@CAT]; see Section \[Midpoints and Distributive Skew Chains\]). The latter generalize both normal skew lattices (where $(S, \wedge)$ is a normal band), first studied in this journal [@Le92], and their $\wedge$-$\vee$ duals. Is linear distributivity in concert with quasi-distributivity enough to guarantee that a skew lattice is distributive? In general, the answer is no. It is, however, for strictly categorical skew lattices, which form a significant subclass of linearly distributive skew lattices. (See Theorem \[strcat\] and the relevant discussion in Section \[Midpoints and Distributive Skew Chains\].) If we assume that the skew lattice is symmetric, the answer is yes (Theorem \[distrib\]). A characterization of those linearly distributive and quasi-distributive skew lattices that are distributive is given in Theorem \[distlindist\].
Background {#Background}
==========

Returning first to symmetric skew lattices, they form a variety of skew lattices characterized by the following identities: $$\label{syma}\tag{2.1} x\vee y\vee (x\wedge y) = (y\wedge x)\vee y\vee x$$ $$\label{symb}\tag{2.2} x\wedge y\wedge (x\vee y) = (y\vee x)\wedge y\wedge x$$ given first by Spinks [@Le05]. Identity (\[syma\]) characterizes *upper symmetry* ($x \wedge y = y \wedge x$ implies $x \vee y = y \vee x$ for all $x, y\in S$) while identity (\[symb\]) characterizes *lower symmetry* ($x \vee y = y \vee x$ implies $x \wedge y = y \wedge x$ for all $x, y\in S$). The *Green's relations* are defined on a skew lattice by $$\label{R}\tag{2.3R} a\RR b \Leftrightarrow (a\wedge b = b \text{ and } b\wedge a = a) \Leftrightarrow (a\vee b = a \text{ and } b\vee a = b);$$ $$\label{L}\tag{2.3L} a\LL b \Leftrightarrow (a\wedge b = a \text{ and } b\wedge a = b) \Leftrightarrow (a\vee b = b \text{ and } b\vee a = a);$$ $$\label{D}\tag{2.3D} a\DD b \Leftrightarrow (a\wedge b\wedge a = a \text{ and } b\wedge a\wedge b = b) \Leftrightarrow (a\vee b\vee a = a \text{ and } b\vee a\vee b = b).$$ All three relations are canonical congruences, with $\LL \vee \RR = \LL \circ \RR = \RR \circ \LL = \DD$ and $\LL \cap \RR = \Delta={\{\,(x,x)\mid x\in S \,\}}$, the identity equivalence. Their congruence classes are called $\DD$-classes, $\LL$-classes or $\RR$-classes and are often denoted by $\DD_{x}$, $\LL_{x}$ or $\RR_{x}$ where $x$ is some class member. A skew lattice $\mathbf S$ is *rectangular* if $x\wedge y\wedge x = x$, or dually $y\vee x\vee y = y$, holds on $\mathbf S$. Such a skew lattice is anti-commutative in that $x\wedge y = y\wedge x$ or $x\vee y = y\vee x$ implies $x = y$. The *First Decomposition Theorem* (see [@Le89] Theorem 1.7) states that *in any skew lattice $\mathbf S$ each $\DD$-congruence class is a maximal rectangular subalgebra of $\mathbf S$ and $\mathbf S/\DD$ is the maximal lattice image of $\mathbf S$*.
In particular, a rectangular skew lattice consists of a single $\DD$-class. A skew lattice is *right-handed* \[respectively *left-handed*\] if it satisfies the identities $$\label{RH}\tag{2.4R} x\wedge y\wedge x = y\wedge x\text{ and } x\vee y\vee x = x\vee y$$ $$\label{LH}\tag{2.4L} [x\wedge y\wedge x = x\wedge y \text{ and } x\vee y\vee x = y\vee x].$$ Equivalently, $x\wedge y = y$ and $x\vee y = x$ \[$x\wedge y = x$ and $x\vee y = y$\] hold in each $\DD$-class, thus reducing $\DD$ to $\RR$ \[or $\LL$\]. The *Second Decomposition Theorem* (see [@Le89] Theorem 1.15) states that *given any skew lattice $\mathbf S$, $\mathbf S/\RR$ and $\mathbf S/\LL$ are its respective maximal left- and right-handed images, with $\mathbf S$ isomorphic to the fibered product, $S/\RR \times_{S/\DD} S/\LL$, of both over their common maximal lattice image under the map $x \mapsto (\RR_{x}, \LL_{x})$*. All this is because every skew lattice is *regular* in that for all $x,y,z\in S$ and all $x'\in \DD_{x}$ the following holds: $$\label{reg}\tag{2.5} x\vee y\vee x'\vee z\vee x = x\vee y\vee z\vee x \text{ and } x\wedge y\wedge x'\wedge z\wedge x = x\wedge y\wedge z\wedge x.$$ A skew lattice $\mathbf S$ is distributive (symmetric, cancellative, etc.) if and only if its left and right factors $\mathbf S/\RR$ and $\mathbf S/\LL$ are distributive (symmetric, cancellative, etc.). In general, $\mathbf S$ belongs to a variety $\VV$ of skew lattices if and only if both $\mathbf S/\RR$ and $\mathbf S/\LL$ do. (See also [@Ka05b] and [@Le96], Section 1.) The natural preorder is defined on a skew lattice by $$\label{pre}\tag{2.6} a \succeq b \Leftrightarrow a\vee b\vee a = a\text{ or, equivalently, }b\wedge a\wedge b = b.$$ Observe that $a\succeq b$ in $\mathbf S$ if and only if $\DD_{a} \geq \DD_{b}$ in the lattice $\mathbf S/\DD$, where $\DD_{a}$ and $\DD_{b}$ are the respective $\DD$-classes of $a$ and $b$.
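The handed reductions described above can be verified mechanically on a finite model: on any carrier set, $x\wedge y = x$ and $x\vee y = y$ define a left-handed rectangular skew lattice, and a brute-force check confirms idempotency, associativity, the absorption identities (\[absidentities\]), rectangularity, and both distributive identities (\[GMD\]) and (\[GJD\]). A sketch of such a check:

```python
from itertools import product

# Left-handed rectangular skew lattice on any finite carrier set:
# x ∧ y = x and x ∨ y = y (the in-class reduction of (2.4L) above).
def meet(x, y):
    return x

def join(x, y):
    return y

S = range(4)  # any finite carrier will do

for x, y in product(S, S):
    # idempotency and the absorption identities (1.1)
    assert meet(x, x) == x and join(x, x) == x
    assert meet(x, join(x, y)) == x and meet(join(y, x), x) == x
    assert join(x, meet(x, y)) == x and join(meet(y, x), x) == x
    # rectangularity: x ∧ y ∧ x = x
    assert meet(meet(x, y), x) == x

for x, y, z in product(S, S, S):
    # associativity of both operations
    assert meet(meet(x, y), z) == meet(x, meet(y, z))
    assert join(join(x, y), z) == join(x, join(y, z))
    # the distributive identities (1.3) and (1.4)
    assert meet(meet(x, join(y, z)), x) == \
        join(meet(meet(x, y), x), meet(meet(x, z), x))
    assert join(join(x, meet(y, z)), x) == \
        meet(join(join(x, y), x), join(join(x, z), x))
```

The same brute-force scheme extends to any finite skew lattice presented by its operation tables.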
Useful variants of (\[RH\]) and (\[LH\]) for the respective right- and left-handed cases are as follows: $$\label{preR}\tag{2.7R} x \succeq x' \Rightarrow x\wedge y\wedge x' = y\wedge x' \text{ and } x\vee y\vee x' = x\vee y;$$ $$\label{preL}\tag{2.7L} x \succeq x' \Rightarrow x'\wedge y\wedge x = x'\wedge y \text{ and } x\vee y\vee x' = y\vee x'.$$ We let $a \succ b$ denote $a \succeq b$ when $a \DD b$ does not hold. For left-handed skew lattices, the following identities hold: $$\label{id1}\tag{2.8} x\wedge (y\vee x) = x = (x\wedge y)\vee x.$$ $$\label{id2}\tag{2.9} (x\vee (y\wedge x))\wedge x = x\vee (y\wedge x)$$ $$\label{id3}\tag{2.10} (x\vee (y\wedge x))\wedge y = y\wedge x.$$ If $\mathbf S$ is a left-handed skew lattice, then $x\wedge (y\vee x) =_{\eqref{LH}} x\wedge (x\vee y\vee x) =_{\eqref{absidentities}} x$. Similarly, $(x\wedge y) \vee x = x$, proving (\[id1\]). As for (\[id2\]), observe that for *all* skew lattices, $x\vee(y\wedge x)\vee x = x$, since $x \succeq y\wedge x$. Thus (\[id2\]) follows from (\[absequivalences\]). Identity (\[id3\]) follows from: $\begin{array}{lcl} (x \vee (y \wedge x)) \wedge y &=_{\eqref{id2},\eqref{LH}}& (x \vee (y \wedge x)) \wedge x \wedge y \wedge x \\ &=_{\eqref{id2}}& (x \vee (y \wedge x)) \wedge y \wedge x \\ &=_{\eqref{absidentities}}& y \wedge x. \end{array}$ The natural preorder $\succeq$ is refined by the *natural partial order*, which is defined on $\mathbf{S}$ by $$\label{poset}\tag{2.11} x \geq y \leftrightarrow x\wedge y = y \wedge x = y \text{ or, equivalently, } x \vee y = y \vee x = x.$$ All preorders and partial orders are assumed to be natural. Of course $x > y$ means $x \geq y$ but $x \neq y$. Given $a \succeq b$, elements $a_b \in \DD_a$ and $b_a\in \DD_b$ exist such that $a \geq b_a$ and $a_b \geq b$. To see this, just consider $a_b= b\vee a\vee b$ and $b_a= a\wedge b\wedge a$.

Linear Distributivity {#Linear Distributivity}
=====================

A skew lattice $\mathbf S$ is *linearly distributive* if every subalgebra $\mathbf T$ that is totally preordered under $\succeq$ is distributive.
Since totally preordered skew lattices are trivially symmetric, *a skew lattice $\mathbf S$ is linearly distributive if and only if each totally preordered subalgebra $\mathbf T$ of $\mathbf S$ satisfies (\[GMD\]) or, equivalently, (\[GJD\])*. Linearly distributive skew lattices form a variety of skew lattices. Consider the terms $x$, $y\wedge x\wedge y$ and $z\wedge y\wedge x\wedge y\wedge z$. Clearly $x \succeq y\wedge x\wedge y \succeq z\wedge y\wedge x\wedge y\wedge z$ holds in all skew lattices. Conversely, given any instance $a \succeq b \succeq c$ in some skew lattice $\mathbf S$, the assignment $x \mapsto a$, $y \mapsto b$, $z \mapsto c$ will return this particular instance. Thus a characterizing set of identities for the class of all linearly distributive skew lattices is given by taking the basic identity $$u\wedge (v \vee w)\wedge u = (u\wedge v\wedge u) \vee (u\wedge w\wedge u)$$ and forming all the identities possible in $x$, $y$, $z$ by making bijective assignments from the variables ${\{\,u, v, w\,\}}$ to the terms ${\{\,x, y\wedge x\wedge y, z\wedge y\wedge x\wedge y\wedge z\,\}}$. The following pair of lemmas will be useful in what follows. \[LRdist\] Left-handed skew lattices satisfying (\[GMD\]) are characterized by: $$\label{GMDa}\tag{3.1L} x\wedge y\wedge x=x\wedge y \text{ and } x\wedge (y\vee z)=(x\wedge y)\vee (x\wedge z).$$ Dually, right-handed skew lattices satisfying (\[GMD\]) are characterized by: $$\label{GMDb}\tag{3.1R} x\wedge y\wedge x = y\wedge x \text{ and } (y \vee z) \wedge x = (y \wedge x) \vee (z \wedge x).$$ \[RLD\] In a left-handed totally preordered skew lattice, if $a\wedge (b\vee c) \neq (a\wedge b) \vee (a\wedge c)$, then $a \succ b \succ c$.
Thus a left-handed skew lattice $\mathbf S$ is linearly distributive if and only if $$\label{LLD}\tag{3.2L} a\wedge ((b\wedge a) \vee (c\wedge b\wedge a)) = (a\wedge b) \vee (a\wedge c\wedge b) \text{ for all } a, b, c \in S.$$ Dually, a right-handed skew lattice $\mathbf S$ is linearly distributive if and only if $$\label{RLDeq}\tag{3.2R} ((a\wedge b\wedge c) \vee (a\wedge b))\wedge a = (b\wedge c\wedge a) \vee (b\wedge a) \text{ for all } a, b, c \in S.$$ If say $b \succeq a$, then $a\wedge (b\vee c) = a$ and $(a\wedge b)\vee (a\wedge c) = a\vee (a\wedge c) = a$. If $c \succeq a$, then $a\wedge (b\vee c) = a$ again, and $(a\wedge b) \vee (a\wedge c) = (a\wedge b) \vee a = (a\wedge b\wedge a) \vee a = a$. Thus inequality can only occur when $a\succeq b,c$. But even here, $a\succeq c\succeq b$ gives us $a\wedge (b\vee c)=a\wedge c$ and $(a\wedge c)\succeq (a\wedge b)$, so that $(a\wedge b) \vee (a\wedge c) = a\wedge c$ also. Thus the only case in which $a\wedge (b\vee c) \neq (a\wedge b) \vee (a\wedge c)$ can occur is $a \succ b \succ c$. Linear distributivity is also characterized succinctly by either of a dual pair of identities. We begin with an observation. Identities (\[GMD\]) and (\[GJD\]) are respectively equivalent to $$\label{GMDc}\tag{3.3} x \wedge ((y \wedge x) \vee (z \wedge x)) = x \wedge (y \vee z) \wedge x = ((x \wedge y) \vee (x \wedge z)) \wedge x.$$ $$\label{GMDd}\tag{3.4} x \vee ((y \vee x) \wedge (z \vee x)) = x \vee (y \wedge z) \vee x = ((x \vee y) \wedge (x \vee z)) \vee x.$$ Since $(y\wedge x)\vee (z\wedge x)\vee x=_{\eqref{absidentities}} x$, the skew lattice dualities give us $$\label{equ}\tag{*} ((y \wedge x) \vee (z \wedge x)) \wedge x = (y \wedge x) \vee (z \wedge x).$$ Thus (\[GMD\]) implies $$\begin{array}{lcl} x\wedge ((y\wedge x)\vee (z\wedge x)) &=& x\wedge ((y\wedge x)\vee (z\wedge x))\wedge x \\ &=& (x\wedge y\wedge x)\vee (x\wedge z\wedge x) \\ &=& x\wedge (y\vee z)\wedge x.
\end{array}$$ Likewise, (\[GMD\]) implies $((x\wedge y)\vee (x\wedge z))\wedge x=x\wedge (y\vee z)\wedge x$. Conversely, if (\[GMDc\]) holds, then $$\begin{array}{lclcl} x\wedge (y\vee z)\wedge x &=& x\wedge ((y\wedge x)\vee (z\wedge x)) &=_{\eqref{equ}}& x\wedge ((y\wedge x)\vee (z\wedge x))\wedge x \\ &=& ((x\wedge y\wedge x)\vee (x\wedge z\wedge x))\wedge x &=_{\eqref{equ}}& (x\wedge y\wedge x)\vee (x\wedge z\wedge x). \end{array}$$ For all skew lattices, (\[GMDc\]) and (\[GMDd\]) imply respectively: $$\label{GMDe}\tag{3.5} x\wedge ((y\wedge x)\vee (z\wedge x)) = ((x\wedge y)\vee (x\wedge z))\wedge x \text{ and}$$ $$\label{GMDf}\tag{3.6} x\vee ((y\vee x)\wedge (z\vee x)) = ((x\vee y)\wedge (x\vee z))\vee x$$ \[lindist\] For all skew lattices, (\[GMDe\]) and (\[GMDf\]) are equivalent, with a skew lattice satisfying either (and hence both) if and only if it is linearly distributive. We begin with left-handed skew lattices. By Lemma \[RLD\] we need only consider the case where $a \succ b \succ c$. Identity (\[GMDe\]) gives us the middle equality in the chain of equalities: $$a\wedge (b\vee c) = a\wedge ((b\wedge a)\vee (c\wedge a)) = ((a\wedge b)\vee (a\wedge c))\wedge a = (a\wedge b)\vee (a\wedge c).$$ Thus $x\wedge (y \vee z) = (x\wedge y) \vee (x\wedge z)$ holds in all totally preordered contexts in left-handed skew lattices satisfying (\[GMDe\]). In such symmetrical contexts, the dual $(z \wedge y)\vee x = (z\vee x) \wedge (y\vee x)$ also holds, making the involved skew lattice linearly distributive. In dual fashion, right-handed skew lattices satisfying (\[GMDe\]) are also linearly distributive. Since any skew lattice $\mathbf S$ is embedded in the direct product $S/\RR \times S/\LL$, every skew lattice satisfying (\[GMDe\]) is linearly distributive. Conversely assume that $\mathbf S$ is linearly distributive. First, let $\mathbf S$ be left-handed.
Then $ \begin{array}{lcl} x \wedge ((y\wedge x) \vee (z\wedge x)) &=_{\eqref{LH}}& x \wedge ((z\wedge x) \vee (y\wedge x) \vee (z\wedge x)) \\ &=& (x \wedge ((z\wedge x) \vee (y\wedge x))) \vee (x \wedge (z\wedge x)) \\ &=_{\eqref{LH}}& (x \wedge ((y\wedge x) \vee (z\wedge x) \vee (y\wedge x))) \vee (x \wedge (z\wedge x)) \\ &=& (x \wedge ((y\wedge x) \vee (z\wedge x))) \vee (x \wedge (y\wedge x)) \vee (x \wedge (z\wedge x)) \\ &=& (x \wedge (y\wedge x)) \vee (x \wedge (z\wedge x)) \\ &=_{\eqref{LH}}& (x\wedge y)\vee (x\wedge z) \\ &=& ((x\wedge y)\vee (x\wedge z))\wedge x. \end{array} $ Here the second and fourth equalities follow from linear distributivity. The fifth equality is again left-handedness upon observing that $x \wedge ((y\wedge x) \vee (z\wedge x))$ and $(x \wedge (y\wedge x)) \vee (x \wedge (z\wedge x))$ are $\LL$-related (look at $S/\DD = S/\LL$). The final equality follows from the fact that $x \geq (x\wedge y) \vee (x\wedge z)$ in the left-handed case. Thus (\[GMDe\]) holds. Similarly, (\[GMDe\]) holds for linearly distributive, right-handed skew lattices. Again the embedding $S \rightarrow S/\RR \times S/\LL$ guarantees that all linearly distributive skew lattices satisfy (\[GMDe\]). Thus linear distributivity is characterized by (\[GMDe\]). The dual argument gives a characterization by (\[GMDf\]). For left- and right-handed skew lattices, (\[GMDe\]) reduces respectively to $$\label{GMDg}\tag{3.5L} x\wedge ((y\wedge x)\vee (z\wedge x)) = (x\wedge y)\vee (x\wedge z)\text{ and}$$ $$\label{GMDh}\tag{3.5R} ((x\wedge y)\vee (x\wedge z))\wedge x = (y\wedge x)\vee (z\wedge x)$$

Midpoints and Distributive Skew Chains {#Midpoints and Distributive Skew Chains}
======================================

A skew lattice is linearly distributive if and only if each skew chain of $\DD$-classes in it is distributive. In this section we characterize distributive skew chains in terms of the natural partial order.
Given a skew chain $A>B>C$ where $A$, $B$ and $C$ are $\DD$-classes, with $a \in A$, $c \in C$ such that $a > c$, any element $b \in B$ such that $a > b > c$ is called a *midpoint* in $B$ of $a$ and $c$. We begin with several straightforward assertions. \[midpoint\] Given a skew chain $A > B > C$ with $a \in A$ and $c \in C$ such that $a > c$:

- For all $b \in B$, $a\wedge (c\vee b\vee c)\wedge a$ and $c\vee (a\wedge b\wedge a)\vee c$ are midpoints in $B$ of $a$ and $c$.

- If $b$ in $B$ is a midpoint of $a$ and $c$, then both midpoints in (i) reduce to $b$.

- When $A > B > C$ is a distributive skew chain, both midpoints in (i) agree: $$\label{distineq}\tag{4.1} a > a\wedge (c\vee b\vee c)\wedge a = c\vee (a\wedge b\wedge a)\vee c > c.$$

Midpoints provide a key to determining the effects of (\[GMD\]) and (\[GJD\]) in this context. To proceed further, we recall several concepts. Given a skew chain $A > B > C$, an *$A$-coset in $B$* is any subset of $B$ of the form $A\wedge b\wedge A = {\{\,a\wedge b\wedge a' \mid a, a'\in A\,\}}$ for some $b$ in $B$. Any two $A$-cosets in $B$ are either identical or disjoint. Since $b \in A\wedge b\wedge A$ for all $b$ in $B$, the $A$-cosets in $B$ form a partition of $B$. Dually, a *$B$-coset in $A$* is a subset of $A$ of the form $B\vee a\vee B = {\{\,b\vee a\vee b' \mid b, b'\in B\,\}}$ for some $a$ in $A$. Again, the $B$-cosets in $A$ form a partition of $A$. Given a $B$-coset $X \subseteq A$ and an $A$-coset $Y \subseteq B$, a *coset bijection* $\varphi: X\rightarrow Y$ is given by $\varphi(a) = b$ if $a \in X$, $b\in Y$ and $a>b$. Alternatively, $\varphi(a) = a\wedge b\wedge a$ and, dually, $\varphi^{-1}(b) = b\vee a\vee b$ for all $a \in X$ and all $b \in Y$. Cosets are rectangular subalgebras in their $\DD$-classes and all coset bijections are isomorphisms. Thus all $A$-cosets in $B$ and all $B$-cosets in $A$ have a common size, denoted by $\omega[A,B]$.
If $a, a' \in A$ lie in a common $B$-coset, this is denoted by $a -_{B} a'$; likewise $b -_{A} b'$ if $b$ and $b'$ lie in a common $A$-coset in $B$. This is illustrated in the partial configuration below, where the dashed lines indicate $>$ between $a$'s and $b$'s while the full lines connect $\DD$-related elements.

\begin{tikzpicture}
\node (A) at (0,1) {$A:$};
\node (B) at (0,0) {$B:$};
\node (a1) at (1,1) {$a_{1}$};
\node (a2) at (2,1) {$a_{2}$};
\node (a3) at (3,1) {$a_{3}$};
\node (a4) at (4,1) {$a_{4}$};
\node (b1) at (2,0) {$b_{1}$};
\node (b2) at (3,0) {$b_{2}$};
\draw[dashed] (a1) -- (b1);
\draw[dashed] (a2) -- (b2);
\draw[dashed] (a3) -- (b1);
\draw[dashed] (a4) -- (b2);
\draw (a1) -- (a2) node[pos=.5,below] {$B$};
\draw (a3) -- (a4) node[pos=.5,below] {$B$};
\draw (b1) -- (b2) node[pos=.5,below] {$A$};
\end{tikzpicture}

Binary outcomes between elements in $A$ and $B$ are given by, e.g., $a\wedge b =\varphi (a)\wedge b$ in $B$ and $a\vee b =a\vee \varphi^{-1}(b)$ in $A$, using the relevant coset bijection $\varphi: B\vee a\vee B \rightarrow A\wedge b\wedge A$. (For more details see [@Le93] and [@Le96] or remarks in [@CAT].) Similarly there are $A$-cosets in $C$, $C$-cosets in $A$, $B$-cosets in $C$ and $C$-cosets in $B$. The $C$-coset decomposition of $A$ refines the $B$-coset decomposition of $A$; similarly, $B$-cosets in $C$ are refined by $A$-cosets in $C$. Our interest is in the middle class $B$ of the skew chain. Elements $b$ and $b'$ in $B$ are *$AC$-connected* if a finite sequence $b = b_{0}, b_{1}, b_{2}, \dots , b_{n} =b'$ exists in $B$ such that $b_{i} -_{A} b_{i+1}$ or $b_{i} -_{C} b_{i+1}$ for all $i\leq n-1$. A maximally $AC$-connected subset of $B$ is an *$AC$-component* of $B$ (or just a *component* if the context is clear). $B$ is a disjoint union of all its $AC$-components, and every $AC$-component $B'$ of $B$ is the disjoint union of all $A$-cosets in $B$ that are subsets of $B'$ and the disjoint union of all $C$-cosets in $B$ that are subsets of $B'$, as well as the disjoint union of all the $AC$-cosets in $B'$. $AC$-connectedness is a congruence relation on $B$.
Its congruence classes, the components, are thus subalgebras of $B$. Given a component $B'$ of $B$, a sub-skew chain is given by $A > B' > C$. Since $a\wedge (c\vee b\vee c)\wedge a$ is the same for all $b$ in a common $C$-coset and $c\vee (a\wedge b\wedge a)\vee c$ is the same for all $b$ in a common $A$-coset, we can extend Lemma \[midpoint\] as follows: Given a distributive skew chain $A > B > C$, for any pair $a > c$ where $a \in A$ and $c \in C$, each $AC$-component $B'$ of $B$ contains a unique midpoint $b$ of $a$ and $c$. Given cosets $X \subseteq A$ and $Y \subseteq B$ as above, a coset bijection $\varphi:X\rightarrow Y$ can be viewed as a partial bijection between the involved $\DD$-classes, $\varphi:A\rightarrow B$. Recall that a skew lattice $\mathbf S$ is *categorical* if for all skew chains $A > B > C$ of $\DD$-classes in $\mathbf S$, nonempty composites $\psi\circ \varphi$ of coset bijections $\varphi$ from $A$ to $B$ and $\psi$ from $B$ to $C$ are coset bijections from $A$ to $C$. In this case, adjoining empty partial bijections to account for empty compositions and identity bijections on $\DD$-classes, one obtains a category with $\DD$-classes for objects, coset bijections for morphisms, and the composition of partial functions for composition (see [@CAT], [@JPC11] or [@JPC12] for more details). Clearly, a skew chain $A>B>C$ is categorical if and only if $A > B' > C$ is categorical for each component $B'$. Categorical skew lattices form a variety (see [@Le93], Theorem 3.16). We also have: A skew lattice $\mathbf S$ is categorical if and only if for all $x,y, z \in S$: $$\label{cateq}\tag{4.2} x\geq y \succeq z \text{ implies } x \wedge (z \vee y \vee z) \wedge x = (x \wedge z \wedge x) \vee y \vee (x \wedge z \wedge x).$$ Thus linearly distributive skew lattices are categorical. The converse, however, does not hold (see Example \[nlindist\] below).
It does hold, however, for *strictly categorical* skew lattices where, in addition, for every chain of $\DD$-classes $A > B > C$ each $A$-coset in $B$ has nonempty intersection with each $C$-coset in $B$, making $B$ a single $AC$-component. Strictly categorical skew lattices form a variety (see [@CAT], Corollary 4.3). This class includes: - *Normal* skew lattices characterized by the condition $x\wedge y\wedge z\wedge w = x\wedge z\wedge y\wedge w$, or equivalently, by every subset $[e]\downarrow = {\{\,x \in S\mid e \geq x\,\}}$ being a sublattice (see [@Le92]). Skew Boolean algebras are normal as skew lattices. - *Primitive* skew lattices consisting of two $\DD$-classes, $A > B$, and all skew lattices in the subvariety generated by this class of skew lattices. \[strict\] The following conditions on a skew lattice $\mathbf S$ are equivalent: - $\mathbf S$ is strictly categorical. - Given both $a > b > c$ and $a > b' > c$ in $S$ with $b \DD b'$, $b = b'$ follows. - Given $a > b$ in $S$, the subalgebra $[a, b] = {\{\,x\in S\mid a \geq x \geq b\,\}}$ is a sublattice. - $\mathbf S$ is categorical and, given a skew chain $A > B > C$ in $\mathbf S$, for each coset bijection $\chi: A \rightarrow C$ unique coset bijections $\varphi: A \rightarrow B$ and $\psi: B \rightarrow C$ exist such that $\chi=\psi \circ \varphi$. Returning to distributive skew chains, we have the following: \[distlema\] A left-handed, categorical skew chain $\mathbf S$ is distributive if and only if $a\wedge (b\vee c) = (a\wedge b) \vee (a\wedge c)$ for all $a \succ b \succ c$ such that $a > c$, in which case the identity reduces to $a\wedge (b\vee c) = (a\wedge b)\vee c$. Dually, a right-handed categorical skew chain $\mathbf S$ is distributive if and only if $(c\vee b)\wedge a = (c\wedge a)\vee (b\wedge a)$ for all $a \succ b \succ c$ such that $a > c$, in which case the identity reduces to $(c\vee b)\wedge a = c\vee (b\wedge a)$.
(Note that these identities are the left and right-handed cases of above.) Given $a\succ b\succ c$ with respective $\DD$-classes $A>B>C$, let $c'=a\wedge c$. Then $a>c'$ and $(a\wedge b) \vee (a\wedge c) = (a\wedge b) \vee c'$. Next, since $c$ and $c'$ lie in the same $A$-coset in $C$ and $\mathbf S$ is categorical, both $b\vee c$ and $b\vee c'$ lie in the same $A$-coset in $B$ so that $a\wedge (b\vee c) = a\wedge (b\vee c')$. Hence $a\wedge (b\vee c) = (a\wedge b) \vee (a\wedge c)$ if and only if $a\wedge (b \vee c') = (a\wedge b) \vee (a\wedge c')$ where $a \succ b \succ c'$, $a > c'$ with the latter expression reducing to $(a\wedge b) \vee c'$ as stated. The lemma follows from Lemma \[LRdist\] and left-right duality. \[disteq\] Given a skew chain $A > B > C$, the following conditions are equivalent: - $A > B > C$ is distributive. - For all $a \in A$, $b \in B$ and $c \in C$ with $a > c$, $$a\wedge (c\vee b\vee c)\wedge a = c\vee (a\wedge b\wedge a)\vee c.$$ - Given $a \in A$ and $c \in C$ with $a > c$, each component $B'$ of $B$ contains a unique midpoint $b$ of $a$ and $c$. - For each component $B'$ of $B$, $A>B'>C$ is strictly categorical. When these conditions hold, each coset bijection $\chi: A \rightarrow C$ uniquely factors through each component $B'$ of $B$ in that unique coset bijections $\varphi: A \rightarrow B'$ and $\psi: B' \rightarrow C$ exist such that $\chi = \psi \circ \varphi$ under the usual composition of partial bijections. \(i) clearly implies (ii). Given $a > c$ in (ii), for each element $x$ in $B$, both $b_{1} = a\wedge (c\vee x\vee c)\wedge a$ and $b_{2} = c\vee (a\wedge x\wedge a)\vee c$ are midpoints of $a$ and $c$ in $B$. Replacing $x$ by any element in its $C$-coset does not change the $b_{1}$-outcome. Likewise, replacing $x$ by any element in its $A$-coset does not change the $b_{2}$-outcome.
Hence (ii) is equivalent to asserting that, given $a > c$ fixed, for all $x$ in a common $AC$-component $B'$ of $B$, both $a\wedge (c\vee x\vee c)\wedge a$ and $c\vee (a\wedge x\wedge a)\vee c$ produce the same output $b$ in $B'$ such that $a > b > c$. Conversely, for any $b$ in $B'$ such that $a > b > c$ we must have $a\wedge (c\vee b\vee c)\wedge a = b = c\vee (a\wedge b\wedge a)\vee c$. Thus (ii) and (iii) are equivalent. Their equivalence with (iv) follows from Theorem \[strict\] above. Given (ii)–(iv), (iv) forces $A > B > C$ to be categorical, since for each component $B'$ in $B$, $A > B' > C$ is categorical. Denoting the skew chain by $\mathbf S$, (ii) forces $\mathbf S/\RR$ and $\mathbf S/\LL$ to be distributive by Lemma \[distlema\] and thus $S \subseteq S/\RR \times S/\LL$ to be distributive. In light of Theorem \[strict\], the final comment is clear. A strictly categorical skew lattice is linearly distributive. Given $a > c$ as above, their midpoint $b$ in the component $B'$ depends on the interplay of the $A$-cosets and $C$-cosets within $B'$. Indeed, given any $a \in A$, the set of *images* of $a$ in $B'$ is the set $a\wedge B'\wedge a = {\{\,a\wedge b\wedge a\mid b \in B'\,\}} = {\{\,b \in B'\mid a > b\,\}}$. This set parameterizes the $A$-cosets in $B'$ since each possesses exactly one $b$ such that $a > b$. Likewise, for each $c \in C$ the image set $c\vee B'\vee c = {\{\,c\vee b\vee c\mid b \in B'\,\}} = {\{\,b \in B'\mid b > c\,\}}$ parameterizes all cosets of $C$ in $B'$ (see [@Le93], Section 1). Both image sets are orthogonal in $B'$ in the following sense: for any $a \in A$, all images of $a$ in $B'$ lie in a unique $C$-coset in $B'$. Likewise for any $c \in C$, all images of $c$ in $B'$ lie in a unique $A$-coset in $B'$.
Finally, given $a>c$ with $a\in A$ and $c\in C$, their unique midpoint $b \in B'$ lies jointly in the $C$-coset in $B'$ containing all images of $a$ in $B'$ and in the $A$-coset in $B'$ containing all images of $c$ in $B'$. (See [@CAT], Theorem 4.1.) Of course, every $b$ in $B'$ is the midpoint of some pair $a>c$. For a fixed pair $a>c$, the set $\mu(a, c)$ of all midpoints in $B$ is a rectangular subalgebra that forms a natural set of parameters for the family of all $AC$-components in $B$: just let $b$ in $\mu(a, c)$ correspond to the component $B'$ containing $b$. In the following partition diagram, the $A$-coset of $b$ contains all images ($\bullet$'s) of $c$ in $B'$, while the $C$-coset of $b$ has all images ($\star$'s) of $a$ in $B'$. The element $b$ is the unique image of both $a$ and $c$. [Figure: the $\DD$-class $B$ drawn as a grid whose rows are $A$-cosets and whose columns are $C$-cosets, each cell being an $AC$-coset; the row through $b$ carries the images ($\bullet$) of $c$, the column through $b$ carries the images ($\star$) of $a$, and $b$ sits at their intersection.] \[nlindist\] Using Mace4 [@prover], two minimal 12-element categorical skew chains have been found that are not linearly distributive, one left-handed and the other its right-handed dual. Their common Hasse diagram follows, where $b_{i} -_{C} d_{j}$ iff $i + j = 0$ (mod 4).
[Figure: Hasse diagram with $A = \{a_{1}, a_{2}\}$ on top, $C = \{c_{1}, c_{2}\}$ on the bottom, and a middle class consisting of $b_{1}, b_{2}, b_{3}, b_{4}, d_{1}, d_{2}, d_{3}, d_{4}$, whose $A$-cosets are $\{b_{1}, b_{2}\}$, $\{b_{3}, b_{4}\}$, $\{d_{1}, d_{2}\}$ and $\{d_{3}, d_{4}\}$.] In both cases, $a_{1} >b_{odd},d_{odd}$ and $a_{2} >b_{even},d_{even}$, all $b_{i} >c_{1}$, all $d_{i} >c_{2}$, and $a_{1},a_{2} >$ both $c_{1}, c_{2}$ (thus both skew chains are categorical, since all cosets involving just $A$ and $C$ are trivial). We denote the left-handed skew lattice thus determined by $\mathbf U_{2}$ and its right-handed dual by $\mathbf V_{2}$. Neither $\mathbf U_{2}$ nor $\mathbf V_{2}$ is distributive. Indeed, given the coset structure on $B$, we get $a_{1}\wedge (b_{2}\vee c_{2}) = a_{1}\wedge d_{2} = d_{1}$, while $(a_{1}\wedge b_{2}) \vee (a_{1}\wedge c_{2}) = b_{1} \vee c_{2} = d_{3} \neq d_{1}$ in $\mathbf U_{2}$. $\mathbf V_{2}$ is handled similarly. Note that in both $\mathbf U_{2}$ and $\mathbf V_{2}$, $B$ is $AC$-connected, but $a_{1} > b_{1}$, $b_{3} > c_{1}$, and also $a_{2} > b_{2}$, $b_{4} > c_{1}$, etc. (Strictly) categorical skew lattices were studied in [@CAT].
A number of lovely counting results for finite strictly categorical skew chains may be found in [@JPC11] or [@JPC12].

From Linear Distributivity to Distributive Skew Lattices {#From Linear Distributivity to Distributive Skew Lattices}
========================================================

One may ask: *Does linearly distributive plus quasi-distributive imply distributive?* In general the answer is no. It is, however, yes in two special cases. In [@Le92] it was shown that a normal skew lattice is distributive if and only if it is quasi-distributive. This result can be extended to strictly categorical skew lattices. But first recall from [@Ka11c] that a skew lattice $\mathbf S$ is *simply cancellative* if for all $x, y, z \in S$, $$\label{simpcanc}\tag{5.1} x\vee z\vee x = y\vee z\vee y \text{ and } x\wedge z\wedge x = y\wedge z\wedge y \text{ imply } x = y.$$ Cancellative skew lattices are simply cancellative, and simply cancellative skew lattices in turn are quasi-distributive since rules out $\mathbf M_{3}$ and $\mathbf N_{5}$ as subalgebras. \[strcat\] Strictly categorical, quasi-distributive skew lattices are both distributive and simply cancellative. They are cancellative precisely when they are also symmetric. In general, $a > a\wedge (b\vee c)\wedge a$ and $a > (a\wedge b\wedge a) \vee (a\wedge c\wedge a)$ both hold. In turn, so do both $a\wedge c\wedge b\wedge a < a\wedge (b\vee c)\wedge a$ and $a\wedge c\wedge b\wedge a < (a\wedge b\wedge a) \vee (a\wedge c\wedge a)$.
Indeed, applying regularity and absorption we have, e.g., $$\begin{array}{lcl} (a\wedge c\wedge b\wedge a)\wedge [a\wedge (b\vee c)\wedge a] &=_{\eqref{reg}}& a\wedge c\wedge b\wedge (b\vee c)\wedge a \\ &=_{\eqref{absidentities}}& a\wedge c\wedge b\wedge a \\ \\ &\text{and}& \\ \\ (a\wedge c\wedge b\wedge a)\wedge [(a\wedge b\wedge a) \vee (a\wedge c\wedge a)] &=_{\eqref{reg}}& a\wedge c\wedge a\wedge b\wedge a\wedge [(a\wedge b\wedge a) \vee (a\wedge c\wedge a)] \\ &=_{\eqref{absidentities}}& a\wedge c\wedge a\wedge b\wedge a =_{\eqref{reg}} a\wedge c\wedge b\wedge a \end{array}$$ In any quasi-distributive skew lattice $\mathbf S$, $a\wedge (b\vee c)\wedge a {\mathbin{\mathcal D}}(a\wedge b\wedge a) \vee (a\wedge c\wedge a)$. Thus if $\mathbf S$ is quasi-distributive and strictly categorical, Theorem \[strict\] implies that and dually must hold. Let $x,y,z\in S$ be such that $x\vee z\vee x=y\vee z\vee y$ and $x\wedge z\wedge x=y\wedge z\wedge y$. If $\mathbf S/\DD$ is distributive, then $x$ and $y$ share a common image in $\mathbf S/\DD$, placing them in the same $\DD$-class in $\mathbf S$. But also $x\wedge z\wedge x \leq$ both $x, y \leq x\vee z\vee x$. If $\mathbf S$ is strictly categorical, then Theorem \[strict\] gives $x=y$. A strictly categorical skew lattice is distributive if and only if no subalgebra is a copy of $\mathbf M_{3}$ or $\mathbf N_{5}$. Given Theorems \[disteq\] and \[strcat\] one might expect linearly distributive, quasi-distributive skew lattices to be distributive. Mace4, however, has produced four minimal counterexamples. They turn out to be Spinks’ minimal 9-element examples of skew lattices for which exactly one of or holds. (See [@Sp00] and Example \[nondistrib\] below.) Since and are equivalent for symmetric skew lattices and skew chains are always symmetric, these examples are linearly distributive, so that appropriate products of them are both linearly distributive and quasi-distributive, but satisfy neither nor .
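Indeed, simple cancellation already fails inside each of these 9-element examples. Using our transcription of the Cayley tables of the left-handed example displayed in Example \[nondistrib\] below, the triple $x = 7$, $y = 8$, $z = 2$ witnesses the failure of the simple cancellation condition:

```python
# Meet and join tables of the left-handed 9-element example in Example
# [nondistrib] below; entry [x][y] is x op y (our transcription).
MEET = ["000000000", "012345678", "022565600", "035335577", "046446688",
        "055555500", "066666600", "070770077", "080880088"]
JOIN = ["012345678", "111111111", "212112211", "311343434", "411343434",
        "512345634", "612345634", "711343478", "811343478"]

def m(x, y): return int(MEET[x][y])
def j(x, y): return int(JOIN[x][y])

# x∨z∨x = y∨z∨y and x∧z∧x = y∧z∧y, yet x != y:
x, y, z = 7, 8, 2
assert j(j(x, z), x) == j(j(y, z), y) == 1
assert m(m(x, z), x) == m(m(y, z), y) == 0
assert x != y   # so the example is not simply cancellative
```

This is consistent with the later theorem that a simply cancellative skew lattice is distributive exactly when it is linearly distributive: these examples are linearly distributive but not distributive, so they cannot be simply cancellative.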
Spinks’ examples are necessarily non-symmetric. This leads one to ask: Do linear distributivity and quasi-distributivity jointly imply distributivity for symmetric skew lattices? Before showing this to be the case, we first consider the broader problem of deciding which linearly distributive, quasi-distributive skew lattices are distributive. To see what else is required, we begin with a property common to all skew lattices. Given $\DD$-classes $A$ and $B$, their meet class $M$, and an element $m \in M$, then $a\wedge b\wedge a = a\wedge b'\wedge a$ for all $a\in A$ and all $b,b' \in Im(m\mid B)$, the set of all images of $m$ in $B$. $\begin{array}{cc} \begin{tikzpicture}[scale=.7] \node (a) at (-1,1){$A$} ; \node (b) at (1,1){$B$} ; \node (m) at (0,0){$M$}; \draw[dotted] (a) -- (m) -- (b); \end{tikzpicture} & \begin{tikzpicture}[scale=.8] \node (a) at (-1,1){$a$} ; \node (b) at (3,1){$b,b' \in Im(m\mid B)$} ; \node (m) at (0,0){$m$}; \draw[dashed] (a) -- (m) ; \draw[dotted] (b) -- (m) ; \end{tikzpicture} \end{array}$ Indeed, if $a'\in Im(m\mid A)$ so that $a'\wedge b = m = b\wedge a'$ for all $b \in Im(m\mid B)$, then regularity implies $a\wedge b\wedge a = a\wedge a'\wedge b\wedge a'\wedge a = a\wedge m\wedge a$, and this occurs for all $b \in Im(m\mid B)$. More generally we have: The *Meet-class Condition* (MCC): Given $\DD$-classes $A$ and $B$, their meet-class $M$, and elements $a, a' \in A$ and $b, b' \in B$, then $a\wedge b\wedge a = a\wedge b'\wedge a$ if $b$ and $b'$ share a common image in $M$. Likewise $b\wedge a\wedge b = b\wedge a'\wedge b$ if $a$ and $a'$ share a common image in $M$. Finally, all four outcomes coincide when $a$, $a'$, $b$ and $b'$ all share a common image in $M$. Dualizing, one has the *Join-class Condition* (JCC), with all skew lattices having both properties. Not all skew lattices, however, have the following extensions: The *Extended Meet-class Condition* (EMCC).
Given $\DD$-classes $A$ and $B$, their meet class $M$, an element $a$ in $A$ and elements $d$ and $d'$ in a $\DD$-class $D$ lying above $B$, then $a\wedge d\wedge a = a\wedge d'\wedge a$ if $d$ and $d'$ share a common image in $M$ and a common $B$-coset in $D$. $\begin{array}{cc} \begin{tikzpicture}[scale=.8] \node (a) at (-1,1){$A$} ; \node (b) at (1,1){$B$} ; \node (m) at (0,0){$M$}; \node (d) at (1,2){$D$}; \draw[dotted] (a) -- (m) -- (b) -- (d); \end{tikzpicture} & \begin{tikzpicture}[scale=.8] \node (a) at (-1,1){$a$} ; \node (b) at (3,2){$d -_{B} d' \in Im(m\mid D)$} ; \node (m) at (0,0){$m$}; \draw[dashed] (a) -- (m) ; \draw[dotted] (b) -- (m) ; \end{tikzpicture} \end{array}$ Dually, there is the *Extended Join-class Condition* (EJCC). An equivalent formulation of the (E)MCC requires a broader way to describe cosets. Given $\DD$-classes $A$ and $B$, set $A\wedge b\wedge A = {\{\,a\wedge b\wedge a'\mid a,a'\in A\,\}}$ for any $b\in B$. If $A \geq B$, then $A\wedge b\wedge A$ is just a typical $A$-coset in $B$. If $B \geq A$, then $A\wedge b\wedge A = A$, the unique $A$-coset in itself. In general, setting $M = A\wedge B$, regularity and other basic facts imply: - Given $b \geq m$ where $b \in B$ and $m \in M$, $A\wedge b\wedge A = A\wedge m\wedge A$, an $A$-coset in $M$; conversely every coset $A\wedge m\wedge A$ of $A$ in $M$ is just $A\wedge b\wedge A$ for some $b$ in $B$. - For all $b$, $b'$ in $B$, if $A\wedge b\wedge A = A\wedge b'\wedge A$, then $a\wedge b\wedge a = a\wedge b'\wedge a$ for all $a$ in $A$, with $A\wedge b\wedge A$ being just ${\{\,a\wedge b\wedge a\mid a\in A\,\}}$. Indeed, given $b \geq m$, pick $a_m \in A$ so that $a_m \geq m$. Then $m = a_m\wedge b$ so that $a\wedge b\wedge a'= a\wedge a_m\wedge b\wedge a' = a\wedge m\wedge a'$ by . Conversely each $m\in M$ factors as some $a_m\wedge b$; thus $a\wedge m\wedge a' = a\wedge a_m\wedge b\wedge a' = a\wedge b\wedge a'$ by and (i) follows. 
Note that $A\wedge b\wedge A = {\{\,a\wedge b\wedge a\mid a\in A\,\}}$ since gives $a\wedge b\wedge a' = a\wedge a'\wedge b\wedge a\wedge a'$. The remainder of (ii) also follows from . The MCC is thus equivalent to: given $A$, $B$ and $M$ as above, $b, b' \geq m$ for $m\in M$ and $b, b'\in B$ implies $A\wedge b\wedge A = A\wedge b'\wedge A$ as cosets of $A$ in $M$. The EMCC is likewise equivalent to: given also $d, d' \geq m$ for $m\in M$ and $d -_B d'\in D \geq B$, $A\wedge d\wedge A = A\wedge d'\wedge A$ as cosets of $A$ in $A\wedge D$. Dual remarks apply to the (extended) join-class condition. \[EMCC\] A skew lattice $\mathbf S$ has the EMCC property if and only if it satisfies $$\label{dist3}\tag{5.3} x\wedge ((y\wedge x\wedge y)\vee z\vee y\vee z\vee( y\wedge x\wedge y))\wedge x = x\wedge (y\vee z\vee y)\wedge x.$$ $\mathbf S$ has the EJCC property if and only if it satisfies the dual of . Skew lattices having the EMCC property \[or the EJCC property\] thus form a subvariety. Finally, $\wedge$-distributivity, given by , implies EMCC, while $\vee$-distributivity, given by , implies EJCC. Setting $a = x$, $m = y\wedge x\wedge y$, $B = \DD_{y}$, $D = \DD_{d}= \DD_{d'}$ where $d = y\vee z\vee y$ and $d' = (y\wedge x\wedge y)\vee z\vee y\vee z\vee (y\wedge x\wedge y)$, the EMCC gives . Conversely, given $a$, $m$, $d$ and $d'$ satisfying the requirements of , first pick $b$ in $B$ so that $m < b < d$ and pick $a'$ in $A$ so that $a'\wedge b = b\wedge a' = m$. Next, write $d' = b'\vee d\vee b'$ for some $b'$ in $B$. By assumption we also have $d' = m\vee b'\vee m\vee c\vee m\vee b'\vee m$ so that we may assume that $m < b' < d'$. Assigning $a'$ to $x$, $b'$ to $y$ and $d$ to $z$, gives $a'\wedge d\wedge a' = a' \wedge (m \vee d \vee b' \vee d \vee m) \wedge a' = a' \wedge (b' \vee d \vee b') \wedge a' = a'\wedge d'\wedge a'$ from which $a\wedge d\wedge a = a\wedge d'\wedge a$ follows by the argument above, and the EMCC is verified.
Clearly we have a pair of subvarieties. The implications are clear. The left-handed and right-handed versions of are respectively: $$\label{dist3L}\tag{5.3L} x \wedge (y \vee z \vee (y \wedge x)) = x \wedge (z \vee y)$$ $$\label{dist3R}\tag{5.3R} ((x \wedge y) \vee z \vee y) \wedge x = (y \vee z) \wedge x.$$ We will soon see (cf. Theorem \[distlindist\] below) that all four special consequences of and (quasi-distributivity, linear distributivity, EMCC and EJCC) are also sufficient for a skew lattice to be distributive. It is fortuitous that the two latter conditions are also consequences of symmetry, leading to a major result of this paper, Theorem \[distrib\]. \[nondistrib\][@Sp00] Consider the following left-handed, 9-element example after Spinks where holds but not since: $$2\wedge (5\vee 8)\wedge 2 = 2\wedge 4\wedge 2 = 6 \neq 5 = 5 \vee 0 = (2\wedge 5\wedge 2) \vee (2\wedge 8\wedge 2).$$ $\begin{array}{ccc} \begin{tabular}{ l | c c c c c c c c c} $\wedge$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 2 & 0 & 2 & 2 & 5 & 6 & 5 & 6 & 0 & 0 \\ 3 & 0 & 3 & 5 & 3 & 3 & 5 & 5 & 7 & 7 \\ 4 & 0 & 4 & 6 & 4 & 4 & 6 & 6 & 8 & 8 \\ 5 & 0 & 5 & 5 & 5 & 5 & 5 & 5 & 0 & 0 \\ 6 & 0 & 6 & 6 & 6 & 6 & 6 & 6 & 0 & 0 \\ 7 & 0 & 7 & 0 & 7 & 7 & 0 & 0 & 7 & 7 \\ 8 & 0 & 8 & 0 & 8 & 8 & 0 & 0 & 8 & 8 \end{tabular} & \begin{tabular}{ l | c c c c c c c c c} $\vee$ & 0 & 1 & 2 & 3 & 4& 5& 6& 7& 8 \\ \hline 0 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 2 & 2 & 1 & 2 & 1 & 1 & 2 & 2 & 1 & 1 \\ 3 & 3 & 1 & 1 & 3 & 4 & 3 & 4 & 3 & 4 \\ 4 & 4 & 1 & 1 & 3 & 4 & 3 & 4 & 3 & 4 \\ 5 & 5 & 1 & 2 & 3 & 4 & 5 & 6 & 3 & 4 \\ 6 & 6 & 1 & 2 & 3 & 4 & 5 & 6 & 3 & 4 \\ 7 & 7 & 1 & 1 & 3 & 4 & 3 & 4 & 7 & 8 \\ 8 & 8 & 1 & 1 & 3 & 4 & 3 & 4 & 7 & 8 \end{tabular} & \begin{tikzpicture}[scale=1.2] \node (1) at (0,4){$1$} ; \node (2) at (-1,3){$2$} ; \node (3) at (.5,3){$3$} ; \node (4) at
(1.5,3){$4$} ; \node (5) at (-1.5,2){$5$}; \node (6) at (-.5,2){$6$}; \node (7) at (.5,2){$7$} ; \node (8) at (1.5,2){$8$} ; \node (0) at (0,1){$0$}; \draw[dotted] (1) -- (2) ; \draw[dotted] (1) -- (3) ; \draw[dotted] (1) -- (4) ; \draw[dotted] (3) -- (7) ; \draw[dotted] (4) -- (8) ; \draw[dotted] (7) -- (0) ; \draw[dotted] (8) -- (0) ; \draw[dotted] (2) -- (5) ; \draw[dotted] (2) -- (6) ; \draw[dotted] (5) -- (0) ; \draw[dotted] (6) -- (0) ; \draw[dotted] (5) -- (3) ; \draw[dotted] (6) -- (4) ; \draw (3) -- (4) ; \draw (5) -- (6) ; \draw (7) -- (8) ; \end{tikzpicture} \end{array}$ This example is not upper symmetric. (Indeed, $5\wedge 8 = 0 = 8\wedge 5$ but $5\vee 8 = 4 \neq 3 = 8\vee 5$.) Notice that $2 \wedge (8 \vee 5 \vee (8 \wedge 2)) = 2 \wedge (3 \vee 0) = 2 \wedge 3 = 5$, while $2 \wedge (5 \vee 8) = 2 \wedge 4 = 6$, so that fails. As mentioned above, Spinks’ four examples of order 9 are the first cases where or its dual do not hold. Mace4 has shown that all cases of orders 10–13 where or its dual do not hold contain a copy of a Spinks example. The first cases where or its dual do not hold, but which contain no copy of a Spinks example, occur at order 14. Proceeding on to Theorem \[distlindist\], we first further characterize quasi-distributivity in the left-handed case. A left-handed skew lattice is quasi-distributive if and only if for all $x, y, z\in S$: $$\label{dist4}\tag{5.4} x\wedge ((y\wedge x)\vee z) = x\wedge (y\vee z)$$ ($\Leftarrow$) Clearly, neither $\mathbf M_{3}$ nor $\mathbf N_{5}$ satisfies . ($\Rightarrow $) If a skew lattice $\mathbf S$ is left-handed, then $y\vee z\geq (y\wedge x)\vee z$ for all $x, y, z\in S$. Indeed, taking joins of both sides both ways gives $$\begin{array}{lcl} y\vee z\vee (y\wedge x)\vee z &=_{\eqref{LH}}& y\vee (y\wedge x)\vee z=_{\eqref{absidentities}} y\vee z \text{ and}\\ (y\wedge x)\vee z\vee y\vee z &=_{\eqref{LH}}& (y\wedge x\wedge y)\vee y\vee z=_{\eqref{absidentities}} y\vee z.
\end{array}$$ But quasi-distributivity implies both sides of are $\DD$-related and in fact, $\LL$-related. Thus, we have $(x \wedge (y \vee z)) \wedge (x \wedge ((y \wedge x) \vee z)) = x \wedge (y \vee z)$. But $y \vee z \geq (y \wedge x) \vee z$ gives $(x \wedge (y \vee z)) \wedge (x \wedge ((y \wedge x) \vee z)) =_{\eqref{LH}} x \wedge (y \vee z) \wedge ((y \wedge x) \vee z) =_{\eqref{absidentities}} x \wedge ((y \wedge x) \vee z)$ so that follows. \[distlindist\] A quasi-distributive, linearly distributive skew lattice satisfies if and only if it satisfies . Likewise, a quasi-distributive, linearly distributive skew lattice satisfies if and only if it satisfies the dual of . Finally, a skew lattice is distributive if and only if it is quasi-distributive, linearly distributive and satisfies both and its dual. Clearly, $\Rightarrow $ . To show $\Rightarrow $ under the given conditions, we first consider the left-handed case: $$\begin{array}{lcl} (x \wedge y) \vee (x \wedge z) & =_{\eqref{GMDg}} & x \wedge [(y \wedge x) \vee (z \wedge x)] \\ & =_{\eqref{dist4}} & x \wedge [y \vee (z \wedge x)] \\ &=_{\eqref{LH}} & x \wedge [(z \wedge x) \vee y \vee (z \wedge x)] \\ &=_{\eqref{dist4}} & x \wedge [z \vee y \vee (z \wedge x)] \\ &=_{\eqref{dist3L}} & x \wedge (y \vee z) \end{array}$$ The right-handed case is similar and the general case now follows as usual. The second assertion now follows by $\vee$–$\wedge$ duality and the final assertion from the first two. Recall that a *skew diamond* is a skew lattice with four $\DD$-classes, two being incomparable, say $A$ and $B$, and the remaining two being their join and meet $\DD$-classes, say $J$ and $M$. Skew diamonds trivially satisfy the EMCC. Here the nontrivial situations are $A > M < B < J$ where $A\wedge j\wedge A = A$ for all $j\in J$, and $B > M < A < J$ where similar remarks hold. Dually they satisfy the EJCC.
Since skew diamonds are clearly quasi-distributive, we have: *a skew diamond is distributive if and only if it is linearly distributive*. Skew diamonds play an important role in the basic theory of skew lattices. See, e.g., their role in [@Ka11c] where a number of forbidden algebras are skew diamonds. Under what reasonable conditions must either or its dual hold? They must hold for strictly categorical skew lattices since both sides of are $< x$ but $> x\wedge y\wedge x$. We also have: \[USL\] An upper symmetric skew lattice satisfies . We organize the proof for the case when $\mathbf S$ is left-handed in the following steps: 1\) Upper symmetry in the left-handed case is characterized by $$\label{LUS}\tag{2.1L} x\vee y\vee (x\wedge y) = y\vee x.$$ Since $x, y \succeq y\wedge x$, the expression $(y\wedge x) \vee y \vee x$ reduces to $y \vee x$ in the left-handed case. 2\) For all $x, y, z \in S$, $x\vee y \geq (x\vee (y\wedge z)) \wedge (z\vee (y\wedge z))$. Set $u = (x\vee (y\wedge z)) \wedge (z\vee (y\wedge z))$. Since $u\wedge y =_{\eqref{id3}} (x\vee (y\wedge z)) \wedge (y\wedge z) =_{\eqref{absidentities}} y\wedge z \leq y$, we get $$\begin{array}{lcl} x \vee y \vee u &=_{\eqref{LUS}}& x \vee u \vee y \vee (u \wedge y) = x \vee u \vee y \vee (y \wedge z) =_{\eqref{absidentities}} x \vee u \vee y \\ &=_{\eqref{reg}}& x\vee y\vee u\vee y =_{\eqref{id1}} x\vee (y\wedge z)\vee y\vee u\vee y \\ &=_{(LH)}& x\vee (y\wedge z)\vee u\vee y= x\vee (y\wedge z)\vee ((x\vee (y\wedge z))\wedge (z\vee (y\wedge z)))\vee y \\ &=_{\eqref{absidentities}}& x\vee (y\wedge z)\vee y =_{\eqref{id1}} x\vee y, \\ \end{array}$$ which is what we needed to show in the left-handed case. 3\) For all $x,y,z\in S$, $z\wedge [x\vee (y\wedge z)] = z\wedge (x\vee y)\wedge (x\vee (y\wedge z))$.
$$\begin{array}{lcl} z \wedge (x \vee (y \wedge z)) &=_{\eqref{absidentities}}& z \wedge (z \vee (y \wedge z)) \wedge (x \vee (y \wedge z)) \\ &=_{\eqref{LH}}& z \wedge (z \vee (y\wedge z)) \wedge u \\ &=_{(2)}& z \wedge (z \vee (y \wedge z)) \wedge (x \vee y) \wedge u \\ &=_{\eqref{LH}}& z\wedge (z\vee (y\wedge z))\wedge (x\vee y)\wedge (x\vee (y\wedge z)) \\ &=_{\eqref{absidentities}}& z \wedge (x \vee y) \wedge (x \vee (y \wedge z)). \end{array}$$ 4\) For all $x, y, z \in S$, $z \wedge (x \vee (y \wedge x \wedge z)) = z \wedge (x \vee (y \wedge x))$. Replacing $y$ by $y \wedge x$ in (3) gives $$\begin{array}{lcl} z\wedge (x\vee (y\wedge x\wedge z)) &=& z\wedge (x\vee (y\wedge x))\wedge (x\vee (y\wedge x\wedge z)) \\ &=_{\eqref{id2}}& z \wedge (x \vee (y \wedge x)) \wedge x \wedge (x \vee (y \wedge x \wedge z)) \\ &=_{\eqref{absidentities}} & z\wedge (x\vee (y\wedge x))\wedge x \\ &=_{\eqref{id2}}& z\wedge (x\vee (y\wedge x)). \end{array}$$ 5\) Concluding the left-handed case. Replace $x$ with $y \vee x$ in (4). On the left side we get $$z\wedge (y\vee x\vee (y\wedge (y\vee x)\wedge z)) =_{\eqref{absidentities}} z\wedge (y\vee x\vee (y\wedge z)).$$ On the right side, $$z \wedge (y \vee x \vee (y \wedge (y \vee x))) =_{\eqref{absidentities}} z \wedge (y \vee x \vee y) =_{\eqref{LH}} z \wedge (x \vee y).$$ Therefore $z \wedge (x \vee y) = z \wedge (y \vee x \vee (y \wedge z))$ which is with the variables permuted. The verification of the right-handed case is similar, and the general case follows. These results and their duals lead to: \[distrib\] - An upper symmetric skew lattice is $\wedge$-distributive if and only if it is both quasi-distributive and linearly distributive; - A lower symmetric skew lattice is $\vee$-distributive if and only if it is both quasi-distributive and linearly distributive. - Thus a symmetric skew lattice is distributive if and only if it is both quasi-distributive and linearly distributive. 
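The symmetry hypothesis in Theorem \[distrib\] cannot be dropped: as noted above, the 9-element left-handed example of Example \[nondistrib\] is quasi-distributive and linearly distributive yet not distributive, and it is not symmetric. A brute-force check of these claims from our transcription of its Cayley tables (using the left-handed characterization of quasi-distributivity proved above):

```python
from itertools import product

# Meet and join tables of the 9-element left-handed example of Example
# [nondistrib]; entry [x][y] is x op y (our transcription, for
# verification only).
MEET = ["000000000", "012345678", "022565600", "035335577", "046446688",
        "055555500", "066666600", "070770077", "080880088"]
JOIN = ["012345678", "111111111", "212112211", "311343434", "411343434",
        "512345634", "612345634", "711343478", "811343478"]

def m(x, y): return int(MEET[x][y])
def j(x, y): return int(JOIN[x][y])

S = range(9)

# Left-handedness: x∧y∧x = x∧y and x∨y∨x = y∨x throughout.
left_handed = all(m(m(x, y), x) == m(x, y) and j(j(x, y), x) == j(y, x)
                  for x, y in product(S, repeat=2))

# Quasi-distributivity, via the left-handed characterization
# x∧((y∧x)∨z) = x∧(y∨z) established in the lemma above.
quasi = all(m(x, j(m(y, x), z)) == m(x, j(y, z))
            for x, y, z in product(S, repeat=3))

# Yet distributivity and upper symmetry both fail:
not_dist = m(m(2, j(5, 8)), 2) != j(m(m(2, 5), 2), m(m(2, 8), 2))
not_symm = m(5, 8) == m(8, 5) and j(5, 8) != j(8, 5)

assert left_handed and quasi and not_dist and not_symm
```

Here `not_dist` reproduces the computation $2\wedge (5\vee 8)\wedge 2 = 6 \neq 5 = (2\wedge 5\wedge 2)\vee (2\wedge 8\wedge 2)$ from Example \[nondistrib\], while the exhaustive loops confirm the two positive properties.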
Prover9 has also provided proofs of the following results, which we just state. A simply cancellative skew lattice is distributive if and only if it is linearly distributive. \[biconditional\] A quasi-distributive skew lattice $\mathbf S$ is distributive if it is biconditionally distributive: holds for any particular $x, y, z \in S$ iff does. A skew lattice $\mathbf S$ is *relatively distributive* if every quasi-distributive subalgebra of $\mathbf S$ is distributive. Such a skew lattice is linearly distributive. More general statements of Theorems \[distrib\] (iii) and \[biconditional\] are as follows: Biconditionally distributive skew lattices as well as symmetric, linearly distributive skew lattices are relatively distributive. Example \[nondistrib\] shows that relative distributivity is properly stronger than linear distributivity. The modular lattice $\mathbf M_{3}$ shows that biconditional distributivity is properly stronger than relative distributivity. Indeed, any lattice is relatively distributive, but elements $x$, $y$ and $z$ are easily found in $\mathbf M_{3}$ satisfying exactly one of or . It can be shown that biconditionally distributive skew lattices form a variety. It can also be shown, using Prover9, that a skew lattice is relatively distributive if and only if it is linearly distributive and possesses both the EMCC and EJCC properties. Thus relatively distributive skew lattices also form a variety. A. Bauer and K. Cvetko-Vah, Stone duality for skew Boolean intersection algebras. **Houston Journal of Mathematics** **39** (2013), 73–109. R. Bignall and J. Leech, Skew Boolean algebras and discriminator varieties. **Algebra Universalis** **33** (1995), 387–398. R. Bignall and M. Spinks, Propositional skew Boolean logic. In **Proc. 26th International Symposium on Multiple-valued Logic**, IEEE Computer Soc. Press (1996), 43–48. G. Birkhoff, Lattice Theory. **AMS Colloquium Publications** (1940). W.
McCune, Mace4/Prover9, Version Dec 2007 [www.cs.unm.edu/\~mccune/mace4](www.cs.unm.edu/~mccune/mace4) K. Cvetko-Vah, Skew lattices of matrices in rings. **Algebra Universalis** **53** (2005), 471–479. K. Cvetko-Vah, Skew lattices in rings. **PhD thesis**, University of Ljubljana, 2005. K. Cvetko-Vah, Internal decompositions of skew lattices. **Communications in Algebra** **35** (2007), 243–247. K. Cvetko-Vah, A new proof of Spinks' Theorem. **Semigroup Forum** **73** (2006), 267–272. K. Cvetko-Vah, M. Kinyon, J. Leech, and M. Spinks, Cancellation in skew lattices. **Order** **28** (2011), 9–32. K. Cvetko-Vah and J. Leech, Associativity of the $\nabla$ operation on bands in rings. **Semigroup Forum** **76** (2008), 32–50. K. Cvetko-Vah and J. Leech, Rings whose idempotents are multiplicatively closed. **Communications in Algebra** **40** (2012), 3288–3307. K. Cvetko-Vah and J. Leech, On maximal idempotent-closed subrings of $M_{n}(F)$. **International Journal of Algebra and Computation** **21** (7) (2011), 1097–1110. M. Kinyon and J. Leech, Categorical skew lattices, **Order**, in press. G. Kudryavtseva, A refinement of Stone duality to skew Boolean algebras, **Algebra Universalis** **67** (2012), 397–416. G. Laslo and J. Leech, Green's relations on noncommutative lattices, **Acta Sci. Math. (Szeged)** **68** (2002), 501–533. J. Leech, Skew lattices in rings. **Algebra Universalis** **26** (1989), 48–72. J. Leech, Normal skew lattices. **Semigroup Forum** **44** (1992), 1–8. J. Leech, Skew Boolean algebras. **Algebra Universalis** **27** (1990), 497–506. J. Leech, The geometric structure of skew lattices. **Trans. Amer. Math. Soc.** **335** (1993), 823–842. J. Leech, Recent developments in the theory of skew lattices. **Semigroup Forum** **52** (1996), 7–24. J. Leech, Small skew lattices in rings, **Semigroup Forum** **70** (2005), 307–311. J. Leech and M.
Spinks, Skew Boolean algebras generated from generalized Boolean algebras, **Algebra Universalis** **58** (2008), 287–302. J. Pita Costa, Coset laws for categorical skew lattices. **Algebra Universalis** **68** (2012), 75–89. J. Pita Costa, On the coset structure of skew lattices. PhD thesis, University of Ljubljana (2012). M. Spinks, Automated deduction in non-commutative lattice theory, Report 3/98, Monash University, Gippsland School of Computing and Information Technology (1998). M. Spinks, On middle distributivity for skew lattices, **Semigroup Forum** **61** (2000), 341–345. M. Spinks and R. Veroff, Axiomatizing the skew Boolean propositional calculus, **J. Automated Reasoning** **37** (2006), 3–20.
--- abstract: 'We investigated the impact of noisy linguistic features on the performance of a neural network-based Japanese speech synthesis system that uses a WaveNet vocoder. We compared an ideal system that uses manually corrected linguistic features, including phoneme and prosodic information, in the training and test sets against a few other systems that use corrupted linguistic features. Both subjective and objective results demonstrate that corrupted linguistic features, especially those in the test set, affected the ideal system’s performance significantly in a statistical sense due to a mismatched condition between the training and test sets. The results further indicate that adding noise to the linguistic features in the training set can partially reduce the effect of the mismatch, regularize the model, and help the system perform better when the linguistic features of the test set are noisy. Interestingly, an utterance-level Turing test showed that listeners had a difficult time differentiating synthetic speech from natural speech.' address: ' $^1$National Institute of Informatics, Tokyo, Japan $^2$KDDI Research Inc., Saitama, Japan' bibliography: - 'main.bib' title: 'Investigating accuracy of pitch-accent annotations in neural network-based speech synthesis and denoising effects' --- **Index Terms**: speech synthesis, deep neural network, Japanese prosody, WaveNet Introduction ============ Because of the rapid development of deep learning, more and more text-to-speech (TTS) synthesis systems adopt end-to-end approaches to some degree [@sotelo2017char2wav; @wang2017tacotron; @shen2017natural].
Although it has been reported that neutral-style synthetic speech from one system achieved a similar degree of quality and naturalness to natural recordings [@shen2017natural], it is unknown how the end-to-end approach could perfectly avoid incorrect pronunciation [@shen2017natural] and make it possible to control prosody like the conventional structured architectures [@takashi2007style; @watts2015sentence; @luong2017adapting; @henter2017principles]. More importantly, since most of the existing commercial TTS systems still adopt the pipeline structure, which contains a front-end and a back-end, rapidly shifting to an end-to-end architecture may leave unanswered how each part of the conventional structure contributes to and limits the performance of existing TTS systems. Therefore, we believe that investigation of the pipeline of conventional TTS systems is still necessary and meaningful. In this work we adopted the conventional speech synthesis architecture, which consists of three separate components: a linguistic analyzer, a neural network-based acoustic model [@zen2013statistical; @ling2015deep], and a vocoder to synthesize waveforms from acoustic features. As the initial step, our previous work [@wang2018comparison] showed that the conventional TTS pipeline can be improved by replacing a deterministic vocoder [@kawahara2006straight; @morise2016world] and RNN-based acoustic models [@fan2014tts; @wang2017rnn] in the back-end with more advanced statistical models such as the WaveNet-based vocoder [@van2016WaveNet; @ping2017deep]. However, our analysis revealed that the gap between synthetic speech and natural recordings still exists. One reason may be that the statistical models in our previous work were trained by using linguistic features automatically extracted from text. This motivated us to investigate the impact of the accuracy of these features on the back-end of the TTS system.
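The three-component pipeline (linguistic analyzer front-end, acoustic model, vocoder back-end) can be sketched as a simple composition; the toy stand-ins below are illustrative only and are not the actual components used in the paper:

```python
def tts_pipeline(text, analyze, acoustic_model, vocoder):
    """Conventional TTS pipeline: front-end linguistic analysis,
    then acoustic modelling, then waveform generation."""
    linguistic_features = analyze(text)                      # front-end: text -> linguistic features
    acoustic_features = acoustic_model(linguistic_features)  # e.g. MGC and quantized F0
    return vocoder(acoustic_features)                        # back-end: acoustic features -> waveform

# Toy stand-ins: each stage just transforms a list of numbers.
waveform = tts_pipeline(
    "konnichiwa",
    analyze=lambda t: [float(len(t))],
    acoustic_model=lambda l: [x * 2.0 for x in l],
    vocoder=lambda a: a,
)
```

The point of the sketch is that each stage can be swapped out independently, which is exactly what the experiments below exploit by varying the quality of the linguistic features fed into a fixed back-end.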
There is a relevant investigation on the accuracy of phone sequences used for the training of hidden Markov models [@7472660]. Our main focus in this paper is the accuracy of pitch accent information and a neural network. In this study, we first built an oracle system where manually corrected linguistic features were used for both model training and testing. Then, we compared the performance of this system with a few other systems that used corrupted linguistic features at the training and/or testing stages. More specifically, we corrupted Japanese pitch accent types by adding discrete noise. From large-scale crowdsourced listening tests, we found that in our neural network-based speech synthesis system, using corrupted linguistic features has a regularization effect (like a denoising auto-encoder) when linguistic features in the test set are noisy. We believe that this is a new finding in the speech synthesis field. In sections \[sec:systems\] and \[sec:linguistic\], we describe the statistical models and linguistic features used in our TTS systems. In section \[sec:experiments\], we explain the methodology used to train and test our systems by using linguistic features with a varied amount of noise. In section \[sec:evaluations\], we list the results of both objective and subjective evaluations. Finally, in section \[sec:conclusions\], we discuss the findings and draw a conclusion. Speech synthesis back-end {#sec:systems} ========================= The back-end of the TTS system we investigated consists of two parts. The first part contains acoustic models that convert linguistic features into acoustic features such as mel-generalized cepstral coefficients (MGC) and quantized fundamental frequency (F0). The second part is a WaveNet vocoder that generates speech waveforms on the basis of the acoustic features. All of the models adopt the configurations used in our previous work [@wang2018comparison].
Acoustic models --------------- ![Structures of acoustic models used in our experiment. Feedforward layers use the *tanh* activation function while linear layers use a linear activation function. GMM denotes Gaussian mixture models. Bi-LSTM and uni-LSTM denote bi-directional and uni-directional recurrent layers using long-short-term-memory (LSTM) units, respectively.[]{data-label="fig:acoustic"}](dar-sar){width="0.95\columnwidth"} The acoustic models are trained to learn the mapping from a sequence of linguistic features $\mathbf{l}_{1:N}=\{\mathbf{l}_1,\mathbf{l}_2,...,\mathbf{l}_N\}$ to a sequence of acoustic features $\mathbf{a}_{1:N}=\{\mathbf{a}_1,\mathbf{a}_2,...,\mathbf{a}_N\}$, where $N$ denotes the total number of frames. While a vanilla neural network can be used as the acoustic model, it assumes that $\{\mathbf{a}_1,\mathbf{a}_2,...,\mathbf{a}_N\}$ is a set of independent random variables given $\mathbf{l}_{1:N}$ even if convolution or recurrent layers are used. To overcome this weakness, we used autoregressive models, the basic idea of which is to feed the target data of the previous step as the input of the current step. On the basis of this idea, two separate autoregressive models plotted in Figure \[fig:acoustic\] were trained to model MGC and quantized F0, respectively. The model for MGC is referred to as a shallow autoregressive recurrent network (SAR). SAR maps a sequence of linguistic features to a parameter set that specifies the distribution (in this case, a Gaussian distribution) of the MGC of each frame. Different from a normal mixture density network [@bishop1994mixture], SAR uses a linear function to summarize the acoustic features in previous frames and then shift the distribution of the current frame. A similar network was used for quantized F0, which is referred to as a deep autoregressive recurrent network (DAR).
DAR was trained to map linguistic features to a quantized F0 representation rather than interpolated continuous-valued F0 data. Another distinct feature of DAR in comparison with SAR is that the output of the network is fed back to a recurrent layer that is closer to the input side. The structure of the acoustic modelling networks is illustrated in Figure \[fig:acoustic\]. Bi-directional and uni-directional long-short-term-memory (LSTM) layers were used after feedforward layers. Details on these models are given in our previous papers [@wang2017autoregressive; @wang2017rnn]. WaveNet vocoder --------------- To improve the quality of synthetic speech, we used a speaker-dependent WaveNet vocoder. The WaveNet vocoder is a CNN-based autoregressive network that models the conditional distribution of a waveform sequence $\mathbf{o}_{1:T}$ given an auxiliary feature sequence $\mathbf{a}_{1:N}$ as $$\centering p(\mathbf{o}_{1:T}|\mathbf{a}_{1:N})=\prod\limits_{t=1}^{T}p(\mathbf{o}_{t}|\mathbf{o}_{<t},\mathbf{a}_{1:N}).$$ For each sample $\mathbf{o}_t$ at time $t$, its value is conditioned on all of the previous observations $\mathbf{o}_{<t}$. In practice, the prediction of $\mathbf{o}_t$ is limited to a finite number of previous samples, which together are referred to as the receptive field. By sequentially sampling the waveform per time step, the WaveNet vocoder can produce very high-quality synthetic speech in terms of naturalness, as reported in several papers [@van2016WaveNet; @shen2017natural; @ping2017deep]. Linguistic features used for our Japanese TTS system {#sec:linguistic} ==================================================== The linguistic features used for conventional Japanese TTS systems mainly include segmental and supra-segmental linguistic information.
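The autoregressive factorization used by the WaveNet vocoder can be illustrated with a toy sampling loop; this is a schematic sketch (not the actual WaveNet implementation), and `step` is a hypothetical stand-in for the network's conditional distribution over quantized amplitude levels:

```python
import numpy as np

def sample_autoregressive(cond, n_samples, step, seed=0):
    """Sequential sampling that mirrors the factorization
    p(o_{1:T} | a_{1:N}) = prod_t p(o_t | o_{<t}, a_{1:N}):
    each sample o_t is drawn from a conditional distribution that
    may depend on all previously generated samples and on cond."""
    rng = np.random.default_rng(seed)
    history = []
    for _ in range(n_samples):
        probs = step(history, cond)  # distribution over quantized amplitude levels
        history.append(int(rng.choice(len(probs), p=probs)))
    return history

# Toy conditional distribution: uniform over 4 levels, ignoring history.
wave = sample_autoregressive(cond=None, n_samples=8,
                             step=lambda hist, c: np.full(4, 0.25))
```

In the real vocoder, `step` is the dilated-convolution network and the history is truncated to the receptive field; the loop structure, sampling one quantized value per time step, is the same.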
Despite the numerous differences in the two sets of linguistic features used in the experiments, i.e., the OpenJTalk and oracle sets that will be introduced in Section 4.1, both sets contain quinphone contexts, word part-of-speech tags, pitch accent types of the accent phrases, interrogative phrase marks, and other structural information such as the position of the mora in the word, accent phrase, and utterance. These linguistic features are used as the input of the acoustic models. The two types of linguistic features that we were interested in for this investigation are the pitch accent type (Acc\_Type) and the interrogative phrase mark (Question\_Flag). The value of the pitch accent type is equal to the location of the accented mora in a Japanese accent phrase. It can also be a special number such as 0, which indicates a no-accent phrase. The interrogative phrase mark is binary and indicates whether a phrase is interrogative or not. These two types of features are essential to the prosody of Japanese utterances yet difficult to obtain accurately by using automatic prosodic annotation or text analysis. Experiments {#sec:experiments} =========== Data and features {#sec:data_feature} ----------------- This study used the same speech corpus as our previous work [@wang2018comparison]. This corpus contains high-quality speech recordings of a female voice talent and was released as part of the Ximera datasets [@kawai2004ximera]. Compared with our previous work, we excluded hundreds of utterances in which the manually annotated labels were unusable due to imperfect pronunciation. The new training set contained 27,999 utterances while both the validation and test sets contained 480 utterances. The duration of the training set was about 46.914 hours, of which the silence at the two ends of the utterances accounted for around 13.393 hours. The durations of the validation and test sets were 0.815 and 0.824 hours, respectively.
Acoustic features were extracted by using the WORLD [@morise2016world] spectral analysis modules and SPTK. We used speech waveforms at a sampling frequency of 48 kHz to obtain these features with a window length of 25 ms and a frame shift of 5 ms. 60-dimensional mel-generalized cepstral coefficients (MGCs) and 25-dimensional band-limited aperiodicity values (BAP) were extracted. F0s were quantized into 255 levels as described in [@wang2017rnn]. To investigate the impact of the accuracy of linguistic features, we prepared three sets of linguistic features: **OpenJTalk:** the first set of linguistic features was extracted automatically from text by using OpenJTalk [@hts2015openjtalk]. These features were converted into 389-dimensional vectors. This set is included as a reference because it was used in our previous work. **Oracle:** the second set of linguistic features is based on in-house annotations provided by KDDI Research, Inc. The definition of the linguistic features is very similar to that used in the first set, but it contains more precise phone definitions. Part-of-speech tagging is not included in the annotations. The dimension of the linguistic feature vector was 265. All features were manually verified. **Corrupted:** the third set is based on the second set. However, we randomly changed the values of certain linguistic features. More specifically, we randomly added discrete noise ranging between -2 and +2 to the original value of \[Acc\_Type\] for each accent phrase with a 50% probability. The value of the binary feature \[Question\_Flag\] for each accent phrase was also randomly converted to the opposite value with a 30% probability. We expected that these two types of processing would reproduce the annotation errors of Japanese accent types and question types. Model configurations -------------------- The structure of the acoustic models is plotted in Figure \[fig:acoustic\].
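The label-corruption scheme used to build the Corrupted set (discrete noise in [-2, +2] on Acc\_Type with 50% probability, Question\_Flag flipped with 30% probability) can be sketched as follows; `corrupt_labels` is not from the paper's toolchain, and clamping accent types at 0 is an assumption (type 0 denotes a no-accent phrase):

```python
import random

def corrupt_labels(accent_phrases, seed=0):
    """Corrupt per-accent-phrase prosodic labels: with 50% probability,
    add discrete noise from {-2, -1, 0, +1, +2} to the accent type;
    with 30% probability, flip the binary question flag."""
    rng = random.Random(seed)
    corrupted = []
    for acc_type, question_flag in accent_phrases:
        if rng.random() < 0.5:
            acc_type = max(0, acc_type + rng.randint(-2, 2))  # clamp: assumed non-negative
        if rng.random() < 0.3:
            question_flag = 1 - question_flag
        corrupted.append((acc_type, question_flag))
    return corrupted

noisy = corrupt_labels([(3, 0), (0, 1), (2, 0), (1, 1)])
```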
The layer sizes were 512 for feedforward layers, 256 for bi-directional LSTM-RNN layers, and 128 for uni-directional LSTM-RNN layers. The size of a linear layer depends on the size of the output. For the SAR network, the output is a parameter set of Gaussian distributions for MGC, BAP, and voiced/unvoiced (V/UV) flags. BAP and V/UV were also included in the output even though they are not used to generate speech waveforms with the WaveNet vocoder. DAR used a similar layer-size configuration to SAR, but the output layer was a hierarchical softmax layer. Although the acoustic features were extracted from speech waveforms at a sampling frequency of 48 kHz, the WaveNet vocoder was trained by using speech waveforms at a sampling frequency of 16 kHz. PCM waveform samples were quantized into 10 bits after they were compressed by $\mu$-law coding [@modulation1988voice]. The network contained 40 causal dilated convolution layers similar to [@tamamori2017speaker]. The WaveNet blocks were locally conditioned on MGC and quantized F0 parameters. The WaveNet vocoder was trained on acoustic features extracted from natural speech, while, in the generation stage, MGC and quantized F0 features predicted by the DAR and SAR models were used. Experimental conditions ----------------------- To investigate the impact of the noise in linguistic features, we trained a few systems by using different sets of linguistic features in the training and test stages as described earlier. The definition and notation of each system can be found on the left part of Table \[tab:objective\]. Note that the linguistic features for the validation set were not corrupted. Also note that, instead of using the results of our previous study, we retrained the OpenJTalk-based model (OJT) by using the same data set configuration described in Section 4.1. Thus, the results of OJT can be compared with those of the other experimental models.
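The 10-bit $\mu$-law quantization of the waveform samples can be sketched as below; a minimal sketch in which the companding parameter $\mu = 255$ is the common telephony value and an assumption here, since the paper does not state it:

```python
import numpy as np

def mu_law_quantize(x, bits=10, mu=255.0):
    """mu-law compress samples in [-1, 1], then quantize to 2**bits
    discrete levels (the paper quantizes to 10 bits)."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)  # compand to [-1, 1]
    return ((y + 1.0) / 2.0 * (2 ** bits - 1)).astype(int)    # map to {0, ..., 1023}

levels = mu_law_quantize(np.array([-1.0, 0.0, 1.0]))
```

The logarithmic companding allocates more quantization levels to small amplitudes, which is why a 10-bit $\mu$-law representation is audibly much better than 10-bit linear PCM.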
Evaluations {#sec:evaluations} =========== Objective evaluation -------------------- Table \[tab:objective\] shows the performance of each system in terms of RMSE, correlation, and V/UV errors of F0 trajectories converted from the predicted F0 classes, including an unvoiced class. As expected, the model trained and tested by using manually annotated labels, i.e., MOO, achieved the best results among the systems for all of the measurements. We can also see that when testing on the corrupted linguistic labels, the performance of MOC dropped drastically. Interestingly, MMC and MMO, which were trained by using partially corrupted linguistic features, performed better than MOC. For MMO, the objective results are comparable to those of MOO, which suggests that 7,999 corrupted labels (around 28.57% of all training data) did not affect the overall quality significantly. Meanwhile, MMC performed better than MOC even though MMC used corrupted linguistic features for training. Our hypothesis is that mixing corrupted labels into the training data acts as a regularization method, similar to a denoising auto-encoder, and that doing so helps a model generalize better and eases the negative impact of the wrong information provided by incorrect linguistic features in the testing stage. Subjective evaluation --------------------- The objective evaluation hinted at the performance of the acoustic models. However, because we used the WaveNet vocoder and its sampling process rather than a traditional deterministic vocoder to synthesize speech waveforms, it was necessary to test the overall quality of the synthetic speech samples subjectively. Therefore, we also conducted a large-scale subjective test. Synthetic speech samples were generated by using the WaveNet vocoder. The natural speech was downsampled to 16 kHz and further converted to 8-bit $\mu$-law. Synthetic speech was converted to 8-bit and normalized to have a similar volume to the natural speech using the sv56 program [@p56].
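The F0 measures used in the objective evaluation (RMSE, correlation, and V/UV error) can be sketched as follows; representing unvoiced frames as F0 == 0 and masking to frames voiced in both trajectories are common conventions and assumptions here, not necessarily the paper's exact protocol:

```python
import numpy as np

def f0_metrics(ref, pred):
    """F0 RMSE and linear correlation over frames voiced in both
    trajectories, plus the V/UV (voiced/unvoiced) mismatch rate."""
    ref, pred = np.asarray(ref, float), np.asarray(pred, float)
    voiced = (ref > 0) & (pred > 0)                # frames voiced in both
    rmse = float(np.sqrt(np.mean((ref[voiced] - pred[voiced]) ** 2)))
    corr = float(np.corrcoef(ref[voiced], pred[voiced])[0, 1])
    vuv_error = float(np.mean((ref > 0) != (pred > 0)))
    return rmse, corr, vuv_error

# Toy trajectories in Hz; the last frame is a V/UV mismatch.
rmse, corr, vuv = f0_metrics([0, 100, 120, 0], [0, 110, 115, 130])
```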
With the above five systems and the natural speech (NAT), each of which contained 480 utterances from the test set, we conducted two subjective tests. The first test evaluated the mean opinion score (MOS) on a five-point scale. The second was similar to a Turing test, where participants were asked to identify which of two presented samples was synthetic. During this test, an anchor question was included, in which we presented the same natural speech twice. This question was expected to provide some insight into the nature of the testing environment. No default answers were given in any of these tests to make sure participants would have to make their own choices. This large-scale listening test was conducted online through crowdsourcing. Each participant was asked to navigate twelve pages for each set. Each page contained two questions, one for the MOS test and another for the Turing test. The audio sample for the quality question contained a different sentence from that for the Turing question on the same page. One hundred subjects participated. They were allowed to repeat the test up to ten times. We collected a total of 720 sets, which led to 3 data points per unique audio sample for all of the systems. ![Subjective evaluation for quality of speech using the MOS test. Bars indicate 95% confidence intervals.[]{data-label="fig:subjective"}](mos-new){width="0.92\columnwidth"} **Quality test:** Figure \[fig:subjective\] shows subjective results for the quality test with 95% confidence intervals based on Student's t-distribution. Unsurprisingly, natural speech still achieved the highest score, 3.96, with statistical significance, even when converted to the $\mu$-law encoding format. Audio samples generated by using manually annotated labels at the generation stage (MOO and MMO) achieved the second highest scores, and the difference between MOO and MMO was not statistically significant (3.62 versus 3.63, p-value=0.720).
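A 95% confidence interval based on Student's t-distribution, as used for the MOS bars, can be computed as below; the sample scores and the critical value are illustrative, not the paper's data:

```python
import math
import statistics

def mos_with_ci(scores, t_crit):
    """Mean opinion score with a two-sided confidence half-width based
    on Student's t-distribution; t_crit is the critical value for the
    chosen level and n - 1 degrees of freedom."""
    n = len(scores)
    mean = statistics.fmean(scores)
    half_width = t_crit * statistics.stdev(scores) / math.sqrt(n)
    return mean, half_width

# Illustrative five-point scores; t_{0.975, df=7} is about 2.365.
mean, half = mos_with_ci([4, 4, 3, 5, 4, 3, 4, 5], t_crit=2.365)
```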
We can also see that OJT and MOC performed the worst, and the difference between them was not statistically significant (3.33 versus 3.26, p-value=0.05). Note that the p-values were calculated with the Holm-Bonferroni correction. Interestingly, MMC, which used the corrupted linguistic features at both the training and testing stages, was better than OJT and MOC. These subjective results were consistent with the results of the objective evaluation on F0. These results indicate a correlation between the accuracy of the linguistic features and the quality of the synthetic speech. The accuracy of the annotated labels has a greater impact at the testing stage than at the training stage. Another finding is that, when the linguistic features used in the test set contained noise, training the neural network models with a small amount of corrupted linguistic features seemed to improve the quality of the synthetic speech. We can also see that adding a small amount of corrupted linguistic features to the training set did not degrade the quality of the synthetic speech even when the test set did not contain any noise. **Turing (identification) test:** For the Turing test, participants were asked to identify which of the two audio samples presented on the left or right side of the web page was synthetic. The audio samples from one of the TTS systems and from natural speech were randomly switched between left and right to discourage subjects from developing any bias patterns. Figure \[fig:turing\] shows the results. Surprisingly, for all comparisons between the synthetic and natural speech utterances, the correct-identification ratio was around 50%, which suggests that our participants could not decide with certainty which of the two presented samples was synthetic. There was no significant difference between the five pairs of generated and (slightly degraded) natural speech.
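The Holm-Bonferroni correction used for the pairwise p-values can be sketched as a simple step-down procedure (a generic sketch of the method, not the exact analysis script):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down correction: sort the m p-values in
    ascending order, compare the i-th smallest (0-indexed) against
    alpha / (m - i), and stop rejecting at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

# Three hypothetical pairwise comparisons.
decisions = holm_bonferroni([0.01, 0.04, 0.03])
```

Here only the smallest p-value survives: 0.01 <= 0.05/3, but the next candidate fails (0.03 > 0.05/2), so rejection stops.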
We think that this may not be surprising because the correlation of our F0 prediction model was as high as 0.9, we used a very large speaker-dependent corpus (larger than that used in a recent paper on Google’s Tacotron 2 [@shen2017natural]), and the natural speech was also slightly degraded by the $\mu$-law coding. ![Results of the utterance-level Turing (identification) test. Bars indicate 95% confidence intervals. []{data-label="fig:turing"}](combine){width="1.0\columnwidth"} As we included an anchor test in our evaluation, in which participants were asked to judge the differences between two copies of the same natural speech, it may be helpful to look into its result to gain some insight into our testing environment. The results of this anchor test showed that the left option was favored 60% of the time, which suggests that participants had a slight bias toward the left option when it was difficult to choose the correct one. We can also analyze whether participants had a slight bias toward the left option in the comparisons between the synthetic and natural speech utterances. Although the two options were randomly switched, the two sub-figures at the bottom show that the same tendency exists regardless of the system type used. This gives some insight into developing a more sophisticated Turing test in the future. From the outcomes of the Turing test, we can conclude that, while the synthetic speech did not achieve the same quality as natural speech, it was difficult for a normal human being to correctly identify the synthetic speech with our current state-of-the-art setups, at least when a reference natural-speech utterance was not offered. Conclusions {#sec:conclusions} =========== In this paper, we investigated the impact of noisy linguistic features on the performance of a neural network-based Japanese speech synthesis system that uses a WaveNet vocoder.
In this investigation, an ideal system that used manually corrected linguistic features in the training and test sets was compared against a few other systems that used corrupted linguistic features. The corrupted linguistic features were created by artificially adding noise to the correct pitch accent information. Both subjective and objective results demonstrate that corrupted linguistic features, especially those in the test set, affected our TTS system’s performance significantly in a statistical sense due to mismatched conditions between the training and test sets. It was further indicated that adding noise to the linguistic features in the training set can partially reduce the effect of the mismatch, regularize the model, and help the system perform better when the linguistic features of the test set are noisy. As far as we know, this is a new finding in the speech synthesis field. Interestingly, the utterance-level Turing test showed that our listeners had a difficult time differentiating synthetic speech from slightly degraded natural speech. Our future work includes comparing our TTS system using manually corrected labels with recent end-to-end TTS systems and evaluating it without using $\mu$-law coding.
--- abstract: 'We study the relation between two sets of correlators in interacting quantum field theory on de Sitter space. The first are correlators computed using in-in perturbation theory in the expanding cosmological patch of de Sitter space (also known as the conformal patch, or the Poincaré patch), and for which the free propagators are taken to be those of the free Euclidean vacuum. The second are correlators obtained by analytic continuation from Euclidean de Sitter; i.e., they are correlators in the fully interacting Hartle-Hawking state. We give an analytic argument that these correlators coincide for interacting massive scalar fields with any $m^2 > 0$. We also verify this result via direct calculation in simple examples. The correspondence holds diagram by diagram, and at any finite value of an appropriate Pauli-Villars regulator mass $M$. Along the way, we note interesting connections between various prescriptions for perturbation theory in general static spacetimes with bifurcate Killing horizons.' author: - | Atsushi Higuchi${}^*$, Donald Marolf${}^\dagger$, and Ian A. Morrison${}^\dagger$\ \ ${}^*$ Department of Mathematics, University of York\ Heslington, York, YO10 5DD, United Kingdom\ [`ah28@york.ac.uk`](mailto:ah28@york.ac.uk)\ \ ${}^\dagger$Physics Department, UCSB, Santa Barbara,\ CA 93106, USA\ [`marolf@physics.ucsb.edu`](mailto:marolf@physics.ucsb.edu), [`ian_morrison@physics.ucsb.edu`](mailto:ian_morrison@physics.ucsb.edu)\ title: 'On the Equivalence between Euclidean and In-In Formalisms in de Sitter QFT' --- Introduction {#intro} ============ While free quantum fields in de Sitter space (dS${}_D$) have been well understood for some time (see [@Allen:1985ux] for scalar fields), interacting de Sitter quantum field theory continues to be a topic of much discussion. 
In particular, there has been significant interest in the possibility of large infrared (IR) effects in interacting de Sitter quantum field theories [@AAS; @EM; @Hu:1985uy; @Hu:1986cv; @TW; @polyakov1; @PerezNadal:2008ju; @Faizal:2008ns; @Akhmedov:2008pu; @Higuchi:2009zza; @Higuchi:2009ew; @Akhmedov:2009ta; @Polyakov:2009nq; @Burgess:2010dd; @Giddings:2010nc; @Krotov:2010ma], both with and without dynamical gravity. Most of these discussions have been in Lorentzian signature, using some form of in-in perturbation theory. (See, e.g., [@Hajicek; @Kay80; @Jordan:1986ug; @Calzetta:1986ey] for early use of in-in perturbation theory in QFT in curved space.) A popular choice is to take the initial surface to be a cosmological horizon, so that the perturbation theory involves integrals over the region to the future of this horizon (see figure \[fig:one\]). This region of de Sitter space is also known as the expanding cosmological patch, the conformal patch, or the Poincaré patch. We will therefore refer to the associated perturbation scheme as the Poincaré in-in formalism, especially when the initial state is chosen to be the free Bunch-Davies (i.e., Euclidean) vacuum. ![Standard Carter-Penrose diagram of de Sitter space. Region I is the static patch, and the Poincaré patch consists of regions I and II. The causal pasts of points $X_1$ and $X_2$ are the shaded regions. See section \[geom\] for details.[]{data-label="fig:one"}](fig1.pdf) On the other hand, IR effects are often easier to control and analyze in Euclidean-signature de Sitter space, which is just the $D$-sphere $S^D$. Analytic continuation of such correlators to Lorentz signature defines the so-called Hartle-Hawking vacuum of the theory [@Hartle:1976tp]. The fact that $S^D$ is compact means that no IR divergences can arise in perturbation theory unless they are already present at order zero.
With appropriate techniques one can often analytically continue the resulting IR-finite Euclidean correlators to Lorentzian signature while maintaining control over the IR behavior. This was done in [@Marolf:2010zp; @Marolf:2010nz; @Hollands:2010pr] for massive scalar fields using standard perturbation theory. For massless scalars, [@Rajaraman:2010xd] used the Euclidean setting to introduce a new form of perturbation theory which again yields IR-finite Euclidean correlators whose continuation to Lorentz signature can be controlled. One would therefore like to understand precisely how correlators analytically continued from Euclidean signature are related to those computed using an intrinsically Lorentz-signature technique. On general grounds, the analytically continued correlators will satisfy the Lorentz-signature Schwinger-Dyson equations. So long as they satisfy appropriate positivity requirements to define a positive-definite Hilbert space, this means that the analytically continued (Hartle-Hawking) correlators define a valid state of the theory. Recall that positivity will generally follow from the de Sitter analogue [@Schlingemann:1999mk] of reflection positivity and the Osterwalder-Schrader construction, and that reflection positivity holds formally when the Euclidean action is bounded below[^1]. In such cases, it remains only to ask how the Hartle-Hawking state relates to other states of interest, such as the state defined by in-in perturbation theory in the Poincaré patch. A hint was given by [@Higuchi:2009ew], which studied a free scalar field but treated the mass term as a perturbation about the conformally coupled value. The Euclidean and Poincaré in-in formalisms were found to agree, and in fact to both give the exact result once all orders in perturbation theory had been included. (There are no UV divergences due to the fact that the theory has only quadratic terms and thus only tree diagrams.) This may at first seem surprising.
Indeed, for in-in perturbation theory defined using a Cauchy surface at finite time as the initial surface, a result of this form would be impossible. Since the past light cone of any external point of a Feynman diagram is cut off by the initial surface, all integrals are over regions of finite spacetime volume. Furthermore, the volume of any such region would shrink to zero when the external point approaches the initial surface. As a result, the in-in correlators would necessarily approach the correlators of the zeroth-order theory as all arguments approach the initial slice. On the other hand, analytic continuation of Euclidean correlators gives a de Sitter invariant interacting state that cannot approach the zeroth-order state on any surface, so the two formalisms could not agree. In contrast, in the Poincaré in-in formalism the initial surface is a null cosmological horizon. In particular, it has the important property that there is an infinite volume of spacetime that lies both to the future of this surface and to the past of any given point in the interior of the Poincaré patch[^2]. This means that the integrals which compute perturbative corrections to the zeroth-order correlators need not become small as the arguments of correlators approach the initial surface, and no contradiction with the Euclidean formalism arises. Indeed, symmetry arguments suggest that this correspondence holds more generally. Since both the free propagators and the Poincaré patch are invariant under translations, rotations, and dilations, the results of Poincaré in-in perturbation theory will be similarly invariant so long as all integrals converge. But for free fields on $dS_D$ the only Hadamard state which is invariant under these symmetries is the Euclidean vacuum. One therefore expects a similar result to hold in perturbation theory, suggesting that the Poincaré in-in approach generally computes correlators in the interacting Euclidean vacuum. 
An independent motivation comes from the work of Gibbons and Perry [@Gibbons:1976pt], who pointed out that interacting Euclidean field theory on $S^D$ describes thermal field theory inside the cosmological horizon of de Sitter space (i.e., in the static patch) with Gibbons-Hawking temperature [@Gibbons:1977mu]. While the Euclidean formalism is commonly used to study thermal field theory, there is a Lorentzian version called the Schwinger-Keldysh formalism [@Schwinger:1960qe; @Keldysh:1964ud]. This formalism agrees with what is usually called the in-in formalism in relativistic field theory if the property called *factorization* is satisfied (see, e.g., [@Landsman:1986uw]). The physical content of this property is that generic states thermalize if given sufficient time, so that one need not take particular care to prepare a thermal state so long as the initial state is taken to be sufficiently far in the past. Since it is known that correlators in a wide class of states approach those of the Euclidean vacuum at late times [@Marolf:2010zp; @Marolf:2010nz; @Hollands:2010pr], it is reasonable to conjecture that the Euclidean and in-in formalisms agree at least in the static patch of de Sitter space. We argue below that the Euclidean and Poincaré in-in approaches in fact agree for general interacting scalar field theories with $m^2 > 0$. The argument can be sketched in three steps. Step 1 is to relate the analytic continuation of Euclidean correlators to in-in perturbation theory in the static patch of de Sitter. This amounts to checking that conditions are right for the usual relation between Euclidean field theory and Lorentz-signature thermal field theory, i.e., factorization, to hold. Step 2 is to note that, for position-space correlators with all arguments in the static patch, in-in perturbation theory is the same whether one thinks of it as perturbation theory in the static patch or as perturbation theory in the Poincaré patch. 
This follows from the well-known fact that in-in perturbation theory can be expressed in terms of integrals over the region that is i) to the past of all external points of a Feynman diagram and ii) to the future of the initial surface; see figure \[fig:one\]. As a result, analytic continuation from the Euclidean reproduces Poincaré in-in calculations at least when the arguments are restricted to a single static patch. Finally, step 3 is to show that both sets of correlators are appropriately analytic, so that their extension to the full spacetime is uniquely determined by their values in the static patch. We consider Pauli-Villars regulated correlators and show agreement at each value of the Pauli-Villars regulator masses. It follows that the fully renormalized correlators must agree as well. The bulk of this paper is devoted to the details of this argument and to providing some simple checks of the results. Section \[prelim\] quickly reviews the relevant features of de Sitter geometry. Section \[factorization\] then verifies that analytic continuation of Euclidean correlators does indeed give in-in correlators in the static patch for massive scalar fields, while section \[analyticity\] argues that the correlators are sufficiently analytic so as to be determined by their restriction to the static patch. Since the arguments are somewhat involved, we explicitly compute some simple in-in loop diagrams in section \[numerics\] and demonstrate agreement with Euclidean results computed in [@Marolf:2010zp]. We close with some discussion in section \[disc\]. In an appendix we describe a more direct way to analytically continue Euclidean correlators, which gives a slightly different method for demonstrating their equivalence to Poincaré in-in correlators. Preliminaries {#prelim} ============= This section serves to briefly review various features of both Lorentzian and Euclidean de Sitter space, and to introduce notation and conventions. 
After discussing geometry and the relevant coordinate systems in section \[geom\] we review aspects of de Sitter propagators in section \[prop\]. De Sitter Geometry and Coordinates {#geom} ---------------------------------- Let us begin with Euclidean de Sitter space. As is well known, this is just the sphere $S^D$. Throughout this work, we set the de Sitter length $\ell$ to $1$ and work on the unit sphere. We may thus describe $S^D$ using the metric $$\label{SD} ds_{S^D}^2 = d\Omega_D^2 = d\vartheta^2 + \sin^2\vartheta d\Omega_{D-1}^2 , \ \ \vartheta \in [0,\pi],$$ where $d\Omega_d^2$ is the line element of the unit $S^d$. It is useful to consider the complexified manifold ${\mathbb S}^D$, which may be thought of as the surface $X\cdot X = 1$ in ${\mathbb C}^{D+1}$. Wick rotations of various coordinates correspond to passing from one real section of ${\mathbb S}^D$ to another, e.g. from $S^D$ to $dS_D$. One useful Wick rotation is given by defining $$\label{globalt} \Theta = i\left(\vartheta - \frac{\pi}{2}\right)$$ and taking $\Theta$ real; i.e., by Wick rotating the polar angle. This yields $$\label{globalg} ds^2_{global \ dS_D} = - d\Theta^2 + \cosh^2 \Theta \, d\Omega_{D-1}^2 , \ \ \Theta \in {\mathbb R},$$ which is the metric of $dS_D$ in so-called global coordinates. Indeed, these coordinates are regular on all of $dS_D$. Making a further coordinate transformation $$\tan T = \sinh \Theta$$ and writing $d\Omega_{D-1}^2 = d\chi^2 + \sin^2\chi d\Omega_{D-2}^2$, we have $$ds^2_{global\ dS_D} = \sec^2 T(-dT^2 + d\chi^2 + \sin^2\chi d\Omega_{D-2}^2), \ \ T \in (-\pi/2,\pi/2), \label{Carter-Penrose}$$ where the factor inside the parentheses is the metric on a piece of the Einstein Static Universe. Note that this piece extends only for a finite amount of Einstein Static Universe time. Figure \[fig:one\] is the corresponding Carter-Penrose diagram. 
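As a quick consistency check of the reparametrization $\tan T = \sinh\Theta$ used above, one can verify numerically that the conformal factor and the Jacobian work out as claimed. This is a minimal sketch; the sample values of $\Theta$ are arbitrary:

```python
import math

# Check tan T = sinh(Theta): then sec^2 T = 1 + sinh^2(Theta) = cosh^2(Theta),
# and dT/dTheta = cosh(Theta)/(1 + sinh^2(Theta)) = 1/cosh(Theta) = cos T,
# so -dTheta^2 + cosh^2(Theta) dOmega^2 = sec^2 T (-dT^2 + dOmega^2).
for Theta in (-2.0, -0.3, 0.0, 1.7):
    T = math.atan(math.sinh(Theta))
    assert abs(1.0/math.cos(T)**2 - math.cosh(Theta)**2) < 1e-9   # conformal factor
    assert abs(1.0/math.cosh(Theta) - math.cos(T)) < 1e-12        # dT/dTheta = cos T
```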
However, one may also arrive at the same real section $dS_D$ by defining $$\label{statict} t = i \phi, \ \ \ {\rm for} \ \ \ \tan \phi = \frac{X^1}{X^2},$$ where $X=(X^1,X^2,\ldots,X^{D+1})$, and taking $t$ real; i.e., by Wick rotating the azimuthal angle. This yields $$\label{staticg} ds^2_{static \ dS_D} = - \cos^2 \theta dt^2 + d\theta^2 + \sin^2 \theta d\Omega_{D-2}^2 , \ \ t \in {\mathbb R}, \ \theta \in [0, \pi/2),$$ with $$\tan\theta = \sqrt{\frac{(X^3)^2+\cdots+(X^{D+1})^2}{(X^1)^2+(X^2)^2}}, \label{theta-def}$$ which is the metric of $dS_D$ in so-called static coordinates. The coordinate range $t \in {\mathbb R}, \theta \in [0, \pi/2)$ describes the static patch of de Sitter. The coordinates $t$ and $\theta$ can be expressed in terms of $T$ and $\chi$ as $$\begin{aligned} \tanh t & = & \sin T\sec \chi,\\ \sin\theta & = & \sec T\sin\chi.\end{aligned}$$ The boundary at $\theta =\pi/2$ is a coordinate singularity that coincides with the past and future cosmological horizons, $T=\pm(\chi-\frac{\pi}{2})$, defined by the observer at $\theta =0$; see figure \[fig:two\]. ![Carter-Penrose diagram of de Sitter space with $\theta = {\rm const}$ surfaces (schematically) indicated by solid lines and $t={\rm const}$ surfaces by dashed lines.[]{data-label="fig:two"}](fig2.pdf) We will also make use of so-called Poincaré (also known as conformally flat) coordinates on $dS_D$ in which the metric takes the form $$\label{Pcoords} ds^2 = \frac{1}{\lambda^2}(-d\lambda^2 + d\mathbf{x}^2),$$ where $\mathbf{x}=(x^1,\ldots,x^{D-1})$. These coordinates are related to the global ones via $$\begin{aligned} \lambda & = & \frac{\cos T}{\sin T + \cos\chi},\\ x^i & = & \frac{\sin \chi}{\sin T + \cos\chi}\hat{X}^i,\end{aligned}$$ where $\hat{X}^i = X^{i+2}/\sqrt{(X^3)^2+\cdots+(X^{D+1})^2}$. The expanding cosmological patch is the region $0 < \lambda < \infty$ with ${\mathbf x} \in {\mathbb R}^{D-1}$, which we also call the conformal or Poincaré patch. 
Here $\lambda = \infty$ is the (past) cosmological horizon defined by the observer at ${\mathbf x}=0$, which we take to coincide with the geodesic $\theta =0$. With this convention, the Poincaré patch contains the static patch as shown in figure \[fig:one\]. We also take $\lambda=0$ to coincide with both $t=+\infty$ and $\Theta =+\infty$ on this geodesic. (Thus, the variable $\lambda$ runs backwards in time. It is more common to use the variable $\eta = -\lambda$ in the cosmology community.) The remaining relation between Poincaré coordinates and those discussed before is best summarized by using the concept of embedding coordinates. Recall that $dS_D$ can be defined as the locus of points $X\cdot X = 1$ in $D+1$ dimensional Minkowski space. Given two such points, $X$ and $Y$, one may treat them as vectors and compute the invariant Minkowski scalar product $Z = X \cdot Y$, which gives a de Sitter invariant measure of the separation between $X$ and $Y$. In the above coordinate systems one finds $$\begin{aligned} Z &=& - \sinh \Theta_x \ \sinh \Theta_y + \cosh \Theta_x \ \cosh \Theta_y \cos \gamma^{D-1}, \ \ \ {\rm (global)}\ \ \\ &= & \cos\theta_x\cos\theta_y \cosh(t_x-t_y) + \sin\theta_x\sin\theta_y\cos\gamma^{D-2}, \ \label{invS} {\rm (static)}\ \ \\ \label{invP} &=& 1 - \frac{\|\mathbf{x}-\mathbf{y}\|^2 - (\lambda_y-\lambda_x)^2}{2\lambda_x\lambda_y}, \ \ \ (\text{Poincar\'e})\end{aligned}$$ where $\gamma^d$ is the angle between $X$ and $Y$ on the relevant $S^{d}$. It is useful to note that $Z=1$ for $X=Y$ or for points connected by a null geodesic, $Z > 1$ for points connected by a timelike geodesic, $|Z| < 1$ for points connected by a spacelike geodesic, and $Z < -1$ for points which cannot be connected by any geodesic in real de Sitter space. In the latter case, the points are not causally related; see figure \[fig:three\]. Note that $Z>-1$ in the static patch. 
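The agreement of these coordinate expressions for $Z$ with the embedding-space definition $Z = X\cdot Y$ is easy to verify numerically. Below is a minimal sketch for $D=3$; the embedding maps and sample points are illustrative choices, not taken from the text:

```python
import math
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric on the embedding space R^{1,3}

def embed_static(t, theta, phi):
    # A point of dS_3 in static coordinates; satisfies X.eta.X = +1
    return np.array([math.cos(theta)*math.sinh(t),
                     math.cos(theta)*math.cosh(t),
                     math.sin(theta)*math.cos(phi),
                     math.sin(theta)*math.sin(phi)])

def embed_poincare(lam, x):
    # A point of dS_3 in Poincare coordinates (lam > 0, x a 2-vector)
    x2 = float(np.dot(x, x))
    return np.array([(1.0 - lam**2 + x2)/(2.0*lam),
                     (1.0 + lam**2 - x2)/(2.0*lam),
                     x[0]/lam, x[1]/lam])

def Z(X, Y):
    return float(X @ eta @ Y)

# Static-coordinate expression (invS); here gamma^{D-2} is the angle phi_x - phi_y
Xs, Ys = embed_static(0.7, 0.4, 0.1), embed_static(-0.2, 1.1, 2.0)
Z_static = (math.cos(0.4)*math.cos(1.1)*math.cosh(0.7 - (-0.2))
            + math.sin(0.4)*math.sin(1.1)*math.cos(0.1 - 2.0))
assert abs(Z(Xs, Ys) - Z_static) < 1e-12

# Poincare-coordinate expression (invP)
lx, ly = 0.5, 1.5
xx, yy = np.array([0.3, -0.8]), np.array([1.0, 0.2])
Xp, Yp = embed_poincare(lx, xx), embed_poincare(ly, yy)
Z_poinc = 1.0 - (float(np.dot(xx - yy, xx - yy)) - (ly - lx)**2)/(2.0*lx*ly)
assert abs(Z(Xp, Yp) - Z_poinc) < 1e-12
assert abs(Z(Xp, Xp) - 1.0) < 1e-12   # coincident points give Z = 1
```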
Thus, if points $X$ and $Y$ are in the static patch, then there is a geodesic connecting these two points. On complex de Sitter space we may take $t = \sigma + i \tau$ in static coordinates to write $$\begin{aligned} Z & = & \cos\theta_x\cos\theta_y\left[\cosh(\sigma_x-\sigma_y)\cos(\tau_x-\tau_y) - i\sinh(\sigma_x-\sigma_y)\sin(\tau_x-\tau_y)\right] \nonumber \\ & +& \sin\theta_x\sin\theta_y\cos\gamma^{D-2}, \label{invCS}\end{aligned}$$ so that $$\begin{aligned} \label{magZ} |Z|^2 & = & |\cos\theta_x\cos\theta_y\cosh(\sigma_x-\sigma_y)\cos(\tau_x-\tau_y)+ \sin\theta_x\sin\theta_y\cos\gamma^{D-2}|^2\nonumber \\ && + \cos^2\theta_x\cos^2\theta_y\sinh^2(\sigma_x-\sigma_y)\sin^2(\tau_x-\tau_y),\ \ \theta_x,\theta_y \in [0,\pi/2).\nonumber\end{aligned}$$ ![Carter-Penrose diagram of de Sitter space with timelike geodesics from point $O$ drawn with solid lines and spacelike geodesics from it drawn with dashed lines.[]{data-label="fig:three"}](fig3.pdf) De Sitter Propagators {#prop} --------------------- Consider two points $X,Y$ on Euclidean de Sitter $S^D$. In terms of $Z = X \cdot Y$, the scalar propagator on $S^D$ is [@Bunch:1978yq; @Allen:1985wd] $$\label{sprop} \Delta(X,Y) = \frac{\Gamma(a_+)\Gamma(a_-)}{2(2\pi)^{\frac{D}{2}}\Gamma\left(\tfrac{D}{2}\right)} (1-Z)^{\frac{2-D}{2}}F\left(\tfrac{D}{2}-a_+,\tfrac{D}{2}- a_-;\tfrac{D}{2};\tfrac{1+Z}{2}\right),$$ where $$a_{\pm} = \tfrac{1}{2}\left[D-1\pm\sqrt{(D-1)^2-4m^2}\right].$$ Here $F$ is Gauss’ hypergeometric function: $$F(a,b;c;x) = 1 + \sum_{n=1}^\infty \frac{a(a+1)\cdots(a+n-1)b(b+1)\cdots(b+n-1)}{n!c(c+1)\cdots(c+n-1)}x^n.$$ We will be interested in the analytic properties of (\[sprop\]) for general complex $Z$. The only singularities are branch points[^3] at $Z=1$ and $Z = \infty$, and we take the branch cut to connect these points along the positive real axis. It will be particularly important to understand the singularity structure in terms of static coordinates (\[staticg\]). 
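For orientation, (\[sprop\]) is straightforward to evaluate numerically. The sketch below (assuming SciPy's `hyp2f1` and illustrative sample values $D=3$, $m=1/2$, for which $a_\pm$ are real) checks that the propagator is positive, grows monotonically as $Z\to 1$, and exhibits the $(1-Z)^{(2-D)/2}$ short-distance divergence:

```python
import math
from scipy.special import hyp2f1

def dS_propagator(Z, m, D):
    # Free scalar propagator (sprop) on S^D; here m^2 < (D-1)^2/4 so a_pm are real
    s = math.sqrt((D - 1)**2 - 4*m**2)
    ap, am = 0.5*(D - 1 + s), 0.5*(D - 1 - s)
    pref = math.gamma(ap)*math.gamma(am)/(2*(2*math.pi)**(D/2)*math.gamma(D/2))
    return pref*(1 - Z)**((2 - D)/2)*hyp2f1(D/2 - ap, D/2 - am, D/2, (1 + Z)/2)

vals = [dS_propagator(Z, m=0.5, D=3) for Z in (-0.9, 0.0, 0.9, 0.999)]
assert all(v > 0 for v in vals)   # positive on the sphere
assert vals == sorted(vals)       # decreases with geodesic distance (increases in Z)
# (1-Z)^{-1/2} divergence as Z -> 1 for D = 3
assert dS_propagator(0.999999, 0.5, 3) > 100*dS_propagator(0.0, 0.5, 3)
```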
Careful inspection of (\[invCS\]) shows the following: [**Observation**]{}. The Green’s function for two points $X,Y$ with static coordinates $(t_x,\theta_x)$ and $(t_y,\theta_y)$ with $\theta_x,\theta_y \in [0, \pi/2)$ is analytic for all complex $t_x,t_y$ except when $t_x-t_y$ is real modulo $2\pi i$ (so that the two points lie on the same Lorentz-signature real section) and the two points obtained by replacing $t_x$ and $t_y$ by ${\rm Re}\ t_x$ and ${\rm Re}\ t_y$, respectively, are causally related within this real section. It will be useful to regulate the divergences of (\[sprop\]) at $Z=1$ using Pauli-Villars subtractions both for the internal and external propagators so that all propagators become bounded functions of $Z$. Because the unbounded nature of the external propagators needs to be taken into account only in the coincidence limit, where the vertex integral is convergent due to the small integration measure, it is in fact possible to show the equivalence of the Poincaré and Euclidean formalisms regulating only the internal propagators. However, since analyzing such issues in detail would make the argument more cumbersome, we choose to regulate the external propagators as well. For each $m,D$ we define a regulated propagator $$\label{Dreg} \Delta^{\rm reg}(X,Y) = \Delta(X,Y) + \sum_{i=1}^{[D/2]} C_i \Delta_{M_i}(X,Y),$$ where $[D/2]$ denotes the integer part of $D/2$, $\Delta_{M_i}(X,Y)$ is the propagator (\[sprop\]) for a particle of mass $M_i$, and $C_i$ are constants. We will always assume $M_i \gg 1$ in units of the de Sitter scale, so that in particular the masses $M_i$ correspond to principal series representations [@Vilenkin91] of the de Sitter group. One may choose the coefficients $C_i$ so that $\Delta^{\rm reg}(X,Y)$ has a well-defined finite limit as $Z \to 1$ (see, e.g., [@Camporesi:1992wn] for $D=4$). For $D=2,3$ we have $[D/2]=1$ and one may take $C_1 = -1$ for any $M_1$. 
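The subtraction conditions just mentioned, and their two-mass analogue used below for $D=4,5$, amount to a small linear system for the coefficients $C_i$. A minimal sketch, with illustrative masses:

```python
import numpy as np

def pv_coefficients(m, M1, M2):
    # Solve C1 + C2 = -1 and C1*M1^2 + C2*M2^2 = -m^2 (the D = 4,5 conditions)
    A = np.array([[1.0, 1.0], [M1**2, M2**2]])
    return np.linalg.solve(A, np.array([-1.0, -m**2]))

# Regulator masses are taken >> 1 in de Sitter units, as in the text
C1, C2 = pv_coefficients(m=1.0, M1=50.0, M2=80.0)
assert abs(C1 + C2 + 1.0) < 1e-12
assert abs(C1*50.0**2 + C2*80.0**2 + 1.0) < 1e-6
```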
For $D=4,5$ one may choose any $C_1,C_2,M_1,M_2$ which satisfy $C_1 + C_2 = -1$ and $C_1 M_1^2 + C_2 M_2^2 = - m^2$. Nevertheless, $\Delta^{\rm reg}(X,Y)$ is not analytic at $Z=1$. Instead, $Z=1$ remains a branch point analogous to that of the function $x \ln x$ or $x^{1/2}$ at $x=0$. If desired, one can also make further subtractions to define regulated propagators with continuous (and thus bounded) derivatives to any specified order. Such additional subtractions are useful in treating theories with derivative interactions, or for consideration of field-renormalization counter-terms. Below, we will focus on non-derivative interactions for which the above subtractions will suffice. But it will be clear from the argument that the same results hold for derivative interactions so long as an appropriate number of additional Pauli-Villars subtractions have been made. Finally, it is useful to study $\Delta(X,Y)$ at large $|Z|$. There, $\Delta$ behaves either like $|Z|^{-a_-}$ (for $m^2 < (D-1)^2/4$) or $|Z|^{-(D-1)/2}$ (for $m^2\geq (D-1)^2/4$). Hence for given choices of regulator parameters $C_i,M_i$ the modulus of the regularized propagator $|\Delta^{\rm reg}(Z)|$ is bounded. It is useful to take each $C_i,M_i$ to be a given function of the smallest regulator mass $M$, so that the regulator is removed as $M \to \infty.$ We may then take the bound on $|\Delta^{\rm reg}(Z)|$ to be $B(M)$, determined only by $m$ and the lightest regulator mass $M$. Euclidean correlators vs. thermal static patch correlators {#factorization} ========================================================== We now turn to step 1 of the argument sketched in the introduction. Our task here is to show that the analytic continuation of Euclidean correlators is equivalent to those computed using in-in perturbation theory (defined using the propagator of the free Euclidean vacuum) in the so-called static patch of de Sitter. 
This essentially amounts to checking that conditions are right for the usual relation between Euclidean field theory and Lorentz-signature thermal field theory to hold; i.e., that the Hartle-Hawking correlators are indeed thermal correlators in the static patch. At a formal level, this follows from the fact that correlation functions ${\rm Tr} [\phi(x_1)...\phi(x_n) e^{-\beta H}]$ in the canonical ensemble are given by an imaginary-time path integral; see, e.g., [@Landsman:1986uw]. However, in order not to miss any subtleties (perhaps due to IR divergences of the sort predicted in [@Polyakov:2009nq]) and because of the many controversies surrounding dS quantum field theory, we will proceed slowly through an explicit perturbative argument. Below, we consider diagrams using the Pauli-Villars regularized propagators (\[Dreg\]) so that $|\Delta^{\rm reg}(Z)| \le B(M)$. We restrict attention to connected diagrams since vacuum bubbles are automatically excluded both in the Euclidean and in-in formalisms. Because the desired result is trivial for the diagram with two external points connected by a single propagator, we also exclude this diagram from our discussion. Non-derivative interactions are assumed for simplicity, though the argument is readily extended to derivative interactions so long as additional Pauli-Villars subtractions are made as described in section \[prop\] above. Recall that in static coordinates (\[staticg\]) points of de Sitter space are labeled by a pair $(t, \hat X)$ where $\hat X$ is a point in the (open) northern hemisphere of $S^{D-1}$. We will use these coordinates both for the static patch of Lorentz-signature $dS_D$ (where $t \in {\mathbb R}$) and for Euclidean-signature de Sitter $S^D$ (where $-i t \in (-\pi,\pi)$). We imagine that the integrals over the time coordinates $t_i$ of the internal vertices will be performed first, followed later by the integrals over $\hat X_i$. So for the moment we consider the $\hat X_i$ to be fixed. 
We also assume that all internal vertices and external points correspond to distinct spatial points $\hat X$; i.e., $\hat X_i \neq \hat X_j$ for $i \neq j$. Due to our Pauli-Villars regularization, we can always recover information at coincidence by continuity. Let us first review the general argument relating Euclidean correlators to in-in correlators (see, e.g., [@Landsman:1986uw]) using our de Sitter static patch notation. In the Euclidean approach the time integrals of the internal vertices are all from $i\pi$ to $-i\pi$. The external points are taken to lie on this contour and, at least for the moment, we take them to all lie close to (though not necessarily precisely at) $t=0$. Since the $\hat X_i$ are distinct, it follows from the Observation of section \[prop\] that the integrand is analytic in all time coordinates $t_i$ in a region containing the contour of integration. Thus the contour can be deformed. In fact, taking all internal coordinates $t_i$ to be integrated along the same contour $C$, we note that the contour can be freely deformed so long as i) it begins at some $t = t_0 +i \pi$ with $t_0$ real and ends at $t = t_0 -i \pi$, ii) the imaginary part of $t$ is strictly decreasing everywhere (so that no two points on the path have the same value of ${\rm Im} \ t$) and iii) the path continues to pass through the external points. In particular, we are free to take the limit $t_0 \rightarrow - \infty$. Now, these rules allow us to choose the contour $C = A_1+C_1+B +C_2+A_2$ to be as in figure \[fig:four\]. ![A deformation of the Euclidean contour.[]{data-label="fig:four"}](fig4.pdf) Here $\epsilon\, (<\pi)$ is a nonzero positive number. The imaginary part is decreasing infinitesimally on the horizontal portions of the contour. This is equivalent to using the Feynman propagator when the two points are both on the upper horizontal portion and the Dyson (or anti-Feynman) propagator if the two points are both on the lower horizontal portion. 
In general, in the $\epsilon \rightarrow 0$ limit (and where the imaginary parts of the times for all external points are also taken to zero), one may say that the above contour computes correlators using the free path-ordered two-point function as the propagator, just as occurs in the in-in formalism. Furthermore, since all integrals converge after Pauli-Villars regularization, it is clear that the integral along $B$ is of order $\epsilon$ and can be neglected in the limit $\epsilon \to 0$. As a result, the Euclidean correlators (evaluated at $t=0$) agree with the corresponding in-in correlators in the static patch (computed using the propagators of the free Euclidean vacuum) so long as a property called [*factorization*]{} [@Landsman:1986uw] holds, which states that the $A_1,A_2$ pieces of the contour $C$ can be neglected in the $t_0 \rightarrow -\infty$ limit. We now establish this property for our systems, diagram by diagram[^4]. For each Feynman diagram, let us choose one external point $X = (t_e, \hat X_e)$ and one internal point $Y$ that lies on either segment $A_1$ or $A_2$. To show that the integral of $Y$ over the above segments can be neglected, we also choose a path through the diagram from $X$ to $Y$; i.e., a particular chain of propagators. Now, recall from section \[prop\] that at fixed Pauli-Villars regulator mass $M$ all propagators are bounded by some $B(M)$. To establish a bound on the integrals, we may thus replace the integrand with its magnitude and replace all propagators [*not*]{} on the chosen path by $B(M)$. Next consider the propagators on the chosen path. For at least one such propagator, external or internal, the (static-patch) time coordinates of its two arguments have real parts differing by at least $({t}_e-t_0)/K$, where $K$ is the number of propagators in the chain. 
From (\[magZ\]) and the asymptotics of the propagators discussed in section \[prop\], this means that, if ${t}_e - t_0$ is large enough, this propagator is of order $[\cos \theta_1 \cos \theta_2 e^{({t}_e-t_0)/K}]^{-\nu}$ or smaller for some positive $\nu$ determined by the mass $m$ of the quantum fields, where $\theta_1$ and $\theta_2$ are the $\theta$-coordinates of its two arguments. Replacing all other propagators on this chain with $B(M)$, we integrate the time coordinate $\tau$ of $Y$ along the segments from $t_0+i\pi$ to $t_0+i\epsilon$ and from $t_0-i\epsilon$ to $t_0-i\pi$. We also perform all other $t$-integrals at the vertices. The result is clearly bounded by $$c_2[B(M)]^{n_1}({t}_e-t_0)^{n_2}[\cos \theta_1 \cos \theta_2 e^{({t}_e-t_0)/K}]^{-\nu} \label{bound}$$ for some constants $c_2,n_1,n_2$, where the factors of $({t}_e-t_0)^{n_2}$ come from the measure. It is important to note that $c_2,n_1,n_2$ are independent of the positions of all vertices, as well as $t_0$. To complete the argument, we divide the integrals over $\theta_1,\theta_2$ (or, say, just $\theta_1$ if the second point is external so that $\cos \theta_2$ is fixed and independent of $t_0$) into two regions. In the first, we take $\cos\theta_1, \cos \theta_2 > e^{-({t}_e-t_0)/3K}$. The bound in (\[bound\]) then shows that the integral over this region tends to zero at least like $({t}_e-t_0)^{n_2}e^{-2\nu({t}_e-t_0)/3K}$ as $t_0\to -\infty$. The remaining region of integration is small since one of the variables to be integrated ($\theta_1$ and/or $\theta_2$) satisfies $\cos\theta < e^{-({t}_e-t_0)/3K}$ and the integration measure $\sin^{D-2}\theta\cos\theta\,d\theta$ contains a factor of $\cos \theta$. We note that the length of the interval on which $\theta$ is integrated is of order $e^{-(t_e-t_0)/3K}$ as well. 
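The suppression from the measure in this near-horizon region can be made concrete: the static-patch spatial measure $\sin^{D-2}\theta\cos\theta\,d\theta$ integrated over $\cos\theta<\delta$ is $O(\delta^2)$, so taking $\delta = e^{-(t_e-t_0)/3K}$ yields two factors of $e^{-(t_e-t_0)/3K}$. A quick numerical check, with $D=4$ as an illustrative choice:

```python
import math
from scipy.integrate import quad

D = 4  # illustrative dimension
for delta in (1e-1, 1e-2, 1e-3):
    theta_star = math.acos(delta)
    # Volume of the region cos(theta) < delta in the measure sin^{D-2} cos dtheta
    vol, _ = quad(lambda th: math.sin(th)**(D - 2)*math.cos(th), theta_star, math.pi/2)
    # Exact value (1 - sin^{D-1}(theta_star))/(D-1) is bounded by delta^2/2
    assert vol <= delta**2/2 + 1e-12
```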
We may therefore replace [*all*]{} propagators by the bound $B(M)$ and find that the contribution from this region is again bounded by a number of the form $c_3({t}_e-t_0)^{n_3}e^{-2({t}_e-t_0)/3K}$, which of course tends to zero as $t_0\to -\infty$. This establishes the fact that sections $A_1$ and $A_2$ can be neglected in the desired limit for any (finite or infinitesimal) choice of $\epsilon$ in figure \[fig:four\]. In particular, this demonstrates the agreement of Euclidean and static patch in-in correlators (computed using the propagator of the free Euclidean vacuum) when the external points are located at $t=0$. To demonstrate agreement for more general external points, we need only analytically continue the correlators as a function of the time coordinates of the external points. This is in fact the [*definition*]{} of the Euclidean correlators evaluated at more general times, and we will show that it also gives the static patch in-in correlators. For this step, it is convenient to take the external points to have distinct (and fixed) values of ${\rm Im} \ t$. At the end of the argument we will take the limit where all of these imaginary parts vanish. We first consider the analyticity of the integrand for some given diagram in the time coordinate $t_1$ of some external point with the time coordinates of all other points (both internal and external) held fixed and taken to lie on one of the contours $C$ discussed above. We also take the spatial coordinates of all points to be fixed and distinct. Due to the observation of section \[prop\], the singularities are then a finite distance from the contour $C$. For example, for the original Euclidean integral, if the external point with the time coordinate $t_1$ is connected to a vertex with time coordinate $t$ which is also connected to two other vertices, the singularities and associated branch cuts on the complex $t$-plane are similar to those shown in figure \[fig:five\]. 
![Singularities and branch cuts of propagators. []{data-label="fig:five"}](fig5.pdf) We may thus analytically continue $t_1$ to any complex value so long as we avoid the branch cuts. Let us do so holding ${\rm Im} \ t_1$ fixed and distinct from the imaginary parts of all other external time coordinates. Then the only singularities which are of concern are those due to the vertex connected to $t_1$ with the same imaginary part of $t$; i.e., for which ${\rm Im} \ t = {\rm Im} \ t_1$. As indicated in figure \[fig:five\], for a fixed contour $C$ this will in general allow only a finite range over which the integrand can be analytically continued in ${\rm Re} \ t_1$. However, as noted earlier, we are also free to further deform the contour. For example, by shifting the contour for all vertices a bit to the right at ${\rm Im} \ t_1$, we shift the allowed window for analytic continuation a bit to the right, and we do so without changing the size of this window. It is thus clear that, by dragging the contour along with the external point in this way, we may analytically continue the result of the time integrations to arbitrary values of ${\rm Re} \ t_1$ for any given distinct set of spatial coordinates. But as before, our Pauli-Villars regularization scheme implies the same result holds for general spatial coordinates by continuity[^5]. It follows that the analytic continuation of Euclidean correlators can be computed via the usual Feynman diagrams associated with any contour which i) begins at some $t = -\infty + i\epsilon$ with any real and positive $\epsilon$ and ends at $t = -\infty - i\epsilon$, ii) has the imaginary part of $t$ strictly decreasing everywhere (so that no two points on the path have the same value of ${\rm Im} \ t$) and iii) passes through all external points[^6]. An example is shown in figure \[fig:six\]. ![Deformed contour for external points at $t_i$ with finite imaginary parts. 
[]{data-label="fig:six"}](fig6.pdf) Taking the limit $\epsilon \rightarrow 0$ (and taking the limit in which all external time coordinates now become real) gives the usual closed-time-path representation of the static patch in-in correlators (defined using the propagators of the free Euclidean vacuum) just as described above for external points at $t=0$; see figure \[fig:seven\]. ![$t$-integration contour for the in-in formalism in the static patch. The open circles denote external points at times $ t_i$.[]{data-label="fig:seven"}](fig7.pdf) Analyticity of in-in correlators {#analyticity} ================================ Recall that our goal is to demonstrate the equivalence of the Poincaré and Euclidean formalisms for perturbation theory. We outlined a three-step argument in the introduction. As described there, it is clear that the in-in formalism in the static patch is a restriction of that in the Poincaré patch (Step 2). Since we have now shown that the static patch in-in correlators agree with those of the Euclidean formalism (Step 1), it remains only to show that Poincaré in-in correlators are appropriately analytic in their arguments (Step 3). The desired result then follows since two analytic functions that agree in any non-empty open subset of a real section must in fact agree everywhere. In this section we will establish analyticity of Poincaré in-in correlators as functions of the conformal-time coordinates with space coordinates fixed. This will turn out to be sufficient for our purpose. 
Recall that, for given external points $X_j=(\tilde\lambda_j,\mathbf{x}_j)$, in the coordinates of (\[Pcoords\]) any Poincaré correlator is a sum of terms of the form $$\mathcal{A}_{P} = c_1\left(\prod_{k=1}^n \int d^{D-1}\mathbf{y}_k\int_C \frac{d\lambda_k}{\lambda_k^{D}}\right) F(Y_1,\ldots,Y_n)\prod_{j=1}^{m}\Delta^{\rm reg}(X_j,Y_{k_j}), \label{ininA}$$ where $Y_{k_j}$ denotes the internal vertex attached to the external point $X_j$, a typical contour $C$ is shown in figure \[fig:eight\], and we have used the Pauli-Villars regulated propagators $\Delta^{\rm reg}$. The contour lies infinitesimally away from the real line, and the imaginary part of $\lambda$ *increases* infinitesimally everywhere along the contour, even on the horizontal sections. Time-ordered correlators are obtained by putting the $\lambda$-coordinates, $\tilde\lambda_j$, of the external points on the lower horizontal line, whereas anti-time-ordered correlators are obtained by putting them on the upper horizontal line. We will refer to any such ${\mathcal A}_P$ as an amplitude, and we will again refer to the associated diagram as a Feynman diagram even though diagrams include Dyson (or other) propagators in computing ${\mathcal A}_P$. ![Typical $\lambda$-contour for the in-in formalism in the Poincaré patch. All external points have $\lambda > \lambda_f$.[]{data-label="fig:eight"}](fig8.pdf) Like the in-in amplitude in the static patch, the Poincaré in-in amplitude $\mathcal{A}_P$ is obtained by first considering the corresponding amplitude with finite and distinct imaginary parts ${\rm Im} \ \tilde \lambda_j$ of the conformal-time coordinates of the external points and then taking the limit ${\rm Im} \ \tilde\lambda_j \to 0$. For this reason we let $\tilde{\lambda}_j$ satisfy ${\rm Im} \ \tilde{\lambda}_1 < {\rm Im} \ \tilde{\lambda}_2 < \cdots < {\rm Im}\ \tilde\lambda_m$ without loss of generality and use the contour analogous to that considered for the static patch. 
Figure \[fig:nine\] shows an example with $m=4$ with finite imaginary parts, before taking the $\epsilon \rightarrow 0$ limit. ![Deformed contour for external points with finite imaginary parts in the Poincaré patch[]{data-label="fig:nine"}](fig9.pdf) It is important to note that, in general, one must integrate over the conformal-time coordinates $\lambda_k$ first in (\[ininA\]), before integrating over the spatial coordinates, as the integrand may otherwise decay too slowly at large $\|\mathbf{y}_k\|$ for the $\mathbf{y}_k$-integrals to converge if each $\lambda_k$ is fixed on the contour. We will show below that, with our Pauli-Villars regulators, all integrals converge so long as the $\lambda_k$-integrals are performed first. We then use this result to demonstrate the desired analyticity of ${\mathcal A}_P$. Convergence of ${\mathcal A}_P$ {#CAP} ------------------------------- We now verify that the integrals defining the amplitude $\mathcal{A}_P$ converge with the contour $C$ chosen as in figure \[fig:nine\] so long as we perform the $\lambda_k$-integrations before the $\mathbf{y}_k$-integrations. The general strategy is to deform the $\lambda$-contour at each vertex as much to the right as possible, while avoiding singularities, so that the regions of spacetime over which the vertices are integrated become small enough to guarantee absolute convergence. The structure of singularities in the complex $\lambda$-plane is directly analogous to that discussed in the complex $t$-plane in section \[factorization\]. We again fix the spatial coordinates of all points, both internal and external, and take them to be distinct. An example for the conformal-time $\lambda_1$ of the vertex $Y_1 =(\lambda_1,\mathbf{y}_1)$ is shown in figure \[fig:ten\], where dashed lines again indicate branch cuts. Of the two singularities with the same imaginary part, we call the one with the larger (smaller) real part a past (future) singularity.
For example, the singularities due to vertex $(\lambda_3,\mathbf{y}_3)$ are at $$\lambda_\pm = \lambda_3 \pm \|\mathbf{y}_1-\mathbf{y}_3\|.$$ The points $\lambda_+$ and $\lambda_-$ are a past singularity and a future singularity, respectively. Notice that $({\rm Re}\,\lambda_+,\mathbf{y}_1)$ and $({\rm Re}\,\lambda_-,\mathbf{y}_1)$ are on the past and future light-cones of $({\rm Re}\,\lambda_3,\mathbf{y}_3)$, respectively. Also in the same way as in section \[factorization\], each $\lambda_k$-contour can be deformed as we like so long as it encloses all past singularities and avoids all future singularities. In particular, for the given values of all spatial coordinates ${\mathbf x}_i, {\mathbf y}_k$, the portion of the contour to the left of a vertical line segment connecting two points on the contour can be replaced by this line segment provided that all past singularities lie to its right. For example, the $\lambda_1$ contour in figure \[fig:ten\] can be deformed as in figure \[fig:eleven\]. Note that this contour may no longer pass through certain $\lambda$-values corresponding either to external points or to other contours which were not similarly deformed. ![Singularities in the complex $\lambda_1$ plane. Two external points lie at $\tilde \lambda_1, \tilde \lambda_3$ and two internal points lie at $\lambda_2,\lambda_3$ as indicated by the open circles. Filled circles are singularities and dashed lines are branch cuts.[]{data-label="fig:ten"}](fig10.pdf) ![The $\lambda_1$-contour in figure \[fig:ten\] deformed as much to the right as possible for the given values of the spatial coordinates as described in the text.[]{data-label="fig:eleven"}](fig11.pdf) Using this observation, we deform the contours as follows. We begin with an integral where all $\lambda_k$ are integrated over the same contour $C$ of the form shown in figure \[fig:nine\] for some given values of the spatial coordinates ${\mathbf x}_j, {\mathbf y}_k$. 
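The freedom used here, deforming a contour at will so long as past singularities remain enclosed and future singularities are avoided, is just Cauchy's theorem. A toy numerical illustration with a single pole (the integrand, pole position, and contours are arbitrary choices, not quantities from the paper):

```python
import cmath

def contour_integral(f, waypoints, n=8000):
    """Integrate f(z) dz along the polyline through `waypoints` (midpoint rule)."""
    total = 0j
    for a, b in zip(waypoints, waypoints[1:]):
        dz = (b - a) / n
        total += sum(f(a + (k + 0.5) * dz) for k in range(n)) * dz
    return total

# Toy integrand with a single "past" singularity: a simple pole at z0.
z0 = 2 - 0.3j
f = lambda z: cmath.exp(-z) / (z - z0)

# Two homotopic contours with the same endpoints that keep z0 below them:
C1 = [-1 + 0.5j, 5 + 0.5j]                         # straight line
C2 = [-1 + 0.5j, -1 + 2.5j, 5 + 2.5j, 5 + 0.5j]    # pushed upward and back

I1 = contour_integral(f, C1)
I2 = contour_integral(f, C2)
assert abs(I1 - I2) < 1e-5          # deformation without crossing z0 is free

# A closed contour encircling z0 picks up 2*pi*i times the residue exp(-z0):
box = [0 - 1j, 4 - 1j, 4 + 1j, 0 + 1j, 0 - 1j]
I3 = contour_integral(f, box)
assert abs(I3 - 2j * cmath.pi * cmath.exp(-z0)) < 1e-5
```

The first assertion is the deformation freedom exploited throughout this section; the second shows what goes wrong if a contour is pushed across a singularity.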
We deform [*all*]{} of the $\lambda_k$-contours [*in the same way*]{} as follows. We choose a vertical line segment connecting two points of the contour such that all past singularities on the complex $\lambda_k$-plane for all $k$ are to its right. Then we let this line segment replace the portion of the contour to its left. We keep deforming the contour in this manner by moving the vertical line segment to the right until it encounters a past singularity, say, on the $\lambda_{k_1}$-plane, $\lambda_{k_1}$ being the conformal time of $Y_{k_1}$, due to some external point, say $X_{j_1}$. We then stop deforming the contour for $Y_{k_1}$ (since we cannot deform it beyond the singularity) and hold it fixed. We describe this relationship between $Y_{k_1}$ and $X_{j_1}$ by saying that $Y_{k_1}$ is past-related to $X_{j_1}$ (for the given values of all spatial coordinates) and writing[^7] $Y_{k_1} \to X_{j_1}$. We then choose some value of $\lambda_{k_1}$ on its fixed contour and deform the remaining contours by moving the vertical line segment to the right with $\lambda_{k_1}$ fixed until one of them, say a contour for $Y_{k_2}$, hits a past singularity due to, say, $X_{j_2}$, which is either an external point or the vertex $Y_{k_1}$ whose contour is being held fixed. We write $Y_{k_2}\to X_{j_2}$ and hold the contour for $Y_{k_2}$ fixed from now on. We continue in this manner until each vertex is past-related to another point, so that all contours have been fixed[^8]. To understand the resulting structure, we now use the above past-relations to decorate the Feynman diagram under discussion for each fixed set of spatial coordinates. Note that any pair $(A,B)$ of vertices with $A$ past-related to $B$ must be connected by at least one line on the diagram[^9]. If there is one line from $A$ to $B$, we decorate it with an arrow pointing from $A$ to $B$ (i.e., toward the future). If there is more than one such line, we decorate only one of them.
Once all past-relations have been indicated in this way, we replace all remaining undecorated propagators with dashed lines. An example is shown in figure \[fig:twelve\]. ![A Feynman diagram in which arrows indicate past-relations as described in the text, determined by some particular set of spatial coordinates.[]{data-label="fig:twelve"}](fig12.pdf) In the deformation of contours described above, the contours are deformed until one of them encounters a past singularity. Although this procedure is sufficient to show the convergence of ${\mathcal A}_P$ itself, we need to modify it slightly for proving convergence of the derivatives of ${\mathcal A}_P$ with respect to the external coordinates, which diverge at past singularities. Here we briefly describe this modification. The main difference is that the modified deformation keeps the contours away from past singularities. We choose the initial contour common to all $\lambda_k$ as before. We define an *effective past singularity* as follows: if $\lambda$ is a past singularity, then the corresponding effective past singularity is $\lambda - b$, where $b$ is a small but positive constant. We deform the contours in the same way as before except that they are deformed until one of the contours encounters an effective past singularity rather than a true one. We define the past-relation as before. It may happen that some effective past singularities are outside the contour though the true ones must be inside. If the effective past singularity on the complex $\lambda_{k_3}$-plane due to a point $X_{j_3}$, external or internal, is outside the contour, we stop deforming the contour for $\lambda_{k_3}$, fix the value of $\lambda_{k_3}$ on this contour, and let $Y_{k_3} \to X_{j_3}$. (If there are two or more effective past singularities outside the contour, we choose one to define the past-relation.) The rest is the same as the original deformation[^10]. 
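Abstractly, the contour-fixing sweep is a greedy algorithm that grows a forest rooted at the external points: the barrier moves right, and a vertex's contour is frozen the moment the barrier meets one of its effective past singularities due to an already-fixed point. The sketch below is an illustrative abstraction (one spatial dimension, made-up point names, propagator lines given as an adjacency set), not the paper's construction:

```python
def past_relations(external, vertices, lines, b=0.1):
    """Greedy sweep assigning each internal vertex a unique past-related point.

    external: {name: (re_lambda, y)} -- fixed external points
    vertices: {name: y}              -- internal vertices (spatial position only)
    lines:    set of frozenset pairs -- propagator lines of the diagram
    A past singularity of vertex k due to a fixed, line-connected point p sits
    at Re(lambda) = lambda_p + |y_k - y_p|; the effective one is b to the left.
    """
    fixed = {name: lam for name, (lam, _) in external.items()}
    pos = {name: y for name, (_, y) in external.items()}
    pos.update(vertices)
    relation = {}
    unfixed = set(vertices)
    while unfixed:
        # barrier moves right: find the left-most effective past singularity
        k, p, s = min(
            ((k, p, fixed[p] + abs(pos[k] - pos[p]) - b)
             for k in unfixed for p in fixed
             if frozenset((k, p)) in lines),
            key=lambda t: t[2])
        relation[k] = p          # k is past-related to p (arrow k -> p)
        fixed[k] = s             # freeze k's contour at the barrier position
        unfixed.discard(k)
    return relation

# Toy diagram: two external points, a chain of three internal vertices.
ext = {"X1": (0.0, 0.0), "X2": (0.0, 5.0)}
vrt = {"Y1": 1.0, "Y2": 2.0, "Y3": 4.0}
lns = {frozenset(p) for p in
       [("X1", "Y1"), ("Y1", "Y2"), ("Y2", "Y3"), ("Y3", "X2")]}
rel = past_relations(ext, vrt, lns)
# Every vertex is past-related to exactly one point, and following the
# arrows from any vertex terminates at an external point (a rooted forest).
for v in vrt:
    p = v
    while p in vrt:
        p = rel[p]
    assert p in ext
```

Ties between coincident singularities are broken arbitrarily here; the forest property of the result does not depend on that choice.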
As noted in the introduction, it is well-known that in-in diagrams can be computed by integrating only over the past light cones of external points. The choice of contours above gives a similar result, but one which is clearly valid for finite $\epsilon$. To see the similarity, note that when $A$ is past-related to $B$ the real part of point $A$ lies in the causal past of the real part of point $B$ over most of the contour for $A$. The exception is a finite piece near the minimum value of ${\rm Re}\ \lambda$ due to the use of effective past singularities in the modified contour deformation. Since any internal point is connected by some chain of arrows to some external point, except for a set of finite-sized pieces as noted above, the projection of the integration region onto the real $\lambda$-axis lies in the causal past of at least one external point. We will find it useful below to break up the integration region into such past light cones and finite-sized protruding segments. Now we establish convergence using the modified contour deformation. Recall that each internal point $A$ is past-related to precisely one point $B$ (see footnote \[tree\]), which may be either internal or external. Also recall that, starting at any internal point, one may always follow a chain of arrows upwards until one arrives at an external point. As a result, deleting all dashed lines results in a set of disconnected subdiagrams in which each connected component is a tree whose root (which in this case means the future-most point) is an external point. As a result, if we replace every dashed-line propagator by the bound $B(M)$, our amplitude ${\mathcal A}_P$ factorizes into a product of tree amplitudes in which all points are connected by a chain of past-relations[^11]. But each such tree amplitude is easy to bound.
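Two elementary integrals that enter the bounds developed next can be checked numerically. With the illustrative choices $D=4$ and $\nu=2$ (any $\nu>0$ works the same way), the protruding-piece factor is $\int d^{3}\mathbf{y}\,(\lambda_{0}+c\|\mathbf{y}\|)^{-4}=4\pi/(3\lambda_{0}c^{3})$ and the large-$Z$ tail is $\int_{Z_0}^{\infty}dZ_R\,Z_R^{-1-\nu}=Z_0^{-\nu}/\nu$. A stdlib-only check (the values of $\lambda_0$, $c$, $Z_0$ are arbitrary):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

lam0, c = 1.3, 0.7

# Protruding-piece bound in D = 4:
#   int d^3 y (lam0 + c|y|)^{-4} = 4 pi int_0^oo r^2 (lam0 + c r)^{-4} dr
#                                = 4 pi / (3 lam0 c^3).
# Compactify the radial integral with r = t/(1-t):
f1 = lambda t: (t / (1 - t)) ** 2 / (lam0 + c * t / (1 - t)) ** 4 / (1 - t) ** 2
I = 4 * math.pi * simpson(f1, 0.0, 1 - 1e-9)
assert abs(I - 4 * math.pi / (3 * lam0 * c ** 3)) < 1e-6

# Large-Z tail with nu = 2:  int_{Z0}^oo Z^{-1-nu} dZ = Z0^{-nu}/nu.
Z0, nu = 5.0, 2
f2 = lambda t: (Z0 + t / (1 - t)) ** (-1 - nu) / (1 - t) ** 2
tail = simpson(f2, 0.0, 1 - 1e-9)
assert abs(tail - Z0 ** (-nu) / nu) < 1e-8
```

After the substitution $r=t/(1-t)$ both integrands are smooth on $[0,1]$, so a fixed-order rule converges rapidly; this is only a sanity check on the closed forms, not part of the proof.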
We begin by bounding the integral corresponding to some past-most vertex $Y = (\lambda, {\mathbf y})$ in a given tree (e.g., $Y_2$, $Y_4$, or $Y_6$ in figure \[fig:twelve\]). Taking the magnitude of the integrand, this integral takes the form $$I = \int d^{D-1}\mathbf{y}\int_{C_{\mathbf y}} \frac{|d\lambda|}{|\lambda|^D}|\Delta^{\rm reg}(Y',Y)|, \label{Istart}$$ where the notation indicates that the contour $C_{\mathbf y}$ over which we integrate $\lambda$ can depend on the spatial coordinates ${\mathbf y}$. Recall that $\Delta^{\rm reg}(Y',Y)$ behaves like $|Z|^{-\nu}$, $\nu>0$, for large $Z = Y \cdot Y'$. As a result, $|\Delta^{\rm reg}(Y',Y)|$ behaves at most like $Z_R^{-\nu}$, where $Z_R:= {\rm Re}\ Z$. Let us choose some $Z_0$ large enough that for $Z_R > Z_0$ our $|\Delta^{\rm reg}(Y',Y)|$ is bounded by $\alpha Z_R^{-\nu}$ for some real constant $\alpha$. It is now useful to break up the integration domain into several pieces. First consider the portion of $C_{\mathbf{y}}$ noted above that protrudes from the past light cone of ${\rm Re}\ Y'$. The past singularity which has stopped this contour from being deformed further is at $\lambda^{+}_y = \lambda_{y'} + \|\mathbf{y}-\mathbf{y'}\|$ and the corresponding effective past singularity is at $\lambda^{+,{\rm eff}}_y = \lambda^{+}_y - b$ on the complex $\lambda_{y}$-plane. Hence the length of this portion of the contour is bounded by a constant, which is larger than $2b$ because the contour has a finite width. We also find that ${\rm Re}\ \lambda_y \geq \lambda_0 + c\|\mathbf{y}-\mathbf{y}'\|$, where $\lambda_0$ and $c$ are some positive constants, on this portion of the contour. This is because ${\rm Re}\ \lambda_y \geq \lambda_{\rm min}$, where $\lambda_{\rm min}$ is the minimum of the real part of $\lambda_y$ at $\mathbf{y}=\mathbf{y}'$, and because ${\rm Re}\,\lambda^{+,{\rm eff}}_y/\|\mathbf{y}-\mathbf{y'}\|\to 1$ as $\|\mathbf{y}\|\to \infty$.
The contribution to $I$ from the protruding portions is thus bounded by a constant times $B(M)\int d^{D-1}\mathbf{y}(\lambda_{0}+c\|\mathbf{y}\|)^{-D}$. Next consider the contribution to $I$ from the region $0 < Z_R < Z_0$. This is bounded by $\beta B(M)$ times the total measure $\int d^{D-1}\mathbf{y}\int |d\lambda|\,|\lambda|^{-D}$ of this region, where $\beta$ is a constant, assuming that this measure is finite. To see that this is so, consider any point $X$ in the Poincaré patch of real de Sitter space and, furthermore, consider the part of its past light cone that is both within embedding distance $Z_0$ and which also lies to the future of the cosmological horizon. This region is compact and thus has finite volume. Since widening of the contour described above for large ${\rm Re}\ \lambda$ has little effect at large $Z$, for fixed $\epsilon$, we may therefore choose $Z_0$ large enough that the measure of the desired region in complex de Sitter is within, say, a factor of $2$ of the volume of the region just discussed in real de Sitter space. Thus this part of our integral is easily bounded. We can similarly bound the contribution from the region $Z_R > Z_0$. For large enough $Z_0$, this contribution is no more than, say, a factor of 2 times the integral of $\alpha Z_R^{-\nu}$ over the region of real de Sitter space lying to the future of the cosmological horizon but more than an embedding distance $Z_0$ to the past of the point $({\rm Re}\ \lambda_{y'},\mathbf{y}')$. To proceed further, one should compute the volume of surfaces lying a constant embedding distance $Z_R$ to the past of the given point but to the future of the cosmological horizon. In the limit of large $Z_R$, this volume turns out to approach the constant $1/(D-1)$. We also note that the proper time difference between the two surfaces at $Z_R$ and $Z_R+dZ_R$ is $dZ_R/Z_R$ for large $Z_R$. 
As a result, for large enough $Z_0$ the contribution from the region ${\rm Re \ Z} > Z_0$ is bounded by, say, $4 \alpha (D-1)^{-1} \int_{Z_0}^{\infty} dZ_R Z_R^{-1-\nu}$. Combining this with our observations above shows that (\[Istart\]) is bounded by some constant $B(I)$ which (for, say, $|\epsilon| < 1$) depends only on the mass of our field and which in particular is independent of both $\epsilon$ and the location of the point $Y'$. As a result, we can bound the integral corresponding to any of the above tree diagrams by $B(I)$ times the integral corresponding to the diagram shortened by cutting off a lowest line. We can clearly repeat this procedure and continue to remove the lowest lines until we are left with no lines at all. Thus, the integral corresponding to each arrowed tree diagram is bounded by $(B(I))^n$, where $n$ is the number of lines in the given tree. Hence the integral for the amplitude $\mathcal{A}_P$ given by (\[ininA\]) is (absolutely) convergent after translating the contours appropriately. Analyticity of the amplitude ${\mathcal A}_{P}$ {#regA} ----------------------------------------------- To complete the argument for equivalence between the Euclidean and Poincaré in-in correlators, we now establish the desired analyticity property of the amplitude ${\mathcal A}_P$, which we have shown above to be well-defined. Specifically, we will show that $\mathcal{A}_P$ is analytic as a function of the conformal times $\tilde\lambda_i$ of the external points if $(\tilde\lambda_1,\ldots,\tilde\lambda_m)\in U = \{(\mu_1,\ldots,\mu_m)\in \mathbb{C}^m: {\rm Im}\ \mu_i < {\rm Im}\ \mu_{i+1}, i=1,\ldots,m-1\}$, or more generally if the imaginary parts of $\tilde\lambda_i$ are all distinct, for any given spatial coordinates $\mathbf{x}_i$. For this purpose we introduce an additional regulator defined by some $s > 0$ and show that the regulated correlators are analytic functions on $U$. 
We then show that this analyticity property persists in the $s\to 0$ limit. Our choice of regulator is straightforward to introduce. We define the amplitude $\mathcal{A}_{P,s}$ for $s > 0$ by simply replacing each (already Pauli-Villars regulated) propagator $\Delta^{\rm reg}(Z)$ with $\Delta^{\rm reg}_s(Z) = \Delta^{\rm reg}(Z-s)$, where these propagators are written as functions of the embedding distance $Z$ defined by (\[invP\]). Note that $s$ is indeed a regulator in the sense that it widens the gap between any pair of past and future singularities such as those shown in figures \[fig:ten\] and \[fig:eleven\]. As a result, any contour that can be used to compute the unregulated ${\mathcal A}_{P}$ can also be used to compute ${\mathcal A}_{P,s}$ for $s > 0$. Thus, contours similar to figure \[fig:nine\] are again allowed for $(\tilde\lambda_1,\ldots,\tilde\lambda_m)\in U$. The (absolute) convergence of the integrals for ${\mathcal A}_{P,s}$ can be established in exactly the same way as in the $s=0$ case. Now consider complex $\tilde{\lambda}_i$-derivatives of ${\mathcal A}_{P,s}$ computed formally by differentiating the integrand, which is a product of propagators, and then integrating over the contours. Our $s$-regularization makes the integrand analytic in an open neighborhood of $(\tilde\lambda_1,\ldots,\tilde\lambda_m)$ with the contours fixed so that complex derivatives of the integrand are well-defined. Furthermore, differentiated propagators are bounded at fixed $s$ and their behavior as $Z\to \infty$ is no worse than that of un-differentiated propagators. Hence the argument for the (absolute) convergence of the integrals defining ${\mathcal A}_{P,s}$ applies equally well to integrals of the differentiated integrands. But absolute convergence guarantees that these latter integrals do in fact give the complex $\tilde{\lambda}_i$-derivatives of ${\mathcal A}_{P,s}$.
It follows that such integrals are well-defined and that each ${\mathcal A}_{P,s}$ is analytic in $U$. Now, since the integrals defining ${\mathcal A}_P$ converge, it is clear that ${\mathcal A}_{P,s}$ tends to ${\mathcal A}_P$ as $s\to 0$. As for the $\lambda_i$-derivative of ${\mathcal A}_{P,s}$, the integrand will be divergent in the $s\to 0$ limit only where the arguments of the differentiated external (regulated) propagator become coincident. However, due to our Pauli-Villars regularization this divergence is very mild and does not spoil absolute convergence. It follows that the $\lambda_i$-derivative of ${\mathcal A}_{P,s}$ has a finite limit as $s\to 0$ which gives the $\lambda_i$-derivative of ${\mathcal A}_{P}$. In particular, these derivatives are well-defined on $U$, so that ${\mathcal A}_{P}$ is analytic in this domain. This completes our step 3. Let us now assemble the facts demonstrated above to establish the equivalence of the Euclidean and Poincaré in-in correlators. The amplitude ${\mathcal A}_P$ and the corresponding Euclidean amplitude, which we call ${\mathcal A}_E$, are both analytic functions of the conformal-time variables $\tilde{\lambda}_i$ of the external points $(\tilde{\lambda}_i,\mathbf{x}_i)$ if $(\tilde\lambda_1,\ldots,\tilde\lambda_m)\in U$ (Step 3). These amplitudes coincide in the limit where the imaginary parts of the conformal-time variables tend to zero if the limits of the external points all lie in the static patch of real de Sitter space. (This was established in two steps: In step 1 we established that ${\mathcal A}_E$ agrees with the static in-in amplitude, and in step 2 we established (rather trivially) that the latter agrees with the Poincaré in-in amplitude if the limits of the external points are all in the static patch of real de Sitter space.) Hence, by uniqueness of analytic continuation[^12], ${\mathcal A}_P = {\mathcal A}_E$ for all $\tilde\lambda_i$ wherever these amplitudes are well-defined.
Then, ${\mathcal A}_P$ and ${\mathcal A}_E$ have, of course, the same limit as ${\rm Im}\ \tilde\lambda_i\to 0$, producing the same physical amplitude for any points $X_i$ in the Poincaré patch. Explicit checks in simple examples {#numerics} ================================== As a check on our arguments, we now explicitly compare the Euclidean and Poincaré in-in results for one-loop corrections to propagators from $\phi^4$ and $\phi^3$ interactions. As the Euclidean computations (including the analytic continuation to Lorentz-signature de Sitter) were performed in [@Marolf:2010zp], we focus on the in-in calculations here. $\phi^4$ correction {#sec:phi4} ------------------- ![The 1-loop corrections to the propagator.[]{data-label="fig:1loop"}](fig13.pdf) Consider the 1-loop correction to the propagator due to an interaction term of the type $\mathcal{L}_{\rm int}[\phi] = -\frac{\lambda}{4!} \phi(X)^4$. The relevant Feynman diagram is shown in Fig. \[fig:1loop\] (a). The in-in correlation function is given by (\[eq:phi4InIn\]). Here $\int_Y\dots$ denotes an integral over the Poincaré patch, and we remind the reader that $\D_{m^2}(X,Y)$, $\D_{m^2}^*(X,Y)$, and $W_{m^2}(X,Y)$ are the time-ordered, anti-time-ordered, and Wightman 2-point functions of the Gaussian theory. It is convenient to let each line in the Feynman diagram have a distinct mass; one may take the limit of equal masses later. This expression has a UV divergence for $D \ge 4$, which we control by using Pauli-Villars regularization. For simplicity, we regulate only the internal lines, though we could of course regulate the external lines as well. To simplify (\[eq:phi4InIn\]) we first note that the regulated Feynman function $\D_{m^2}^{\rm reg}(X,Y)$ evaluated at coincident points is real and independent of position, so $\D_{m^2}^{\rm reg}(Y,Y) = \D_{m^2}^{{\rm reg}\,*}(Y,Y) =: \D_{m^2}^{\rm reg}(1)$.
After removing a common factor of $\D_{m^2_3}^{\rm reg}(1)$ from the integrand, the remaining integral is (\[eq:I\]). This integral can be quickly performed as follows. Consider a theory of two free massive scalar fields $\Phi_{1,2}(X)$ with masses $M_1^2 \neq M_2^2$. We can re-write this theory in terms of two new fields $\phi_{1,2}(X)$ by performing an $SO(2)$ rotation in field space, (\[eq:rotation\]). The fields $\phi_{1,2}(X)$ have masses $m_{1,2}^2$ that are functions of $M_{1,2}^2$ and $\omega$, and also an interaction $-g \phi_1(X)\phi_2(X)$ in the Lagrangian with the coupling $g=(M_1^2-M_2^2)\sin \omega\cos\omega$. Now consider the correlation function $\C{T \phi_1(X_1)\phi_2(X_2)}$. We may compute this correlation function using standard in-in perturbation theory; the term at lowest order in $g$ (or equivalently, in $\omega$) is (\[eq:gI\]). On the other hand, by simply using (\[eq:rotation\]) we can compute $\C{T \phi_1(X_1)\phi_2(X_2)}$ exactly[^13], obtaining (\[eq:trick\]). We can then write $M_1^2$, $M_2^2$ and $\omega$ in terms of $m_1^2$, $m_2^2$ and $g$, expand the right-hand side of (\[eq:trick\]) in a power series in $g$, and equate the $O(g)$ term with the right-hand side of (\[eq:gI\]). The result is the equality (\[eq:Ians\]). Returning to (\[eq:phi4InIn\]), we may use (\[eq:Ians\]) to obtain (\[eq:phi4LFinal\]). It is clear that the same steps can be used to compute the Euclidean expression. The analogue of (\[eq:I\]) then involves only Euclidean propagators, but these are just what are needed to arrive at the analogue of (\[eq:Ians\]). After analytic continuation to real de Sitter space, the result is precisely (\[eq:phi4LFinal\]). We note that the above calculations could be performed equally well using dimensional regularization rather than the Pauli-Villars scheme. In dimensional regularization the computation is performed in an arbitrary real dimension which is sufficiently small that there are no ultraviolet divergences.
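The composition rule (\[eq:Ians\]) obtained from this two-mass trick is the de Sitter analogue of a familiar flat-space fact: in momentum space, the product of two propagators with distinct masses splits into a difference of single propagators by partial fractions. A minimal numerical check of the flat-space identity (illustrative only; the de Sitter version involves position-space propagators):

```python
# Partial-fraction identity behind the two-mass "rotation trick":
#   1/[(p^2+m1^2)(p^2+m2^2)] = [1/(p^2+m1^2) - 1/(p^2+m2^2)] / (m2^2 - m1^2)
m1sq, m2sq = 1.0, 2.5   # illustrative (mass)^2 values
for p2 in (0.0, 0.3, 1.7, 42.0):
    lhs = 1.0 / ((p2 + m1sq) * (p2 + m2sq))
    rhs = (1.0 / (p2 + m1sq) - 1.0 / (p2 + m2sq)) / (m2sq - m1sq)
    assert abs(lhs - rhs) < 1e-12
```

This is why letting each line carry a distinct mass is a convenience rather than a complication: the identity degenerates only in the strictly equal-mass limit, which can be taken at the end.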
As in Pauli-Villars regularization, the values of de Sitter-invariant Green’s functions $\D_{m^2}(1)$, etc., are divergent but de Sitter-invariant constants. By the usual arguments [@Collins:1984xc], the manipulations we performed to derive (\[eq:Ians\]) and its Euclidean analogue are valid for arbitrary real dimension. $\phi^3$ correction {#sec:phi3} ------------------- Next we turn to the 1-loop correction to the propagator that arises from the interaction $\mathcal{L}_{\rm int}[\phi] = - \frac{g}{3!} \phi^3(X)$. The relevant Feynman diagram is shown in Fig. \[fig:1loop\] (b). Once again it is convenient to let each leg of this diagram have a distinct mass. This correction has a UV divergence for spacetime dimension $D \ge 4$. Both to draw on results of [@Marolf:2010zp] and to simplify the arguments, we carry out the computations below using dimensional regularization. In particular, we will compute this correction in arbitrary $D < 2$, then analytically continue $D$ to extend the result to higher dimensions. However, we also explain how similar results (with more complicated explicit forms) can be obtained via Pauli-Villars techniques. It is useful to introduce a so-called linearization formula for the Green’s functions $\D_{m^2}(X,Y)$, $\D_{m^2}^*(X,Y)$ and $W_{m^2}(X,Y)$. We use the variable $\a := (D-1)/2$ to keep track of spacetime dimension and the mass variable $\s$ defined by the equation $-\s(\s+2\a) = m^2\ell^2$. All three Green’s functions are proportional to the Gegenbauer function $C^{\a}_\s(Z)$. The following linearization formula allows us to replace a product of Gegenbauer functions with an integral of a single Gegenbauer function, (\[eq:linearization\]) [@Marolf:2010zp]. In this equation $C^\a_\s(Z)$ is the Gegenbauer function, which is analytic in the complex $Z$ plane cut along $Z \in (-\infty,-1]$. We assume ${\rm Re\,}\s_1 < 0$ and ${\rm Re\,}\s_2 < 0$, which is valid for $m_{1,2}^2 > 0$.
The shorthand $\int_\mu \dots$ denotes a contour integral in the complex $\mu$ plane with measure $d\mu/2\pi i$. The integration contour runs from $-i\infty$ to $+i\infty$ within the strip ${\rm Re}(\s_1+\s_2) < {\rm Re}\,\mu < 0$. Within this strip the integrand is analytic and the contour integral converges absolutely. From (\[eq:linearization\]) we may write the following linearization formula for the Green’s functions, with $H_\s(X,Y)$ standing for $\D_\s(X,Y)$, $\D_\s^*(X,Y)$, or $W_\s(X,Y)$: (\[eq:Hlin\]). Of course, most of the content of (\[eq:Hlin\]) is contained in the details of the function $\rho^\a_{\s_1\s_2}(\mu)$. The explicit form of $\rho^\a_{\s_1\s_2}(\mu)$ can be found in [@Marolf:2010zp][^14]; we will not need it here. We need only note that: 1. $\rho^\a_{\s_1\s_2}(\mu)$ is itself analytic in the region ${\rm Re\,} \mu > {\rm Re}(\s_1+\s_2)$ and that in this region the function behaves at large $|\mu|\gg 1$ like $|\mu|^{2\a-3}\log(\mu)$. In particular, it follows that (\[eq:muInt\]) holds for $m^2 > 0$ with the $\mu$ contour lying to the right of the poles at $\mu = -\a \pm \sqrt{\a^2 - m^2}$ (both of which lie in the left half-plane). 2. The function $\rho^\a_{\s_1\s_2}(\mu)$ is proportional to $\Gamma(2-2\a)$ and so has simple poles as a function of $\a$ at $\a = 1,3/2,2,\dots$. Of course, the left-hand sides of (\[eq:linearization\]) and (\[eq:Hlin\]) are regular for these values of $\a$; the integral over $\mu$ cancels these poles. However, the integral of an arbitrary function of $\mu$ times $\rho^\a_{\s_1\s_2}(\mu)$ will generically not cancel this divergence and so will diverge at these values of $\a$.
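The pole locations quoted in item 1 follow from the mass relation $-\mu(\mu+2\a)=m^2\ell^2$: the roots are $\mu=-\a\pm\sqrt{\a^2-m^2\ell^2}$, and both lie in the left half-plane whenever $m^2>0$, in the complementary ($m^2\ell^2<\a^2$) as well as the principal ($m^2\ell^2>\a^2$) range. A quick check in units $\ell=1$ with the illustrative value $\a=3/2$ (i.e. $D=4$):

```python
import cmath

def mass_poles(alpha, m2):
    """Roots of mu*(mu + 2*alpha) + m2 = 0, i.e. mu = -alpha +/- sqrt(alpha^2 - m2)."""
    root = cmath.sqrt(alpha * alpha - m2)
    return (-alpha + root, -alpha - root)

alpha = 1.5  # D = 4
for m2 in (0.1, 1.0, 2.2, 10.0):   # complementary- and principal-series masses
    mu_plus, mu_minus = mass_poles(alpha, m2)
    # both poles lie strictly in the left half-plane for m2 > 0 ...
    assert mu_plus.real < 0 and mu_minus.real < 0
    # ... and satisfy the defining relation -mu(mu + 2 alpha) = m2
    for mu in (mu_plus, mu_minus):
        assert abs(-mu * (mu + 2 * alpha) - m2) < 1e-12
```

For $m^2\ell^2>\a^2$ the square root is imaginary and both roots sit on the line ${\rm Re}\,\mu=-\a$; for $0<m^2\ell^2<\a^2$ they are real and negative.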
The $O(g^2)$ correction to the propagator in this theory is given in the in-in formalism by the expression (\[eq:phi3InIn\]). The first two terms in (\[eq:phi3InIn\]) contain an integral over $Y_1$, which we call $T_1$. To compute $T_1$ we first use the linearization formula (\[eq:Hlin\]) in each term, then use (\[eq:Ians\]) to integrate over $Y_1$, obtaining (\[eq:T1\]). We compute with $\a < 3/2$, so the final equality follows from (\[eq:muInt\]). The latter two terms in (\[eq:phi3InIn\]) contain the integral over $Y_1$ shown in (\[eq:T2tmp\]), which we call $T_2$. To compute $T_2$ we again use the linearization formula (\[eq:Hlin\]), then use the integral (\[eq:J\]). This integral may be derived in the same manner as $I(X_1,X_2)$ by examining the Wightman correlation function $\C{\phi_1(x_1)\phi_2(x_2)}$ in the $SO(2)$-rotated theory. Inserting (\[eq:J\]) into (\[eq:T2tmp\]) yields (\[eq:T2\]). Once again the last equality follows from (\[eq:muInt\]). Assembling (\[eq:T1\]) and (\[eq:T2\]) we may write the propagator correction as a single integral over $Y_2$. The remaining integral over $Y_2$ may be performed using (\[eq:Ians\]), yielding (\[eq:phi3InInFinal\]). The expected UV divergence of this expression is in the factor $\Gamma(2-2\a)$ contained in $\rho^\a_{\s_1\s_2}(\mu)$. The Euclidean computation is essentially identical, using the analogue of (\[eq:J\]) involving only Euclidean propagators[^15], so that the results agree under analytic continuation as desired. The details of the Euclidean calculation were given in [@Marolf:2010zp], where it is also shown that both the final expression and the counterterms used to render a finite expression in higher dimensions agree with the standard flat-space results in the limit $\ell\to\infty$. One can perform essentially the same computations using Pauli-Villars regularization instead of dimensional regularization. Note that the key steps above were the linearization formula (\[eq:linearization\]), the property (\[eq:muInt\]) of the form factor $\rho^{\alpha}_{\sigma_1\sigma_2}$, and the composition rules (\[eq:Ians\]) and (\[eq:J\]).
But it is clear from the derivation in [@Marolf:2010zp] that a similar linearization formula can be used to express the product of two Pauli-Villars regularized Green’s functions as an integral over (un-regularized) Gegenbauer functions. In this case, the corresponding form factor $\rho^{\alpha,M}_{\sigma_1\sigma_2}$ is manifestly finite for all $\alpha$, but depends on the Pauli-Villars regulator mass $M$. While $\rho^{\alpha,M}_{\sigma_1\sigma_2}$ is analytic as above, it falls off faster at large $\mu$ so that the analogue of (\[eq:muInt\]) is in fact satisfied for all $\alpha$. Expanding any remaining regularized propagators as a sum of un-regularized propagators then allows us to apply the composition rules (\[eq:Ians\]) and (\[eq:J\]) and to complete the calculation. The result is similar to that above with the replacement $\rho^{\alpha}_{\sigma_1\sigma_2} \rightarrow \rho^{\alpha,M}_{\sigma_1\sigma_2}$ and with extra terms coming from the regulators. The Euclidean Pauli-Villars computation proceeds in precisely the same way and again agrees after analytic continuation. Finally, we note that the analogous 1-loop correction to the Wightman function $\langle \phi(X_1)\phi(X_2) \rangle$ of this theory was recently considered by Krotov and Polyakov (see §6 of [@Krotov:2010ma]; the same correlation function is considered in §7, but with respect to a different state). Our result for this correlation function is simply the right-hand side of (\[eq:phi3InInFinal\]) with the replacement $\Delta_\mu(X_1,X_2) \to W_\mu(X_1,X_2)$. It is difficult to compare these two results exactly because the result of [@Krotov:2010ma] has not been renormalized (our renormalized result is presented in [@Marolf:2010zp]). However, we can safely compare the behavior of the two results in the infrared where the effect of renormalization is clear. To compare with [@Krotov:2010ma] we set all masses to be equal. 
Using techniques presented in [@Marolf:2010zp] we find the leading behavior at large $|Z_{12}|\gg 1$ to be (\[eq:Polyakov\]). Here $\delta m^2$ and $\delta \phi$ are the real, divergent coefficients of the mass and field renormalization counterterms which cancel the divergent terms in $\rho^\a_{\s\s}(\s)$. We find the same asymptotic dependence on $Z_{12}$ as [@Krotov:2010ma]; in particular, while the Wightman function of the free theory has two asymptotic branches which decay like $Z_{12}^\s$ and $Z_{12}^{-(\s+2\a)}$, the $O(g^2)$ correction has two asymptotic branches that each decay more slowly by a multiplicative factor of $\log Z_{12}$. The authors of [@Krotov:2010ma] interpret the appearance of the logarithm in the asymptotic behavior (\[eq:Polyakov\]) as an indication of an “infrared correction” to the correlator. Indeed, the logarithm indicates that the 1-loop correction induces an $O(g^2)$ correction to the mass parameter $\s$; as a result, the asymptotic expansion of the correlator is altered in perturbation theory like $(Z_{12})^{\s+O(g^2)} = (Z_{12})^{\s} + O(g^2)\,(Z_{12})^{\s} \log Z_{12} + O(g^4)$. The $O(g^2)$ correction to $\s$ can be computed by performing the sum over 1PI diagrams of the form of Figure \[fig:1loop\] (b). This analysis was performed in detail in [@Marolf:2010zp]. There it was found that, at least for scalar fields with bare masses belonging to the principal series of $SO(D,1)$, the $O(g^2)$ correction to $\s$ has a finite negative real part (equivalently, the correction introduces a finite negative imaginary part to the self-energy) which cannot be removed with a local Hermitian counterterm. Thus the $O(g^2)$ correction unambiguously *increases* the rate of decay of the 1PI-summed correlator so that this correlator decays *faster* than any free Wightman function. This agrees with the analogous computation in flat space, where the 1PI-summed correlator also enjoys an enhanced exponential rate of decay at large separations [@Srednicki:2007qs].
Discussion {#disc} ========== We have shown that Euclidean techniques and in-in perturbation theory on the Poincaré (a.k.a. cosmological) patch of de Sitter yield identical correlation functions for scalar field theories with positive masses. This is in contrast with the situation for the in-in perturbation theory defined by global coordinates on de Sitter, where the corresponding factorization property fails [@Krotov:2010ma] and the in-in scheme contains infra-red divergences. Our equivalence holds diagram by diagram and for any finite value of appropriate Pauli-Villars regulator masses. It thus also holds for the fully renormalized diagrams. While we focussed on non-derivative interactions, interactions involving derivatives can be handled in precisely the same way so long as additional Pauli-Villars subtractions are made as described in section \[prop\]. We used a 3-step argument in the main text, though a more direct analytic continuation is described in appendix A. As a check on the above arguments, we also explicitly calculated the one-loop propagator corrections due to both $\phi^3$ and $\phi^4$ interactions for all masses and in all dimensions in section \[numerics\]. The Poincaré in-in and Euclidean calculations agreed precisely [^16]. We suspect that methods similar to those used in section \[numerics\], perhaps combined with Mellin-Barnes techniques as in [@Marolf:2010nz], could be used to give a rather direct diagram-by-diagram proof of the equivalence of Euclidean and Poincaré in-in techniques, but we have not explored the details. A number of points merit further discussion. First, some physicists have conjectured that in-in calculations in the Poincaré patch lead to IR divergences, even for fields with $m^2 > 0$ due to contributions with vertices at large conformal time $\lambda$. But there are clearly no such divergences in Euclidean signature. So how can the two forms of perturbation theory agree diagram by diagram? 
We believe that, if there are such divergences, they are better classified as ultra-violet (UV) divergences and are associated with the fact that the limit $\lambda \rightarrow \infty$ defines a null surface (the cosmological horizon), so that light-cone singularities can arise even at what appear to be large separations between points. To a certain extent, the classification of these divergences as UV or IR in the cosmological patch may be a matter of semantics. What is important is that any divergences may be cancelled using only local counter-terms. This much is clear from our analysis: we have seen that adding a Pauli-Villars regulator $M^2$ removes all divergences, and that the in-in and Euclidean calculations agree at all finite values of $M^2$. This means that they have the same divergence structure in the limit $M^2 \rightarrow \infty$, and that divergences can be removed using the same sets of counter-terms. But all divergences for massive theories on $S^D$ are clearly ultra-violet in nature and so are the same as on ${\mathbb R}^D$; local counter-terms suffice to remove them. Second, the reader will recall that the argument given in section \[factorization\] to show factorization (i.e., that the vertical sections of the contour at infinite past may be neglected) required the propagators to fall off at large timelike separations. Without such fall-off, the two formalisms should not agree. Instead, analytic continuation of the Euclidean perturbation theory would give the terms of the in-in formalism, together with terms associated with integrals over some contour at infinity in the complex $t$-plane. How then should we interpret this disagreement? If the propagators do not fall off at large times, then integrals over the contour at infinity will generally diverge. Thus, one would expect at most one formalism to give finite results. Let us suppose that the Euclidean formalism is well-defined and finite. 
If one can establish the appropriate positivity properties, then analytic continuation will define a good quantum state. In this case, it would appear that any divergences of the in-in formalism are an unphysical artifact of this particular perturbative framework, and one might hope to better relate the two formalisms through an appropriate resummation of the divergent in-in formalism. There is some potential for this scenario to hold in perturbative gravity. For example, the tree-level three-point correlator constructed in momentum space by Maldacena in [@Maldacena:2002vr] is IR divergent when inverse Fourier-transformed to position space. On the other hand, the three-point function constructed on $S^D$ using Euclidean propagators would have no IR divergences (see, e.g., [@Higuchi:2000ge; @Higuchi:2001uv] for $D=4$). It may therefore be interesting to re-examine the three-point function in Euclidean gravity. However, we note that some physicists have raised objections to these propagators [@Miao:2009hb; @Miao:2010vs]. In addition, at least with generic gauge choices, the Euclidean gravitational action is not bounded below (though see [@loll]). This means that one cannot rely on Osterwalder-Schrader arguments [@GJ] to guarantee that analytic continuation of the Euclidean correlators defines a positive-definite Hilbert space, and positivity would need to be verified. The other possibility when propagators do not fall off is that both forms of perturbation theory are ill-defined. This is the case for massless scalars on de Sitter. But even here the divergences can be an artifact of the particular scheme for perturbation theory. In [@Rajaraman:2010xd], Rajaraman showed that, in the presence of a $\phi^4$ interaction with positive coefficient in the Hamiltonian, the Euclidean scheme can be resummed to give a new well-defined perturbation theory. 
Since the Euclidean action is bounded below, the resulting Euclidean correlators will satisfy reflection-positivity and can be analytically continued to give a good state of the Lorentzian theory. We close with a brief comment on other generalizations. Recall that our first step was to verify that the usual connection between Euclidean methods and thermal in-in field theory on a static spacetime holds in the context of the de Sitter static patch. It is clear that similar arguments will hold in the static regions of generic spacetimes with bifurcate Killing horizons, so long as the propagators again fall off sufficiently quickly at large separations. For a particularly amusing application, consider the standard Minkowski space correlators (in the Minkowski vacuum) for which the usual perturbation theory integrates the vertices of Feynman diagrams over all of Minkowski space. We now see that, so long as their arguments are taken to lie in, say, the right Rindler wedge, these correlators can in fact be computed using in-in perturbation theory in the Rindler wedge, and thus by integrating vertices of the in-in diagrams only over this Rindler wedge. One would expect this fact to be well-known, but we have been unable to find any discussions in the literature. Acknowledgements {#acknowledgements .unnumbered} ---------------- We thank Chris Fewster, Stefan Hollands, Ian McIntosh, Sasha Polyakov, Arvind Rajaraman, Albert Roura, Mark Srednicki and Takahiro Tanaka for useful discussions and correspondence. DM and IM were supported in part by the US National Science Foundation under NSF grant PHY08-55415 and by funds from the University of California. AH thanks the Astro-Particle Theory and Cosmology Group and Department of Applied Mathematics at University of Sheffield and Physics Department at UCSB for kind hospitality while part of this work was carried out. His work at UCSB was supported by a Royal Society International Travel Grant. 
Direct analytic continuation ============================ If an analytic function $f(z_1,\ldots,z_N)$ of $N$ variables is integrated over a real $N$-dimensional compact surface ${\mathcal S}$ with no boundary in $\mathbb{C}^N$ as $$I = \int_{\mathcal S} f(z_1,\ldots,z_N)dz_1\wedge \cdots \wedge dz_N,$$ we have $I=0$ as long as $f$ has no singularities on or inside ${\mathcal S}$ because the differential form $f(z_1,\ldots,z_N)dz_1\wedge \cdots \wedge dz_N$ is closed. This generalization of Cauchy’s theorem can be used for the analytic continuation of correlators in the Euclidean formalism to those in the Poincaré in-in formalism. In either formalism the integration is over a manifold of the form $M^n$, where $M$ is a real $D$-dimensional surface in the complexified sphere $\mathbb{S}^D$, and where $n$ is the number of internal vertices. We showed in section \[factorization\] that we can take $M=C_E\times S_h^{D-1}$ where $C_E$ is a contour similar to that shown in figure \[fig:six\] and where $S_h^{D-1}$ is a $(D-1)$-dimensional half-sphere in the Euclidean formalism. On the other hand, we take $M=C_P\times \mathbb{R}^{D-1}$ where $C_P$ is a contour on the complex $\lambda$-plane with measure $d\lambda/\lambda^D$ (see figure \[fig:nine\]) in the Poincaré in-in formalism. The generalized Cauchy’s theorem together with the regularization of the propagator in section \[regA\] can be used to show that the amplitude, which is an integral over $M^n$, is analytically continued as an analytic function of the external points on $\mathbb{S}^D$ if $M$ can be deformed, with the external points moving and remaining on $M$, without letting it cross any singularities of the integrand[^17]. In this appendix we demonstrate that this deformation of the surface $M$ of integration from $S_1=C_E\times S_h^{D-1}$ (Euclidean formalism) to $S_2= C_P\times\mathbb{R}^{D-1}$ (Poincaré in-in formalism) can indeed be achieved. We start with the surface $S_1$ for the Euclidean formalism. 
It can be given in Poincaré coordinates as follows: $$S_1 = \{(\Lambda e^{i\tau},\mathbf{X}e^{i\tau}): \tau \in (-\epsilon,\epsilon),\,\, \Lambda^2 - \|\mathbf{X}\|^2 = f(\tau)>0,\,\,\Lambda >0, \mathbf{X}\in \mathbb{R}^{D-1}\}, \label{app:S1}$$ where $f(\tau)\to \infty$ as $\tau \to \pm\epsilon$. This can be shown using the following relationship between the static and Poincaré coordinates, $(t,\theta,\hat{X})$ with $\hat{X}\cdot\hat{X} = 1$, and $(\lambda,\mathbf{x})$, respectively: $$\begin{aligned} e^{-2t} & = \lambda^2-\mathbf{x}\cdot\mathbf{x},\\ \hat{X}^i\sin\theta & = x^i/\lambda.\end{aligned}$$ On the other hand the contour in figure \[fig:nine\], which is the $\lambda$-contour for the Poincaré in-in formalism before taking the limit ${\rm Im}\ \lambda\to 0$, corresponds to $$S_2=\{(\left[f(\tau)\right]^{1/2}+ i\tau,\mathbf{X}): \tau \in(-\epsilon,\epsilon),\,\,\mathbf{X}\in\mathbb{R}^{D-1}\}.$$ If the points $X_1=(\lambda_1,\mathbf{x}_1)$ and $X_2 = (\lambda_2,\mathbf{x}_2)$ are the arguments of a propagator, then (\[invP\]) shows that it is singular if and only if $$(X_1-X_2)^2 = -(\lambda_1-\lambda_2)^2 + (\mathbf{x}_1-\mathbf{x}_2)\cdot(\mathbf{x}_1-\mathbf{x}_2) = 0. \label{app:zero}$$ It can readily be seen that this equation is not satisfied by any pair of distinct points on $S_1$ or $S_2$. Since the integrand is a product of propagators with arguments on the surface of integration, what we need to show is that there is a continuous deformation from $S_2$ to $S_1$ such that no intermediate surfaces contain two distinct points satisfying (\[app:zero\])[^18]. We note that, if the vector ${\rm Im}\ X_1 - {\rm Im}\ X_2$ is timelike, then (\[app:zero\]) does not hold. First consider the following one-parameter family of surfaces: $$S_{2,\gamma} = \{(\Lambda + i\tau,\mathbf{X}): \Lambda^2 - \gamma \|\mathbf{X}\|^2 = f(\tau)\},$$ where $f(\tau)$ is the same positive function as in (\[app:S1\]) and where $0 \leq \gamma \leq 1$. Note that $S_{2,0} = S_2$. 
For any two points $X_j = (\Lambda_j+i\tau_j,\mathbf{X}_j)$, $j=1,2$, on $S_{2,\gamma}$, we have $${\rm Im}\ X_1 - {\rm Im}\ X_2 = (\tau_1-\tau_2,\mathbf{0}),$$ which is timelike if $\tau_1\neq \tau_2$. If $\tau_1 = \tau_2$, then $(X_1 - X_2)^2>0$ because $X_1=(\Lambda_1,\mathbf{X}_1)$ and $X_2=(\Lambda_2,\mathbf{X}_2)$ are both on the hyperboloid $\Lambda^2 - \gamma\|\mathbf{X}\|^2 = f(\tau_1)$ with $0\leq \gamma\leq 1$. Thus, the deformation of $S_2$ to $S_{2,1}$ leads to analytic continuation of the integral. Next we consider the following two-parameter family of surfaces: $$S_{(\alpha,\beta)} = \{((\Lambda + i\alpha \tau)e^{i\beta \tau}, \mathbf{X}e^{i\beta\tau}): \Lambda^2 - \|\mathbf{X}\|^2 = f(\tau)\},$$ where $0\leq \alpha,\beta \leq 1$. We note that $S_{2,1}=S_{(1,0)}$ and $S_1 = S_{(0,1)}$. Consider two points on $S_{(\alpha,\beta)}$: $$\begin{aligned} X_1 & = ((\Lambda_1+i\alpha\tau_1)e^{i\beta\tau_1},\mathbf{X}_1e^{i\beta\tau_1}),\\ X_2 & = ((\Lambda_2+i\alpha\tau_2)e^{i\beta\tau_2},\mathbf{X}_2e^{i\beta\tau_2}).\end{aligned}$$ Define $\tilde{X}_j :=e^{-i\beta\tau_2}X_j$, $j=1,2$. Since (\[app:zero\]) is invariant under multiplication of $X_1$ and $X_2$ by a common factor, it is not satisfied if ${\rm Im}\ \tilde{X}_1 - {\rm Im}\ \tilde{X}_2$ is timelike. We find $${\rm Im}\ \tilde{X}_1 - {\rm Im}\ \tilde{X}_2 = (\Lambda_1\sin\beta(\tau_1-\tau_2)+ \alpha\tau_1\cos\beta(\tau_1-\tau_2) - \alpha \tau_2, \mathbf{X}_1 \sin\beta(\tau_1-\tau_2)).$$ If $\epsilon$ is sufficiently small — recall $|\tau_1|,|\tau_2| < \epsilon$ — then this vector is timelike for $0\leq \alpha, \beta\leq 1$ provided that at least one of them is nonzero and that $\tau_1\neq \tau_2$. If $\tau_1=\tau_2$, then we have $(\tilde{X}_1-\tilde{X}_2)^2>0$ because $\tilde{X}_1=(\Lambda_1,\mathbf{X}_1)$ and $\tilde{X}_2=(\Lambda_2,\mathbf{X}_2)$ are both on the hyperboloid $\Lambda^2 - \|\mathbf{X}\|^2 = f(\tau_1)$. 
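To see explicitly why small $\epsilon$ suffices, expand to first order in $\tau_1,\tau_2$ (schematically, suppressing the growth of $f(\tau)$ near $\tau = \pm\epsilon$):
$${\rm Im}\ \tilde{X}_1 - {\rm Im}\ \tilde{X}_2 = \left((\Lambda_1\beta + \alpha)(\tau_1-\tau_2),\, \mathbf{X}_1\,\beta(\tau_1-\tau_2)\right) + O(\epsilon^2),$$
whose norm is
$$-\left[(\Lambda_1\beta+\alpha)^2 - \beta^2\|\mathbf{X}_1\|^2\right](\tau_1-\tau_2)^2 + O(\epsilon^3) < 0,$$
since $\Lambda_1 > \|\mathbf{X}_1\|$ on the hyperboloid $\Lambda^2 - \|\mathbf{X}\|^2 = f(\tau) > 0$ and $\alpha,\beta \geq 0$ with at least one of them nonzero; the vector is therefore timelike.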
Thus, the deformation of $S_{2,1}$ to $S_1$ gives analytic continuation of the integral if $\epsilon$ is sufficiently small, and, hence, so does the deformation of $S_2$ to $S_1$. When combined with the various convergence and fall-off arguments from the main text, this result implies that the correlators computed using the contour of figure \[fig:nine\] in the Poincaré in-in formalism are equal to the corresponding analytic continuation of the Euclidean correlators. [99]{} B. Allen, “Vacuum States In De Sitter Space,” Phys. Rev. D [**32**]{}, 3136 (1985). A. A. Starobinsky, “Spectrum of relict gravitational radiation and the early state of the universe,” JETP Lett. [**30**]{}, 682 (1979) \[Pisma Zh. Eksp. Teor. Fiz. [**30**]{}, 719 (1979)\]. E. Mottola, “Particle Creation In De Sitter Space,” Phys. Rev. D [**31**]{}, 754 (1985); E. Mottola, “Fluctuation-dissipation theorem in general relativity and the cosmological constant,” [*Physical Origins of Time Asymmetry*]{} (Cambridge, Cambridge Univ. Press 1993) ed. by J. J. Halliwell et al., pp. 504-515; I. Antoniadis, P. O. Mazur and E. Mottola, “Cosmological dark energy: Prospects for a dynamical theory,” New J. Phys. [**9**]{}, 11 (2007) \[arXiv:gr-qc/0612068\]; E. Mottola, “New Horizons in Gravity: The Trace Anomaly, Dark Energy and Condensate Stars,” arXiv:1008.5006 \[gr-qc\]. B. L. Hu and D. J. O’Connor, “Infrared Behavior And Finite Size Effects In Inflationary Cosmology,” Phys. Rev. Lett. [**56**]{}, 1613 (1986). B. L. Hu and D. J. O’Connor, “Symmetry Behavior in Curved Space-Time: Finite Size Effect and Dimensional Reduction,” Phys. Rev. D [**36**]{}, 1701 (1987). N. C. Tsamis and R. P. Woodard, “Relaxing The Cosmological Constant,” Phys. Lett. B [**301**]{}, 351 (1993); N. C. Tsamis and R. P. Woodard, “Strong infrared effects in quantum gravity,” Annals Phys. [**238**]{}, 1 (1995); N. C. Tsamis and R. P. Woodard, “Quantum Gravity Slows Inflation,” Nucl. Phys. 
B [**474**]{}, 235 (1996) \[arXiv:hep-ph/9602315\]; “The quantum gravitational back-reaction on inflation,” Annals Phys.  [**253**]{}, 1 (1997) \[arXiv:hep-ph/9602316\]; “Stochastic quantum gravitational inflation,” Nucl. Phys.  B [**724**]{}, 295 (2005) \[arXiv:gr-qc/0505115\]. A. M. Polyakov, “De Sitter Space and Eternity,” Nucl. Phys.  B [**797**]{}, 199 (2008) \[arXiv:0709.2899 \[hep-th\]\]. G. Perez-Nadal, A. Roura, E. Verdaguer, “Backreaction from non-conformal quantum fields in de Sitter spacetime,” Class. Quant. Grav.  [**25**]{}, 154013 (2008). \[arXiv:0806.2634 \[gr-qc\]\]. M. Faizal and A. Higuchi, “On the FP-ghost propagators for Yang-Mills theories and perturbative quantum gravity in the covariant gauge in de Sitter spacetime,” Phys. Rev.  D [**78**]{}, 067502 (2008) \[arXiv:0806.3735 \[gr-qc\]\]. E. T. Akhmedov, P. V. Buividovich, “Interacting Field Theories in de Sitter Space are Non-Unitary,” Phys. Rev.  [**D78**]{}, 104005 (2008). \[arXiv:0808.4106 \[hep-th\]\]. A. Higuchi, “Decay of the free-theory vacuum of scalar field theory in de Sitter spacetime in the interaction picture,” Class. Quant. Grav.  [**26**]{}, 072001 (2009) \[arXiv:0809.1255 \[gr-qc\]\]. A. Higuchi and Y. C. Lee, “A conformally-coupled massive scalar field in de Sitter expanding universe with the mass term treated as a perturbation,” arXiv:0903.3881 \[gr-qc\]. E. T. Akhmedov, “Real or Imaginary? (On pair creation in de Sitter space),” \[arXiv:0909.3722 \[hep-th\]\]. A. M. Polyakov, “Decay of Vacuum Energy,” arXiv:0912.5503 \[hep-th\]. C. P. Burgess, R. Holman, L. Leblond and S. Shandera, “Breakdown of Semiclassical Methods in de Sitter Space,” arXiv:1005.3551 \[hep-th\]. S. B. Giddings and M. S. Sloth, “Semiclassical relations and IR effects in de Sitter and slow-roll space-times,” arXiv:1005.1056 \[hep-th\]. D. Krotov, A. M. Polyakov, “Infrared Sensitivity of Unstable Vacua,” \[arXiv:1012.2107 \[hep-th\]\]. P. 
Hájíček, “A new generating functional for expectation values of field operators,” Bern preprint, 1978 (unpublished). B. S. Kay, “Linear spin-zero quantum fields in external gravitational and scalar fields. II. Covariant perturbation theory,” Commun. Math. Phys. [**71**]{}, 29 (1980). R. D. Jordan, “Effective Field Equations for Expectation Values,” Phys. Rev. [**D33**]{}, 444-454 (1986). E. Calzetta, B. L. Hu, Phys. Rev. [**D35**]{}, 495 (1987). J. B. Hartle and S. W. Hawking, “Path Integral Derivation Of Black Hole Radiance,” Phys. Rev. D [**13**]{}, 2188 (1976). D. Marolf and I. A. Morrison, “The IR stability of de Sitter: Loop corrections to scalar propagators,” Phys. Rev. [**D82**]{}, 105032 (2010) \[arXiv:1006.0035 \[gr-qc\]\]. D. Marolf, I. A. Morrison, “The IR stability of de Sitter QFT: results at all orders,” arXiv:1010.5327 \[gr-qc\]. S. Hollands, “Correlators, Feynman diagrams, and quantum no-hair in deSitter spacetime,” \[arXiv:1010.5367 \[gr-qc\]\]. A. Rajaraman, “On the proper treatment of massless fields in Euclidean de Sitter space,” arXiv:1008.1271 \[hep-th\]. D. Schlingemann, “Euclidean field theory on a sphere,” arXiv:hep-th/9912235. J. Glimm and A. Jaffe, [*Quantum Physics*]{} (Springer-Verlag, New York, 1987), sections 6.1 and 10.4. G. W. Gibbons, M. J. Perry, “Black Holes and Thermal Green’s Functions,” Proc. Roy. Soc. Lond. [**A358**]{}, 467-494 (1978). G. W. Gibbons, S. W. Hawking, “Cosmological Event Horizons, Thermodynamics, and Particle Creation,” Phys. Rev. [**D15**]{}, 2738-2751 (1977). J. S. Schwinger, “Brownian motion of a quantum oscillator,” J. Math. Phys. [**2**]{}, 407-432 (1961). L. V. Keldysh, “Diagram technique for nonequilibrium processes,” Zh. Eksp. Teor. Fiz. [**47**]{}, 1515-1527 (1964). N. P. Landsman and C. G. van Weert, “Real and Imaginary Time Field Theory at Finite Temperature and Density,” Phys. Rept. [**145**]{}, 141 (1987). T. S. Bunch, P. C. W. 
Davies, “Quantum Field Theory in de Sitter Space: Renormalization by Point Splitting,” Proc. Roy. Soc. Lond. [**A360**]{}, 117-134 (1978). B. Allen and T. Jacobson, “Vector Two Point Functions In Maximally Symmetric Spaces,” Commun. Math. Phys. [**103**]{}, 669 (1986). N. Ya. Vilenkin and A. U. Klimyk, [*Representations of Lie Groups and Special Functions*]{}, vols. 1-3 (Dordrecht: Kluwer Acad. Publ., 1991-1993). R. Camporesi and A. Higuchi, “Stress Energy Tensors In Anti-De Sitter Space-Time,” Phys. Rev. D [**45**]{}, 3591 (1992). R. F. Streater and A. S. Wightman, [*PCT, Spin and Statistics, and All That*]{} (Redwood City, USA: Addison-Wesley, 1989). J. C. Collins, [*Renormalization: An Introduction To Renormalization, The Renormalization Group, And The Operator Product Expansion*]{} (Cambridge, UK: Univ. Pr., 1984). M. Srednicki, [*Quantum Field Theory*]{} (Cambridge, UK: Univ. Pr., 2007). J. M. Maldacena, “Non-Gaussian features of primordial fluctuations in single field inflationary models,” JHEP [**0305**]{}, 013 (2003) \[arXiv:astro-ph/0210603\]. A. Higuchi and S. S. Kouris, “On the scalar sector of the covariant graviton two-point function in de Sitter spacetime,” Class. Quant. Grav. [**18**]{}, 2933 (2001) \[arXiv:gr-qc/0011062\]. A. Higuchi and S. S. Kouris, “The covariant graviton propagator in de Sitter spacetime,” Class. Quant. Grav. [**18**]{}, 4317 (2001) \[arXiv:gr-qc/0107036\]. S. P. Miao, N. C. Tsamis and R. P. Woodard, “Transforming to Lorentz Gauge on de Sitter,” J. Math. Phys. [**50**]{}, 122502 (2009) \[arXiv:0907.4930 \[gr-qc\]\]. S. P. Miao, N. C. Tsamis and R. P. Woodard, “De Sitter Breaking through Infrared Divergences,” J. Math. Phys. [**51**]{}, 072503 (2010) \[arXiv:1002.4037 \[gr-qc\]\]. A. Dasgupta and R. Loll, “A Proper time cure for the conformal sickness in quantum gravity,” Nucl. Phys. [**B606**]{}, 357-379 (2001) \[hep-th/0103186\]. 
[^1]: This has been rigorously shown in $D=2$ dimensions for standard kinetic terms and polynomial potentials; see e.g., [@GJ]. [^2]: This follows immediately from the fact that the Poincaré patch is a homogeneous space in and of itself. Any spacetime point in the patch can be mapped to any other using only the symmetries of the patch. [^3]: These are poles if $D$ is even and if the scalar is conformally coupled and massless. [^4]: For a general contour $C$, we will refer to the associated diagrams below as Feynman diagrams, even though they may sometimes involve Dyson (or other) propagators as noted above. [^5]: Continuity of the integrand is clear from the regularization scheme. Continuity of the result of the time integrations follows from the fact that these integrals converge absolutely. This in turn follows from the same estimates used to show factorization above. [^6]: \[Esreg\] The reader may ask if the analytic continuation of a full diagram (after all integrals, including space integrals, have been performed) coincides with the result described above (in which the integrand is first continued, before performing the spatial integrals). The potential obstacle is the fact that spatial coordinates will necessarily coincide somewhere during the integrals over space, and such coincidences shrink the windows (used to enact the analytic continuation above) between past- and future-branch cuts to zero size. One may show that this is not an issue by performing a further regularization in which all propagators $\Delta(Z)$ are replaced by $\Delta(Z-s)$ for some positive $s$. This regularization maintains windows of finite size even at coincidence. Furthermore, so long as one drags the contour along with the external point as described above, one finds that the resulting integral is analytic in the external time variables for all positive $s$ on the domain where the external times have distinct imaginary parts. 
Then we find that the full diagram is analytic at $s=0$, and its analytic continuation is given by the prescription above. The argument is very similar to that given in Section \[regA\] to establish the analyticity of the Poincaré in-in correlators in the conformal-time variables. [^7]: \[tree\]For certain spatial coordinates, our contour will encounter two singularities due to distinct external points $X_{j_1}$ and $X_{j_2}$ at the same time. Since this happens only on a set of spatial coordinates of measure zero, we will ignore such cases and assume below that $Y_{k_1}$ is past-related to only one point, and similarly for other vertices in the diagram. [^8]: \[ldep\]It may be that $X_{j_2} = Y_{k_1}$ for some values of $\lambda_{k_1}$ while for other values $X_{j_2}$ is an external point. In this way, our definition of new past-relations can depend on the positions of integration variables along contours that have already been fixed. It is straightforward to deal with this seeming complication as discussed in footnote \[forest\] below. [^9]: Otherwise the location of point $B$ could not produce singularities in the propagators evaluated at $A$. In particular, our notion of past-relation is [*not*]{} transitive. [^10]: Notice that, since the contour is separated from the past singularities due to external points by a finite distance for large $|\lambda|$, the $Z$ in (\[invP\]) for an external propagator is bounded away from $1$ as $|\lambda|\to \infty$. This means that a differentiated external propagator is bounded on the contours and that the proof for convergence of $\mathcal{A}_P$ below can be used virtually unaltered for the derivative of $\mathcal{A}_P$. [^11]: \[forest\] Since past-relations depend on both the spatial coordinates and the conformal time coordinates of the previously-fixed contours (see footnote \[ldep\]), the tree structure exhibits a similar dependence. 
It would therefore be better to say that each amplitude can be written as a finite sum of products of tree amplitudes, where the amplitudes for any given term in the product are integrated only over some subset of the spacetime coordinates. But since we wish only to establish absolute convergence of the amplitude, it does no harm to extend the spatial integrations for each tree to the full space ${\mathbb R}^{n(D-1)}$ and each $\lambda$-integration over the whole of the appropriate contour, and to then abuse language by referring to the amplitude as a ‘product’ of tree amplitudes without mentioning the remaining sum explicitly. [^12]: Here, we are using the agreement of ${\mathcal A}_P$ and ${\mathcal A}_E$ on an open subset of a real section, $B=\{(\mu_1,\ldots,\mu_m)\in \mathbb{C}^m:{\rm Im}\ \mu_i=0, i=1,\ldots,m\}$, on the boundary of the region of analyticity $U$ to conclude ${\mathcal A}_P={\mathcal A}_E$ in $U$. This is a simple corollary of Bogolubov’s edge-of-the-wedge theorem (see, e.g., Theorem 2-17 in [@Streater:1989vi]). [^13]: A truly skeptical reader might ask whether (\[eq:gI\]) must necessarily give the vacuum correlator of the theory defined by (\[eq:rotation\]). But at this order the result must be a Gaussian state invariant under translations, rotations, and the scaling symmetry of the Poincaré patch. This determines the state uniquely, assuming that the results are finite. Finiteness in turn can be shown by either a careful direct analysis or by using the results of [@Higuchi:2009ew] to expand both $m_1^2$ and $m_2^2$ about the conformal coupling value $m_c^2 = \frac{1}{4}D(D-2)$ and then using the explicit calculations of that reference. [^14]: The definition of $\rho^\a_{\s_1\s_2}(\mu)$ used here is $(-2)$ times the $\rho^\a_{\s_1\s_2}(L)$ of that paper. [^15]: The Euclidean analogue of (\[eq:J\]) is identical to the Euclidean analogue of (\[eq:I\]). 
[^16]: We have also used a combination of analytic and numerical techniques to check agreement of Poincaré in-in and Euclidean correlators for the tree-level 3-point function for $D=4$ for $m^2 = 2$ (conformal coupling) and also for the one-loop correction to the 4-point function for $D=3$ and $m^2 = 3/4$ (also conformal coupling) evaluated at two pairs of coincident points. Both of these diagrams are finite and require no regularization. Our numerics indicate agreement to at least one part in $10^7$. As these calculations do not yield significant insights, we have refrained from presenting the details. [^17]: We expect that the integrals on all intermediate surfaces can be shown to converge by methods similar to those employed in sections \[factorization\] and \[analyticity\]. [^18]: We can show as in section \[regA\] that coincidence singularities do not spoil the analytic continuation argument here.
--- abstract: | We present the Mediatrix filamentation method, a novel iterative procedure that decomposes elongated objects into filaments along their main direction, following their intensity peaks. From this decomposition, the method measures the object’s length and thickness. This technique is applied in preliminary tests to arc-shaped objects (simulated gravitational arcs) to recover their curvature center. [**Keywords:**]{} image processing, astronomy. --- [**Mediatrix method for filamentation of objects in images:\ application to gravitational arcs**]{} [Clécio R. Bom$^{a,b,}$[^1], Martín Makler$^{a,b}$, Marcelo P. Albuquerque$^a$ ]{} [$^a$Centro Brasileiro de Pesquisas Físicas\ Rua Dr. Xavier Sigaud, 150, 22290-180 Rio de Janeiro – RJ, Brazil\ $^b$Laboratório Interinstitucional de e-Astronomia - LIneA\ Rua Gal. José Cristino 77, Rio de Janeiro – RJ - 20921-400, Brazil ]{} **1. INTRODUCTION** Shape analysis and detection are fundamental issues in image processing. For curved shapes, it is of interest to define a curvature, with an associated center and radius. We may find this kind of shape, among others, in astronomical images. The quintessential example in this context is that of gravitational arcs. These objects are the result of the strong lensing effect [@Schneider; @Mollerach], which occurs when a distant source is aligned with some intervening distribution of matter along the trajectory of light towards us. This matter distribution distorts the space-time, acting like a lens. The images of astronomical objects are distorted and magnified, forming the curved shapes mentioned above. We have developed a novel technique to decompose images of elongated objects into a set of filaments, measure their length and width, and assign a curvature center. This paper is organized as follows: in section 2, we introduce the Mediatrix filamentation method, explaining its basic elements and procedure. 
Section 3 presents the measurements derived from the Mediatrix method. In section 4, we apply the method to simulated arcs and discuss the results. In section 5 we present our concluding remarks and future perspectives. **2. MEDIATRIX FILAMENTATION METHOD \[method\]** In the following we describe the method to decompose elongated objects into a set of line segments. For concreteness we consider the object to be composed of a set of pixels with given intensities, as in a digital image. However, in principle, the method can be applied to any intensity distribution, even if not pixelated. The only requirement is that the object have a clearly defined boundary; in other words, the intensity must be zero outside the object. The Mediatrix method was originally designed to search for gravitational arcs. It was inspired by a basic geometrical property of the perpendicular bisectors of pairs of points on a circle, namely that these lines, for any set of pairs of points, intersect at the circle center. Therefore, if an elongated object can be decomposed into a set of points along its longer direction, and if this object has a shape close to an arc segment, the perpendicular bisectors of pairs of these points will intersect at nearby points (i.e., close to the center of curvature). It turns out, however, that this method can be used to assign segments along the longer direction of any elongated object, i.e., to “filament” the object, or to determine its “spine”, regardless of the presence of curvature. The key procedure to filament the object is to recursively obtain the perpendicular bisector of pairs of points on the image. 
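As an illustration, this recursive construction can be sketched in Python for a pixelated object. This is a toy sketch with helper names of our own choosing, not the SLtools implementation; it assumes a pixel scale of 1 and an intensity map given as a dictionary.

```python
import math

def mediatrix_keydots(pixels, e1, e2, n, alpha=math.sqrt(2) / 2):
    """Toy sketch of the Mediatrix recursion (not the SLtools code).

    pixels : dict mapping (x, y) -> intensity, pixel scale = 1
    e1, e2 : extreme points E1, E2 of the object
    n      : number of iteration levels
    Returns the ordered keydots: E1, the Mediatrix points, E2.
    """
    def brightest_near_bisector(p1, p2):
        # The perpendicular bisector of P1P2 passes through the midpoint
        # of the segment and has the segment direction as its normal.
        (x1, y1), (x2, y2) = p1, p2
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2
        dx, dy = x2 - x1, y2 - y1
        norm = math.hypot(dx, dy)
        best, best_i = None, float("-inf")
        for (px, py), intensity in pixels.items():
            # distance from the pixel to the bisector line
            d = abs((px - mx) * dx + (py - my) * dy) / norm
            if d <= alpha and intensity > best_i:   # d <= alpha * pixel scale
                best, best_i = (px, py), intensity
        return best

    keydots = [e1, e2]
    for _ in range(n):                  # each level bisects every segment
        refined = [keydots[0]]
        for a, b in zip(keydots, keydots[1:]):
            refined += [brightest_near_bisector(a, b), b]
        keydots = refined
    return keydots                      # 2**n + 1 points after n levels

# A straight horizontal "object": the keydots land on its spine.
bar = {(x, 0): 1.0 for x in range(5)}
print(mediatrix_keydots(bar, (0, 0), (4, 0), n=2))
# -> [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
```

For a genuinely curved object, the same recursion traces the intensity ridge rather than the straight chord between the extremes.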
Given the points $P_1=(x_1,y_1)$ and $P_2=(x_2,y_2)$, the perpendicular bisector is a straight line $y = mx + b$ perpendicular to $\overline{P_1 P_2}$ that crosses the middle point of this segment and whose coefficients are given by: $$\label{coeficiente} m= - \frac{ x_2-x_1}{y_2-y_1},$$ $$\label{coeficiente2} b= \frac{ (y_1 + y_2)-m(x_1+x_2)}{2}.$$ \ The Mediatrix method is a recursive method that operates over several iteration steps. Each step is a new Mediatrix level, and the method can be performed up to an arbitrary level $n$. In the following, we describe the first few levels as an example. In the first step we determine the extreme points, $E_1$ and $E_2$, of the object. Several methods have been considered to determine the extreme points of an object (see, e.g., ref. [@MaxDistTest]). Here we use the “farthest-of-farthest” method, by which $E_1$ is defined as the most distant point from a reference point on the object (e.g., the brightest pixel on the image or its geometrical center), whereas $E_2$ is defined as the pixel on the object farthest from $E_1$. Next, the first perpendicular bisector of these two points is calculated. The first Mediatrix point $M^1$ is defined as the brightest pixel of the object along the perpendicular bisector. In practice, we take the brightest pixel located at a distance $d \leq \alpha \Delta p$ from the perpendicular bisector, where $\Delta p$ is the pixel scale and $\alpha$ is a free parameter chosen as $\alpha =\frac{\sqrt{2}}{2}$. The first Mediatrix point $M^1$ is shown in Fig. 1(A) for an arc-shaped object (more specifically, an ArcEllipse [@arcfitting]). ![Steps in the Mediatrix Filamentation method. After $n$ iterations, the method determines a set of $2^n$ points defined by the maximum of intensity along the $2^n$ perpendicular bisectors and the $2^n$ vectors perpendicular to neighboring points with magnitude given by the distance between these points. 
For clarity, only some points are shown on the figure, which illustrates the steps for $n=3$.[]{data-label="Step1"}](Step_full_il.eps){width="13.5cm"} In the second step, perpendicular bisectors are calculated with respect to the pairs $E_1$, $M^1$ and $M^1$, $E_2$. These two perpendicular bisectors define two new Mediatrix Points, $M^{2}_{1}$ and $M^2_2$, using the same criterion used to define $M^1$ (Fig. 1(B)). The upper index refers to the iteration level and the lower index is a label to identify the points. Proceeding to the third step, presented in Fig. 1(C), we have the set of Mediatrix Points $M^1$, $M^{2}_{1}$, $M^2_2$ and the two extremes $E_1$ and $E_2$. These points are used to define new Mediatrix Points $M^3_i$, obtained by picking the highest-intensity pixel near the perpendicular bisector between two neighboring points. The algorithm continues defining new Mediatrix Points $M^j_i$, corresponding to the $i$-th point in the $j$-th iteration level, in higher iteration levels until reaching a specified final step $n$. In Fig. 1(D), we present the last step for $n=3$ (as in the previous panels, some points were omitted so as not to crowd the figure). The collection of Mediatrix Points together with the two extreme points are named keydots. From the keydots, the object is decomposed into $N=2^n$ segments or filaments. Each segment connects a keydot to its neighbors. The algorithm outputs a set of vectors $\vec{n}_j$, where $j$ varies from $1$ to $N$. Each vector is perpendicular to the segment connecting a keydot to its neighbor, has its origin at the middle point of this segment, and has norm equal to the length of the segment. This is shown in Fig. 1(D) for $\vec{n}_7$, where $|\vec{n}_7|$=$|\overline{M^3_4 M^2_2}|$. **3.
MEASUREMENTS DERIVED FROM THE MEDIATRIX METHOD \[measurements\]** Using the outputs of the proposed algorithm, the object length, $L$, is defined as: $$\label{length} L= \sum^N_{j=1} |\vec{n}_j|.$$ The points $\vec{F}^1_j$ and $\vec{F}^2_j$, represented in Fig. 1(D) for $j=7$, are defined as the two extreme pixels from the set of points along the perpendicular bisector associated to $\vec{n}_j$. Using those points, it is possible to measure the average thickness $W$: $$\label{thickness} W= \frac{1}{N} \sum^N_{j=1} |\vec{F}^1_j-\vec{F}^2_j|.$$ For arcs constructed from circle segments, all perpendicular bisectors intersect at the center of curvature. Therefore, we may define the object center of curvature as the average of the points $C_{ik}=(x_{ik},y_{ik})$ generated by the intersections of the lines in the $\vec{n}_i$ and $\vec{n}_k$ directions. The center coordinates $C_a=(x_a,y_a)$ are given by: $$\label{Media1} x_a= \frac{(N-2)!}{N!} \sum^N_{i=1}\sum^N_{k\neq i} x_{ik}$$ $$\label{Media2} y_a= \frac{(N-2)!}{N!} \sum^N_{i=1}\sum^N_{k\neq i} y_{ik}$$ An alternative way to define a curvature center is to use the median intersection point $C_m=(x_m,y_m)$, where $x_m$ is defined as the median of the values of $x_{ik}$ and $y_m$ is given by the median of the $y_{ik}$. From these two “center of curvature” definitions we may assign curvature radii $R_a$ and $R_m$, associated to $C_a$ and $C_m$ respectively, as the distance from the first Mediatrix filamentation point $M^1$ to $C_a$ or $C_m$, i.e., $R_a= |\overline{M^1 C_a}| $ and $R_m= |\overline{M^1 C_m}| $. The Mediatrix filamentation method was implemented in the Python programming language [@python]. The code was developed as part of the SLtools library[^2]. **4. APPLICATION TO SIMULATED GRAVITATIONAL ARCS \[app\]** As an example of application of the Mediatrix filamentation method, we consider three arc-shaped objects produced through the gravitational lensing effect (see Fig. 2\[ex02\]).
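Before discussing the results, the filamentation loop of Section 2 can be made concrete with a toy implementation (illustrative only; the function names are ours, and the actual SLtools routine `run_mediatrix_decomposition_on_stamp` may differ in detail):

```python
import numpy as np

ALPHA = np.sqrt(2) / 2  # distance cut, in pixels, around each bisector

def mediatrix_point(img, p1, p2):
    """Brightest object pixel within ALPHA pixels of the perpendicular
    bisector of the segment p1-p2 (one Mediatrix step, toy version).
    Points are (x, y) pairs; img is a 2D intensity array."""
    ys, xs = np.nonzero(img > 0)
    mid = (np.asarray(p1, float) + np.asarray(p2, float)) / 2.0
    d = np.asarray(p2, float) - np.asarray(p1, float)
    u = d / np.hypot(d[0], d[1])  # unit vector along the chord
    # distance of each object pixel to the bisector = |(pixel - mid) . u|
    dist = np.abs((xs - mid[0]) * u[0] + (ys - mid[1]) * u[1])
    near = dist <= ALPHA
    if not near.any():
        return tuple(mid)
    k = np.argmax(img[ys[near], xs[near]])
    return (xs[near][k], ys[near][k])

def keydots(img, p1, p2, level):
    """Recursive filamentation: ordered keydots from p1 to p2 after
    `level` Mediatrix iterations (2**level segments, 2**level + 1 dots)."""
    if level == 0:
        return [p1, p2]
    m = mediatrix_point(img, p1, p2)
    left = keydots(img, p1, m, level - 1)
    right = keydots(img, m, p2, level - 1)
    return left[:-1] + right  # merge, dropping the duplicated middle point
```

On a horizontal bar of constant intensity, two iterations return the five equally spaced keydots one expects from $N = 2^2$ segments.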
These arcs were generated using the *AddArcs* [@addarcs] pipeline to simulate gravitational arcs. *AddArcs* is a code that simulates realistic gravitational arcs using the galaxy cluster abundance (i.e., the lenses) provided by cosmological simulations and background galaxies (the sources) with morphological parameters and redshift distribution obtained from the Hubble Ultra Deep Field Survey. The source brightness distribution is modeled by Sérsic profiles [@sersic], which in principle extend to infinity, with elliptical isophotes. Given the input models for the source and the lens, *AddArcs* controls the [*gravlens*]{} software [@gravlens] to perform the gravitational lensing calculations. The segmentation of the simulated image (i.e., the definition of borders) is carried out with the [*SExtractor*]{} [@sextractor] software, in a procedure similar to the one described in [@arcfitting]. To perform the Mediatrix filamentation we used the function\ `run_mediatrix_decomposition_on_stamp` from the `mediatrix_decomposition` module in [*SLtools*]{}, which was developed to apply the method to an image matrix with one isolated arc. The points $(x_{ik},y_{ik})$ are shown for the tested arcs in Fig. 3\[ex03\]. In order to avoid lines that do not intersect or are almost parallel to each other, the algorithm ignores combinations of $\vec{n}_i$ and $\vec{n}_k$ which are almost in the same direction; more specifically, we discard pairs with $|\tan\theta_i-\tan\theta_k|\leq 10^{-3}$, where $\theta_i$ and $\theta_k$ are the angles determined by the $\vec{n}_i$ and $\vec{n}_k$ directions with respect to the $x$ axis. The center of the mass distribution that generated the lensing effect, i.e., the lens center $C_l=(x_l,y_l)$, is given by *AddArcs*. Although from gravitational lensing theory there is no need for the curvature center and the center of mass to coincide, they are usually close.
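The behaviour of these intersection points is easy to reproduce on synthetic data: when the $\vec{n}_j$ directions are exact perpendicular bisectors of chords of a circle, every retained intersection falls on the circle center. The sketch below (our own helper, not the SLtools API) implements the pairwise-intersection estimates $C_a$ and $C_m$ with the near-parallel cut just described:

```python
import itertools
import math
import statistics

def center_estimates(thetas, midpoints, tol=1e-3):
    """Estimate the curvature center from the directions theta_j of the
    n_j vectors and the midpoints they are attached to.  Nearly parallel
    pairs (|tan t_i - tan t_k| <= tol) are discarded, since their
    intersection is numerically unstable."""
    xs, ys = [], []
    for (t1, (px1, py1)), (t2, (px2, py2)) in itertools.combinations(
            zip(thetas, midpoints), 2):
        m1, m2 = math.tan(t1), math.tan(t2)
        if abs(m1 - m2) <= tol:
            continue  # the near-parallel cut
        b1 = py1 - m1 * px1
        b2 = py2 - m2 * px2
        x = (b2 - b1) / (m1 - m2)  # intersection of y = m1 x + b1, y = m2 x + b2
        xs.append(x)
        ys.append(m1 * x + b1)
    c_a = (statistics.mean(xs), statistics.mean(ys))       # mean center C_a
    c_m = (statistics.median(xs), statistics.median(ys))   # median center C_m
    return c_a, c_m
```

For midpoints placed on a circle and directions pointing through its center, both estimates recover the center to machine precision.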
We define the arc radius $R$ as the distance from the first Mediatrix filamentation point $M^1$ to $C_l$, so $R= |\overline{M^1 C_l}| $. The distances between $C_l$ and the measured centers of curvature $C_m$ and $C_a$ are denoted $\Delta R_m$ and $\Delta R_a$, respectively. We present the relative differences $\Delta R_m/R$ and $\Delta R_a/R$, the length $L$, and the length-to-width ratio $L/W$, as a function of the number of segments $N$, in Table \[table1\] for the $3$ arcs. ![Arcs A, B, and C, respectively, used as input for the Mediatrix method.[]{data-label="ex02"}](arc_shapes.eps){width="7cm"} ![Straight lines defined by the vectors $\vec{n}_j$ and their intersections, for $N=4$. From top to bottom: arcs A, B and C.[]{data-label="ex03"}](ex03v_full.eps){width="9cm"} [ccccc]{}
$N$ & $\frac{\Delta R_m}{R}$ & $\frac{\Delta R_a}{R}$ & $L$ (pixels) & $L/W$\
\
*Arc A:*\
4 & 0.036 & 0.062 & 294.7 & 5.6\
8 & 0.153 & 0.126 & 295.3 & 5.6\
16 & 0.144 & 0.092 & 295.7 & 5.6\
32 & 1.09 & 0.230 & 296.8 & 5.7\
\
*Arc B:*\
4 & 0.100 & 0.096 & 362.4 & 12.6\
8 & 0.211 & 0.68 & 363.1 & 12.7\
16 & 0.166 & 0.170 & 363.3 & 12.7\
32 & 0.307 & 0.168 & 363.9 & 12.7\
\
*Arc C:*\
4 & 0.101 & 0.097 & 192.9 & 5.5\
8 & 0.051 & 0.093 & 193.1 & 5.5\
16 & 0.085 & 0.126 & 193.5 & 5.6\
32 & 0.198 & 0.209 & 194.7 & 5.6\
**5. CONCLUSIONS AND FUTURE APPLICATIONS \[discuss\]** The Mediatrix filamentation method is a technique that decomposes and measures elongated shapes, and may be used in the detection of shapes such as gravitational arcs. The technique can also define an expected center for curved objects. Previous definitions of the curvature center in the gravitational arc characterization problem assumed that the arcs are circular [@arc_catalog]; with the Mediatrix method there is no need for this hypothesis. The $\vec{n}_j$ vectors can be used as input for other morphological estimators for arc-shaped objects. For example, in ref.
[@arcfitting] the Mediatrix output is used as a starting point for a method that fits the brightness distribution of arcs using analytical templates to derive structural parameters of the arcs. Another application is to use parameters derived from the Mediatrix decomposition as input for a neural network to classify objects [@Bom]. In the current case, the extreme points were used as the first Mediatrix step, but other definitions for the first step can be used, e.g., object edges, in order to filament more complex objects that do not necessarily have a single preferred direction. [**ACKNOWLEDGMENTS:**]{} C. R. Bom is funded by the Brazilian agency FAPERJ (“Nota 10” fellowship). M. Makler is partially supported by CNPq (grant 312876/2009-2) and FAPERJ (grant E-26/110.516/2012). We acknowledge the Laboratório Interinstitucional de e-Astronomia (LIneA) operated jointly by the Centro Brasileiro de Pesquisas Físicas (CBPF), the Laboratório Nacional de Computação Científica (LNCC), and the Observatório Nacional (ON) and funded by the Ministério da Ciência, Tecnologia e Inovação (MCTI). We thank Bruno Rosseto for reminding us of the geometrical properties of the perpendicular bisector in a circle and Anupreeta More and Bruno Moraes for useful suggestions to the manuscript. Schneider, P, Ehlers, J, Falco, EE, 1992. Gravitational Lenses, Springer-Verlag, Berlin. Mollerach, S, Roulet, E, 2002. Gravitational Lensing and Microlensing, World Scientific, Singapore. Brandt, CH, Bom, CR, Ferreira, PC, Makler, M, 2012. A comparison among methods to determine image extremities: python implementation, in preparation. Furlanetto, C, Santiago, BX, Makler, M, Bom, CR, Brandt, CH, et al., 2012. A simple prescription for simulating and characterizing gravitational arcs, accepted for publication in A&A; arXiv:1211.2771. Van Rossum, G, 1993. An Introduction to Python for UNIX/C Programmers, Proceedings of the NLUUG najaarsconferentie. Brandt, CH, Ferreira, PC, Makler, M, Neto, AF, et al., 2012.
AddArcs: software for gravitational arc simulations, in preparation. Sérsic, JL, 1968. Atlas de Galaxias Australes, Cordoba, Argentina: Observatorio Astronómico. Keeton, CR, 2001. Computational Methods for Gravitational Lensing, arXiv:astro-ph/0102340. Bertin, E, Arnouts, S, 1996. SExtractor: Software for source extraction, A&AS, 117, 393. Brandt, CH, et al., SLtools: a library for Strong Lensing Applications, in preparation. Luppino, GA, Gioia, IM, Hammer, F, Le Fèvre, O, Annis, JA, 1999. A search for gravitational lensing in 38 X-ray selected clusters of galaxies, A&AS, 136, 117. Bom, CR, Brandt, CH, Makler, M, Albuquerque, MP, Ferreira, PC, 2012. A Neural Network Arcfinder based on the Mediatrix method, in preparation. [^1]: E-mail: debom@cbpf.br [^2]: This pipeline is part of `SLtools`, a library for image processing, catalog manipulation, and strong lensing applications [@SLtools], and is available at `http://che.cbpf.br/sltools/`.
--- abstract: 'The local topological dynamics of subgroups of ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$, with special emphasis on ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$, is discussed with a view towards integrability questions. It is proved in particular that a subgroup of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ possessing locally finite orbits is necessarily solvable. Other results and examples related to higher-dimensional generalizations of Mattei-Moussu’s celebrated topological characterization of integrability are also provided. These examples also settle a fundamental question raised by the previous work of Camara-Scardua.' author: - 'Julio C. Rebelo & Helena Reis' title: 'Discrete orbits and special subgroups of ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$' --- [^1] Introduction ============ In many senses, this paper is motivated by the celebrated topological characterization of integrable holomorphic vector fields/foliations in dimension $2$ obtained in [@M-M]. The fundamental issue singled out in [@M-M] is the fact that the existence of holomorphic first integrals can be read off the topological dynamics of the foliation, and in particular of its holonomy pseudogroup. It then follows that the existence of (non-constant) holomorphic first integrals is a property invariant under topological conjugation. Only recently, however, have extensions of Mattei-Moussu’s results to higher dimensions started being investigated, partly due to recent progress made in the understanding of the local dynamics of diffeomorphisms of $({\mathbb{C}}^2,0)$ tangent to the identity, cf. [@abate], [@BM], [@raissy] and their references. One basic question was to know whether a (local) singular holomorphic foliation on $({\mathbb{C}}^n,0)$ topologically conjugate to another holomorphic foliation possessing $n-1$ independent holomorphic first integrals should possess $n-1$ independent holomorphic first integrals as well.
Though an affirmative answer seemed to be expected, in [@thesis] the authors exhibited two topologically conjugate foliations on $({\mathbb{C}}^3, 0)$ such that one admits two independent holomorphic first integrals but the other does not. It then became clear that the extension of the Mattei-Moussu theorem to higher dimensions is a far more subtle problem. Ultimately, the purpose of this paper is to contribute to the understanding of the above-mentioned problem by presenting “counterexamples” as well as affirmative statements that parallel some fundamental results known in the classical low-dimensional case. These results concern either the topological dynamics of subgroups of ${\rm Diff}\, ({\mathbb{C}}^2,0)$ (sometimes ${\rm Diff}\, ({\mathbb{C}}^n,0)$) or, in the case of Theorem B below, singular foliations with a simple singularity as previously discussed in [@scardua]. Indeed, the main result of [@scardua], as well as the question left open there about “holonomy maps with non-isolated fixed points”, were the starting points of the present work. To state our main results, let us place ourselves in the context of pseudogroups of ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ or of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ (the reader may check Section 2 for further details). Our first result is a simple elaboration of the corresponding statement in [@M-M] that turns out to generalize the main result in [@scardua], as it dispenses with the use of the deep theorem on parabolic domains, valid in dimension $2$ and due to M. Abate [@abate]. [**Theorem A**]{}. [*Let $G \subset {{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ be a finitely generated pseudogroup on a small neighborhood of the origin in ${\mathbb{C}}^n$. Given $g \in G$, let ${\rm Dom}\, (g)$ denote the domain of definition of $g$ as element of the pseudogroup in question.
Suppose that for every $g \in G$ and $p \in {\rm Dom}\, (g)$ satisfying $g(p) =p$, one of the following holds: either $p$ is an isolated fixed point of $g$ or $g$ coincides with the identity on a neighborhood of $p$. Then the pseudogroup $G$ has finite orbits on a neighborhood of the origin if and only if $G$ itself is finite.*]{} [**Remark**]{}. [When $G$ is a subgroup of ${\rm Diff}\, ({\mathbb{C}},0)$ the assumption of Theorem A is automatically verified, so that the statement reduces to Mattei-Moussu’s corresponding result in [@M-M]. On the other hand, it is proved in [@M-M] that a subgroup of ${\rm Diff}\, ({\mathbb{C}},0)$ with finite orbits is not only finite but also cyclic. In full generality this second part of the statement cannot be generalized to higher dimensions, since every finite group embeds into a matrix group of sufficiently high dimension. In Section 2 the reader will find simple examples showing that, in fact, the group need not be cyclic already in dimension $2$, even if the additional assumption of Theorem A is satisfied.]{} In [@scardua] this statement is essentially proved in dimension $2$ by resorting to Abate’s theorem; cf. Section 2 for a detailed comparison between the corresponding statements. The authors of [@scardua] then go on to turn their statement into an application about “complete integrability” of differential equations. A similar application holds in arbitrary dimensions, as will be seen in Theorem B below. Before stating Theorem B, it is however convenient to mention a minor issue concerning the formulation of the results in [@scardua], as pointed out by Y. Genzmer in his review of the article in question. In fact, the authors failed to mention that they assumed the corresponding holonomy maps to have only isolated fixed points, whereas this assumption is clearly needed in the corresponding proof.
In particular, if no assumption concerning isolated fixed points is put forward, it becomes unclear whether or not a cyclic (pseudo-) group having finite orbits should be finite, and whether a corresponding Siegel singularity associated to a holonomy map with finite orbits must be “completely integrable”. Though an affirmative answer to the latter question was expected, as pointed out by Genzmer [@yohann], both statements turn out to be false, as follows from the “Complement to Theorem B” below. The reader will find below accurate statements in these directions. [**Theorem B**]{}. [*Let ${\mathcal{F}}$ be a singular foliation associated to a holomorphic vector field $X$ with an isolated singularity at the origin. Suppose that the linear part of $X$ has non-zero determinant, belongs to the Siegel domain, and satisfies conditions 3 and 4 in Section 2. Suppose also that the holonomy map associated to each separatrix of ${\mathcal{F}}$ has finite orbits and satisfies the conditions of Theorem A (in the sense of the cyclic (pseudo-) group it generates). Then ${\mathcal{F}}$ admits $n-1$ independent holomorphic first integrals.*]{} [**Remark**]{}. [Concerning the statement of Theorem B, note that in the case $n=3$, the above-mentioned conditions 3 and 4 in Section 2 are automatically verified provided that $X$ is of [*strict Siegel type*]{}. In particular, Theorem B covers the main result in [@scardua]. Moreover, it suffices to have finite orbits for the holonomy map associated to [*one of the separatrices of ${\mathcal{F}}$*]{}. The reader will also find in Section 2 a slightly more accurate version of the statement of Theorem B showing, in particular, that the assumption concerning holonomy maps has to be verified only for a certain separatrix of ${\mathcal{F}}$, i.e. it does not have to be checked over all the separatrices]{}. As mentioned, if the main assumption in Theorem A is dropped, then the conclusions no longer hold.
In particular, we shall prove the following: [**Complement to Theorem B**]{}. [*Let ${\mathcal{F}}$ denote the foliation associated to the vector field $$X = x(1 + x^2yz^3) \frac{\partial }{\partial x} + y(1 - x^2yz^3) \frac{\partial }{\partial y} - z \frac{\partial }{\partial z} \, .$$ The foliation ${\mathcal{F}}$ does not possess two independent holomorphic first integrals (though it possesses one non-constant holomorphic first integral). Besides, the holonomy map associated to the axis $\{ x=y=0\}$ has finite orbits, whereas it does not generate a finite subgroup of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$.*]{} In view of the previous examples, and given that the assumption on isolated fixed points in Theorem A is not fully satisfactory, we may go back to the fundamental question of finding the “correct” generalization of the Mattei-Moussu theorem. In other words, what type of algebraic conditions should a subgroup of ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ possessing finite orbits satisfy? To simplify the discussion, we shall content ourselves with dealing with subgroups of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$. Differential Galois theory, as well as Morales-Ramis theory, cf. [@moralesetal], suggests that a (pseudo-) group having finite orbits may be solvable. The main result of this paper, namely Theorem C below, confirms this suggestion. In fact, it suffices to deal with pseudo-groups possessing [*locally discrete orbits*]{} (also called locally finite orbits), which allows us to apply the statement also to foliations admitting meromorphic first integrals. Naturally, a point $p$ is said to have locally discrete orbit or, equivalently, locally finite orbit, under a (pseudo-) group $G$ if, for every point $q$ in the $G$-orbit $G.p$ of $p$, there exists a neighborhood $U$ of $q$ such that the set $U \cap G.p$ is finite. A group is said to have locally finite orbits if all its orbits are locally finite. With this terminology, we state: [**Theorem C**]{}.
[*Suppose that $G$ is a finitely generated (pseudo-) subgroup of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ with locally discrete orbits. Then $G$ is solvable.*]{} Theorem C is the most elaborate result of this paper and, to our knowledge, the first general result concerning the dynamics of solvable/non-solvable subgroups of ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ for $n \geq 2$. Several comments are needed to properly place this statement in perspective with respect to previous works. First, the highly developed case $n=1$ must be singled out. In this case, the structure of solvable subgroups of ${\rm Diff}\, ({\mathbb{C}},0)$ is well understood [@cerveaumoussu], [@russians] and, formally, our statement is a consequence of the much stronger results of Shcherbakov and Nakai [@shcherbakov], [@nakai]. For higher dimensions, however, there are new dynamical phenomena related, for instance, to the existence of non-solvable discrete subgroups of ${\rm GL}\, (n , {\mathbb{C}})$, $n \geq 2$. These phenomena prevent us from extending the results of Shcherbakov and Nakai without additional assumptions. In fact, the “sharp” extension of their theory to higher dimensions remains an outstanding problem despite some significant progress made in [@belliart], [@lorayandI]. Another point to be made about Theorem C is that its proof does not rely on any typically two-dimensional phenomenon and hence can probably be extended to arbitrary dimensions, though we have not pursued this direction. Theorem C actually rests on two main ideas which nicely complement each other. On one hand, there is the standard theory of Kleinian groups and stable manifolds that essentially allows us to reduce the proof of the theorem in question to the case of subgroups of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$, the group of holomorphic diffeomorphisms tangent to the identity.
To deal with the latter group, we then adapt the “recurrence theorem” established by Ghys in [@ghysBSBM] by means of his notion of “pseudo-solvable” group. An important remark concerning this adaptation is that elements of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ tangent to the identity are automatically “close to the identity” in a sense suitable to ensure convergence of sequences of commutators. Still considering the use of Ghys’s ideas for subgroups of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$, it is necessary to clarify the connection between solvable and pseudo-solvable subgroups of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$. For $n=1$, Ghys showed in [@ghysBSBM] that these notions coincide, and this result is extended to $n=2$ here. Although the strategy followed is similar to that employed by Ghys, this extension is not immediate since the structure of solvable subgroups of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ is not nearly as well developed as in the one-dimensional case. We are therefore led to work out several algebraic aspects of the solvable subgroups of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}, \, {{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ and to deal with the new phenomena concerning the existence of non-constant first integrals and/or rank-$2$ abelian groups. Theorem C raises a number of interesting questions; in particular, connections with Morales-Simó-Ramis theory [@moralesetal] seem very promising. Another more specific question that may turn out to be quite deep concerns the classification of solvable non-abelian subgroups of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ possessing locally finite orbits. The reader is reminded that, for $n=1$, the corresponding result is due to Birkhoff, though it was independently re-discovered by Loray in [@Loray]. Since this beautiful result possesses a number of applications, we believe that its generalization to dimension $2$ is a problem worth further investigation. Let us finish this Introduction with an outline of the structure of this paper.
Section 2 contains the proofs of Theorems A and B along with the relevant definitions. As mentioned, Theorem A is a simple elaboration of the arguments in [@M-M]. In turn, Theorem B is an application of Theorem A going through useful results due to P. Elizarov-Il’yashenko and to Reis [@EI], [@helena]. Section 3 contains a few interesting examples of local dynamics of diffeomorphisms tangent to the identity, along with local foliations realizing some of them as local holonomy maps. In particular, the example appearing in the Complement to Theorem B is detailed so as to settle the main issue left open in [@scardua]. Section 4 is the most technical part of the paper. It involves a detailed algebraic study of abelian and solvable (formal) groups of germs of diffeomorphisms in dimension $2$. Campbell-Hausdorff type formulas are widely used in this study which, ultimately, aims at showing that “pseudo-solvable” subgroups of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ are, actually, solvable (Proposition \[commuting9\]). A reader willing to take Proposition \[commuting9\] for granted or, alternatively, content himself/herself with a statement involving “pseudo-solvable” groups in Theorem C can skip the whole of Section 4. Yet it may be pointed out that the algebraic description of solvable subgroups of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ developed in the course of the mentioned section is original and likely to have further interest. Finally, in Section 5, ideas from Ghys [@ghysBSBM] are combined with Proposition \[commuting9\] and with standard results on Kleinian groups to yield the proof of Theorem C. Theorems A and B ================ In the sequel, $G$ denotes a finitely generated subgroup of ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$, where ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ stands for the group of germs of local holomorphic diffeomorphisms of ${\mathbb{C}}^n$ fixing the origin.
Assume then that $G$ is generated by the elements $h_1, \ldots, h_k \in {{{\rm Diff}\, ({\mathbb C^n}, 0)}}$. A natural way to make sense of the local dynamics of $G$ consists of choosing representatives for $h_1, \ldots, h_k$ as local diffeomorphisms fixing $0 \in {\mathbb{C}}^n$. These representatives are still denoted by $h_1, \ldots, h_k$ and, once this choice is made, $G$ itself can be identified with the [*pseudogroup*]{} generated by these local diffeomorphisms on a (sufficiently small) neighborhood of the origin. It is then convenient to begin by briefly recalling the notion of [*pseudogroup*]{}. For this, consider a small neighborhood $V$ of the origin where the local diffeomorphisms $h_1, \ldots, h_k$, along with their inverses $h_1^{-1}, \ldots, h_k^{-1}$, are defined and one-to-one. The pseudogroup generated by $h_1, \ldots, h_k$ (or rather by $h_1, \ldots , h_k, h_1^{-1}, \ldots, h_k^{-1}$ if there is any risk of confusion) on $V$ is defined as follows. Every element of $G$ has the form $F = F_s \circ \ldots \circ F_1$ where each $F_i$, $i \in \{1, \ldots, s\}$, belongs to the set $\{h_i^{\pm 1}, i=1, \ldots, k\}$. The element $F \in G$ should be regarded as a one-to-one holomorphic map defined on a subset of $V$. Indeed, the domain of definition of $F = F_s \circ \ldots \circ F_1$, as an element of the pseudogroup, consists of those points $x \in V$ such that for every $1 \leq l < s$ the point $F_l \circ \ldots \circ F_1(x)$ belongs to $V$. Since the origin is fixed by the diffeomorphisms $h_1, \ldots, h_k$, it follows that the domain of definition of every element $F$ is a non-empty open set containing the origin. This open set may, however, be disconnected. Whenever no misunderstanding is possible, the pseudogroup defined above will also be denoted by $G$ and we allow ourselves to shift back and forth between $G$ viewed as a pseudogroup and as a group of germs. Let us continue with some definitions that will be useful throughout the text.
Suppose we are given local holomorphic diffeomorphisms $h_1, \ldots, h_k, h_1^{-1}, \ldots, h_k^{-1}$ fixing the origin of ${\mathbb{C}}^n$. Let $V$ be a neighborhood of the origin where all these local diffeomorphisms are defined and one-to-one. From now on, let $G$ be viewed as the pseudogroup acting on $V$ generated by these local diffeomorphisms. Given an element $h \in G$, the domain of definition of $h$ (as element of $G$) will be denoted by ${\rm Dom}_V (h)$. The $V_G$-orbit ${{\mathcal{O}}}_V^G (p)$ of a point $p \in V$ is the set of points in $V$ obtained from $p$ by taking its image through every element of $G$ whose domain of definition (as element of $G$) contains $p$. In other words, $${{\mathcal{O}}}_V^G (p) = \{q \in V \; \, ; \; \, q = h(p), \; h \in G \; \; {\rm and} \; \; p \in {\rm Dom}_V (h) \} \, .$$ For fixed $h \in G$, the $V_h$-orbit of $p$ can be defined as the $V_{<h>}$-orbit of $p$, where $<h>$ denotes the subgroup of ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ generated by $h$. We can now define “pseudogroups with finite orbits” and “pseudogroups with locally discrete orbits”. \[def\_finiteorbits\] A pseudogroup $G \subseteq {{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ is said to have finite orbits if there exists a sufficiently small open neighborhood $V$ of $0 \in {\mathbb{C}}^n$, where $h_1, \ldots , h_k, h_1^{-1}, \ldots, h_k^{-1}$ are well-defined injective maps, such that the set ${{\mathcal{O}}}_V^G (p)$ is finite for every $p\in V$. Analogously, $h \in G$ is said to have finite orbits if the pseudogroup $\langle h \rangle$ generated by $h$ has finite orbits. Similarly, a pseudogroup is said to have locally discrete orbits (or, equivalently, locally finite orbits) if for every $p \in V$ and for every point $q \in {{\mathcal{O}}}_V^G (p)$, there exists a neighborhood $W \subset {\mathbb{C}}^n$ of $q$ such that $W \cap {{\mathcal{O}}}_V^G (p) = \{ q \}$.
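The bookkeeping hidden in these definitions — a word acts on $p$ only while every intermediate image stays inside $V$ — can be illustrated with a toy computation in one complex variable (the choice of generators and of $V$ below is ours, purely for illustration):

```python
from collections import deque

def orbit(p, gens, in_V, max_size=10_000):
    """V_G-orbit of p under the pseudogroup generated by `gens` on V
    (the caller should include inverses in `gens`).  Breadth-first
    search: a generator is applied only when the image stays in V,
    which enforces exactly the domain restriction of the pseudogroup,
    since every point in the queue was reached through V."""
    def key(z):
        return (round(z.real, 9), round(z.imag, 9))
    seen = {key(p): p}
    queue = deque([p])
    while queue and len(seen) < max_size:
        q = queue.popleft()
        for g in gens:
            r = g(q)
            if in_V(r) and key(r) not in seen:
                seen[key(r)] = r
                queue.append(r)
    return list(seen.values())

# Rotation by pi/2 on the unit disk: every orbit is finite.
rot = lambda z: 1j * z
inv = lambda z: -1j * z
disk = lambda z: abs(z) < 1
```

For instance, the orbit of $0.3$ under this rotation pseudogroup consists of four points, while the origin is a fixed point with a one-point orbit.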
Given $h \in G$, the [*number of iterations of $p$ by $h$*]{} is the cardinality of the set $\{ n \in {\mathbb{Z}}\; \, ; \; \, p \in {\rm Dom}_V (h^{n}) \}$, where ${\rm Dom}_V (h^{n})$ stands for the domain of definition of $h^n$ as element of the pseudogroup in question. The number of iterations of $p$ by $h$ is denoted by $\mu_V^h (p)$ and belongs to ${\mathbb{N}}\cup \{\infty\}$. The lemma below is attributed to Lewowicz and can be found in [@M-M]. \[lemmalewowicz\] Let $K$ be a compact connected neighborhood of $0 \in {\mathbb{R}}^n$ and $h$ a homeomorphism from $K$ onto $h(K) \subseteq {\mathbb{R}}^n$ verifying $h(0) = 0$. Then there exists a point $p$ on the boundary $\partial K$ of $K$ whose number of iterations in $K$ by $h$ is infinite, i.e. $p$ satisfies $\mu_K^h (p) = \infty$. Given an open set $V$, note that the existence of points $p$ in $V$ such that $\mu_V^h (p) = \infty$ does not imply that $p$ is a point with infinite orbit, i.e. there may exist points $p$ in $V$ such that $\mu_V^h(p)=\infty$ but $\# {{\mathcal{O}}}_V^{<h>}(p)<\infty$, where $\#$ stands for the cardinality of the set in question. These points are called [*periodic for $h$ on $V$*]{}. A local diffeomorphism $f$ is said to be [*periodic*]{} if there is $k \in {\mathbb{N}}^{\ast}$ such that $f^k$ coincides with the identity on a neighborhood of the origin. Clearly, periodic diffeomorphisms possess finite orbits. To prove Theorem A, we first need to show the following. \[propperiodic\] Let $h$ be an element of a subgroup $G \subseteq {{{\rm Diff}\, ({\mathbb C^n}, 0)}}$, where $G$ is as stated in Theorem A, and suppose that $h$ has finite orbits. Then $h$ is periodic. Assuming that Proposition \[propperiodic\] holds, Theorem A can be derived as follows. We want to prove that $G$ is finite (for example, at the level of germs). So, let us consider the homomorphism $\sigma : G \rightarrow GL(n, {\mathbb{C}})$ assigning to an element $h \in G$ its derivative $D_0 h$ at the origin.
The image $\sigma (G)$ of $G$ is a finitely generated subgroup of $GL(n, {\mathbb{C}})$ all of whose elements have finite order. According to Schur’s theorem concerning the affirmative solution of the Burnside problem for linear groups, the group $\sigma (G)$ must be finite, cf. [@burnside]. Therefore, to conclude that $G$ is itself finite, it suffices to check that $\sigma$ is one-to-one or, equivalently, that its kernel is reduced to the identity. Hence suppose that $h \in G$ lies in the kernel of $\sigma$, i.e. $D_0 h$ coincides with the identity. To show that $h$ itself coincides with the identity, note that $h$ must be periodic since it has finite orbits, cf. Proposition \[propperiodic\]. Therefore $h$ is conjugate to its linear part at the origin, i.e. it is conjugate to the identity map. Thus $h$ coincides with the identity on a neighborhood of the origin of ${\mathbb{C}}^n$. The theorem is proved. Before proving Proposition \[propperiodic\], let us make some comments concerning the proof of Theorem A. Concerning the case $n=1$, Leau’s theorem immediately implies that the above-considered homomorphism $\sigma$ is one-to-one, so that $G$ is abelian and, indeed, cyclic. This fact does not carry over to higher dimensions since, as already mentioned, every finite group can be realized as a matrix group and therefore as a pseudogroup of ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ having finite orbits. Yet, in general, groups obtained in this manner contain non-trivial elements with non-isolated fixed points. Therefore, if we are dealing with pseudogroups satisfying the conditions of Theorem A, the question of whether $G$ is abelian may still be raised. However, even in this restricted setting the group $G$ need not be abelian. For example, let $G$ be the subgroup of ${{\rm Diff}\, ({\mathbb C^2}, 0)}$ generated by $h_1(x,y) = (e^{\pi i/3} x, e^{2\pi i/3} y)$ and $h_2(x,y) = (y,x)$.
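Since both generators are linear, the group they generate can be enumerated by brute force (a numerical aside of our own, not part of the argument; the closure terminates because every element has finite order):

```python
import numpy as np
from cmath import exp, pi
from collections import deque

h1 = np.diag([exp(1j * pi / 3), exp(2j * pi / 3)])   # (x, y) -> (e^{i pi/3} x, e^{2 i pi/3} y)
h2 = np.array([[0, 1], [1, 0]], dtype=complex)       # (x, y) -> (y, x)

def key(M):
    """Hashable key for a matrix, rounding away floating-point noise."""
    return tuple(np.round(M, 8).flatten().tolist())

def generated_group(gens):
    """Closure of the generators under multiplication; since every
    generator has finite order, this semigroup closure is the group."""
    I = np.eye(2, dtype=complex)
    elems = {key(I): I}
    queue = deque([I])
    while queue:
        A = queue.popleft()
        for g in gens:
            B = g @ A
            k = key(B)
            if k not in elems:
                elems[k] = B
                queue.append(B)
    return list(elems.values())

G = generated_group([h1, h2])
```

For these particular generators the enumeration yields a group of $24$ elements, and $h_1 h_2 \neq h_2 h_1$ confirms non-commutativity.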
Every element of $G$ has finite orbits and possesses a single fixed point at the origin, but the group $G$ is not abelian.

Let us now prove Proposition \[propperiodic\]. As already pointed out, the proof amounts to a careful reading of the argument supplied in [@M-M] for the case $n=1$. Let $h$ be a local diffeomorphism in ${{{\rm Diff}\, ({\mathbb C^n}, 0)}}$ whose periodic points are isolated unless the corresponding power of $h$ coincides with the identity on a neighborhood of the point in question. Let us assume that $h$ is not periodic. To prove the statement, we are going to show the existence of an open neighborhood $U$ of $0 \in {\mathbb{C}}^n$ such that the set of points $x \in U$ with infinite orbit ${{\mathcal{O}}}_U^{<h>}(x)$ is uncountable and has the origin as an accumulation point. It will then follow that $h$ cannot have finite orbits, thus proving the proposition. Let $U$ be an arbitrarily small open neighborhood of $0 \in {\mathbb{C}}^n$ contained in the domains of definition of $h, \, h^{-1}$. Suppose also that $h, \, h^{-1}$ are one-to-one on $U$. Consider $\rho_0 > 0$ such that $D_{\rho_0} \subseteq U$, where $D_{\rho_0}$ stands for the closed ball of radius $\rho_0$ centered at the origin. Following [@M-M], we define the following sets $$\begin{aligned} {\bf P} & = & \{x \in D_{\rho_0} : \; \mu_{D_{\rho_0}}^h(x) = \infty , \; \# {{\mathcal{O}}}_{D_{\rho_0}}^{<h>}(x) < \infty\} \, , \\ {\bf F} & = & \{x \in D_{\rho_0} : \; \mu_{D_{\rho_0}}^h(x) < \infty , \; \# {{\mathcal{O}}}_{D_{\rho_0}}^{<h>}(x) < \infty\} \, , \\ {\bf I} & = & \{x \in D_{\rho_0} : \; \mu_{D_{\rho_0}}^h(x) = \infty , \; \# {{\mathcal{O}}}_{D_{\rho_0}}^{<h>}(x) = \infty\} \, . \\\end{aligned}$$ In other words, ${\bf P}$ is the set of periodic points in $D_{\rho_0}$ for $h$, ${\bf F}$ denotes the set of points leaving $D_{\rho_0}$ after finitely many iterations, and ${\bf I}$ stands for the set of non-periodic points with infinite orbit.
Naturally, $D_{\rho_0} = {\bf P} \cup {\bf F} \cup {\bf I}$ and Lewowicz’s lemma implies that $$({\bf P} \cup {\bf I}) \cap \partial D_\rho \neq \emptyset$$ for every $\rho \leq \rho_0$. Since the spheres $\partial D_\rho$ are pairwise disjoint, the set ${\bf P} \cup {\bf I}$ is uncountable, and hence at least one of ${\bf P}$ and ${\bf I}$ is uncountable. In what follows, the diffeomorphism $h$ is supposed to be non-periodic. With this assumption, our purpose is to show that ${\bf I}$ must be uncountable. For $n \geq 0$, let $A_n$ denote the domain of definition of $h^n$ viewed as an element of the pseudogroup [*generated on $D_{\rho_0}$*]{}. Clearly $A_{n+1} \subseteq A_n$. Next, let $C_n$ be the connected (compact) component of $A_n$ containing the origin and set $$C = \bigcap_{n \in {\mathbb{N}}} C_n \, .$$ Note that $C$ is the intersection of a decreasing sequence of compact connected sets. Therefore $C$ is non-empty and connected.

[*Claim*]{}: Without loss of generality, the set $C$ can be supposed countable.

Suppose that $C$ is uncountable. The reader is reminded that our aim is to conclude that ${\bf I}$ is uncountable provided that $h$ is not periodic. Therefore we suppose for a contradiction that ${\bf I}$ is countable. Since ${\bf I}$ is countable, so is ${\bf I} \cap C$. Consider now $C \cap {\bf P}$ and note that this intersection must be uncountable: indeed $C \subset {\bf P} \cup {\bf I}$, the set $C$ is uncountable, and ${\bf I} \cap C$ is countable. Let $$C \cap {\bf P} = \bigcup_{n\in{\mathbb{N}}} P_n \, ,$$ where $P_n$ is the set of points $x \in C \cap {\bf P}$ of period $n$. Note that there exists a certain $n_0 \in {\mathbb{N}}$ such that $P_{n_0}$ is infinite, for otherwise all of the $P_n$ would be finite and $C \cap {\bf P}$ would be countable. Being infinite, $P_{n_0}$ has an accumulation point $p$ in $C_{n_0}$. The map $h^{n_0}$ is holomorphic on an open neighborhood of $C_{n_0}$ and it is the identity on $P_{n_0}$. Since $p$ is not an isolated fixed point of $h^{n_0}$, it follows that $h^{n_0}$ coincides with the identity map on $C_{n_0}$, i.e.
on the connected component of the domain of definition of $h^{n_0}$ that contains the origin. This contradicts the assumption of non-periodicity of $h$ (modulo reducing the neighborhood of the origin). Hence ${\bf I}$ is uncountable as desired. In view of the preceding, in the sequel $C$ will be supposed to consist of countably many points. The purpose is still to conclude that the set ${\bf I}$ is uncountable. Since $C$ is countable, connected, and contains the origin, it follows that $C$ is reduced to the origin. Then, for every $\rho < \rho_0$, we have $C \cap \partial D_\rho = \emptyset$. Now note that, for a fixed $\rho > 0$, the sets $$C_1 \cap \partial D_\rho, \, \, (C_1 \cap C_2) \cap \partial D_\rho, \, \, (C_1 \cap C_2 \cap C_3)\cap \partial D_\rho, \, \, \ldots$$ form a decreasing sequence of compact sets. Hence the intersection $\bigcap_{n\in{\mathbb{N}}} C_n \cap \partial D_\rho$ is non-empty, unless there exists $n_0 \in {\mathbb{N}}$ such that $C_{n_0} \cap \partial D_\rho = \emptyset$. The latter case must occur since $C \cap \partial D_\rho = \emptyset$. However, the value of $n_0$ for which the mentioned intersection becomes empty may depend on $\rho$. Fix $\rho > 0$ and let $n_0$ be as above. Let $K$ be a compact connected neighborhood of $C_{n_0}$ that does not intersect the other connected components of $A_{n_0}$, if they exist. The set $K$ can be chosen so that $\partial K \cap A_{n_0} = \emptyset$. The inclusion $A_{n+1} \subseteq A_n$ guarantees that $\partial K$ does not intersect $A_n$ for every $n \geq n_0$. Therefore $$\partial K \cap {\bf P} = \emptyset \, .$$ In fact, if there were a periodic point $x$ of $D_{\rho_0}$ on $\partial K$, then $x$ would belong to every set $A_n$. In particular, it would belong to $A_{n_0}$, hence leading to a contradiction. Nonetheless, Lewowicz’s lemma guarantees the existence of a point $x$ on the boundary $\partial K$ of $K$ whose number of iterations in $K$ is infinite, i.e. such that $\mu_K^h (x) = \infty$.
Since $K \subseteq D_{\rho} \subseteq D_{\rho_0}$, it follows that $$\partial K \cap {\bf I} \ne \emptyset \, .$$ By construction, it is clear that a compact set $K$ satisfying the properties above is not unique. Indeed, for $K$ as above, denote by $K_{\varepsilon}$ the compact connected neighborhood of $K$ whose boundary lies at distance $\varepsilon$ from $\partial K$. Then there exists $\varepsilon_0 > 0$ such that, for every $0 \leq \varepsilon \leq \varepsilon_0$, the set $K_{\varepsilon}$ satisfies the same properties as $K$ with respect to $A_{n_0}$. In particular, $$\partial K_{\varepsilon} \cap {\bf I} \ne \emptyset$$ for all $0 \leq \varepsilon \leq \varepsilon_0$. Since the boundaries $\partial K_{\varepsilon}$ are pairwise disjoint, ${\bf I}$ must be uncountable. Finally, it remains to prove that $0 \in {\mathbb{C}}^n$ is an accumulation point of ${\bf I}$. This is, however, a simple consequence of the fact that a compact set $K \subseteq D_{\rho}$ as above can be considered for arbitrarily small $\rho > 0$. This completes the proof of Proposition \[propperiodic\].

[Concerning Proposition \[propperiodic\] in dimension $2$ and in the case where $h$ is tangent to the identity, the use made in [@scardua] of Abate’s theorem [@abate] has an advantage compared to Proposition \[propperiodic\], namely: Abate’s theorem shows that it suffices to check that the origin is an isolated fixed point of $h$ itself whereas, in more general cases, Proposition \[propperiodic\] requires all non-trivial powers of $h$ to have only isolated fixed points.]{}

We can now move on to prove Theorem B. The proof of this theorem follows from the combination of our Theorem A with the results in [@EI] or in [@helena]. To begin with, let ${\mathcal{F}}$ be a singular foliation on $({\mathbb{C}}^n, 0)$ and let $X$ be a representative of ${\mathcal{F}}$, i.e. $X$ is a holomorphic vector field tangent to ${\mathcal{F}}$ whose singular set has codimension at least $2$.
Suppose that the origin is a singular point of ${\mathcal{F}}$ and denote by ${\lambda}_1, \ldots, {\lambda}_n$ the corresponding eigenvalues of $DX$ at the origin. Assume that the following conditions hold:

1. ${\mathcal{F}}$ has an isolated singularity at the origin.

2. The singularity of ${\mathcal{F}}$ is of Siegel type.

3. The eigenvalues ${\lambda}_1, \ldots, {\lambda}_n$ are all different from zero and there exists a straight line through the origin, in the complex plane, separating ${\lambda}_1$ from the remaining eigenvalues.

4. Up to a change of coordinates, $X = \sum_{i=1}^n {\lambda}_ix_i(1+f_i(x)) \partial /\partial x_i$, where $x=(x_1,\ldots,x_n)$ and $f_i(0)=0$ for all $i$.

The fourth condition above amounts to assuming the existence of $n$ invariant hyperplanes through the origin. This condition, as well as condition (3), is always verified when $n=3$ provided that the singular point is of [*strict Siegel type*]{}, cf. [@C]. Also, recall that the singular point is said to be of strict Siegel type if $0 \in {\mathbb{C}}$ is contained in the interior of the convex hull of $\{{\lambda}_1,\ldots,{\lambda}_n\}$. Next we shall need Theorem \[TMMhigher\] below. This theorem generalizes to higher dimensions an unpublished result of Mattei which, in turn, improved on an earlier version appearing in [@M-M].

\[TMMhigher\] [([**\[EI\], \[Re\]**]{})]{} Let $X$ and $Y$ be two vector fields satisfying conditions (1), (2), (3) and (4) above. Denote by $h^X$ (resp. $h^Y$) the holonomy of $X$ (resp. $Y$) relative to the separatrix of $X$ (resp. $Y$) tangent to the eigenspace associated to the first eigenvalue. Then $h^X$ and $h^Y$ are analytically conjugate if and only if the foliations associated to $X$ and $Y$ are analytically equivalent.

The proof of Theorem \[TMMhigher\] can be found in either [@EI] or [@helena]; a particularly detailed exposition can be found in [@monograph]. With this theorem in hand, the proof of Theorem B goes as follows.
Let $X$ be as in the statement and denote by ${\mathcal{F}}$ the foliation associated to $X$. Let $x_1$ be the invariant axis corresponding to the eigenvalue ${\lambda}_1$ as in item (3) above. Consider also the local holonomy map $h$ relative to this invariant axis and to the foliation ${\mathcal{F}}$. The map $h$ is defined on a suitable local section and it can also be identified with a local diffeomorphism fixing the origin of ${\mathbb{C}}^{n-1}$. By assumption, all iterates of $h$ have isolated fixed points. Therefore Theorem A implies that the local orbits of $h$ are finite if and only if $h$ is periodic. Naturally, we may assume this to be the case. Let $N$ then denote the [*period*]{} of $h$, namely the smallest strictly positive integer for which $h^N$ coincides with the identity on a neighborhood of the origin of ${\mathbb{C}}^{n-1}$ (with the above mentioned identifications). Denote also by $T$ the derivative of $h$ at the origin, which is itself identified with a linear transformation of ${\mathbb{C}}^{n-1}$. The fact that $h$ is periodic of period $N$ ensures that $T$ is also periodic with the same period $N$. In fact, $h$ and $T$ are analytically conjugate as already mentioned (i.e. $h$ is linearizable). Moreover, $T$ is the holonomy map with respect to the axis $x_1$ associated to the foliation ${\mathcal{F}}_Z$ induced by the linear vector field $$Z = \sum_{i=1}^n {\lambda}_ix_i \partial /\partial x_i \, .$$ It follows from Theorem \[TMMhigher\] that the foliation ${\mathcal{F}}$ is analytically equivalent to the foliation ${\mathcal{F}}_Z$. However, for ${\mathcal{F}}_Z$ (i.e. a foliation induced by a linear diagonal vector field), it is straightforward to check that complete integrability is equivalent to the periodic character of the holonomy map $T$.
Since ${\mathcal{F}}$ and ${\mathcal{F}}_Z$ are analytically equivalent, we conclude from what precedes that the condition of having a local holonomy $h$ with finite orbits forces ${\mathcal{F}}$ to be completely integrable. The converse is clear, since having ${\mathcal{F}}$ completely integrable ensures at once that the holonomy map $h$ must be periodic. This finishes the proof of Theorem B.

Examples of dynamics and the complement to Theorem B
====================================================

This section contains some interesting examples of diffeomorphisms of $({\mathbb{C}}^2,0)$ tangent to the identity and possessing “special” local dynamics, along with examples of foliations where these diffeomorphisms are realized as holonomy maps. Among these examples, the vector field mentioned in the “complement to Theorem B” will be discussed in detail. Naturally, we are particularly interested in examples of diffeomorphisms tangent to the identity at $(0,0) \in {\mathbb{C}}^2$ and having finite orbits. According to Theorem A, none of these diffeomorphisms may have an isolated fixed point at the origin. Recall also that diffeomorphisms tangent to the identity are realized as time-one maps of [*formal vector fields*]{}. This formal vector field is unique and it is referred to as the [*infinitesimal generator*]{} of the diffeomorphism in question, cf. Section 3. In particular, it is natural to look for examples among diffeomorphisms that are time-one maps of actual holomorphic vector fields, or at least that leafwise preserve some singular holomorphic foliation. Here, as usual, it is convenient to distinguish between a vector field $X$ and its associated foliation ${\mathcal{F}}$ obtained by eliminating non-trivial common factors among the components of $X$: since our diffeomorphisms do not have isolated fixed points, their infinitesimal generators will not have isolated singularities either.
Local diffeomorphisms
---------------------

Recall that ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ denotes the normal subgroup of ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ consisting of diffeomorphisms tangent to the identity. Let us begin with some examples of diffeomorphisms in ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ with interesting local dynamics, including examples possessing finite orbits. The simplest case, where $F(x,y) = (x + f(y) , y)$ with $f(0) =f'(0) =0$, can be set aside in what follows. Note that the foliation associated to the infinitesimal generator of $F$ is regular in this case. Examples of diffeomorphisms whose infinitesimal generator provides a singular foliation can also be produced by successively blowing up $F$. Nonetheless, other examples of diffeomorphisms associated to linear foliations are described below.

[**Example 1**]{}: Linear vector fields.

Consider the vector field $Y$ given by $Y = x \partial /\partial x - \lambda y \partial /\partial y$ where $\lambda =n/m$ with $m,n \in {\mathbb{N}}^{\ast}$. The foliation associated to $Y$ will be denoted by ${\mathcal{F}}$, and it should be noted that the holomorphic function $(x,y) \mapsto x^n y^m$ is a first integral for ${\mathcal{F}}$. Let $\phi_Y$ denote the time-one map induced by $Y$. The local dynamics of $\phi_Y$ can easily be described as follows. The vector field $Y$ can be projected on the axis $\{ y=0\}$ as the vector field $x \partial /\partial x$. Therefore the (real) integral curves of $Y$ coincide with the lifts in the corresponding leaves of ${\mathcal{F}}$ of the (real) trajectories of $x \partial /\partial x$ on $\{ y=0\}$.
The latter trajectories are radial lines emanating from $0 \in \{ y=0\} \simeq {\mathbb{C}}$, so that the local dynamics of $\phi_Y$ restricted to $\{ y=0\}$ is such that, whenever $x_0 \neq 0$, the sequence $\{\phi_Y^n (x_0) \}$ marches off a uniform neighborhood of $0 \in \{ y=0\} \simeq {\mathbb{C}}$ as $n \rightarrow \infty$ and converges to $0 \in \{ y=0\} \simeq {\mathbb{C}}$ as $n \rightarrow -\infty$. Consider now the orbit of a point $(x_0, y_0)$, $x_0y_0 \neq 0$, by $\phi_Y$. Since this is simply the lift in the leaf of ${\mathcal{F}}$ through $(x_0, y_0)$ of the dynamics of $x_0 \in \{ y=0\} \simeq {\mathbb{C}}$, it follows that $\phi_Y^n (x_0, y_0)$ leaves a fixed neighborhood of $(0,0) \in {\mathbb{C}}^2$ since the first coordinate increases to uniformly large values as $n \rightarrow \infty$. Similarly, when $n \rightarrow -\infty$, the first coordinate of $\phi_Y^n (x_0, y_0)$ must converge to [*zero*]{}, so that the second coordinate becomes “large” due to the first integral $x^ny^m$. Thus, given a (small) neighborhood $U$ of $(0,0) \in {\mathbb{C}}^2$, every orbit of $\phi_Y$ that is not contained in $\{ x=0\} \cup \{ y=0\}$ is bound to intersect $U$ in at most finitely many points. Clearly the time-one map induced by $Y$ is not tangent to the identity. However, examples of time-one maps tangent to the identity and satisfying the desired conditions can be obtained, for example, by considering the vector field $X = x^ny^m Y$ and taking the time-one map $\phi_X$ induced by $X$. Clearly the linear part of $X$ at $(0,0)$ equals zero, so that $\phi_X$ must be tangent to the identity. Furthermore, the multiplicative factor $x^ny^m$ annihilates the dynamics of $\phi_X$ over the coordinate axes, so that only the orbits of points $(x_0, y_0)$, with $x_0y_0 \neq 0$, have to be considered. The leaf of ${\mathcal{F}}$ through $(x_0, y_0)$ will be denoted by $L$. Also let $c \in {\mathbb{C}}$ be the value of $x^ny^m$ on $L$.
The restriction of $X$ to $L$ is nothing but the restriction of $Y$ to $L$ multiplied by the scalar $c \in {\mathbb{C}}$. Therefore the real orbits of $X$ in $L$ coincide with the lift to $L$ of the real orbits of the vector field $cx \partial /\partial x$ defined on $\{ y=0\}$. The geometric nature of the orbits of $cx \partial /\partial x$ depends on the argument of $c \in {\mathbb{C}}$, i.e. setting $c= \vert c \vert e^{i \alpha}$, this geometry depends on $\alpha \in [0, 2\pi)$. If $\alpha = \pi/2$, so that $c$ is purely imaginary, then the orbits of $cx \partial /\partial x$ are contained in circles about the origin. After finitely many tours, these circles lift into the corresponding leaf (i.e. the leaf on which $x^ny^m$ equals $c$) as closed paths invariant by $\phi_X$. In addition, for a “generic” choice of $c$ satisfying $\alpha = \pi/2$, the resulting time-one map restricted to the corresponding invariant path will be conjugate to an irrational rotation. Thus $\phi_X$ does not have finite orbits. Let us now briefly discuss the slightly more general case where $X =x^a y^b Y$ with $a,b \in {\mathbb{N}}^{\ast}$. Setting $d = am-bn$ and assuming $d \neq 0$, we can suppose without loss of generality that $d\geq 1$. Next, by considering the system $$\begin{cases} \frac{dx}{dt} = mx^{a+1} y^b \\ \frac{dy}{dt} = -nx^a y^{b+1} \, , \end{cases}$$ we conclude that $dy/dx = -ny/mx$, so that $y = c x^{-n/m}$ in ramified coordinates. In turn, this yields the equation $dx/dt = c^b mx^{1 + d/m}$. Since $d \geq 1$ by construction, the orbits of the latter equation define the well-known “petals” associated to the Leau flower in the case of periodic linear part, cf. [@carlerson]. For example, setting $m=1$ to simplify, the orbits of the vector field $x^{1+d} \partial /\partial x$ consist of $d+1$ “petals” in non-ramified coordinates.
In any event, the sequence of points in $\{ y=0\}$ consisting of the first coordinates of the full orbits of $\phi_X$ either marches straight off a neighborhood of $0 \in \{ y=0\} \simeq {\mathbb{C}}$ or converges to $0$. In the latter case, the second coordinates of the points in the $\phi_X$-orbit increase uniformly, so that the orbit in question must leave a fixed neighborhood of $(0,0) \in {\mathbb{C}}^2$. Summarizing, we conclude:

[*Claim 1*]{}. Given a neighborhood $U$ of $(0,0) \in {\mathbb{C}}^2$ and a point $p = (x_0, y_0)$ with $x_0y_0 \neq 0$, the set $$U \cap \left\{ \bigcup_{n=-\infty}^{\infty} \phi_X^n (p) \right\}$$ is finite.

[**Example 2**]{}: Diffeomorphisms leaving the function $(x,y) \mapsto xy$ invariant.

Let us consider two cases similar to Example 1 that can easily be realized as holonomy maps of foliations as in Theorem B. First, let $F \in {{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ be given by $$F (x,y) = [ x(1+xy f(xy)), \, y (1 +xy f(xy))^{-1}] \, , \label{forholonomy1}$$ where $f (z)$ is a holomorphic function defined about $0 \in {\mathbb{C}}$ and satisfying $f(0) \neq 0$. Note that $F$ leaves the function $(x,y) \mapsto xy$ invariant since the product of its first and second components equals $xy$. Next, consider an initial point $(x_0,y_0)$ with $x_0y_0 =C \neq 0$. The orbit of $(x_0,y_0)$ under $F$ is therefore contained in the curve defined by $\{ xy = C\}$. However, for a point $(\tilde{x} , \tilde{y})$ lying in $\{ xy = C\}$, the value of $F (\tilde{x} , \tilde{y})$ takes the form $$F (\tilde{x} , \tilde{y}) = [\tilde{x} (1 + C f(C)) \, , \, \tilde{y} (1 + C f(C))^{-1} ] \, .$$ In particular, those values of $C$ for which $\vert 1 + C f(C) \vert =1$ give rise to a rotation in the first coordinate. Therefore the lifts of these circles in the corresponding leaves are loops.
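As an editorial aside, the escape mechanism behind Claim 1 is easy to probe numerically. The Python sketch below (an illustration of ours, not part of the argument) iterates the time-one map of $Y$ itself, namely $(x,y) \mapsto (e\,x,\, e^{-\lambda} y)$, counts the orbit points lying in the unit polydisc, and checks that the first integral $x^n y^m$ is preserved along the orbit.

```python
import math

def phi_Y_n(x0, y0, n, lam):
    # n-th iterate of the time-one map of Y = x d/dx - lam*y d/dy,
    # namely (x, y) -> (e*x, e^{-lam}*y)
    return x0 * math.exp(n), y0 * math.exp(-lam * n)

def orbit_hits_in_ball(x0, y0, lam, radius=1.0, n_range=200):
    # indices n for which the n-th iterate stays in the closed polydisc
    hits = []
    for n in range(-n_range, n_range + 1):
        x, y = phi_Y_n(x0, y0, n, lam)
        if max(abs(x), abs(y)) <= radius:
            hits.append(n)
    return hits

# with lam = 2/3 (n = 2, m = 3) and starting point (0.1, 0.1), only
# finitely many iterates remain in the unit polydisc: escape forward
# in x, backward in y
hits = orbit_hits_in_ball(0.1, 0.1, 2.0 / 3.0)
assert hits == [-3, -2, -1, 0, 1, 2]

# x^2 * y^3 is a first integral of Y, hence invariant under iteration
x5, y5 = phi_Y_n(0.1, 0.1, 5, 2.0 / 3.0)
assert abs(x5**2 * y5**3 - 0.1**2 * 0.1**3) < 1e-12
```

The same escape happens leafwise for $\phi_X$ away from the special leaves with purely imaginary $c$, which is the content of Claim 1.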
Besides, for a generic choice of $C$ satisfying $\vert 1 + C f(C) \vert =1$, the dynamics induced on these invariant loops is conjugate to an irrational rotation, so that $F$ does not have finite orbits. Consider now the local diffeomorphism $H$ which is given by $$H (x,y) = [ x(1+x^2y f(x^2y)), \, y (1 +x^2y f(x^2y))^{-1}] \, , \label{forholonomy2}$$ where $f$ is as above. It follows again that $H$ preserves the function $(x,y) \mapsto xy$. Our purpose is to show that, unlike $F$, the map $H$ has finite orbits. For this, fix again an initial point $(x_0,y_0)$ with $x_0y_0 =C \neq 0$, so that the orbit of $(x_0,y_0)$ under $H$ is contained in the curve $\{ xy = C\}$. Next note that, if $(\tilde{x} , \tilde{y})$ lies in $\{ xy = C\}$, we have $$H (\tilde{x} , \tilde{y}) = [\tilde{x}(1 + \tilde{x} C f( \tilde{x} C)) \, , \, \tilde{y} (1 + \tilde{x}C f(\tilde{x} C))^{-1} ] \, .$$ The dynamics of the first component of $H$ behaves again like a Leau flower. Therefore, by resorting to an argument totally analogous to the one employed in Example 1 for $X =x^a y^b Y$ with $d = am-bn \neq 0$, we conclude that all the orbits of $H$ are finite as desired.

[**Example 3**]{}: Resonant vector fields.

This example consists of a diffeomorphism with finite orbits that is associated to a non-linear vector field. Indeed, the previous examples of diffeomorphisms preserved foliations admitting non-constant holomorphic first integrals. To obtain a non-linear example possessing no holomorphic first integral, consider a resonant vector field $Y$ about $(0,0) \in {\mathbb{C}}^2$ as in [@M-RamisENS], the simplest example being $$Y = x \partial /\partial x - y(1 + xy) \partial /\partial y \, . \label{bernoulli}$$ The vector field $Y$ is not linearizable and, indeed, possesses only constant holomorphic first integrals. Next let $X =xyY$ and denote by $\phi_X$ the time-one map induced by $X$. We shall prove the following:

[*Claim 2*]{}.
Given a neighborhood $U$ of $(0,0) \in {\mathbb{C}}^2$ and a point $p = (x_0, y_0)$ with $x_0y_0 \neq 0$, the set $$U \cap \left\{ \bigcup_{n=-\infty}^{\infty} \phi_X^n (p) \right\}$$ is finite.

Considering the vector field $Y$ in (\[bernoulli\]), it follows that the resulting equation for $dy/dt$ is a classical Bernoulli equation. This equation can explicitly be integrated to yield the solutions $$x(t) = x_0 e^t \; \; \, {\rm and} \; \; \, y(t) = \frac{y_0}{e^t (1 + x_0y_0 t)}$$ corresponding to the integral curves of $Y$. In particular, it follows that $$xy = \frac{x_0y_0}{1 +x_0y_0 t} \, .$$ Substituting the last equation into the equation for $dx/dt$ arising from the vector field $X$, it follows that $x (t) = x_0 (1 +x_0y_0 t)$. Therefore, setting $t=1$, the map $\phi_X$ must be given by $$\phi_X (x_0, y_0) = [ x_0 +x_0^2y_0 , y_0 (1 +x_0y_0)^{-2} ] \, . \label{diffeosaddlenode}$$ Thus, whenever the iteration $\phi_X^n$ is defined, its first component equals $x_0 + n x_0^2y_0$. Since $x_0^2y_0 \neq 0$ by assumption, this component behaves as a non-trivial translation whose orbits necessarily march off a neighborhood of $0 \in {\mathbb{C}}$. The claim follows at once.

[**Example 4**]{}: Dynamics on a pencil of elliptic curves: invariant sets and no parabolic domain.

Let us now discuss a more elaborate case where the vector field $Y$ itself has [*zero*]{} linear part at the origin. This means that the time-one map induced by $Y$ is already tangent to the identity. This contrasts with the previous examples, for which we needed to use vector fields with non-isolated singularities to obtain diffeomorphisms tangent to the identity. The choice of $Y$ to be made below is such that the leaves of the associated foliation ${\mathcal{F}}$ have [*more topology*]{} in the sense that they are punctured elliptic curves. The influence of this topology will significantly change the nature of the results obtained.
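Returning briefly to Example 3: the translation behaviour of the first coordinate in (\[diffeosaddlenode\]) can be confirmed by direct iteration. The short Python check below (our own illustration; the names are ours) iterates the displayed map, verifies the formula $x_0 + n x_0^2 y_0$ for the first coordinate, and uses the invariance of the quantity $x^2 y$ under the displayed map as a consistency check.

```python
def phi(x, y):
    # the map displayed in Example 3:
    # (x, y) -> (x(1 + xy), y(1 + xy)^{-2})
    u = 1.0 + x * y
    return x * u, y / (u * u)

x0, y0 = 0.2, 0.1
x, y = x0, y0
for n in range(1, 50):
    x, y = phi(x, y)
    # the n-th iterate has first coordinate x0 + n*x0^2*y0
    assert abs(x - (x0 + n * x0 * x0 * y0)) < 1e-10
    # the quantity x^2*y is preserved by the displayed map
    assert abs(x * x * y - x0 * x0 * y0) < 1e-12
```

Since the first coordinate is a non-trivial arithmetic progression, the orbit eventually leaves any fixed neighborhood of the origin, as asserted in Claim 2.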
Consider the vector field $Y = x(x-2y) \partial /\partial x + y(y-2x) \partial /\partial y$ admitting $xy(x-y)$ as a holomorphic first integral. In particular, $Y$ possesses exactly three separatrices, namely the coordinate axes and the line $\{ x=y\}$. Consider also the vector field $X = xy (x-y) Y$: vector fields vanishing identically over all of their invariant curves through $(0,0) \in {\mathbb{C}}^2$ are needed to yield diffeomorphisms tangent to the identity and having locally closed orbits. Note that the blow-up of $Y$ yields a foliation having three linearizable singularities on the corresponding exceptional divisor. About each of these singularities, the vector field $X$ is as in Example 1, with $d=am-bn \neq 0$. In particular, the corresponding time-one map has finite orbits about each of these three singular points. However, it will be seen that, globally, this time-one map does not have finite orbits due to the influence of the topology of the leaves. To describe the dynamics of $X$, let us first consider the case of $Y$. Note that the orbits of $Y$ are the fibers of the pencil induced on ${\mathbb{C}}P(2)$ by the first integral $xy(x-y)$. Except for the three invariant lines determined by $xy(x-y) =0$, these fibers are smooth (projective) elliptic curves which are pairwise isomorphic since they are permuted by the flow of the radial vector field $x \partial /\partial x + y \partial /\partial y$. The pencil has $3$ singular points at the “line at infinity” $\Delta_{\infty}$ corresponding to the invariant directions $\{ x=0\}$, $\{y=0\}$ and $\{ x=y\}$, and all the elliptic curves pass through each of these singular points. Whereas $X$ has poles on $\Delta_{\infty}$, a straightforward change of coordinates shows that [*the restriction $X_L$ of $X$ to every elliptic curve $L$*]{} extends holomorphically to a vector field defined on all of $L$. Thus $X_L$ is a constant vector field on $L$. A slightly more detailed analysis, cf.
[@ghys-r] page 1150, shows that $X_L$ does not depend on $L$ in the sense that, if $\sigma : L \rightarrow L'$ is a holomorphic diffeomorphism between two elliptic curves as above, then $\sigma^{\ast} X_{L'} = X_L$. With the preceding information in hand, fix $L$ as above and consider now the Weierstrass representation of $L$ as the parallelogram $\mathcal{P}$ in ${\mathbb{C}}$ defined by $1$ and $\tau \in {\mathbb{C}}^{\ast}$ (actually $\tau = e^{\pi \sqrt{-1}/3}$) identified with $L$ through the corresponding Weierstrass $\wp$-function. Modulo multiplying $Y$ by a constant, the vector field induced by $Y_L$ on $\mathcal{P}$ is simply $\partial /\partial T$, where $T$ is a global coordinate on ${\mathbb{C}}$. Let us now fix a neighborhood $U \subset {\mathbb{C}}^2$ of the origin and consider (a connected component of) the intersection $L_U$ of $L$ and $U$, where $L$ is an elliptic curve as above. The parameterization $\wp$ yields an identification of $L_U$ with the three-holed parallelogram $\mathcal{P} \setminus (B_1 \cup B_2 \cup B_3)$, where the holes $B_1\, , B_2, \, B_3$ are in natural correspondence with the invariant lines $\{ x=0\}$, $\{y=0\}$ and $\{ x=y\}$. Furthermore, when the leaf $L$ varies, $\mathcal{P} \setminus (B_1 \cup B_2 \cup B_3)$ does not change since the leaves are pairwise isomorphic. Nonetheless, since the first integral $xy(x-y)$ varies, the restriction of $X$ to $L$ viewed in $\mathcal{P} \setminus (B_1 \cup B_2 \cup B_3)$ becomes a constant times the vector field $\partial /\partial T$. The value of this constant depends on the leaf and goes to zero as the leaf approaches any of the three invariant lines. Let us now consider the dynamics of $Z = \partial /\partial T$ on $\mathcal{P} \setminus (B_1 \cup B_2 \cup B_3)$ for real time.
This dynamics consists of horizontal (real) lines in $\mathcal{P} \setminus (B_1 \cup B_2 \cup B_3)$: those lines that intersect $B_1 \cup B_2 \cup B_3$ leave the neighborhood $U$ (and therefore are “ended” from the local point of view). The remaining lines give rise to periodic orbits. In particular, if $f$ is the local diffeomorphism induced as the time-one map of $X$, then $f$ possesses fully invariant sets contained in $U$ and away from the invariant lines $\{ x=0\}$, $\{y=0\}$ and $\{ x=y\}$. Consider now the vector field $e^{2\pi i \alpha}Y$ where $\alpha \in {\mathbb{C}}$ is a constant. Taking advantage of the previous construction, the new vector field $e^{2\pi i \alpha}Z$ viewed in $\mathcal{P} \setminus (B_1 \cup B_2 \cup B_3)$ coincides with $e^{2\pi i \alpha} \partial /\partial T$. The value of $\alpha$ is chosen with irrational real part, so that the slope of the “real direction” of $e^{2\pi i \alpha} \partial /\partial T$ is irrational. Therefore the real flow of $e^{2\pi i \alpha}Z$ on $\mathcal{P}$ is a linear irrational flow all of whose orbits are dense. In particular, its restriction to $\mathcal{P} \setminus (B_1 \cup B_2 \cup B_3)$ is such that every orbit will eventually intersect the holes $B_1 \cup B_2 \cup B_3$ and then leave the neighborhood $U$. Denoting by $\phi$ the time-one map induced by $e^{2\pi i \alpha}X$, it follows that the orbit of every point in $L_U$ will intersect $L_U$ in finitely many points only (unless it “jumps over the holes”). In any event, the previously constructed diffeomorphisms do not have finite orbits in $U$ due to the fact that their dynamics restricted to the invariant lines $\{ x=0\}$, $\{y=0\}$ and $\{ x=y\}$ consists of Leau flowers. As previously done, we may then think of multiplying $cX$ by its first integral $xy(x-y)$ so as to annihilate the dynamics over these invariant lines. This strategy, however, does not work in the present case due to the following:

[*Claim 4*]{}.
There are leaves $L_U$ of the restriction of $cxy(x-y) X$ to $U$ containing fully invariant sets for the dynamics of this vector field in real time.

[*Proof*]{}. Keeping the preceding notations, the restriction of $cxy(x-y) X$ to $\mathcal{P} \setminus (B_1 \cup B_2 \cup B_3)$ is given by $cc_L \partial /\partial T$ where $c_L$ is a constant depending on the leaf $L$; more precisely, $c_L$ equals the value of the first integral $xy(x-y)$ over $L$. Therefore, regardless of the chosen value of $c$, there will always exist leaves $L$ over which the real dynamics of $cxy(x-y) X$ consists of periodic orbits, some of which avoid the holes $B_1 \cup B_2 \cup B_3$. The claim follows.

It follows from this claim that the diffeomorphism $\phi_X$ induced as the time-one map of $X = xy(x-y)Y$ has closed invariant sets not containing the origin, whereas it does not possess any parabolic domain in the sense of [@hakim], [@abate].

Singular foliations and holonomy
--------------------------------

This paragraph contains simple examples of foliations giving rise to holonomy maps with properties similar to the cases discussed above. These examples include the foliation introduced in the “complement to Theorem B”. Let us begin by pointing out a simple observation showing that every element of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ can be realized as a local holonomy map for some foliation. Indeed, consider a singular foliation ${\mathcal{F}}$ on $({\mathbb{C}}^3,0)$ admitting a separatrix $S$ through the origin and denote by $h$ the holonomy map associated to ${\mathcal{F}}$ with respect to $S$. Assume that the foliation is locally given by the vector field $A(x,y,z) \partial /\partial x + B(x,y,z) \partial /\partial y + C(x,y,z) \partial /\partial z$. Assume furthermore that the separatrix $S$ is given, in the same coordinates, by $\{x=0, y=0\}$.
Setting $z = e^{2\pi it}$, the corresponding holonomy map can be viewed as the time-one map associated to the differential equation $$\begin{cases} \frac{dx}{dt} & = \frac{dx}{dz} \frac{dz}{dt} = 2\pi i e^{2\pi i t} \frac{A(x,y,e^{2 \pi i t})}{C(x,y,e^{2\pi it})}\\ \frac{dy}{dt} & = \frac{dy}{dz} \frac{dz}{dt} = 2\pi i e^{2\pi i t} \frac{B(x,y,e^{2 \pi i t})}{C(x,y,e^{2\pi it})} \end{cases} \, .$$ In the particular case where $A, \, B$ do not depend on $z$ and $C$ is reduced to $C(x,y,z) = z$, the holonomy map of ${\mathcal{F}}$ with respect to $S$ reduces to the time-one map induced by a vector field on $({\mathbb{C}}^2, 0)$, namely by the vector field $$2\pi i \left[ A(x,y) \frac{\partial}{\partial x} + B(x,y) \frac{\partial}{\partial y} \right] \, .$$ Consider, for example, the local diffeomorphism $h$ introduced in Example 3 and recall that $h$ is the time-one map induced by the vector field $Y = xy \left[x \partial /\partial x - y(1 + xy) \partial /\partial y \right]$. To find a vector field on $({\mathbb{C}}^3,0)$ whose foliation has $h$ as holonomy map, it suffices to take $Y$ and “join” the term $2\pi i \, z \partial /\partial z$, so that the factor $2\pi i$ arising from the reduction above is compensated. Then the holonomy of the foliation associated to the vector field $$X = xy \left[x \frac{\partial }{\partial x} - y(1 + xy) \frac{\partial }{\partial y} \right] + 2\pi i \, z \frac{\partial }{\partial z} \, ,$$ with respect to the $z$-axis is nothing but $h$ itself. Note that the vector field $X$ above corresponds to a saddle-node vector field of codimension $2$. This is equivalent to saying that its linear part admits exactly two eigenvalues equal to zero and a non-zero eigenvalue associated to the direction of the separatrix $\{ x=y=0\}$. The fact that the holonomy of $\{ x=y=0\}$ has finite orbits is a phenomenon without analogue for saddle-nodes in dimension $2$. Let us now provide three examples of foliations on $({\mathbb{C}}^3,0)$ possessing only eigenvalues different from [*zero*]{}, as in the case of Theorem B.
[**Example 5**]{}: Let ${\mathcal{F}}$ denote the foliation associated to the vector field $$X = x(1 + xyz^2) \frac{\partial }{\partial x} + y(1 - xyz^2) \frac{\partial }{\partial y} - z \frac{\partial }{\partial z} \, .$$ The $z$-axis corresponds to one of the separatrices of ${\mathcal{F}}$. Taking $z = e^{2\pi it}$, it follows that the holonomy map $h$ associated to ${\mathcal{F}}$, with respect to the $z$-axis, is given by the time-one map associated to the vector field $$\label{eqholonomy} \begin{cases} \frac{dx}{dt} & = \frac{dx}{dz} \frac{dz}{dt} = -2\pi i x(1 + e^{4\pi it}xy)\\ \frac{dy}{dt} & = \frac{dy}{dz} \frac{dz}{dt} = -2\pi i y(1 - e^{4\pi it}xy) \end{cases} \, .$$ To solve this system of differential equations, we should consider the series expansion of $(x(t),y(t))$ in terms of the initial condition. More precisely, if $(x(0), y(0)) = (x_0, y_0)$, then we should let $x(t) = \sum a_{ij}(t) x_0^i y_0^j$ and $y(t) = \sum b_{ij}(t) x_0^i y_0^j$. Clearly $a_{10}(0) = b_{01}(0) = 1$ and $a_{ij}(0) = b_{ij}(0) = 0$ in the other cases. Substituting the series expansions of $x(t)$ and $y(t)$ into (\[eqholonomy\]) and comparing equal powers of the initial conditions, we see that the system (\[eqholonomy\]) induces an infinite number of differential equations involving the functions $a_{ij}, \, b_{ij}$ and their derivatives. Each one of these differential equations takes the form $$\begin{aligned} a_{ij}^{\prime}(t) & = & -2\pi i \left[ a_{ij}(t) + \sum e^{4\pi it} a_{p_1q_1}(t) a_{p_2q_2} (t) b_{p_3q_3}(t) \right] \\ b_{ij}^{\prime}(t) & = & -2\pi i \left[ b_{ij}(t) + \sum e^{4\pi it} a_{p_1q_1}(t) b_{p_2q_2} (t) b_{p_3q_3}(t) \right]\end{aligned}$$ where $p_1 + p_2 +p_3 =i$ and $q_1 + q_2 + q_3 = j$. In particular, the terms in the sum on the right-hand side above involve only coefficients of monomials $x_0^p y_0^q$ of degree less than $i + j$ and such that $p \leq i$ and $q \leq j$.
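As a quick sanity check, the system (\[eqholonomy\]) can be integrated numerically. The sketch below (initial values, step count and tolerances are our own choices, not taken from the text) also illustrates the behavior of the product $xy$ along solutions, which is made precise in Lemma \[preservingxy\] below.

```python
import cmath

# Numerical sanity check of the system (eqholonomy); initial values and
# step count below are our own choices, not taken from the text.

def rhs(t, w):
    x, y = w
    e = cmath.exp(4j * cmath.pi * t)
    return (-2j * cmath.pi * x * (1 + e * x * y),
            -2j * cmath.pi * y * (1 - e * x * y))

def integrate(w0, T, n=4000):
    """Classical RK4 integration of rhs from t = 0 to t = T."""
    h, w, t = T / n, list(w0), 0.0
    for _ in range(n):
        k1 = rhs(t, w)
        k2 = rhs(t + h / 2, [w[i] + h / 2 * k1[i] for i in range(2)])
        k3 = rhs(t + h / 2, [w[i] + h / 2 * k2[i] for i in range(2)])
        k4 = rhs(t + h, [w[i] + h * k3[i] for i in range(2)])
        w = [w[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
        t += h
    return w

x0, y0 = 0.1, 0.05 + 0.02j
xq, yq = integrate((x0, y0), 0.25)  # (xy)(1/4) = x0*y0*e^{-pi*i} = -x0*y0
x1, y1 = integrate((x0, y0), 1.0)   # time-one map: the holonomy h itself
print(abs(xq * yq + x0 * y0), abs(x1 * y1 - x0 * y0))  # both negligible
```

The run is consistent with $(xy)(t) = x_0y_0e^{-4\pi i t}$, so that $xy$ is indeed preserved by the time-one map.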
Computing this holonomy map becomes much easier with the following lemma: \[preservingxy\] The holonomy map $h$ preserves the function $(x,y) \mapsto xy$. To check that the level sets of $(x,y) \mapsto xy$ are preserved by $h$, consider the derivative of the product $x(t) y(t)$ with respect to $t$. This gives us $$\begin{aligned} \frac{d}{dt} (xy) &= \frac{dx}{dt} y + x \frac{dy}{dt} \\ &= - \left[ 2\pi i x(1 + e^{4\pi it}xy) \right] y - x \left[ 2\pi i y(1 - e^{4\pi it}xy) \right] \\ &= -4\pi i xy \, .\end{aligned}$$ Thus, integrating this linear differential equation for the product $xy$, we obtain $$(xy)(t) = x_0y_0 e^{-4 \pi i t} \, .$$ Since the holonomy map corresponds to the time-one map of the system of differential equations (\[eqholonomy\]) and since $e^{-4 \pi i t} = 1$ for all $t \in {\mathbb{Z}}$, it follows that the orbits of $h$ are contained in the level sets of $(x,y) \mapsto xy$ as desired. Lemma \[preservingxy\] implies that it suffices to determine the first coordinate of $h$. Returning to the preceding non-autonomous system of differential equations, a simple induction argument on the value of $i+j$ shows that $h$ has the form $$\label{eqexpressionH} h(x,y) = (x(1 + xy f(xy)), y(1 + xy f(xy))^{-1}) \, ,$$ where $f$ represents a holomorphic function of one complex variable such that $f(0) = -2\pi i$ (the expression for the second coordinate of $h$ is obtained from the first coordinate by means of Lemma \[preservingxy\]). The resulting diffeomorphism $h$ is clearly non-periodic but does have invariant sets given by “circles”. Besides, on some of these invariant “circles” the dynamics is conjugate to an irrational rotation, cf. Example 2. [**Example 6**]{}: [Complement to Theorem B]{}.
Let ${\mathcal{F}}$ denote the foliation associated to the vector field $$X = x(1 + x^2yz^3) \frac{\partial }{\partial x} + y(1 - x^2yz^3) \frac{\partial }{\partial y} - z \frac{\partial }{\partial z} \, .$$ Again the $z$-axis corresponds to one of the separatrices of ${\mathcal{F}}$. Taking $z = e^{2\pi it}$, it follows that the holonomy map $h$ associated to ${\mathcal{F}}$, with respect to the $z$-axis, is given by the time-one map associated to the vector field $$\label{eqholonomy2} \begin{cases} \frac{dx}{dt} & = \frac{dx}{dz} \frac{dz}{dt} = -2\pi i x(1 + e^{6\pi it}x^2y)\\ \frac{dy}{dt} & = \frac{dy}{dz} \frac{dz}{dt} = -2\pi i y(1 - e^{6\pi it}x^2y) \end{cases} \, .$$ The same argument employed in Lemma \[preservingxy\] shows that, again, the holonomy map $h$ in question preserves the level sets of the function $(x,y) \mapsto xy$. To solve the corresponding system of equations, we then consider again the series expansion of $(x(t) ,y(t))$ in terms of the initial condition. Let then $x(t) = \sum a_{ij} (t) x_0^i y_0^j$ and $y(t) = \sum b_{ij} (t) x_0^i y_0^j$, where $a_{10} (0) = b_{01} (0) =1$ and $a_{ij} (0) =b_{ij} (0) =0$ in the remaining cases. It can immediately be checked that the functions $a_{ij}, b_{ij}$ vanish identically for $2 \leq i+j \leq 3$. As to the monomials of degree $4$, it can similarly be checked that they all vanish identically except $a_{31} (t)$ and $b_{22} (t)$. In fact, the latter functions satisfy $$\begin{cases} a_{31}^{\prime} (t)& = -2\pi i [ a_{31} (t) + e^{6\pi i t} a_{10}^3 (t) b_{01} (t)] \\ b_{22}^{\prime} (t)& = -2\pi i [ b_{22} (t) - e^{6\pi i t} a_{10}^2 (t) b_{01}^2 (t)] \end{cases} \, ,$$ ensuring that $a_{31} (t) = -2\pi i t e^{-2\pi it}$ whereas $b_{22} (t) = 2\pi i t e^{-2\pi it}$. In particular, $a_{31} (1) = -2\pi i$ and $b_{22} (1) = 2\pi i$.
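The computation of $a_{31}$ and $b_{22}$ can be double-checked symbolically. The sketch below (our own verification, not part of the original argument) confirms that the stated functions solve the differential equations above, where $a_{10}(t) = b_{01}(t) = e^{-2\pi i t}$ solve the linear part of the system and the oscillatory factor $e^{6\pi i t}$ comes from substituting $z = e^{2\pi i t}$ into $z^3$.

```python
import sympy as sp

# Symbolic verification (sympy) of the degree-4 coefficients: a10, b01 solve
# the linear part, while a31, b22 are the claimed solutions of the two ODEs.

t = sp.symbols('t')
pi, I = sp.pi, sp.I

a10 = sp.exp(-2 * pi * I * t)   # solves a10' = -2*pi*I*a10 with a10(0) = 1
b01 = sp.exp(-2 * pi * I * t)
a31 = -2 * pi * I * t * sp.exp(-2 * pi * I * t)
b22 = 2 * pi * I * t * sp.exp(-2 * pi * I * t)

# residuals of the two differential equations (should vanish identically)
res_a = sp.simplify(sp.diff(a31, t) + 2 * pi * I * (a31 + sp.exp(6 * pi * I * t) * a10**3 * b01))
res_b = sp.simplify(sp.diff(b22, t) + 2 * pi * I * (b22 - sp.exp(6 * pi * I * t) * a10**2 * b01**2))
print(res_a, res_b)                    # 0 0
print(a31.subs(t, 1), b22.subs(t, 1))  # -2*pi*I and 2*pi*I
```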
By using induction on $i+j$ (and keeping in mind that $h$ preserves the function $(x,y) \mapsto xy$), it can be shown that $h$ takes on the form $$\label{eqexpressionH2} h(x,y) = (x(1 + x^2y f(x^2y)), y(1 + x^2y f(x^2y))^{-1}) \; \; \, {\rm with} \; \; \, f(0) = -2\pi i \, .$$ It follows from the discussion in Example 2 that this diffeomorphism possesses finite orbits whereas it is clearly non-periodic. This finishes the proof of the complement to Theorem B. [**Example 7**]{}: A dicritical holonomy map. Let us finish this section with an example of a foliation whose holonomy map has a peculiar property related to “dicritical vector fields”. As will follow from our discussion based on a result due to Abate [@abate], the mentioned holonomy map will possess infinite orbits and, in fact, the basin of attraction of the origin will have non-empty interior. We start with the following vector field $$X = x(1 + xyz^2) \frac{\partial }{\partial x} + y(1 + xyz^2) \frac{\partial }{\partial y} - z \frac{\partial }{\partial z} \, .$$ Compared to Example 5, the reader will note the change of sign in the second coordinate. The system leading to the holonomy map associated to the $z$-axis becomes $$\label{eqholonomy3} \begin{cases} \frac{dx}{dt} & = \frac{dx}{dz} \frac{dz}{dt} = -2\pi i x(1 + e^{4\pi it}xy)\\ \frac{dy}{dt} & = \frac{dy}{dz} \frac{dz}{dt} = -2\pi i y(1 + e^{4\pi it}xy) \end{cases} \, .$$ Considering the series expansions $x(t) = \sum a_{ij} (t) x_0^i y_0^j$ and $y(t) = \sum b_{ij} (t) x_0^i y_0^j$, where $a_{10} (0) = b_{01} (0)=1$, it can be shown that the terms $a_{ij}, \, b_{ij}$ with $i+j =2$ vanish identically.
As to $i+j=3$, the only non-identically zero functions are $a_{21}$ and $b_{12}$ which, in turn, satisfy the equations $$\begin{cases} a_{21}^{\prime} (t) & = -2\pi i [ a_{21} (t) + e^{4\pi i t} a_{10}^2 (t) b_{01} (t)] \\ b_{12}^{\prime} (t) & = -2\pi i [ b_{12} (t) + e^{4\pi i t} a_{10} (t) b_{01}^2 (t)] \end{cases} \, ,$$ yielding $a_{21} (t)= b_{12} (t) = -2\pi i t e^{-2\pi it}$. In particular, $a_{21} (1) = b_{12} (1) = -2\pi i$. Therefore $$h (x,y) = (x,y) + xy (-2\pi i x, -2\pi i y) + \cdots \,$$ where the dots stand for higher order terms. The fact that the first non-zero homogeneous component of $h(x,y)-(x,y)$ is a multiple of $(x,y)$ means that $h$ is [*dicritical*]{} in the terminology of [@abate]. This fact is equivalent to saying that the infinitesimal generator of $h$ is a formal [*dicritical*]{} vector field, namely its first non-zero homogeneous component is a multiple of the radial vector field, cf. Section 4. In [@abate], it is proved that these vector fields possess uncountably many “parabolic domains” implying, in particular, that the basin of attraction of the origin, with respect to $h$, has non-empty interior. Solvable and Pseudo-solvable subgroups of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ ======================================================================================== As already mentioned, this section owes a great deal to the paper [@ghysBSBM] by E. Ghys and deals with his notion of “pseudo-solvable groups”. As far as Theorem C is concerned, the purpose of this section is to show that a “pseudo-solvable” subgroup of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ is, indeed, solvable. In dimension $1$ the corresponding result is established in [@ghysBSBM].
The argument presented in this section roughly follows Ghys’s strategy in the one-dimensional case although it becomes far more involved due to the possible existence of non-constant first integrals and to the existence of rank $2$ abelian groups: these phenomena have no one-dimensional analogue. In fact, material involving formal diffeomorphisms, vector fields and solvable groups is widely developed in dimension $1$, cf. for example [@nakai], [@russians] and [@cerveaumoussu], but not so much in higher dimensions, as basic results such as Lemmas \[commuting1\] and \[commuting2\] are hardly found in the literature. In the course of the discussion we shall also provide detailed information on the algebraic structure of solvable subgroups of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ and the corresponding material may have interest beyond the use made of it in this paper. Let us first recall the definition of pseudo-solvable groups. Suppose that $G$ is a subgroup of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ generated by a finite set $S \subset {{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$. To the generating set $S$ we associate a sequence of sets $S(j) \subseteq G$ as follows: $S(0) =S$ and $S(j+1)$ is the set whose elements are the commutators written under the form $[F_1^{\pm 1} ,F_2^{\pm 1}]$ where $F_1 \in S(j)$ and $F_2 \in S(j) \cup S(j-1)$ ($F_2 \in S(0)$ if $j=0$). The group $G$ is said to be pseudo-solvable if, for some generating set $S$ as above, the sequence $S(j)$ becomes reduced to the identity for $j$ large enough. As mentioned, this section is wholly devoted to proving the following: \[commuting9\] A pseudo-solvable subgroup $G$ of ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ is necessarily solvable. To begin the approach to Proposition \[commuting9\], let ${{\mathbb{C} [[x,y]]}}$ denote the space of formal series in the variables $x,y$. Similarly ${{\mathbb{C} ((x,y))}}$ will stand for the field of fractions (or field of quotients) of ${{\mathbb{C} [[x,y]]}}$.
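The definition of the sets $S(j)$ can be illustrated in a toy situation far removed from ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$: for the integer Heisenberg group of unipotent matrices (our own choice of example, purely illustrative), the sequence $S(j)$ terminates after two steps, as expected for a nilpotent, hence solvable and pseudo-solvable, group.

```python
import sympy as sp

# Toy illustration (matrices, not diffeomorphisms -- our choice): the sets
# S(j) for the integer Heisenberg group generated by two elementary matrices.

def commutator(F1, F2):
    return F1 * F2 * F1.inv() * F2.inv()

A = sp.Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
B = sp.Matrix([[1, 0, 0], [0, 1, 1], [0, 0, 1]])
I3 = sp.eye(3)

S = [[A, B]]  # S(0) = generating set
while S[-1]:
    j = len(S) - 1
    pool = S[j] + (S[j - 1] if j >= 1 else [])  # F2 ranges over S(j) u S(j-1)
    new = []
    for F1 in S[j]:
        for F2 in pool:
            for e1 in (1, -1):
                for e2 in (1, -1):
                    C = commutator(F1**e1, F2**e2)
                    if C != I3 and all(C != D for D in new):
                        new.append(C)
    S.append(new)

lengths = [len(s) for s in S]
print(lengths)  # [2, 2, 0]: S(1) consists of central elements, S(2) is trivial
```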
Let ${{\widehat{\mathfrak{X}}}}$ denote the set of formal vector fields at $({\mathbb{C}}^2,0)$. This means that an element (formal vector field) in ${{\widehat{\mathfrak{X}}}}$ has the form $a(x,y) \partial /\partial x + b(x,y) \partial /\partial y$ where $a, \, b \in {{\mathbb{C} [[x,y]]}}$. The space of formal vector fields whose first jet at the origin vanishes is going to be denoted by ${{\widehat{\mathfrak{X}}_2}}$. Formal vector fields as above act as derivations on ${{\mathbb{C} [[x,y]]}}$ by the formula $X_{\ast} f = df.X \in {{\mathbb{C} [[x,y]]}}$, where $f \in {{\mathbb{C} [[x,y]]}}$ and $X \in {{\widehat{\mathfrak{X}}}}$. This action can naturally be iterated so that $(X)^k_{\ast} f$ is inductively defined by $X_{\ast} [ (X)^{k-1}_{\ast} f]$ for $k \in {\mathbb{N}}$. By definition, we also set $(X)^0_{\ast} f =f \in {{\mathbb{C} [[x,y]]}}$. Next, let $t \in {\mathbb{C}}$ and $X \in {{\widehat{\mathfrak{X}}}}$ be fixed. The [*exponential of $X$ at time-$t$*]{}, $\exp (tX)$, can be defined as the operator from ${{\mathbb{C} [[x,y]]}}$ to itself given by $$\exp (tX) (h) = \sum_{j=0}^{\infty} \frac{t^j}{j!} (X)^j_{\ast} h \, . \label{betterasformula}$$ Naturally $\exp (0 \cdot X)$ is the identity operator and $\exp (t_1 X) \circ \exp (t_2X) = \exp ((t_1+t_2)X)$. Recall that the order of a function (or vector field) at the origin is the degree of its first non-zero homogeneous component. Suppose then that $X \in {{\widehat{\mathfrak{X}}_2}}$ so that $X =a(x,y) \partial /\partial x + b(x,y) \partial /\partial y$ where the orders of both $a, \, b$ at $(0,0) \in {\mathbb{C}}^2$ are at least $2$. It then follows that the order of $X_{\ast} h$ is strictly greater than the order of $h$ itself.
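Formula (\[betterasformula\]) can be evaluated order by order. A minimal symbolic sketch follows (the vector field $X = x^2\,\partial/\partial x + y^2\,\partial/\partial y$ is our own choice, picked because its time-one flow is known in closed form):

```python
import sympy as sp

x, y = sp.symbols('x y')

def X_star(f):
    # X = x^2 d/dx + y^2 d/dy acting as a derivation: X_* f = df.X
    return x**2 * sp.diff(f, x) + y**2 * sp.diff(f, y)

def exp_X(f, N):
    """Truncation of exp(X)(f) = sum_j (X)^j_* f / j!  keeping j < N."""
    total, g = sp.Integer(0), f
    for j in range(N):
        total += g / sp.factorial(j)
        g = X_star(g)
    return sp.expand(total)

# For this X the time-one flow of x' = x^2 is x -> x/(1-x), so the truncated
# exponential of the coordinate x must agree with the Taylor series of x/(1-x).
approx = exp_X(x, 8)                              # x + x^2 + ... + x^8
exact = sp.series(x / (1 - x), x, 0, 9).removeO() # the same polynomial
print(sp.expand(approx - exact))  # 0
```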
In particular, for $h=x$, we conclude that $$\exp (tX) (x) = x + t.a(x,y) + \cdots \; \; \, {\rm and} \; \; \, \exp (tX) (y) = y + t.b(x,y) + \cdots \label{previousformula}$$ where the dots stand for terms whose degrees in $x,y$ are strictly greater than the order of $a$ (resp. $b$) at the origin. Therefore, for every $X \in {{\widehat{\mathfrak{X}}_2}}$ and every $t \in {\mathbb{C}}$, the pair of formal series $(\exp (tX)(x), \exp (tX) (y))$ can be viewed as an element of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$, namely the group of formal diffeomorphisms of $({\mathbb{C}}^2,0)$ that are tangent to the identity at the origin. If the vector field $X$ happens to be holomorphic, as opposed to formal, then $(\exp (tX)(x), \exp (tX) (y))$ is an actual diffeomorphism tangent to the identity and coinciding with the diffeomorphism induced by the local flow of $X$ at time $t$. Next, by letting ${\rm Exp} \, (X) = (\exp (X)(x), \exp (X) (y))$, and more generally, ${\rm Exp} \, (tX) = (\exp (tX)(x), \exp (tX) (y))$, the following well-known lemma holds: \[correspondence\] The map ${\rm Exp}$ settles a bijection between ${{\widehat{\mathfrak{X}}_2}}$ and ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$. In the sequel $p_n (x,y), \, q_n (x,y) , \, a_n (x,y), \, b_n (x,y)$ denote homogeneous polynomials of degree $n$ in the variables $x,y$. Let $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ be given by $F(x,y) = (x + \sum_{n=2}^{\infty} p_n (x,y) , y + \sum_{n=2}^{\infty} q_n (x,y))$. Similarly consider a vector field $X \in {{\widehat{\mathfrak{X}}_2}}$ given as $$X = \sum_{n=2}^{\infty} \left[ a_n (x,y) \frac{\partial}{\partial x} + b_n (x,y) \frac{\partial}{\partial y} \right] \, .$$ The equation ${\rm Exp}\, (X) =F$ amounts to $p_{m+1} = a_{m+1} + R_{m+1} (x,y)$ and $q_{m+1} = b_{m+1} + S_{m+1} (x,y)$ where $R_{m+1} (x,y)$ (resp.
$S_{m+1} (x,y)$) stands for the homogeneous component of degree $m+1$ of the formal series $$\sum_{j=2}^m \frac{1}{j!} (Z_m)^j_{\ast} (x)$$ (resp. of $\sum_{j=2}^m (Z_m)^j_{\ast} (y) / j!$), where $Z_m = \sum_{n=2}^{m} [ a_n (x,y) \partial /\partial x + b_n (x,y) \partial /\partial y ]$. These equations show that, given $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$, there is a unique $X \in {{\widehat{\mathfrak{X}}_2}}$ such that ${\rm Exp} \, (X) =F$. The lemma is proved. For $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$, recall that the formal vector field $X$ satisfying ${\rm Exp} \, (X) =F$ is called the [*infinitesimal generator of $F$*]{}. The notation $X = \log \, (F)$ may also be used to state that $X$ is the infinitesimal generator of $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$. Note that, in general, the series of $X$ is not convergent even when $F$ is an actual holomorphic diffeomorphism. Now, we have: \[commuting1\] Two elements $F_1,\, F_2$ in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ commute if and only if so do their infinitesimal generators $X_1, \, X_2$. The non-immediate implication consists of showing that $[X_1, X_2] =0$ provided that $F_1$ and $F_2$ commute. For this, denote by $Z_+$ (resp. $Z_-$) the infinitesimal generator of $F_1 \circ F_2$ (resp. $F_1^{-1} \circ F_2^{-1}$). The diffeomorphisms $F_1, F_2$ commute if and only if $F_1 \circ F_2 \circ F_1^{-1} \circ F_2^{-1} = {\rm Exp} \, (Z_+) {\rm Exp} \, (Z_-) = {\rm id}$. Denoting by $Z$ the infinitesimal generator of $F_1 \circ F_2 \circ F_1^{-1} \circ F_2^{-1}$, we have $$\begin{aligned} Z & = & \log \, ( {\rm Exp} \, (Z_+) {\rm Exp} \, (Z_-) ) = \\ & = & Z_+ + Z_- + \frac{1}{2} [Z_+, Z_-] + \frac{1}{12} [Z_+ ,[Z_+,Z_-]] - \frac{1}{12} [Z_- ,[Z_+,Z_-]] + {\rm h.o.t.}\end{aligned}$$ as follows from the Campbell–Hausdorff formula, see [@who??].
In turn, $$Z_+ = \log \, ( F_1 \circ F_2 ) = \log \, ( {\rm Exp} \, (X_1) {\rm Exp} \, (X_2)) = X_1 + X_2 + \frac{1}{2} [X_1, X_2] + \cdots \, .$$ Analogously $$Z_- = -X_1 -X_2 + \frac{1}{2} [X_1, X_2] + \cdots \, .$$ Therefore $$\begin{aligned} Z & = & X_1 + X_2 + \frac{1}{2} [X_1, X_2] + \cdots + ( -X_1 -X_2 + \frac{1}{2} [X_1, X_2] + \cdots ) + \nonumber \\ & & + \frac{1}{2} \left[X_1 + X_2 + \frac{1}{2} [X_1, X_2] + \cdots , -X_1 -X_2 + \frac{1}{2} [X_1, X_2] + \cdots \right] + \cdots \nonumber \\ & = & [X_1,X_2] + \frac{1}{8} [[X_1,X_2], [X_2,X_1]] + \cdots \, . \label{finalcommutators}\end{aligned}$$ Hence, if $X_1,X_2$ do not commute, then $[X_1, X_2] = \sum_{j = k}^{\infty} Y_j$, where each $Y_j$ is a homogeneous vector field of degree $j$ and $k$ is the smallest strictly positive integer for which $Y_k$ is not identically [*zero*]{}. The orders of the higher iterated commutators appearing in Equation (\[finalcommutators\]) are strictly greater than $k$, since the orders of $X_1,X_2$ at the origin are at least $2$. In other words, we have $Z = Y_k + {\rm h.o.t.}$. Since $F_1 \circ F_2 \circ F_1^{-1} \circ F_2^{-1} = {\rm Exp} \, (Z)$, it follows that $F_1, F_2$ do not commute. The lemma is proved. \[obs1.1\] [Consider elements $F_1, F_2 \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ whose orders of contact with the identity at $(0,0)$ are respectively $r, s$. The preceding argument shows, in particular, that the order of contact with the identity at $(0,0)$ of $F_1\circ F_2 \circ F_1^{-1} \circ F_2^{-1}$ is at least $r+s-1$. Since $F_1,F_2$ are tangent to the identity, we have that $\min \{ r,s\} \geq 2$ so that the order of contact with the identity of $F_1\circ F_2 \circ F_1^{-1} \circ F_2^{-1}$ is strictly greater than $\max \{ r,s \}$]{}.
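The order estimate in Remark \[obs1.1\] mirrors, at the level of infinitesimal generators, the elementary fact that the bracket of vector fields of orders $r$ and $s$ at the origin has order at least $r+s-1$. A quick symbolic check in a sample case (the vector fields are our own choices; we use the convention $[U,V] = UV - VU$):

```python
import sympy as sp

x, y = sp.symbols('x y')

def bracket(V, W):
    """Lie bracket [V, W] = VW - WV of planar vector fields as coefficient pairs."""
    p, q = V; r, s = W
    return (sp.expand(p*sp.diff(r, x) + q*sp.diff(r, y) - r*sp.diff(p, x) - s*sp.diff(p, y)),
            sp.expand(p*sp.diff(s, x) + q*sp.diff(s, y) - r*sp.diff(q, x) - s*sp.diff(q, y)))

X1 = (x**2, 0)      # order 2 at the origin
X2 = (0, x*y**2)    # order 3 at the origin
B = bracket(X1, X2)
print(B)  # (0, x**2*y**2): a single term of order 4 = 2 + 3 - 1
```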
Armed with Lemma \[commuting1\], it is now easy to describe the set consisting of those elements in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ commuting with $F$, namely the centralizer of an element $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$. First, given formal vector fields $X, Y$, we shall say that they are [*not everywhere parallel*]{} to mean that $X$ is not a multiple of $Y$ by an element in ${{\mathbb{C} ((x,y))}}$. Also, given a formal vector field $X$, there may or may not exist another vector field $Y$ not everywhere parallel to $X$ and commuting with $X$. When this vector field $Y$ exists, it is never unique since every linear combination of $X$ and $Y$ has similar properties. Furthermore, if $X$ happens to admit some non-constant first integral $h$, then $hY$ will also commute with $X$. Although $Y$ and $hY$ belong to ${{\widehat{\mathfrak{X}}_2}}$, the possibility of having $h$ in ${{\mathbb{C} ((x,y))}}\setminus {{\mathbb{C} [[x,y]]}}$ cannot be ruled out since $Y$ need not have isolated singularities. Also, preparing the way for Lemma \[commuting2\] below, note that every element of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ possessing an infinitesimal generator $Z$ of the form $Z=aX + bY$, where $X, Y$ are as above and $a,\, b$ are first integrals of $X$, automatically belongs to the centralizer of $X$, cf. Lemma \[commuting1\]. Clearly the vector field $Y$ (commuting with $X$ and not everywhere parallel to $X$) is never uniquely defined but a representative can be chosen. Once this vector field $Y$ is chosen, we also have the following: \[lastversionLemma1\] Let $X, Y$ be as above and consider the set of elements $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ whose infinitesimal generators have the form $Z=aX + bY$, where $a,\, b$ are first integrals of $X$. Then the set of these elements $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ forms a subgroup of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$.
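Before turning to the proof, the key computational fact can be sanity-checked in coordinates (a sketch under stated assumptions: the commuting, non-everywhere-parallel pair $X = x^2\,\partial/\partial x$, $Y = y^2\,\partial/\partial y$ and the first integrals below are our own choices, and we use the convention $[U,V] = UV - VU$, the overall sign being immaterial here): for first integrals $a,b,c,d$ of $X$, the bracket of $aX+bY$ and $cX+dY$ again has coefficients that are first integrals of $X$.

```python
import sympy as sp

x, y = sp.symbols('x y')

def bracket(V, W):
    """Lie bracket [V, W] = VW - WV of planar vector fields as coefficient pairs."""
    p, q = V; r, s = W
    return (sp.expand(p*sp.diff(r, x) + q*sp.diff(r, y) - r*sp.diff(p, x) - s*sp.diff(p, y)),
            sp.expand(p*sp.diff(s, x) + q*sp.diff(s, y) - r*sp.diff(q, x) - s*sp.diff(q, y)))

X = (x**2, 0)
Y = (0, y**2)                       # commuting with X, not everywhere parallel
a, b, c, d = y, y**2, 1 + y, y**3   # sample first integrals of X (functions of y)

Z1 = (a * x**2, b * y**2)           # a*X + b*Y
Z2 = (c * x**2, d * y**2)           # c*X + d*Y
B = bracket(Z1, Z2)

# the coefficients of X and Y in [Z1, Z2] are again functions of y alone,
# i.e. first integrals of X; here dW/dY means y^2 * dW/dy
a_tilde = b * y**2 * sp.diff(c, y) - d * y**2 * sp.diff(a, y)
b_tilde = b * y**2 * sp.diff(d, y) - d * y**2 * sp.diff(b, y)
print(sp.expand(B[0] - a_tilde * x**2), sp.expand(B[1] - b_tilde * y**2))  # 0 0
```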
In what follows it is understood that $a,b,c,d$ are always first integrals of $X$. To show that the set of elements in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ having infinitesimal generators of the form $aX +bY$ is a group, consider vector fields $Z_1 = aX +bY$ and $Z_2 = cX + dY$ and set $F_1 = {\rm Exp}\, (Z_1)$, $F_2 = {\rm Exp}\, (Z_2)$. According to the Campbell–Hausdorff formula, the infinitesimal generator $Z$ of $F_1 \circ F_2$ is given by $$Z = Z_1 + Z_2 + \frac{1}{2} [Z_1,Z_2] + \frac{1}{12} \left( [Z_1, [Z_1,Z_2]] - [Z_2, [Z_1,Z_2]] \right) + \cdots \, .$$ However, since $a,b,c,d$ are first integrals of $X$ and $[X,Y]=0$, note that $$[Z_1,Z_2] = \left( b\, \frac{\partial c}{\partial Y} - d\, \frac{\partial a}{\partial Y} \right) X + \left( b\, \frac{\partial d}{\partial Y} - d\, \frac{\partial b}{\partial Y} \right) Y \, .$$ Since $X, Y$ commute, Schwarz theorem implies that the derivative with respect to $Y$ of a first integral for $X$ still is a first integral for $X$; moreover products of first integrals are again first integrals. Thus $[Z_1,Z_2]$ has the form $\tilde{a} X + \tilde{b} Y$, where $\tilde{a}, \tilde{b}$ are first integrals for $X$. Now an induction argument immediately ensures that the same conclusion is valid for the higher iterated commutators appearing in the Campbell–Hausdorff formula. Therefore the infinitesimal generator of $F_1 \circ F_2$ still has the general form $AX + BY$ where $A,B$ are first integrals of $X$. The lemma is proved. The next lemma characterizes the centralizer of $X$ and shows, in particular, that the group described in Lemma \[lastversionLemma1\] does not depend on the choice of the representative vector field $Y$. \[commuting2\] Let $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ be given and denote by $X$ its infinitesimal generator. Then the centralizer of $F$ in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ coincides with one of the following groups. [Case 1]{} : Suppose that every vector field $Y \in {{\widehat{\mathfrak{X}}_2}}$ commuting with $X$ is everywhere parallel to $X$.
Then the centralizer of $F$ consists of the subgroup of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ whose elements have infinitesimal generators of the form $hX$, where $h \in {{\mathbb{C} ((x,y))}}$ is a formal first integral of $X$. In particular, if $X$ admits only constants as first integrals, then the centralizer of $F$ is reduced to the exponential of $X$. [Case 2]{} : Suppose there is $Y \in {{\widehat{\mathfrak{X}}_2}}$ which is not everywhere parallel to $X$ and still commutes with $X$. Then the centralizer of $F$ coincides with the subgroup of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ consisting of those elements $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ whose infinitesimal generators have the form $aX +bY$, where $a,b \in {{\mathbb{C} ((x,y))}}$ are (formal) first integrals of $X$. Suppose that $H$ is an element of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ commuting with $F$. Denoting by $Z$ the infinitesimal generator of $H$, it follows from Lemma \[commuting1\] that $[X,Z]=0$. Conversely, the $1$-parameter group generated by any such $Z$ is automatically contained in the centralizer of $F$. Next, suppose that the assumption in Case 1 is verified. Then the quotient $h$ between $Z$ and $X$ can be defined as an element of ${{\mathbb{C} ((x,y))}}$ satisfying $Z = hX$. Therefore the condition $[X,Z]=0$ becomes $dh.X=0$, i.e. $h$ is a first integral for $X$. Consider now the existence of $Y$, not everywhere parallel to $X$, verifying $[X,Y]=0$. It is clear that the elements of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ described in Case 2 belong to the centralizer of $F$. Thus only the converse needs to be proved. Since $H$ commutes with $F$, Lemma \[commuting1\] yields again $[X,Z]=0$. Since $Y$ is not a multiple of $X$, there are functions $a(x,y) , \, b(x,y) \in {{\mathbb{C} ((x,y))}}$ such that $Z = a X + b Y$.
Now the equation $[X, Z] = 0$ yields $$( \partial a /\partial X) .X + ( \partial b /\partial X) .Y =0 \, .$$ Thus the fact that $Y$ is not a multiple of $X$ ensures that $( \partial a /\partial X) = ( \partial b /\partial X) =0$. In other words, both $a, \, b$ are first integrals of $X$. The lemma follows. Concerning the situation described in Case 2 of Lemma \[commuting2\], it is already known that $Y$ is not uniquely defined. Nonetheless the characterization of the subgroup of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ mentioned in Lemma \[lastversionLemma1\] as the centralizer of $F ={\rm Exp}\, (X)$ implies that the group in question does not depend on the choice of the vector field $Y$ commuting with $X$ and not everywhere parallel to $X$. A consequence of Lemma \[commuting2\] is as follows. \[commuting3\] An abelian group $G \subset {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ is either contained in the group generated by the exponentials of two commuting vector fields $X, Y$ that are not everywhere parallel or it is contained in the group constituted by the exponentials of vector fields $h.X$ (where, as mentioned, $h$ is a first integral of $X$). If all elements in $G$ have infinitesimal generators of the form $hX$, where $h$ is a first integral for $X$, then the statement is clear. Thus suppose there are elements $F_1, F_2$ in $G$ whose respective infinitesimal generators $X, Y$ are not everywhere parallel. Let $\varphi$ be an element of $G$ whose infinitesimal generator $Z$ is not everywhere parallel to $X$. Clearly we have $Z = f_1(x,y) X + f_2(x,y) Y$ where $f_1,f_2 \in {{\mathbb{C} ((x,y))}}$. Since $\varphi$ must commute with $F_1$, it follows that $[X,Z]=0$ which, in turn, implies that both $f_1, \, f_2$ are first integrals for $X$ (cf. proof of Lemma \[commuting2\]). Analogously $\varphi$ also commutes with $F_2$ so that $[Y,Z]=0$ and hence $f_1, f_2$ are first integrals for $Y$ as well.
Since $X, \, Y$ are not everywhere parallel, it follows that both $f_1, \, f_2$ must be constant. The lemma is proved. \[lastversionRem1\] [Given two commuting non-everywhere parallel vector fields $X, Y$, the subgroup of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ generated by the exponentials ${\rm Exp}\, (tX), \, {\rm Exp}\, (tY)$, $t\in {\mathbb{C}}$, of $X, Y$ is going to be referred to as the [*linear span of $X, \, Y$*]{}. A consequence of the preceding proof is that every vector field $Z$ commuting with both $X, Y$ must be contained in the linear span of $X,Y$ provided that $X,Y$ are non-everywhere parallel commuting vector fields as above. In particular, if $F$ is an element of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ commuting with both ${\rm Exp}\, (X), \, {\rm Exp}\, (Y)$ then the infinitesimal generator $Z$ of $F$ has the form $c_1X + c_2Y$ where $c_1,c_2$ are [*constants*]{}.]{} Here is another easy consequence of Lemma \[commuting2\]. \[lastversionLemma2\] Suppose that $h$ is a non-constant first integral of $X$ and let $F_1 = {\rm Exp} \, (X)$ and $F_2 = {\rm Exp} \, (hX)$ be elements in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ (in particular the first jet of $X$ at the origin vanishes). The intersection of the centralizers of $F_1$ and $F_2$, i.e. the set of elements in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ commuting with both $F_1, F_2$ is the subgroup of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ constituted by those elements whose infinitesimal generators have the form $aX$, where $a$ is a first integral of $X$. In particular, this group is abelian. If every vector field $Y$ commuting with $X$ is everywhere parallel to $X$, then the statement follows at once from Lemma \[commuting2\], Case 1. Suppose then the existence of $Y$ not everywhere parallel to $X$ satisfying $[X,Y]=0$.
Again it follows from Lemma \[commuting2\] that the centralizer of $F_1 = {\rm Exp} \, (X)$ consists of those elements whose infinitesimal generators have the form $aX + bY$. Nonetheless, the elements in the intersection of the centralizers of $F_1, \, F_2$ must commute with $F_2$ as well. According to Lemma \[commuting1\], this happens if and only if the infinitesimal generator $aX + bY$ commutes with $hX$. However $$[hX, aX + bY] = b\left( \frac{\partial h}{\partial Y} \right) X = 0 \, .$$ Nonetheless $\partial h /\partial Y$ is not identically zero since $Y$ is not everywhere parallel to $X$. It then follows that $b$ must vanish identically. The lemma is proved. Another central ingredient in the proof of Proposition \[commuting9\] is the algebraic description of solvable subgroups of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ that will be undertaken below. As mentioned the corresponding material is interesting in itself. Recall that the [*normalizer*]{} of a group $G \subset {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ is the maximal subgroup $N_G$ of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ containing $G$ and such that $G$ is a normal subgroup of $N_G$. Similarly the [*centralizer*]{} of an abelian group $G$ is the maximal abelian subgroup of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ containing $G$ (in particular the centralizer of an element is nothing but the centralizer of the cyclic group generated by this element). \[lastversionLemma3\] Let $G \subset {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ be an abelian group contained in the linear span of two non-everywhere parallel commuting vector fields $X, Y$ (by assumption this also means that $G$ is not contained in the exponential of a single vector field). Then the normalizer $N_G$ of $G$ coincides with the group of diffeomorphisms induced by the linear span of $X, Y$. In particular, $N_G$ is abelian. 
Consider an element $F$ in $N_G$ along with its adjoint action on vector fields belonging to the linear space $E$ spanned by $X, Y$ which is naturally isomorphic to ${\mathbb{C}}^2$. This action is clearly well-defined since $G$ is not contained in the exponential of a single vector field. Next, note that the eigenvalues of the automorphism of $E$ induced by $F$ are equal to $1$ since $F$ is tangent to the identity. Then either the action induced by $F$ is the identity or it is non-diagonalizable. In the former case, $F$ clearly belongs to the linear span of $X, Y$ and there is nothing else to be proved. Thus only the non-diagonalizable case remains to be considered. Modulo a change of basis, we can assume that $F^{\ast} X =X$. Therefore $F$ preserves $X$ and, hence, it is contained in the centralizer of ${\rm Exp}\, (tX)$. According to Lemma \[commuting2\], the centralizer of ${\rm Exp}\, (tX)$ consists of elements having the form ${\rm Exp}\, (thX)$ or the form ${\rm Exp}\, (thZ)$, where $Z$ is some vector field commuting with $X$ but not everywhere parallel to $X$. In both cases $h$ is a first integral for $X$. To obtain a contradiction with the assumption that the action of $F$ on $E$ is not diagonalizable, let us first consider the action of an element $F$ of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ having the form $F={\rm Exp}\, (hZ)$ (i.e. $F={\rm Exp}\, (thZ)$ with $t=1$) on the vector field $Z$ itself. According to Hadamard’s lemma, see [@who??], we have $$F^{\ast} Z = Z + [hZ, Z] + \frac{1}{2!} [hZ, [hZ,Z]] + \frac{1}{3!} [hZ,[hZ, [hZ,Z]]] + \cdots \, .$$ By assumption $hZ$ lies in ${{\widehat{\mathfrak{X}}_2}}$ so that $F^{\ast} Z-Z$ is a vector field in ${{\widehat{\mathfrak{X}}_2}}$ as well. Besides, this vector field is clearly parallel to $Z$. Thus we have $F^{\ast} Z = Z + Z'$ where $Z' \in {{\widehat{\mathfrak{X}}_2}}$ is everywhere parallel to $Z$.
However $F^{\ast} Z$ is still contained in the vector space $E$ and, furthermore, the action of $F$ on $E$ has eigenvalues equal to $1$. Therefore we must have $F^{\ast} Z =Z$ so that $F$ preserves $Z$ contradicting the fact that the action of $F$ on $E$ is not the identity. Hence we are led to the conclusion that $F$ must have the form ${\rm Exp}\, (thX)$ in order to have a non-diagonalizable action. Still considering the action of $F = {\rm Exp}\, (hX)$ (i.e. $t$ again set equal to $1$) on $Y$, Hadamard’s lemma again yields $$F^{\ast} Y = Y + [hX, Y] + \frac{1}{2!} [hX, [hX,Y]] + \frac{1}{3!} [hX,[hX,[hX,Y]]] + \cdots \, .$$ However $[hX, Y] = (\partial h/\partial Y) X$ and $\partial h/\partial Y$ is still a first integral of $X$ thanks to the Schwarz theorem ($X,Y$ commute). In particular $[hX, [hX,Y]]$ vanishes identically and so do all higher-order commutators. It then follows that $F^{\ast} Y = Y + ( \partial h/\partial Y) X$. Besides, $\partial h/\partial Y$ must be a constant in ${\mathbb{C}}$ since $F$ preserves $E$. By the standard Jordan form, the constant in question can be chosen equal to $1$ (recall that the action of $F$ on $E$ is not diagonalizable). Summarizing the preceding discussion, we have found $X$ and $Y$ such that $$F^{\ast} X = X \; \; \, {\rm and} \; \; \, F^{\ast} Y = Y + ( \partial h/\partial Y) X = Y +X \, . \label{thetwoequations}$$ The proof of the lemma is now reduced to showing that every $F$ satisfying the preceding equations must belong to the linear span of $X, \, Y$. For this, suppose that $h$ is holomorphic so that the space of its level curves can be considered. The equation $\partial h/\partial Y =1$ implies that $Y$ can be projected in the space of level curves of $h$ and, in fact, under this projection $Y$ is mapped to the constant vector field $Z$ (equal to $1$). Thus $Y$ decomposes as a multiple of $X$ plus a “constant transverse” vector field represented by $Z$.
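The truncation of Hadamard’s series after its second term can also be verified directly: since $h$, and hence $\partial h/\partial Y$, is a first integral of $X$, we have
$$[hX, [hX,Y]] = \left[ hX, \frac{\partial h}{\partial Y}\, X \right] = \left( h\, X\!\left( \frac{\partial h}{\partial Y} \right) - \frac{\partial h}{\partial Y}\, X(h) \right) X = 0 \, ,$$
and the vanishing of this bracket forces the vanishing of all subsequent iterated brackets as well.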
Since $F$ preserves $X$, it must also define an automorphism on the space of level curves of $h$. Furthermore the condition $F^{\ast} Y = Y +X$ ensures that this induced automorphism must preserve $Z$. Since this “leaf space” has dimension $1$, we conclude that the mentioned induced automorphism must be embedded in the flow of $Z$. Hence $F$ can itself be decomposed as an automorphism preserving each level curve of $h$ composed with a transverse automorphism embedded in the flow of $Z$. Since $Y$ commutes with $X$, the leaf-preserving component of $F$ must preserve the vector field $X$ (recall that $F^{\ast} X = X$) and hence it is embedded in the flow of $X$. Summarizing, we conclude that $F$ is obtained by composing a local diffeomorphism embedded in the flow of $X$ with another local diffeomorphism embedded in the flow of $Y$. The statement is then proved. The only point in the above discussion where $h$ was required to be analytic was to make sense of its level curves. The general algebraic statement, however, also holds in the formal category. For example, by truncating $h$ at some high order, the equation $F^{\ast} Y = Y + ( \partial h/\partial Y) X = Y +X$ will still hold for terms of lower order. Thus every element $F$ in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ verifying Equation (\[thetwoequations\]) must coincide with elements in the linear span of $X, Y$ to arbitrarily large orders. From this, it is straightforward to conclude that $F$ must be contained in the linear span in question. The proof of the lemma is over. The next lemma completes the description of the normalizers of non-trivial abelian groups, cf. Lemma \[commuting3\]. \[commuting7nowlemma\] Let $G \subset {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ be a finitely generated non-trivial abelian group all of whose elements have infinitesimal generators parallel to a certain formal vector field $X$.
Then the normalizer $N_G$ of $G$ in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ satisfies the following:

- Suppose that $G$ is contained in ${\rm Exp}\, (tX)$. Then $N_G$ coincides with the centralizer of its elements. Namely, it consists of those elements of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ whose infinitesimal generators have the form $aX + bY$, with $a,b$ first integrals of $X$ and where $Y$ is a vector field commuting with $X$ and not everywhere parallel to $X$ (if $Y$ does not exist, then $N_G$ is reduced to the group formed by those elements whose infinitesimal generators have the form $aX$).

- If $G \subset {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ is not contained in the exponential of a single vector field $X$, then $N_G$ coincides with the subgroup of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ consisting of those elements whose infinitesimal generators have the form $hX$ ($h$ first integral for $X$). In particular, $N_G$ is abelian.

Suppose first that $G$ is contained in ${\rm Exp} \, (tX)$, for some $X \in {{\widehat{\mathfrak{X}}_2}}$, and let $\varphi$ be a non-trivial element of $G$. Next, let $F \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ satisfy $F \circ \varphi \circ F^{-1} = \widetilde{\varphi} \in G$. Then $F \circ \varphi \circ F^{-1} \circ \varphi^{-1} = (\widetilde{\varphi} \circ \varphi^{-1}) \in G$. Suppose now that $\varphi \neq \widetilde{\varphi}$. Since $\widetilde{\varphi}, \, \varphi$ are both embedded in ${\rm Exp} \, (tX)$ and satisfy $\widetilde{\varphi} \circ \varphi^{-1} \neq {\rm id}$, the order of contact between $\widetilde{\varphi} \circ \varphi^{-1}$ and the identity is equal to the order of contact between $\varphi$ and the identity. On the other hand, the order of contact of $F \circ \varphi \circ F^{-1} \circ \varphi^{-1} \neq {\rm id}$ with the identity must be strictly larger than the corresponding order of $\varphi$, cf. Remark \[obs1.1\].
The resulting contradiction ensures that $F$ must commute with $\varphi$. Therefore $N_G$ coincides with the centralizer of $\varphi$ and the statement results from Lemma \[commuting2\]. Suppose now that $G$ is contained in the group generated by a number of exponentials ${\rm Exp}\, (thX)$, where $h$ is a first integral for $X$, but not in the exponential of a single vector field. Since $G$ is finitely generated and abelian, it contains an element $\varphi \neq {\rm id}$ having maximal order $r$ of contact with the identity. Without loss of generality, let $X$ denote the infinitesimal generator of $\varphi$. Note that $G$ may contain other elements having order of contact with the identity equal to $r$ but not contained in ${\rm Exp}\, (tX)$ since $X$ might admit a first integral in ${{\mathbb{C} ((x,y))}}$ in which the order of the “numerator” equals the order of the “denominator”. Yet, again the fact that $G$ is finitely generated implies that only finitely many vector fields $X, h_1 X, \ldots , h_l X$ may give rise to elements in $G$ having order of contact with the identity equal to $r$. Now, recall that inner automorphisms of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ preserve the order of contact with the identity so that the normalizer of $G$ must leave invariant the set of elements of $G$ contained in one of the following exponentials: ${\rm Exp}\, (tX), \ldots , {\rm Exp}\, (th_lX)$. Therefore the natural action of $F \in N_G$ on ${{\widehat{\mathfrak{X}}_2}}$ must induce a permutation of the set $X, h_1 X, \ldots , h_l X$. Modulo passing to a finite power $F^k$ of $F$, it follows that $F^k$ preserves ${\rm Exp}\, (tX)$ and hence it must commute with $\varphi$. In other words, there is a finite power $F^k$ of $F$ that lies in the centralizer of $\varphi$. However, it follows from the description of centralizers presented in Lemma \[commuting2\] that the “$k^{\rm th}$-root” of an element in the centralizer still belongs to the centralizer.
Therefore $F$ itself belongs to the centralizer of $\varphi$. Furthermore, if $l \geq 1$, then the power $F^k$ may be chosen so that $F^k$ also preserves the vector field $h_1X$. Thus the same argument will imply that $F$ belongs to the centralizer of some element in ${\rm Exp}\, (th_1X)$. According to Lemma \[lastversionLemma2\], the intersection of these two centralizers consists of elements in ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ whose infinitesimal generators have the form $hX$ where $h$ is a first integral for $X$. The lemma is proved in this case. Finally suppose that $l=0$ and let $s$ be the “second greatest” order of an infinitesimal generator $\tilde{h}_0X$ of an element in $G$. In particular $2 \leq s < r$. By assumption, this element exists since $l=0$ and $G$ is not contained in the exponential of a single vector field. Again, $G$ being finitely generated and abelian, it follows that there are only finitely many formal vector fields $\tilde{h}_0 X, \ldots ,\tilde{h}_mX$ having order $s$ at the origin and corresponding to infinitesimal generators of elements in $G$. Therefore the action of $F$ on the set formed by these vector fields must preserve the set itself. Thus $F$ will finally belong to the centralizer of some element whose infinitesimal generator is $\tilde{h}_0X$. Lemma \[lastversionLemma2\] will then complete our proof. \[ageneraldecomposition\] [Assume we are given vector fields $X, Y \in {{\widehat{\mathfrak{X}}_2}}$ which are not everywhere parallel. Consider also a third vector field $Z \in {{\widehat{\mathfrak{X}}_2}}$. Since $X,Y$ are not everywhere parallel, there exists a unique decomposition $Z = aX +bY$ with $a, b$ belonging to ${{\mathbb{C} ((x,y))}}$.
Our purpose here is to remind the reader of the elementary fact that, whereas $a,b$ may lie in ${{\mathbb{C} ((x,y))}}\setminus {{\mathbb{C} [[x,y]]}}$, the corresponding vector fields $aX, \, bY$ have coefficients in ${{\mathbb{C} [[x,y]]}}$ and, in fact, the vector fields $aX, \, bY$ belong to ${{\widehat{\mathfrak{X}}_2}}$, as can immediately be checked by directly computing the corresponding functions $a,b$. The contents of this remark will implicitly be used in the rest of the section.]{} Let us now state a technical lemma that will repeatedly be used in the proofs of the two main results of this section, namely Propositions \[commuting8\] and \[commuting9\] below. For this, suppose that $X,Y$ are formal vector fields in ${{\widehat{\mathfrak{X}}_2}}$ that are not everywhere parallel. \[formerclaim2\] Suppose that $X, Y$ are non-everywhere parallel commuting vector fields. Consider elements $F_1, F_2 \in {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ whose infinitesimal generators are respectively given by $a_1 X + b_1 Y$ and by $a_2 X + b_2 Y$. The coefficients $a_i,b_i \in {{\mathbb{C} [[x,y]]}}$ are supposed to be first integrals for $X$ and $b_1 b_2$ is supposed not to vanish identically. Then the infinitesimal generator of $F_1 \circ F_2 \circ F_1^{-1} \circ F_2^{-1}$ is everywhere parallel to $X$ if and only if the quotient $b_1/b_2$ is constant. To begin with, note that the commutator of $a_1X + b_1 Y$ and $a_2 X + b_2 Y$ is given by $$[a_1X + b_1 Y, a_2X + b_2 Y] = \left( b_1 \frac{\partial a_2}{\partial Y} - b_2 \frac{\partial a_1}{\partial Y}\right) X + \left( b_1 \frac{\partial b_2}{\partial Y} - b_2 \frac{\partial b_1}{\partial Y}\right) Y \, .$$ Assume that the infinitesimal generator of $F_1 \circ F_2 \circ F_1^{-1} \circ F_2^{-1}$ is everywhere parallel to $X$. To conclude that the quotient $b_1/b_2$ is a constant, it suffices to check that the coefficient of $Y$ in the right-hand side of the above equation vanishes identically.
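The reduction to the $Y$-coefficient rests on a simple quotient-rule identity: wherever $b_2$ is not identically zero,
$$\frac{\partial}{\partial Y} \left( \frac{b_1}{b_2} \right) = \frac{b_2\, (\partial b_1/\partial Y) - b_1\, (\partial b_2/\partial Y)}{b_2^2} \, ,$$
so the vanishing of $b_1 (\partial b_2/\partial Y) - b_2 (\partial b_1/\partial Y)$ amounts exactly to saying that $b_1/b_2$ is a first integral for $Y$.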
Indeed, this would mean that $b_1/b_2$ is a first integral for $Y$ and hence must be constant since it is also a first integral for $X$ (and $X,Y$ are not everywhere parallel by assumption). Therefore, it remains to check that the function $b_1 (\partial b_2/\partial Y) - b_2 (\partial b_1/\partial Y)$ is identically zero. For this let us recall that, whether or not the functions $a_i,b_i$ belong to ${{\mathbb{C} ((x,y))}}\setminus {{\mathbb{C} [[x,y]]}}$, $i=1,2$, the vector fields $a_1X + b_1 Y, \, a_2X + b_2 Y$ are supposed to belong to ${{\widehat{\mathfrak{X}}_2}}$. Therefore their commutators have order strictly higher than the maximum between the orders of $a_1X + b_1 Y$ and of $a_2X + b_2 Y$. Having recalled this fact, the argument to prove that $b_1 (\partial b_2/\partial Y) - b_2 (\partial b_1/\partial Y)$ must vanish identically is as follows. Suppose for a contradiction that $b_1 (\partial b_2/\partial Y) - b_2 (\partial b_1/\partial Y)$ does not vanish identically and denote by $C_m (x,y)$ its first non-zero homogeneous component (so that $m$ is the order of the function $b_1 (\partial b_2/\partial Y) - b_2 (\partial b_1/\partial Y)$). Whereas $m$ may be negative, the vector field $[b_1 (\partial b_2/\partial Y) - b_2 (\partial b_1/\partial Y)] Y$ lies in ${{\widehat{\mathfrak{X}}_2}}$. On the other hand, as done in the proof of Lemma \[commuting1\], the Campbell-Hausdorff formula may be used to compute the infinitesimal generator of $F_1 \circ F_2 \circ F_1^{-1} \circ F_2^{-1}$. It turns out, however, that iterated commutators higher than $[a_1X + b_1 Y, a_2X + b_2 Y]$ give rise to monomials of degree strictly greater than $m$ appearing as multiplicative factors of the vector field $Y$. In fact, all contributions parallel to $Y$ arise from a commutator of the form $[\overline{b}Y, \overline{d}Y]$ where at least one of $\overline{b}Y, \, \overline{d}Y$ has order greater than or equal to the order of $C_m (x,y).Y$.
It then follows that the component $C_m (x,y).Y$ appearing in the expression for the mentioned infinitesimal generator of $F_1 \circ F_2 \circ F_1^{-1} \circ F_2^{-1}$ will not be cancelled out by a contribution arising from higher order commutators. In other words, the infinitesimal generator of $F_1 \circ F_2 \circ F_1^{-1} \circ F_2^{-1}$ has a non-zero component in $Y$ and thus is not everywhere parallel to $X$. The converse is a more direct application of the Campbell-Hausdorff formula. If $b_2 =cb_1$ for some constant $c \in {\mathbb{C}}$, then $[a_1X + b_1 Y, a_2X + b_2 Y]$ is everywhere parallel to $X$. It is then immediate to check that all higher iterated commutators are everywhere parallel to $X$ as well. The proof of the lemma is over. Building on the previous material, the next proposition provides some key information on the structure of solvable subgroups of ${{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$. \[commuting8\] Suppose that $G \subset {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ is a solvable non-abelian group. Then the following holds:

- $G$ has non-trivial center $Z(G)$.

- $G$ is metabelian, i.e. its first derived group is abelian.

- If $G$ is not abelian, then $Z (G)$ coincides with the first derived group $D^1 (G)$ of $G$. Moreover $Z (G) =D^1 (G)$ is fully contained in ${\rm Exp}\, (tX)$ for a certain $X \in {{\widehat{\mathfrak{X}}_2}}$.

Consider the derived series $D^0 G = G$, $D^1G = \langle [ G,G] \rangle$, $D^{i+1} G = \langle [D ^i G, D^i G] \rangle$ of $G$. Let $k \geq 1$ be the largest integer for which $D^{k} G$ is not trivial. Then $D^k G$ is a non-trivial abelian group which, in addition, is a normal subgroup of $D^{k-1} G$. Our first purpose is to characterize $D^k G$. [*Claim 1*]{}: $D^k G$ is contained in the exponential ${\rm Exp}\, (tX)$ of a single vector field $X$. [*Proof of Claim 1*]{}. Suppose for a contradiction that the statement is false.
Then, since $D^k G$ is an abelian group, it follows from Lemma \[commuting3\] that either $D^k G$ is contained in the group induced by the linear span of two commuting non-everywhere parallel vector fields $X,Y$ or it is as in the second item of Lemma \[commuting7nowlemma\]. In both cases, it follows from Lemma \[lastversionLemma3\] and Lemma \[commuting7nowlemma\] that the normalizer $N_{D^k G}$ of $D^k G$ is an abelian group. Since $D^{k-1} G \subset N_{D^k G}$, we conclude that $D^{k-1} G$ is itself abelian, which is impossible since $D^k G$ is not reduced to the identity. The claim is proved. Since $D^{k-1} G$ is not abelian, we conclude in particular that there exists a vector field $Y$ commuting with $X$ and not everywhere parallel to $X$. Besides, $D^{k-1} G$ is contained in the centralizer of ${\rm Exp}\, (tX)$ so that every element in $D^{k-1} G$ has an infinitesimal generator of the form $aX + bY$, where $a,b$ are first integrals of $X$, cf. Lemma \[commuting7nowlemma\]. Consider now the collection of all infinitesimal generators $a_i X + b_iY$, $i=1, \ldots ,l$, of non-trivial elements in $D^{k-1} G$. [*Claim 2*]{}: There exists one value of $i$ for which $b_i$ is not identically zero. Furthermore, if $a_{i_1} X + b_{i_1}Y$ and $a_{i_2} X + b_{i_2}Y$ are such that $b_{i_1} b_{i_2}$ is not identically zero, then the quotient $b_{i_1} /b_{i_2}$ is a constant. [*Proof of Claim 2*]{}. Clearly there is at least one value of $i$ for which $b_i$ is not identically zero, for otherwise $D^{k-1} G$ would be an abelian group. Next assume, without loss of generality, that $a_1 X + b_1Y$ and $a_2 X + b_2Y$ are such that $b_1 b_2$ does not vanish identically. The commutator subgroup of $D^{k-1} G$ being $D^kG$, all its elements have a (constant) multiple of $X$ as infinitesimal generator. The fact that the quotient $b_1 /b_2$ must be constant then results at once from Lemma \[formerclaim2\].
Let then $\overline{f}$ denote a non-identically zero function such that, for every infinitesimal generator $a_i X +b_i Y$ of an element in $D^{k-1} G$, the coefficient $b_i$ is a constant multiple (possibly zero) of $\overline{f}$. In the sequel, we are going to show that $G$ is metabelian, i.e. that $k =1$ (provided that $G$ is not abelian). Naturally this will complete the proof of our proposition. Indeed, it was just seen that $D^{k-1} G$ is contained in the centralizer of ${\rm Exp}\, (tX)$ and, in turn, $D^k G$ is contained in ${\rm Exp}\, (tX)$ (for a unique vector field $X$). The statement is then established provided that $k=1$. To prove that $G$ is metabelian, let us suppose for a contradiction that $k \geq 2$. Hence the group $D^{k-2} G$ can be considered. This group contains $D^{k-1} G$ as a normal subgroup. Since $D^{k-2} G$ normalizes $D^{k-1} G$, it must also normalize the center of $D^{k-1} G$, namely the group $D^k G$. Therefore $D^{k-2} G$ is contained in the centralizer of ${\rm Exp}\, (tX)$ and this ensures that the infinitesimal generator of every element in $D^{k-2} G$ still has the form $cX +dY$, with $c,d$ being first integrals of $X$. The next step consists of characterizing these elements so as to show that the commutator between every two elements in $D^{k-2} G$ possesses an infinitesimal generator that is a multiple of $X$. A contradiction with the fact that $k \geq 2$ then arises since there are elements in $D^{k-1} G$ whose infinitesimal generators have the form $a_i X + b_i Y$ with $b_i$ not identically zero (Claim 2). Let then $\psi \in D^{k-2} G$ be a non-trivial element whose infinitesimal generator is $cX +dY$ and consider another non-trivial element $\varphi \in D^{k-1} G$ whose infinitesimal generator $a X + bY$ is such that $b$ does not vanish identically.
According to Hadamard’s lemma, the infinitesimal generator $AX + BY$ of $\psi \circ \varphi \circ \psi^{-1} \in D^{k-1} G$ is given by $aX +bY + [cX+dY, aX +bY] + \cdots$ where the dots stand for terms whose order is strictly greater than the order of $[cX+dY, aX +bY]$. In turn, $$[cX+dY, aX +bY] = \left( d \frac{\partial a}{\partial Y} - b \frac{\partial c}{\partial Y}\right) X + \left( d \frac{\partial b}{\partial Y} - b \frac{\partial d}{\partial Y}\right) Y \, . \label{therewego}$$ Note that the coefficient $B$ in the infinitesimal generator $AX + BY$ is a constant multiple of $b$ thanks to Claim 2. It then follows that the coefficient of $Y$ in the right-hand side of Formula (\[therewego\]) must vanish identically. In fact, if it does not vanish identically, its order is strictly greater than the order of $bY$ since this coefficient is nothing but $[dY, bY]$. On the other hand, the coefficients of $Y$ in the remaining terms of Hadamard’s formula have order strictly greater than the order of $[dY, bY]$. From this, we promptly conclude that $B$ cannot be a constant multiple of $b$, contradicting Claim 2. On the other hand, from the fact that $d (\partial b/\partial Y) - b (\partial d/\partial Y)$ vanishes identically, it follows that $d$ is a constant multiple of $b$. Therefore, we have proved that every element in $D^{k-2} G$ has an infinitesimal generator of the form $c_i X + d_iY$ where $d_i$ is a constant multiple of the function $\overline{f}$ (the possibility of having $d_i$ identically zero being clearly included in the discussion). The desired contradiction is then obtained by considering the commutator of two elements $\varphi_1, \varphi_2 \in D^{k-2} G$. It follows from Lemma \[formerclaim2\] that the infinitesimal generator of $\varphi_1 \circ \varphi_2 \circ \varphi_1^{-1} \circ \varphi_2^{-1}$ is a multiple of the vector field $X$. Therefore $D^{k-1} G$ is abelian, which contradicts the assumption that $D^k G$ is not reduced to the identity.
The proof of Proposition \[commuting8\] is over. Let then $G \subset {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ be a [*finitely generated*]{} solvable non-abelian group. According to Proposition \[commuting8\], every element in $D^1 G$ is contained in ${\rm Exp}\, (tX)$ for a certain $X \in {{\widehat{\mathfrak{X}}_2}}$. In other words, the infinitesimal generator of every element in $D^1 G$ is a constant multiple of $X$. Consider a finite generating set $\{\psi_1, \ldots , \psi_k\}$ for $G$. The lemma below complements Proposition \[commuting8\] by providing an explicit normal form for the generators $\psi_1, \ldots , \psi_k$. \[normalformsolvablegroup\] Let $G \subset {{\widehat{\rm Diff}_{1} (\mathbb{C}^2, 0)}}$ and $X$ be as above. Then there is a vector field $Y$ commuting with $X$ but not everywhere parallel to $X$ along with a non-identically zero $\overline{f} \in {{\mathbb{C} ((x,y))}}$ such that the following holds:

1. For every $i \in \{ 1, \ldots , k\}$, the infinitesimal generator $Z_i$ of $\psi_i$ has the form $a_i X + b_i Y$ where $a_i, b_i$ are first integrals of $X$.

2. For every $i \in \{ 1, \ldots , k\}$, $b_i = \alpha_i \overline{f}$ with $\alpha_i \in {\mathbb{C}}$ (in particular $\overline{f}$ is itself a first integral of $X$).

3. Given $i,j \in \{ 1, \ldots , k\}$, then $$[Z_i, Z_j] = \overline{f} \left( \frac{\partial (\alpha_i a_j -\alpha_j a_i)}{\partial Y} \right) X \, .$$

4. Given $i,j \in \{ 1, \ldots , k\}$, then $\psi_i, \, \psi_j$ commute if and only if $[Z_i, Z_j] =0$ which, in turn, is equivalent to saying that $\alpha_i a_j -\alpha_j a_i$ is a constant.

5. For every $\psi \in G$, the infinitesimal generator $Z$ of $\psi$ has order at $(0,0)$ less than or equal to the order of $X$.

According to Proposition \[commuting8\] all elements in $D^1 G$ are contained in the center of $G$. Since all these elements have $X$ as infinitesimal generator (up to a multiplicative constant), item (1) follows from Lemma \[commuting2\].
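For the record, the formula in item (3) is obtained by specializing the commutator computation used throughout the section: writing $b_i = \alpha_i \overline{f}$ and $b_j = \alpha_j \overline{f}$ as in item (2),
$$[Z_i, Z_j] = \left( b_i \frac{\partial a_j}{\partial Y} - b_j \frac{\partial a_i}{\partial Y} \right) X + \left( b_i \frac{\partial b_j}{\partial Y} - b_j \frac{\partial b_i}{\partial Y} \right) Y = \overline{f} \left( \frac{\partial (\alpha_i a_j - \alpha_j a_i)}{\partial Y} \right) X \, ,$$
since the coefficient of $Y$ equals $\alpha_i \alpha_j ( \overline{f}\, \partial \overline{f}/\partial Y - \overline{f}\, \partial \overline{f}/\partial Y ) = 0$.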
Similarly, for every $i,j \in \{ 1, \ldots , k\}$ the element $\psi_i \circ \psi_j \circ \psi_i^{-1} \circ \psi_j^{-1}$ lies in $D^1 G$ and hence possesses an infinitesimal generator parallel to $X$. In view of this, Lemma \[formerclaim2\] implies item (2) above. In turn, item (3) becomes an immediate computation. As to item (4), Lemma \[commuting1\] says that $\psi_i, \psi_j$ commute if and only if $[Z_i, Z_j ] =0$. However, item (3) shows that $[Z_i,Z_j]=0$ if and only if $\alpha_i a_j -\alpha_j a_i$ is a first integral for $Y$. Since $\alpha_i a_j -\alpha_j a_i$ is also a first integral for $X$, the fact that $X,Y$ are not everywhere parallel ensures that $\alpha_i a_j -\alpha_j a_i$ must be constant in this case. It only remains to check item (5). Suppose for a contradiction that $\psi \in G$ has an infinitesimal generator $Z$ whose order at $(0,0) \in {\mathbb{C}}^2$ is strictly greater than the order of $X$. Modulo adding $\psi$ to the generating set of $G$, we can assume without loss of generality that $\psi=\psi_1$ so that $Z$ becomes $Z_1$. To prove item (5), it suffices to find $j \in \{ 2, \ldots , k\}$ so that $\psi_j$ does not commute with $\psi_1$. In fact, in this case, the infinitesimal generator of $\psi_1 \circ \psi_j \circ \psi_1^{-1} \circ \psi_j^{-1}$ has order strictly greater than the order of $Z_1$ and, on the other hand, this infinitesimal generator is a constant multiple of $X$, which yields the desired contradiction. Now, to check the existence of $\psi_j$ as desired, note that $\psi_j$ commutes with $\psi_1$ if and only if $\alpha_1 a_j -\alpha_j a_1 \in {\mathbb{C}}$ (item (4)). Thus, if $\psi_j$ commutes with $\psi_1$ for every $j \in \{ 1, \ldots , k\}$, the condition that $\alpha_1 a_j -\alpha_j a_1$ is a constant for every $j \in \{ 2, \ldots ,k\}$ implies that, indeed, for every $i,j \in \{ 1, \ldots , k\}$, the value of $\alpha_j a_i -\alpha_i a_j$ is a constant as well.
In other words $G$ is an abelian group, which contradicts the assumption that $G$ is solvable and non-abelian. The lemma is proved. We are finally able to prove Proposition \[commuting9\]. Consider the corresponding sequence of sets $S(j)$ and let $G(j)$ (resp. $\overline{G} (j,j-1)$) be the subgroup generated by $S(j)$ (resp. $S(j) \cup S(j-1)$). Let $k$ be the largest integer for which $S(k)$ is not trivial. Then $G(k)$ is abelian while $\overline{G} (k,k-1)$ is solvable. In particular, we can consider the [*smallest*]{} integer $m$ for which $\overline{G} (m,m-1)$ is solvable. Let us assume for a contradiction that $m \geq 2$ so that $\overline{G} (m,m-1)$ is strictly contained in $G$. Let $F$ be an element in $S (m-2)$ and note that, by construction, $F$ satisfies $F^{\pm 1} \circ G (m) \circ F^{\mp 1} \subset \overline{G} (m,m-1)$. Since $\overline{G} (m, m-1)$, and hence $G (m)$, are both solvable, it follows from Proposition \[commuting8\] that they have non-trivial centers. These centers will respectively be denoted by $Z (\overline{G} (m, m-1))$ and $Z (G (m))$. Another general remark concerning the groups $G(m)$ and $\overline{G} (m,m-1)$ is as follows. Let $\varphi_0$ be an element (not necessarily unique) of $S (m-1)$ having the smallest order of contact with the identity among all elements in $S (m-1)$. Then every element in $S (m)$, and hence every element in $G (m)$, has contact order with the identity strictly larger than the contact order of $\varphi_0$. In other words, there is an element $\varphi_0 \in S (m-1)$ whose order of contact with the identity is strictly smaller than the orders of contact with the identity of all elements in $G (m)$. Let us begin the discussion with the case where $\overline{G} (m,m-1)$ is an abelian group. [Case A]{}: Suppose that the group $\overline{G} (m,m-1)$ is abelian. The group $G (m) \subseteq \overline{G} (m,m-1)$ is abelian as well.
Suppose that $G (m)$ is contained in the span of two non everywhere parallel commuting vector fields (without being contained in the exponential of a single vector field). In this case, Lemma \[commuting3\] ensures that the same must hold for $\overline{G} (m,m-1)$. In particular $F$ acts on the linear span $E$ of these vector fields. Besides, the eigenvalues of this action are equal to $1$ since $F$ is tangent to the identity. The proof of Lemma \[lastversionLemma3\] then shows that $F$ is naturally embedded in $E$. Hence $\overline{G} (m-1,m-2)$ is abelian and the desired contradiction results at once. Suppose now that every element in $G (m)$ has an infinitesimal generator of the form $aX$, for a certain formal vector field $X$ and such that $a$ is a first integral for $X$. Since $G (m) \subseteq \overline{G} (m,m-1)$ and the latter group is abelian, there are two possibilities for $\overline{G} (m,m-1)$, namely:

1. $\overline{G} (m,m-1)$ is contained in the linear span of two non everywhere parallel commuting vector fields $Y,Z$.

2. All elements in $\overline{G} (m,m-1)$ have infinitesimal generators of the form $aX$, where $a$ still is a first integral for $X$.

Consider first the situation described in item (1). Since there is $\varphi_0 \in \overline{G} (m,m-1)$ whose order of contact with the identity is strictly smaller than the orders of elements in $G (m)$, the inclusion $G (m) \subset \overline{G} (m,m-1)$ ensures that $G (m)$ must be contained in ${\rm Exp}\, (tX)$ for a certain vector field, still denoted by $X$, belonging to the span in question. By construction, the order of $X$ at $(0,0) \in {\mathbb{C}}^2$ is strictly greater than the order of the remaining vector fields in the span of $Y,Z$ (apart from constant multiples of $X$). Thus $F \in S (m-2)$ must take $X$ to $X$. It then follows that $F$ belongs to the centralizer of ${\rm Exp}\, (tX)$. In other words, the set $S (m-2)$ is contained in the centralizer of ${\rm Exp}\, (tX)$.
Without loss of generality, there is an element $\varphi$ in $S (m-1)$ whose infinitesimal generator is $Y$. Since $G (m)$ is not trivial, the collection of commutators $[\varphi ,F]$, for every $F \in S (m-2)$, is contained in ${\rm Exp}\, (tX)$. To obtain the desired contradiction, we proceed as follows: let $F \in S (m-2)$ be fixed. We can assume that $[\varphi ,F]$ is not the identity, for otherwise $F$ is contained in the exponential of the span of $Y,Z$ ($F$ already commutes with ${\rm Exp}\, (tX)$). Hence, according to Lemma \[lastversionLemma1\], the infinitesimal generator of $F$ has the form $aX + bY$ where $a,b$ are first integrals for $X$. Since the infinitesimal generator of $[\varphi ,F]$ is $X$, it follows from Lemma \[formerclaim2\] that $b$ is a constant. A contradiction then arises from observing that this “normal form” for elements in $S (m-2)$ implies that the group generated by $S (m) \cup S (m-1) \cup S (m-2)$ is solvable (cf. again Lemma \[formerclaim2\]). Thus the group $\overline{G} (m-1,m-2)$ is solvable as well and this is clearly impossible. To complete the proof of Proposition \[commuting9\] in the case where $\overline{G} (m,m-1)$ is abelian, it remains to check the case in which all elements in $\overline{G} (m,m-1)$ have infinitesimal generators of the form $aX$. Now we have: [*Claim 1*]{}. $G (m)$ must be contained in ${\rm Exp}\, (tX)$. [*Proof of Claim 1*]{}. Let $F$ be a given element in $S (m-2)$. Suppose there are $\varphi_1 \in G (m) \cap {\rm Exp}\, (tX)$ and $\varphi_2 \in G (m) \cap {\rm Exp}\, (tAX)$, where $A$ is a non-constant first integral of $X$. Denote by $r$ (resp. $s$) the order of $X$ (resp. $AX$) at the origin. As already seen, the collection of vector fields of order $r$ (resp. $s$) inducing a non-trivial element in the abelian group $\overline{G} (m,m-1)$ is finite. Hence, up to passing to a finite power $F^k$ of $F$, it follows that $F^k$ fixes both $X$ and $AX$, i.e.
$F^k$ belongs to the centralizers of both ${\rm Exp}\, (tX)$ and ${\rm Exp}\, (tAX)$. As previously seen, this implies that $F$ itself belongs to the intersection of the centralizers of ${\rm Exp}\, (tX)$ and ${\rm Exp}\, (tAX)$. Thanks to Lemma \[lastversionLemma2\], the intersection of these centralizers is an abelian group all of whose elements have infinitesimal generators of the form $aX$ ($a$ a first integral of $X$); it then follows again that the group generated by $S (m-1) \cup S (m-2)$ is abelian. The resulting contradiction proves the claim. To conclude the proof, recall that every $F \in S (m-2)$ belongs to the centralizer of ${\rm Exp}\, (tX)$. Let $S (m-2) = \{ F_1 , \ldots , F_l \}$ and let $c_i X + d_i Y$ denote the infinitesimal generator of $F_i$, $i=1, \ldots ,l$. To obtain the desired contradiction, it suffices to show the following: [*Claim 2*]{}. For every pair $i,j$ such that $d_i . d_j$ is not identically zero, the quotient $d_i /d_j$ is a constant. Indeed, as already pointed out, Claim 2 implies that $\overline{G} (m-1, m-2)$ is a solvable group, which is impossible. [*Proof of Claim 2*]{}. As already observed, without loss of generality there is $\varphi$ in $S (m-1)$ whose infinitesimal generator is $Y$. The commutator of $F_i$ and $\varphi$ belongs to $G (m)$ and hence must admit $X$ as infinitesimal generator (cf. Claim $1$). The statement of Claim 2 then becomes a direct consequence of Lemma \[formerclaim2\]. [Case B]{}: Suppose that the group $\overline{G} (m,m-1)$ is solvable but not abelian. According to Proposition \[commuting8\] the center of $\overline{G} (m,m-1)$ is non-trivial and all its elements admit a certain vector field $X$ as infinitesimal generator. In particular $\overline{G} (m,m-1)$ is contained in the centralizer of ${\rm Exp}\, (tX)$ and, in addition, there is a vector field $Y$ not everywhere parallel to $X$ and commuting with $X$. Also, by construction, the group $G (m)$ contains the center of $\overline{G} (m,m-1)$.
However, we recall that every element in $\overline{G} (m,m-1)$ has an infinitesimal generator whose order is at most the order of $X$. Since these orders are preserved by the adjoint action of elements in ${{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$, the condition $F^{\pm 1} \circ G (m) \circ F^{\mp 1} \subset \overline{G} (m,m-1)$ implies that every element $F\in S(m-2)$ must take $X$ to a constant multiple of $X$ and hence to $X$ itself since $F$ is tangent to the identity. Thus $F$ belongs to the centralizer of $X$ and hence its infinitesimal generator has the form $aX +bY$ where $a,b$ are first integrals of $X$. Let us now consider the group $G (m)$. It was seen that $G (m)$ contains $D^1 \overline{G} (m,m-1)$ and hence elements whose infinitesimal generator is $X$. On the other hand, recall that $G (m)$ is generated by elements having the form $[\psi, F]$ where $\psi \in \overline{G} (m,m-1)$ and $F \in \overline{G} (m,m-1) \cup S (m-2)$. In view of the fact that $F$ belongs to the centralizer of $X$, we conclude that $G (m)$ is also contained in the centralizer of $X$ since so is $\overline{G} (m,m-1)$. There are two cases to be considered depending on whether or not $G (m)$ contains elements whose infinitesimal generators are not everywhere parallel to $X$. Suppose first that $G (m)$ contains an element whose infinitesimal generator $Y$ is not everywhere parallel to $X$. Then $G (m)$ contains a rank $2$ abelian group. Modulo passing to a finite power of $F$ this group must be preserved by the adjoint action of $F$ since $F$ preserves $X$ and it also preserves the order of $Y$: up to multiplicative constants and additive constant multiples of $X$, in the solvable group $\overline{G} (m,m-1)$ there can exist only finitely many infinitesimal generators with the same order since the difference between two of them has order bounded by the order of $X$, cf. Proposition \[commuting8\]. 
Since a power of $F$ preserves $Y$ and $F$ is tangent to the identity, it follows that $F$ itself preserves $Y$, cf. Lemma \[commuting2\]. It then follows from the discussion in Case A, item (1), that the infinitesimal generator of $F$ is contained in the linear span of $X,Y$. Therefore $\overline{G} (m,m-1) \cup S (m-2)$ generates a solvable group, which yields a contradiction in the present case. Suppose now that every element in $G (m)$ has an infinitesimal generator everywhere parallel to $X$. Since the infinitesimal generator of each element $F \in S (m-2)$ has the form $aX +bY$, where $a,b$ are first integrals of $X$, the fact that all the commutators $[\psi, F]$, where $\psi \in \overline{G} (m,m-1)$ and $F \in \overline{G} (m,m-1) \cup S (m-2)$, have infinitesimal generators parallel to $X$ implies that all the coefficients “$b$” differ by a multiplicative constant, cf. Lemma \[formerclaim2\]. It then follows that $\overline{G} (m,m-1) \cup S (m-2)$ still generates a solvable group, which is impossible. Proposition \[commuting9\] is proved.

Proof of Theorem C
==================

Building on the material developed in the previous section, and especially on Proposition \[commuting9\], the proof of Theorem C will be completed in this last section. Let us first make use of Ghys’s observation [@ghysBSBM] concerning convergence of commutators for diffeomorphisms “close to the identity” to establish the following proposition:

\[almostthere\] Suppose that $G \subset {{{\rm Diff}_1 ({\mathbb C}^2, 0)}}$ is a group possessing locally discrete orbits. Then $G$ is solvable.

Consider a finite set $S$ consisting of tangent to the identity local diffeomorphisms of $({\mathbb{C}}^2, 0)$. Suppose that the group $G$ generated by the set $S$ is not solvable (at the level of groups of germs of diffeomorphisms). Then consider the pseudogroup generated by $S$ on a certain (sufficiently small) neighborhood of the origin which will be left implicit in the subsequent discussion for the sake of notation.
The proof of the proposition amounts to showing that the resulting pseudogroup $G$ is [*non-discrete*]{} in the sense that it contains a sequence of elements $h_i$ satisfying the following conditions:

- $h_i \neq {\rm id}$ for every $i \in {\mathbb{N}}$ and, furthermore, as an element of the pseudogroup $G$, $h_i$ is defined on a ball $B_{\epsilon}$ of uniform radius $\epsilon > 0$ about $(0,0) \in {\mathbb{C}}^2$.

- The sequence of mappings $\{ h_i \}$ converges uniformly to the identity on $B_{\epsilon}$.

Assuming the existence of a sequence $h_i$ as indicated above, it follows that each of the sets ${\rm Fix}_i = \{ p \in B_{\epsilon} \, \; ; \; \, h_i (p) = p \}$ is a proper analytic subset of $B_{\epsilon}$. For every $N \geq 1$, set $A_N = \bigcap_{i=N}^{\infty} {\rm Fix}_i$ so that $A_N$ is also a proper analytic subset of $B_{\epsilon}$. Finally, let $F = \bigcup_{N=1}^{\infty} A_N$. The set $F$ has null Lebesgue measure so that points in $B_{\epsilon} \setminus F$ can be considered. If $p \in B_{\epsilon} \setminus F$ then, by construction, there is a subsequence of indices $\{ i(j) \}_{j \in {\mathbb{N}}}$ such that $h_{i(j)} (p) \neq p$ for every $j$. Since $h_i$ converges to the identity on $B_{\epsilon}$, the sequence $\{ h_{i(j)} (p) \}_{j \in {\mathbb{N}}}$ converges non-trivially to $p$. This shows that the orbit of $p$ is not locally discrete and establishes the proposition modulo verifying the existence of the mentioned sequence $\{ h_i \}$.

The construction of the sequence $\{ h_i \}$ begins with an estimate concerning commutators of diffeomorphisms that can be found in [@lorayandI], page 159, which is itself similar to another estimate found in [@ghysBSBM]. Let $F_1, F_2$ be local diffeomorphisms (fixing the origin and) defined on the ball $B_r$ of radius $r > 0$ about the origin of ${\mathbb{C}}^2$.
For small $\delta > 0$, to be fixed later, suppose that $$\max \{ \sup_{z \in B_r} \Vert F_1^{\pm 1} (z) -z \Vert \; , \; \sup_{z \in B_r} \Vert F_2^{\pm 1} (z) -z \Vert \} \leq \delta/4 \, . \label{initializing}$$ Then, given $0 < \tau \leq 2 \delta$, the commutator $[F_1,F_2]$ is defined on the ball of radius $r -4\delta -\tau$ and, in addition, it verifies the estimate $$\sup_{z \in B_{r -4\delta -\tau}} \Vert [F_1,F_2] (z) -z \Vert \leq \frac{2}{\tau} \sup_{z \in B_r} \Vert F_1 (z) -z \Vert \, . \, \sup_{z \in B_r} \Vert F_2 (z) -z \Vert \, . \label{estimateLorayandI}$$ Let us apply the preceding estimate to elements in $S(i)$. Because $G$ consists of diffeomorphisms tangent to the identity, modulo conjugating it by a homothety of type $(x,y) \mapsto (\lambda x, \lambda y)$, with $\vert \lambda \vert < 1$, all local diffeomorphisms in $S$ can be supposed to be defined on the unit ball. Furthermore they can also be supposed to satisfy Estimate (\[initializing\]) for $r=1$ and some arbitrarily small $\delta >0$ to be fixed later. Setting $\tau = 2\delta$, it then follows that every element $H$ in $S(1)$ is defined on $B_{1- 6 \delta}$ and satisfies $$\sup_{z \in B_{1 - 6\delta}} \Vert H (z) -z \Vert \leq \delta /2^4\, .$$ Next, note that every element in $S(2)$ is the commutator of an element in $S (1)$ and an element in $S \cup S(1)$. Thus, applying again Estimate (\[estimateLorayandI\]) to $r=1-6\delta$, $\delta_1 =\delta/2$ and $\tau_1 = \tau/2 = \delta$, we conclude that every element $H$ in $S(2)$ is defined on $B_{r_1}$, where $r_1 = 1 -6\delta (1 + 1/2)$. 
Furthermore these elements $H$ satisfy the estimate $$\sup_{z \in B_{r_1}} \Vert H (z) -z \Vert \leq \delta /2^5 \, .$$ Continuing inductively with $r_i = 1 -6\delta (\sum_{n=0}^i 1/2^n)$, $\delta_i = \delta_{i-1}/2$ and $\tau_i = \tau_{i-1}/2 = \delta_{i-1}$, we conclude that every element $H^i$ in $S (i)$ is defined on a ball of radius $1 - 12 \delta$ and satisfies $\sup_{z \in B_{1-12\delta}} \Vert H^i (z) - z\Vert \leq \delta / 2^{i+3}$. In particular, if $\delta = 1/24$, all elements in $S(i)$ are defined on the ball of radius $1/2$ ($i \in {\mathbb{N}}$). Similarly, it is also clear that elements in $S (i)$ converge uniformly to the identity on $B_{1/2}$. Therefore, to obtain the desired sequence $h_i$, it suffices to pick, for every $i$, one element $h_i \in S(i)$ which is different from the identity. In view of Proposition \[commuting9\], the sequence of sets $S (i)$ never degenerates into the identity alone, so that the indicated choice of $h_i$ is always possible. This completes the proof of the proposition.

We are finally ready to prove Theorem C. Let ${{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ denote the group of (germs of) holomorphic diffeomorphisms at $(0,0) \in {\mathbb{C}}^2$. Consider a subgroup $G \subset {{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ possessing locally discrete orbits in some neighborhood $U$ of $(0,0) \in {\mathbb{C}}^2$. Let $\rho$ be the homomorphism from $G$ to ${\rm GL}\, (2,{\mathbb{C}})$ assigning to an element $\varphi \in G$ its Jacobian matrix at the origin. Denoting by $\Gamma \subset {\rm GL}\, (2,{\mathbb{C}})$ the image of $\rho$, let us consider the short exact sequence $$0 \longrightarrow G_0 = {\rm Ker}\, (\rho) \longrightarrow G \longrightarrow \Gamma \longrightarrow 0 \, .$$ The kernel $G_0$ of $\rho$ consists of those elements in $G$ that are tangent to the identity. Since $G$, and hence $G_0$, has locally discrete orbits, it follows from Proposition \[almostthere\] that $G_0$ is solvable.
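As a purely arithmetic aside, the inductive bookkeeping in the proof of Proposition \[almostthere\] can be checked mechanically. The sketch below (an illustration of the arithmetic only, not of the analytic content of the commutator estimate) uses exact rational arithmetic to verify that, for $\delta = 1/24$, the radii $r_i = 1 - 6\delta \sum_{n=0}^{i-1} 2^{-n}$ stay strictly above the limiting value $1 - 12\delta = 1/2$ while the bounds $\delta/2^{i+3}$ on the distance to the identity tend to zero.

```python
from fractions import Fraction

# Exact-arithmetic check (illustration only) of the inductive bookkeeping:
# elements of S(i) are defined on balls of radius r_i = 1 - 6*delta*sum_{n<i} 2^{-n}
# and lie within delta / 2^(i+3) of the identity there.
delta = Fraction(1, 24)

radii = [1 - 6 * delta * sum(Fraction(1, 2**n) for n in range(i)) for i in range(1, 101)]
bounds = [delta / 2**(i + 3) for i in range(1, 101)]

half = 1 - 12 * delta  # the limiting radius; equals 1/2 precisely when delta = 1/24
assert half == Fraction(1, 2)
assert all(r > half for r in radii)      # every domain contains the ball B_{1/2}
assert bounds[-1] < Fraction(1, 10**30)  # uniform distance to the identity -> 0
```

Since the radii form the partial sums of a geometric series, they decrease to $1-12\delta$ without ever reaching it, which is exactly what allows every $h_i$ to be defined on the common ball $B_{1/2}$.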
Therefore, to conclude that $G$ is solvable, it suffices to check that the assumption of having locally discrete orbits forces $\Gamma$ to be solvable as well. While $\Gamma$ is a subgroup of ${\rm GL}\, (2,{\mathbb{C}})$, its standard action on $({\mathbb{C}}^2, 0)$ has little to do with the action of $G$. In fact, if $\gamma$ is an element of $\Gamma$, then $\gamma$ is simply the derivative at the origin of an actual element $\varphi \in G$ and it is $\varphi$, rather than $\gamma$, that acts on $({\mathbb{C}}^2, 0)$. Thus, the effect of the non-linear terms in $\varphi$ must be taken into account. Recall that ${\rm PSL}\, (2, {\mathbb{C}})$ is the quotient of the subgroup ${\rm SL}\, (2,{\mathbb{C}})$ of ${\rm GL}\, (2,{\mathbb{C}})$ consisting of matrices whose determinant equals $1$ by its center which, in turn, consists of $\{ I , -I \}$ where $I$ stands for the identity matrix. Let us consider the projection of $\Gamma$ in ${\rm PSL}\, (2, {\mathbb{C}})$ and let ${\rm P} G$ denote its image.

[*Claim 1*]{}. Without loss of generality, we can suppose that ${\rm P} G$ is not solvable.

[*Proof of Claim 1*]{}. Note that ${\rm P} G$ is solvable if and only if its first derived group $D^1 ({\rm P} G)$ is abelian. Now, denote by $\widetilde{{\rm P} G}$ the projection of $\Gamma$ to ${\rm SL}\, (2,{\mathbb{C}})$ as an intermediate step for the projection of $\Gamma$ onto ${\rm P} G$. The first derived group of $\widetilde{{\rm P} G}$ will be denoted by $D^1( \widetilde{{\rm P} G})$. Naturally the group $D^1 (\widetilde{{\rm P} G})$ must be abelian provided that $D^1 ({\rm P} G)$ is abelian. In fact, if two matrices $A, B$ commute, then the same applies to any combination of $\pm A, \, \pm B$. On the other hand, $D^1 (\widetilde{{\rm P} G})$ coincides with $D^1 \Gamma$ since the determinant of the commutator of two matrices necessarily equals $1$. Hence the group $\Gamma$ itself is solvable and the theorem is proved in this case.
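The determinant fact invoked in the proof of Claim 1 — that the commutator $ABA^{-1}B^{-1}$ of two invertible matrices always has determinant $1$, by multiplicativity of the determinant — can be illustrated with a quick numerical sketch (random real $2\times 2$ matrices; this is an illustration, not part of the proof):

```python
import random

# Illustrative check that det [A,B] = det(A B A^{-1} B^{-1}) = 1 for invertible A, B.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def rand_invertible():
    # resample until the matrix is safely away from the singular locus
    while True:
        A = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(2)]
        if abs(det(A)) > 0.1:
            return A

random.seed(1)
for _ in range(100):
    A, B = rand_invertible(), rand_invertible()
    C = mul(mul(A, B), mul(inv(A), inv(B)))  # the commutator [A, B]
    assert abs(det(C) - 1.0) < 1e-9          # det is multiplicative, so det [A,B] = 1
```

The same cancellation $\det(A)\det(B)\det(A)^{-1}\det(B)^{-1} = 1$ is what places $D^1 \Gamma$ inside ${\rm SL}\, (2,{\mathbb{C}})$.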
Next note that, as a subgroup of ${\rm PSL}\, (2, {\mathbb{C}})$, ${\rm P} G$ may or may not be discrete. Suppose ${\rm P} G$ non-discrete. Being, in addition, non-solvable, it follows that ${\rm P} G$ is dense in ${\rm PSL}\, (2, {\mathbb{C}})$. In particular, it contains non-elementary discrete Kleinian groups (or even Schottky groups). So it is sufficient to show that a group $G \subset {{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ cannot have locally discrete orbits provided that derivatives at $(0,0) \in {\mathbb{C}}^2$ of its elements induce a non-elementary Kleinian group in ${\rm PSL}\, (2, {\mathbb{C}})$. This will be done below. Summarizing what precedes, the group ${\rm P} G$ can be supposed to be a non-elementary discrete subgroup of ${\rm PSL}\, (2, {\mathbb{C}})$, i.e. ${\rm P} G$ is a non-elementary Kleinian group. Under this assumption, we need to prove that the corresponding group $G \subset {{{\rm Diff}\, ({\mathbb C}^2, 0)}}$ does not have locally discrete orbits. The condition of having a non-elementary Kleinian group ${\rm P} G$ will be exploited through the fact that these groups always possess loxodromic elements, see [@apanasov]. Let us first consider the meaning of loxodromic elements in our context. Consider an element $\varphi \in G$ whose derivative $D_0 \varphi$ at the origin gives rise to a loxodromic element in ${\rm P} G$. Then $D_0 \varphi$ is diagonalizable. Note also that the Jacobian determinant of $D_0 \varphi$ can be supposed equal to $1$ since, again, we can start out by looking at $D^1 \Gamma$, instead of $\Gamma$, and the former group still induces a non-elementary Kleinian group in ${\rm PSL}\, (2, {\mathbb{C}})$. Therefore, the eigenvalues of $D_0 \varphi$ are $\lambda$ and $\lambda^{-1}$, with $\vert \lambda \vert > 1$. 
It follows that $\varphi$ has a hyperbolic fixed point at the origin with stable and unstable manifolds, $W^s_{\varphi}, \, W^u_{\varphi}$, having complex dimension $1$ and intersecting transversely at $(0,0) \in {\mathbb{C}}^2$. Fix then a [*closed annulus*]{} $A^s \subset W^s_{\varphi}$ (resp. $A^u \subset W^u_{\varphi}$) with radii $r_2 > r_1 > 0$ such that every point $p \in W^s_{\varphi} $ (resp. $p \in W^u_{\varphi} $) possesses an orbit by $\varphi$ non-trivially intersecting $A^s$ (resp. $A^u$). Given a point $p$ in a fixed neighborhood $U$ of the origin where the group $G$ has locally discrete orbits, denote by ${{\mathcal{O}}}_G (p)$ the orbit of $p$ by the pseudogroup $G$. Similarly, let ${\rm Acc}_p (G)$ denote the set of [*ends*]{} of ${{\mathcal{O}}}_G (p)$. In other words, if ${{\mathcal{O}}}_G (p)$ is infinite and $p =p_1, p_2, \ldots$ is an enumeration of its points, then ${\rm Acc}_p (G) = \bigcap_{n=1}^{\infty} [ \overline{{{\mathcal{O}}}_G (p)} \setminus \bigcup_{j=1}^n \{ p_j\} ]$. If ${{\mathcal{O}}}_G (p)$ is finite, then ${\rm Acc}_p (G) = \emptyset$. Clearly ${\rm Acc}_p (G)$ is closed and invariant by $G$ (viewed as a pseudogroup). The following claim is the key to the proof of Theorem C.

[*Claim 2*]{}. For every point $p \in A^s$, the closed set $A^s \cap {\rm Acc}_p (G)$ is not empty.

Note that Claim 2 does not immediately imply Theorem C for it does not assert that $p$ itself belongs to $A^s \cap {\rm Acc}_p (G)$. However, if this were the case, then clearly the orbit of $p$ would not be locally discrete. The resulting contradiction would then ensure that ${\rm P}\,G$ cannot contain a non-elementary Kleinian group, so that the statement of Theorem C would follow. However, by resorting to a standard application of Zorn's Lemma, Claim 2 can still be used to prove Theorem C. Let us first provide the details and then go back to the proof of Claim 2.
To begin with, if $K \subseteq A^s$ is a non-empty closed set, we shall say that $K$ is [*relatively invariant*]{} by the pseudogroup $G$ if, for every point $p \in K$ and every point $q \in A^s \cap {\rm Acc}_p (G)$, the point $q$ lies in $K$ as well. Next, let $\mathfrak{C}$ denote the collection of non-empty closed sets in $A^s$ that are relatively invariant by the pseudogroup $G$. Claim 2 ensures that the collection $\mathfrak{C}$ is not empty. In fact, $A^s \cap {\rm Acc}_p (G)$ is a non-empty set relatively invariant under $G$, and thus $A^s \cap {\rm Acc}_p (G)$ belongs to $\mathfrak{C}$ for every $p \in A^s$. Now, let the collection $\mathfrak{C}$ be endowed with the partial order defined by inclusion. Finally, given a sequence $K_1 \supset K_2 \supset \ldots$ of sets in $\mathfrak{C}$, the intersection $K_{\infty}=\bigcap_{i=1}^{\infty} K_i$ is non-empty since each $K_i$ is compact (closed and contained in the compact set $A^s$). The set $K_{\infty}$ belongs to $\mathfrak{C}$, since it is clearly relatively invariant by $G$, and satisfies $K_{\infty} \subset K_i$ for every $i$. According to Zorn's Lemma, the collection $\mathfrak{C}$ contains minimal elements, so that we can consider a minimal element $K$. Choose then $q \in K$ and consider the non-empty set $A^s \cap {\rm Acc}_q (G)$. If $q$ were not in ${\rm Acc}_q (G)$, then $A^s \cap {\rm Acc}_q (G)$ would be an element of $\mathfrak{C}$ strictly smaller than $K$. The resulting contradiction shows that $q \in A^s \cap {\rm Acc}_q (G)$ and finishes the proof of Theorem C. It only remains to prove Claim 2.

[*Proof of Claim 2*]{}. Recall that $A^s \subset W^s_{\varphi}$ (resp. $A^u \subset W^u_{\varphi}$) is an annulus such that every $p \in W^s_{\varphi} $ (resp. $p \in W^u_{\varphi} $) possesses an orbit by $\varphi$ non-trivially intersecting $A^s$ (resp. $A^u$). Now consider another element $\psi \in G$ whose Jacobian matrix at the origin is hyperbolic with determinant equal to $1$.
Again, the stable and unstable manifolds for $\psi$ will respectively be denoted by $W^s_{\psi}, \, W^u_{\psi}$. Since a Kleinian group contains “many” loxodromic elements, $\psi$ can be chosen so that the four manifolds $W^s_{\varphi}, \, W^u_{\varphi}, \, W^s_{\psi}, \, W^u_{\psi}$ intersect pairwise transversely at the origin. The previously fixed annuli $A^s \subset W^s_{\varphi}$ and $A^u \subset W^u_{\varphi}$ will be denoted in the sequel by $A^s_{\varphi}$ and $A^u_{\varphi}$. Annuli $A^s_{\psi} \subset W^s_{\psi}$ and $A^u_{\psi} \subset W^u_{\psi}$ with analogous properties concerning $\psi$ are also fixed. To prove the claim it suffices to check that every point $p$ in $A^s_{\varphi}$ is such that $A^u_{\psi} \cap {\rm Acc}_p (G) \neq \emptyset$. Indeed, let $p^{\ast} \in A^u_{\psi}$ be a point in $A^u_{\psi} \cap {\rm Acc}_p (G)$. The analogous argument, changing the roles of $\varphi$ and $\psi$ and replacing them by their inverses, will ensure that $A^s_{\varphi} \cap {\rm Acc}_{p^{\ast}} (G) \neq \emptyset$. Since $p^{\ast}$ lies in ${\rm Acc}_p (G)$ and this set is invariant under the pseudogroup $G$, it will follow that $A^s_{\varphi} \cap {\rm Acc}_p (G) \neq \emptyset$ as desired. Finally, to check that $A^u_{\psi} \cap {\rm Acc}_p (G) \neq \emptyset$ for every point $p \in A^s_{\varphi}$, we proceed as follows. Consider local coordinates $(x,y)$ about the origin of ${\mathbb{C}}^2$ so that $\{ x=0\} \subset W^u_{\psi}$ and $\{ y=0\} \subset W^s_{\psi}$. Recall that $W^s_{\varphi}$ is smooth and intersects the coordinate axes transversely at the origin. Since this intersection is transverse, we can assume that the origin is the only intersection point of $W^s_{\varphi}$ with the coordinate axes. In particular, a point $p \in A^s_{\varphi}$ has coordinates $(u,v)$ with $u.v \neq 0$.
By iterating $\varphi$, we can find points $p_n = (u_n , v_n) =\varphi^n (p) \in {\mathbb{C}}^2$ such that $\vert u_n \vert \rightarrow 0$ and $$\frac{1}{C} \vert u_n \vert \leq \vert v_n \vert \leq C \vert u_n \vert \, ,$$ for some uniform constant $C$ related to the “angles” between $W^s_{\varphi}$ and the coordinate axes at the origin. Now, for every $n$, consider the points of the form $\psi (p_n) , \ldots , \psi^{l(n)} (p_n)$ where $l(n)$ is the smallest positive integer for which the absolute value of the second component of $\psi^{l(n)} (p_n)$ is greater than $\sup_{z \in A^u_{\psi}} \vert z \vert$. The integer $l(n)$ exists since $\psi$ has a hyperbolic fixed point at the origin and the action of $\psi$ on $p_n$ is such that the first coordinate becomes smaller and smaller while the second coordinate gets larger and larger. Now it is clear that the set $\bigcup_{n=1}^{\infty} \{ \psi (p_n) , \ldots , \psi^{l(n)} (p_n) \}$ accumulates on $A^u_{\psi}$ and this ends the proof of Claim 2. [Dillo 83]{} , The residual index and the dynamics of holomorphic maps tangent to the identity, [*Duke Math. J.*]{}, [**107**]{}, 1, (2001), 173-207. , [*Discrete Groups in Space and Uniformization Problems*]{}, Mathematics and Its Applications, Kluwer Academic Publishers, (1991). , On Ecalle-Hakim’s theorem in holomorphic dynamics, [*to appear in Frontiers in Complex Dynamics*]{}, (2011). , Sur certains pseudogroupes de biholomorphismes locaux de $({\mathbb{C}}^n,0)$, [*Bull. Soc. Math. France*]{}, [**129**]{}, (2001), 259-284 , Groups of germs of analytic diffeomorphisms in $({\mathbb{C}}^2,0)$, [*J. Dynam. Control Systems*]{}, [**9**]{}, no. 1 (2003), 1-32. , The topology of holomorphic flows with singularities, [*Publ. Math. I.H.E.S.*]{}, [**48**]{}, (1978), 5-38. , On the integrability of holomorphic vector fields, [*Discrete and Continuous Dynamical Systems*]{}, [**25**]{}, 2, (2009), 481-493. , [*Complex dynamics*]{}, Springer-Verlag, New York, (1993). 
, Groupes d’automorphismes de $({\mathbb{C}}, 0)$ et équations différentielles $y \, dy + \cdots =0$, [*Bull. Soc. Math. France*]{}, [**116**]{}, 4, (1988), 459-488. , Remarks in the orbital analytic classification of germs of vector fields, [*Math. USSR Sb.*]{}, [**49**]{}, (1984), 111-124. , Finitely generated groups of germs of one-dimensional conformal mappings and invariants for complex singular points of analytic foliations of the complex plane, [*Adv. in Soviet Math.*]{} [**14**]{}, (1993). , The Tits alternative for the group of real analytic diffeomorphisms of a real analytic manifold, [*in preparation*]{}. , [*private communication*]{}. , Sur les groupes engendrés par les difféomorphismes proches de l’identité, [*Bol. Soc. Bras. Mat.*]{} [**24**]{}, N2, (1993), 137-178. , Singularités des flots holomorphes II, [*Ann. Inst. Fourier*]{}, [**47**]{}, 4, (1997), 1117-1174. , Analytic transformations of $({\mathbb{C}}^p, 0)$ tangent to the identity, [*Duke Math. J.*]{}, [**92**]{}, 2, (1998), 403-428. , Feuilletages holomorphes à holonomie résoluble, [*Thèse*]{}, Univ. Rennes I, (1994). , Minimal, rigid foliations by curves in ${\mathbb{C}}\mathbb{P}^n$, [*J. Eur. Math. Soc.*]{}, [**5**]{}, (2003), 147-201. , Classification analytique des équations différentielles non linéaires réssonantes du premier ordre, [*Ann. Sc. Ec. Norm. Sup.*]{}, [**16**]{}, 4, (1983), 469-523. , Holonomie et intégrales premières, [*Ann. Sc. E.N.S. Série IV*]{}, [**13**]{}, 4, (1980), 469-523. , Integrability of hamiltonian systems and differential Galois groups of higher variational equations, [*Ann. Sc. E.N.S. Série IV*]{},[**40**]{}, 6, (2007), 845-884. , Separatrix for non solvable dynamics on $({\mathbb{C}}, 0)$, [*Ann. Inst. Fourier,*]{} [**44**]{}, 2, (1994), 569-599. , Topological aspects of completely integrable foliations, [*preprint available at*]{} http://arxiv.org/abs/1209.2956. 
, Local Theory of Holomorphic Foliations and Vector Fields, [*Lecture Notes available at*]{} http://arxiv.org/abs/1101.4309. , Equivalence and semi-completude of foliations, [*Nonlinear Anal.*]{}, [**64**]{}, 8, (2006), 1654-1665. , [*Lie Algebras and Lie Groups: 1964 Lectures given at Harvard University*]{}, Lect. Notes in Math 1500, Springer-Verlag, Berlin Heidelberg. , On the density of an orbit of a pseudogroup of conformal mappings and a generalization of the Hudai-Verenov theorem, [*Vestnik Moskov. Univ. Math.*]{} [**31**]{}, 4, (1982), 10-15. , [*Infinite linear groups*]{}, Erg. der Mathematik (1976). [Julio Rebelo]{}\ Institut de Mathématiques de Toulouse\ 118 Route de Narbonne\ F-31062 Toulouse, FRANCE.\ rebelo@math.univ-toulouse.fr [Helena Reis]{}\ Centro de Matemática da Universidade do Porto,\ Faculdade de Economia da Universidade do Porto,\ Portugal\ hreis@fep.up.pt\
--- abstract: 'It is argued that the strong coupling version of recent experiment \[Denkmayr et al., PRL 118, 010402 (2017)\] while correctly estimating the pre-selected states of the neutrons does not perform strong measurements of weak values as claimed.' title: ' Comment on “Experimental demonstration of direct path state characterization by strongly measuring weak values in a matter-wave interferometer” ' --- Denkmayr [*et al.*]{} [@Denk] reported an experiment in which a tomographic task of “direct path state characterization” in the neutron interferometer has been performed using weak and strong coupling to neutron’s spin. I correct misleading statements in the title, abstract and conclusions regarding strong measurements of weak values. According to the title, direct path state characterization has been achieved by “strongly measuring weak values". In the abstract: “weak measurements are not a necessary condition to determine the weak value”. In the conclusions: “we have presented a weak value determination scheme via arbitrary interaction strengths. We have applied it to experimentally determine weak values using both weak and strong interactions.” I argue that in the strong regime, the experiment does not measure weak values of the observed quantum system. The weak value of a variable $A$ is a property of a quantum system at a particular time [@AAV]. It is specified by the forward and backward evolving quantum states at this time and it has a well defined operational meaning: any weak enough coupling to $A$ is an effective coupling to the weak value $A_w$. The pointer of a weakened von Neumann measurement is shifted in proportion to ${\rm Re} A_w$ while the shift of the conjugate pointer variable is proportional to ${\rm Im} A_w$. 
Lundeen [*et al.*]{} [@Lund] pointed out that, given a particular post-selection, the weak values of local projections are proportional to the local values of the wave function and thus, measurements of these weak values provide a “direct measurement of the quantum wavefunction”. Vallone and Dequal [@Vall] showed that a modification of this procedure, in which the weak coupling is replaced by a strong coupling, provides a more efficient method for “direct measurement of the quantum wavefunction”, although one might argue that it is less “direct”, because instead of simple proportionality, we need calculations to obtain the local amplitude from a set of pointer readings. Denkmayr [*et al.*]{} implemented these proposals in neutron interferometry, successfully accomplishing both strong and weak coupling versions of the “path state characterization”. However, the strong coupling version of their experiment is not a strong measurement of weak values as they claim. In the experiment, polarized neutrons, $|{\uparrow}_x\rangle$, are prepared in the path state $|P_i\rangle = a |{\rm I}\rangle+b |{\rm II}\rangle$ and post-selected in $|P_f\rangle = \frac{1}{\sqrt 2} (|{\rm I}\rangle+|{\rm II}\rangle)$. The task is to determine $|P_i\rangle$. Weak values of the projections on the paths are $({\rm {\bf P}_I})_w=\frac{a}{a+b}$ and $({\rm {\bf P}_{II}})_w=\frac{b}{a+b}$. Proportionality of weak values to the complex amplitudes in the paths makes weak measurements of the projections “direct” measurements of the path state. In a more direct version of their experiment, the polarization is rotated in one of the arms of the interferometer: $|{\uparrow_x}\rangle \rightarrow \cos \alpha |{\uparrow}_x\rangle - i\sin \alpha |{\downarrow}_x\rangle$. The spin tomography of the output beam provides the information about $|P_i\rangle$. 
If we choose a small coupling, say in path I, the angle of rotation in the $xy$ plane is $2\alpha{\rm Re}({\rm {\bf P}_{\rm I}})_w$ and in the $xz$ plane $2\alpha{\rm Im}({\rm {\bf P}_{\rm I}})_w$. After repeating the procedure in path II, $({\rm {\bf P}_I})_w$ and $({\rm {\bf P}_{II}})_w$ yield the pre-selected path state $|P_i\rangle$. In fact, since $({\rm {\bf P}_I})_w +({\rm {\bf P}_{II}})_w = ({\rm {\bf P}_I} +{\rm {\bf P}_{II}})_w=1$, we can calculate $({\rm {\bf P}_{\rm II}})_w $ and the second procedure is not needed. The first procedure with strong coupling (large $\alpha$) provides $|P_i\rangle$ even more efficiently [@Vall]. But does it measure the weak values of the projection, as the authors [@Denk] claim? When the experiment runs with large $\alpha$, the two-state vector description of the neutrons inside the interferometer is different. The weak values of projections remain constant in time, but their values are not the same as in the run with vanishing polarization rotation. At the time before the post-selection, the state of the neutron is: $ |\Psi'\rangle = a |{\rm I}\rangle |{\uparrow}_x\rangle + b |{\rm II}\rangle ( \cos \alpha |{\uparrow}_x\rangle- i\sin \alpha |{\downarrow}_x\rangle). $ Then, the neutron is partially post-selected onto path state $|P_f\rangle$. In such a case, the weak value is given by (13.23) of [@AV2008] $$\label{psi} ({\rm {\bf P}_{I}})_w=\frac{\langle \Psi'|{\rm {\bf P}_{P_f}}{\rm {\bf P}_{I}}|\Psi'\rangle}{\langle \Psi'| {\rm {\bf P}_{P_f}}|\Psi'\rangle}=\frac{a(b^\ast \cos \alpha+a^\ast)}{b(a^\ast \cos \alpha+b^\ast)+a(b^\ast \cos \alpha+a^\ast)}.$$ The ratio of weak values of the projections, $ \frac{({\rm {\bf P}_{I}})_w}{({\rm {\bf P}_{II}})_w}= \frac{a(b^\ast \cos \alpha+a^\ast)}{b(a^\ast \cos \alpha+b^\ast)},$ yields the ratio of complex amplitudes only for vanishing interaction, $\alpha \rightarrow 0$.
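The limiting behaviour claimed here is easy to verify numerically. The sketch below (an illustration only, not part of the Comment's argument, with arbitrarily chosen amplitudes $a, b$ normalized so that $|a|^2+|b|^2=1$) evaluates the displayed formula for $({\rm {\bf P}_{I}})_w$ and confirms that it reduces to the weak-coupling value $a/(a+b)$ only as $\alpha \rightarrow 0$.

```python
import cmath, math

# Arbitrary illustrative amplitudes with |a|^2 + |b|^2 = 1.
a = cmath.exp(0.7j) * math.sqrt(0.3)
b = cmath.exp(-0.4j) * math.sqrt(0.7)

def weak_value_path_I(alpha):
    """(P_I)_w from the displayed formula, for polarization rotation angle alpha."""
    c = math.cos(alpha)
    num = a * (b.conjugate() * c + a.conjugate())
    den = b * (a.conjugate() * c + b.conjugate()) + num
    return num / den

# As alpha -> 0 the expression reduces to the weak-coupling value a/(a + b) ...
assert abs(weak_value_path_I(0.0) - a / (a + b)) < 1e-12
# ... but for a strong coupling it differs appreciably from a/(a + b).
assert abs(weak_value_path_I(1.2) - a / (a + b)) > 1e-3
```

At $\alpha = 0$ the denominator factors as $(a+b)(a^\ast+b^\ast)$, which is why the proportionality to the path amplitudes holds exactly only in the weak limit.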
Direct path state characterization has not been done by strongly measuring weak values in this experiment because weak values cannot be measured strongly. This work has been supported in part by the Israel Science Foundation Grant No. 1311/14, the German-Israeli Foundation for Scientific Research and Development Grant No. I-1275-303.14. L. Vaidman\ Raymond and Beverly Sackler School of Physics and Astronomy\ Tel-Aviv University, Tel-Aviv 69978, Israel [99]{} T. Denkmayr, H. Geppert, H. Lemmel, M. Waegell, J. Dressel, Y. Hasegawa, and S. Sponar, Experimental demonstration of direct path state characterization by strongly measuring weak values in a matter-wave interferometer, Phys. Rev. Lett. [**118**]{}, 010402 (2017). G. Vallone and D. Dequal, Strong measurements give a better direct measurement of the quantum wave function, Phys. Rev. Lett. [**116**]{}, 040502 (2016). Y. Aharonov, D. Z. Albert, and L. Vaidman, How the result of a measurement of a component of the spin of a spin-$\frac{1}{2}$ particle can turn out to be 100, Phys. Rev. Lett. **60**, 1351 (1988). J. S. Lundeen, B. Sutherland, A. Patel, C. Stewart, and C. Bamber, Direct measurement of the quantum wavefunction, Nature (London) [**474**]{}, 188 (2011). Y. Aharonov and L. Vaidman, The two-state vector formalism: an updated review, Lect. Notes Phys. **734**, 399 (2008).
--- abstract: 'It is known that, through inflation, Planck scale phenomena should have left an imprint in the cosmic microwave background. The magnitude of this imprint is expected to be suppressed by a factor $\sigma^n$ where $\sigma\approx 10^{-5}$ is the ratio of the Planck length to the Hubble length during inflation. While there is no consensus about the value of $n$, it is generally thought that $n$ will determine whether the imprint is observable. Here, we suggest that the magnitude of the imprint may not be suppressed by any power of $\sigma$ and that, instead, $\sigma$ may merely quantify the amount of fine tuning required to achieve an imprint of order one. To this end, we show that the UV/IR scale separation, $\sigma$, in the analogous case of the Casimir effect plays exactly this role.' author: - | Sven Bachmann${}^{a,d}$,    Achim Kempf${}^{a,b,c}$\ \ Depts. of Applied Mathematics${}^{(a)}$ and Physics${}^{(b)}$, University of Waterloo\ Perimeter Institute for Theoretical Physics${}^{(c)}$\ Waterloo, Ontario, Canada\ \ Ecole Polytechnique Fédérale de Lausanne${}^{(d)}$\ Lausanne, Switzerland date: title: The Transplanckian Question and the Casimir Effect --- Introduction ============ The so-called transplanckian question is concerned with low energy phenomena whose calculation appears to require the validity of standard quantum field theory (QFT) at energies beyond the Planck scale. The issue first arose in the context of black holes: the derivation of Hawking radiation is based on the assumption that standard QFT is valid even at scales beyond the Planck scale. For example, the typical low-energy Hawking photons that an observer might detect far from the horizon are implied to have possessed proper frequencies that were much larger than the Planck frequency close to the event horizon, even at distances from the horizon that are farther than a Planck length. 
This led to the question of whether Planck scale effects could influence or even invalidate the prediction of Hawking radiation. Numerous studies have investigated the issue and the current consensus is that Hawking radiation is largely robust against modifying QFT in the ultraviolet (UV). This is plausible since general thermodynamic considerations already constrain key properties of Hawking radiation. See, e.g., [@brout-review-etc; @Unruh2]. More recently, the transplanckian question arose in the context of inflationary cosmology: according to most inflationary models, space-time inflated to the extent that fluctuations which are presently of cosmological size started out with wavelengths that were shorter than the Planck length. The derivation of the inflationary perturbation spectrum therefore assumes the validity of standard QFT beyond the Planck scale. Unlike in the case of black holes, no known thermodynamic reasons constrain the properties of the inflationary perturbation spectrum so as to make it robust against the influence of physics at the Planck scale. It is, therefore, very actively being investigated whether future precision measurements of the cosmic microwave background (CMB) intensity and polarization spectra could in this way offer an experimental window to Planck scale phenomena. See e.g. [@infl-etc]. It is generally expected that the imprint of Planck scale physics on the CMB is suppressed by a factor $\sigma^n$ where $\sigma$ is defined as the ratio of the UV and IR scales. In inflation, this ratio is $\sigma \approx 10^{-5}$ since modes evolve nontrivially only from the Planck scale to the Hubble scale, $L_\text{Hubble}\approx 10^5 ~L_\text{Planck}$, after which their dynamics freezes until much later when they reenter the horizon to seed structure formation. We note that if the UV scale is the string scale, $\sigma$ could be as large as $\sigma\approx 10^{-3}$. Regarding the value of the power, $n$, in $\sigma^n$, no consensus has been reached.
It is generally expected, however, that the value of $n$ decides whether the imprint of Planck scale physics in the CMB could ever become measurable. Concrete studies in this field often model the influence of Planck scale physics on QFT through dispersion relations that become nonlinear at high energies. This approach is motivated by the fact that the natural ultraviolet cutoff in condensed matter systems characteristically affects the dispersion relations there. See, e.g., [@Unruh2]. It has been shown that while some ultraviolet-modified dispersion relations would affect the inflationary predictions for the CMB to the extent that effects might become measurable, other modified dispersion relations would have a negligible effect on the CMB. It is so far not fully understood which properties of Planck scale modifications to the dispersion relation decide whether or not an observable effect is induced. In order to clarify if and how an imprint of Planck scale effects in the CMB is suppressed by $\sigma$, it would therefore be most interesting to find and study the operator which maps arbitrary ultraviolet-modified dispersion relations directly into the correspondingly modified CMB perturbation spectra. Here, we will investigate the simpler transplanckian question for the Casimir force. As is well-known, the Casimir force arises due to quantum fluctuations of the electromagnetic field and occurs between neutral conducting objects. Similar to Hawking radiation and inflationary fluctuations, the Casimir force can be seen as a vacuum effect which involves modes of arbitrarily short wave lengths. In fact, naively it appears that modes contribute the more the shorter their wave length is. This suggests that, in principle, the predicted Casimir force could be influenced by Planck scale physics.
The Casimir effect is simple enough so that we will be able to completely answer its transplanckian question when modelling Planck scale physics through ultraviolet-modified dispersion relations. Namely, we will find the explicit operator which maps generic ultraviolet-modified dispersion relations into the corresponding Casimir force functions. The properties of this operator reveal that and how ultraviolet-modified dispersion relations can strongly affect the Casimir force even in the ‘infrared’, i.e., at practically measurable distances. Interestingly, the extreme ratio $\sigma\approx 10^{-28}$ between the effective UV and IR scales in the Casimir effect does not suppress the possible strength of Planck scale effects in the Casimir force at macroscopic distances. We find that, instead, the extreme value of $\sigma$ implies that UV-modified dispersion relations that lead to a large IR effect merely need to be extremely fine-tuned, which suppresses the a priori likelihood that such a dispersion relation should arise from an underlying theory of quantum gravity. This is of interest because if the situation in inflation is analogous, the imprint of Planck scale physics in the CMB may not be suppressed in strength by any power $\sigma^n$ of $\sigma$. Instead, the $\sigma$ of inflation, $\sigma\approx 10^{-5}$ or $\sigma\approx10^{-3}$, may determine the amount of fine-tuning required to achieve an imprint of order one. Thus, $\sigma$ would be related to the a priori likelihood for an observable imprint to arise from an underlying theory of quantum gravity. In inflation, this likelihood would not be extremely small since the UV and IR scales in inflation are not extremely separated. The Casimir force and ultraviolet-modified dispersion relations =============================================================== The Casimir effect arises when reflecting surfaces pose boundary conditions on the modes of the electromagnetic field.
For example, two perfectly reflecting parallel plates impose boundary conditions such that the set of electromagnetic modes in between them is discretized. The spacing of the modes, and therefore the vacuum energy that each mode contributes, depends on the distance between the plates. This distance-dependence of the vacuum energy leads to the Casimir force between the plates. In general, the force is a function of both the distance and the shape of the reflecting surfaces, and the force can be either attractive or repulsive. The Casimir effect was first predicted by Casimir in 1948, see [@Casimir:1948dh]. In the meantime, the Casimir force has been calculated for several types of geometries and in various dimensions. Also, effects of imperfect conductors, rough surfaces and finite temperatures have been considered, see [@Balian:2002]. In addition, detailed calculations have been carried out to account for higher order corrections due to virtual electrons and their interaction with the boundaries [@Aghababaie:2003iw]. For recent reviews see [@bordag-etal] and for precision measurements of the effect see e.g. [@Lamoreaux:1999cu-etal]. For our purposes, the essential features of the Casimir effect are captured already when working with a massless real scalar field between two perfectly conducting parallel plates. For simplicity, we will consider the simple case of just one space dimension, in which case the reflecting plates are mere points. We place these points at $x=0$ and $x=L$, i.e., we impose the boundary conditions $\hat{\phi}(0,t)=0=\hat{\phi}(L,t)$ for all $t$. In order to fulfill these boundary conditions we expand the quantum field between the plates using the Fourier sine series: $$\hat{\phi}(x,t) = \sum_{n=1}^{\infty} \hat{\phi}_n(t) \sin(k_n x), ~~~~~~~ k_n= \frac{n\pi}{L}$$ We are using units such that $\hbar=c=1$. Recall that in a Fourier sine series all $n$ and therefore all wave numbers $k_n$ are positive.
The reason is that the sine functions form a complete eigenbasis of the square of the momentum operator, $\hat{p}^2=-d^2/dx^2$, all of whose eigenvalues are of course positive. (Recall that the momentum operator of a particle in a box is not self-adjoint and not diagonalizable, see e.g. [@ak-beethoven]). The usual ansatz $$\hat{\phi}_n=\frac{1}{\sqrt{\omega(k_n)L}}\left(e^{i\omega(k_n)t}a^\dagger_n+ e^{-i\omega(k_n)t}a_n\right)$$ with $[a_n,a_m^\dagger]=\delta_{n,m}$ diagonalizes the Hamiltonian: $$\hat{H}=\sum_{n=1}^\infty \omega(k_n)\left(a^\dagger_na_n+\frac{1}{2}\right)$$ Thus, with the usual linear dispersion relation $$\omega(k)=k,$$ the vacuum energy between plates of distance $L$ is divergent: $$\begin{aligned} \label{eq:infSum} E_{in}(L) & = & \frac{1}{2}\sum_{n=0}^{\infty}\omega(k_{n}) \\ & = & \frac{\pi}{2L}\sum_{n=0}^{\infty} n ~~~=~ \infty\end{aligned}$$ We notice that modes appear to contribute the more the shorter their wavelength, i.e. the larger $k$ and $n$ are. One proceeds by regularizing the divergence and by then calculating the *change* in the regularized total energy (of a large region that contains the plates) when varying $L$. As is well-known, the resulting expression for the Casimir force remains finite after the regularization is removed, and reads: $$\mathcal{F}(L)=-\frac{\pi}{24L^{2}}\,$$ It has been shown that this result does not depend on the choice of regularization method. Our aim now is to re-calculate the Casimir force within standard quantum field theory while modelling the onset of Planck scale phenomena at high energies through general nonlinear modifications to the dispersion relation. The goal is to calculate the operator which maps arbitrary modified dispersion relations $\omega(k)$ into the resulting Casimir force functions $\mathcal{F}(L)$.
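Before generalizing the dispersion relation, the standard result is easy to illustrate numerically. The following Python sketch (ours, not part of the paper) evaluates the exponentially regularized vacuum energy $E^{reg}_{in}(L)=\frac{1}{2}\sum_{n\ge 1}\frac{n\pi}{L}\,e^{-\alpha n\pi/L}$ in closed form and subtracts $L$ times the outside energy density $1/(2\pi\alpha^2)$; as $\alpha\to 0$ the remainder tends to the renormalized energy $-\pi/(24L)$, whose negative $L$-derivative is the Casimir force $-\pi/(24L^2)$:

```python
import math

def E_reg(L, alpha):
    """Regularized vacuum energy between the plates for omega(k) = k:
    (1/2) sum_{n>=1} (n pi / L) exp(-alpha n pi / L), summed in closed form
    via sum_{n>=1} n e^{-beta n} = e^{-beta} / (1 - e^{-beta})^2."""
    beta = alpha * math.pi / L
    s = math.exp(-beta) / math.expm1(-beta)**2
    return 0.5 * (math.pi / L) * s

def E_casimir(L, alpha):
    # subtract L times the outside (flat-space) energy density 1/(2 pi alpha^2)
    return E_reg(L, alpha) - L / (2.0 * math.pi * alpha**2)

L = 1.0
for alpha in (1e-1, 1e-2, 1e-3):
    print(alpha, E_casimir(L, alpha), -math.pi / (24 * L))
```

As $\alpha$ decreases, the renormalized energy approaches $-\pi/(24L)$, confirming that the regularization can be removed.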
To this end, let us begin by writing generalized dispersion relations in the form: $$\omega(k)=k_{c}f\left(\frac{k}{k_{c}}\right)$$ Here, $k_{c}>0$ is a constant with the units of momentum, say the Planck momentum so that its inverse is the Planck length: $L_c=k_c^{-1}$. The function $f$ encodes unknown Planck scale physics and for now we will make only these minimal assumptions: - $f(0)=0$, and $f(x)\approx x$ if $x\ll 1$    (regular dispersion at low energies) - $f(x)\geq 0$ when $x \ge0$     (stability: each mode carries positive energy) We will use the term dispersion relation for both $\omega(k)$ and $f(x)$. \[bedi\] Exponential regularization ========================== For generically modified dispersion relations the vacuum energy (\[eq:infSum\]) must be assumed to be divergent and therefore in need of regularization. Let us therefore regularize (\[eq:infSum\]) by introducing an exponential regularization function, parametrized by $\alpha>0$, i.e. we define the regularized vacuum energy between the plates as: $$\label{eq:EinReg} E_{in}^{reg}(L)=\frac{1}{2}\sum_{n=0}^{\infty}k_{c} \,f\left(\frac{n\pi}{k_{c}L}\right)\exp{\left[-\alpha k_{c}\,f\left(\frac{n\pi}{k_{c}L}\right)\right]}\,$$ In order to calculate the regularized vacuum energy density outside the plates we notice that the right and left outside regions are half axes and that the energy density in a half axis can be calculated from (\[eq:EinReg\]) by letting $L$ go to infinity: $$\mathcal{E}^{reg}=\lim_{L\to\infty}\frac{E_{in}^{reg}(L)}{L}\, \label{ove}$$ The expression for the vacuum energy density outside the plates, (\[ove\]), is conveniently rewritten as a Riemann sum by defining $\Delta x = \frac{1}{L}$: $$\begin{aligned} \label{eq:EoutReg} \mathcal{E}^{reg} & = & \lim_{\Delta x\to 0}\left\{\frac{1}{2}\sum_{n=0}^{\infty}\Delta x~k_{c}\,f\left(\frac{n\Delta x\pi}{k_{c}}\right)\exp{\left[-\alpha k_{c}\,f\left(\frac{n\Delta x\pi}{k_{c}}\right)\right]} \right\}\nonumber \\ & =
&\frac{k_{c}^{2}}{2\pi}\int_{0}^{\infty}dx\,f(x) \exp{\left[-\alpha k_{c}\,f(x)\right]}\,.\end{aligned}$$ Notice that we are here implicitly restricting attention to dispersion relations for which exponential regularization is sufficient to render the energy densities outside and between the plates finite. This excludes, for example, the dispersion relation $f(x)=\ln(1+x)$ which would require a regularization function such as $\exp(-f(x)^2)$. We will later be able to lift this restriction on the dispersion relations, namely by allowing the use of arbitrary regularization functions. Indeed, as we will prove in Sec.\[indep\], our results only depend on the dispersion relation and are independent of the choice of regularization function, as long as the regularization function does regularize the occurring series and integrals, obeys certain mild smoothness conditions and recovers the original divergent series of (\[eq:infSum\]) in the limit $\alpha\to 0$. In order to calculate the Casimir force, let us now consider a very large but finite region, say of length $M$, which contains the two plates. The total energy in this region is finite and consists of the energy between the plates, (\[eq:EinReg\]), plus the energy density outside the plates, (\[eq:EoutReg\]), multiplied by the size of the region outside, namely $M-L$. Note that choosing $M$ large enough ensures that the energy density outside the plates does not depend on $L$. Thus, the total energy in this region is given by $E_{in}^{reg}(L)+(M-L)\mathcal{E}^{reg}$. The regularized Casimir force is minus the derivative of this total energy with respect to the distance between the plates: $$\mathcal{F}_{\alpha}(L)=-\frac{\partial}{\partial L}E_{in}^{reg}+ \mathcal{E}^{reg}\,.$$ The total length $M$ of the region under consideration has dropped out, as it should be. Hence, before removing the regularization (i.e.
before letting $\alpha \rightarrow 0^+$), the Casimir force in the presence of a nonlinear dispersion relation is given by: $$\begin{aligned} \label{eq:GenRel} \lefteqn{\mathcal{F}_{\alpha}(L)=\frac{1}{2}k_{c}\left\{ \sum_{n=0}^{\infty}\frac{1}{L}\left[ \frac{n\pi}{k_{c}L}~ f^{\prime} \left(\frac{n\pi}{k_{c}L}\right)\exp{\left[-\alpha k_{c}\,f\left( \frac{n\pi}{k_{c}L}\right)\right]}\times{} \right.\right.}\nonumber \\ & & {}\left.\left. \times \left(1-\alpha k_{c}\,f \left(\frac{n\pi}{k_{c}L}\right)\right)\right]+ \frac{k_{c}}{\pi}\int_{0}^{\infty}dx\,f(x)\exp{ \left[-\alpha k_{c}\,f(x)\right]}\right\}\end{aligned}$$ Here, $f'$ stands for differentiating $f$ with respect to the variable $x=\frac{n\pi}{k_cL}$. Application of the Euler-Maclaurin formula ========================================== It will be convenient to collect the terms that constitute the argument of the series in a new definition: $$\label{tfg} \varphi_{\alpha}(t) := \frac{t\pi}{k_{c}L} ~f^{\prime} \left(\frac{t\pi}{k_{c}L}\right)\exp{\left[-\alpha k_{c}\,f\left(\frac{t\pi}{k_{c}L}\right)\right]}\left(1-\alpha k_{c}\,f\left(\frac{t\pi}{k_{c}L}\right)\right)$$ Thus, (\[eq:GenRel\]) becomes: $$\mathcal{F}_{\alpha}(L)=\frac{k_c}{2L} \sum_{n=0}^{\infty} \varphi_\alpha(n) ~+~ \frac{k_c^2}{2\pi}\int_0^\infty dx~f(x)~e^{-\alpha k_c f(x)}\label{cf45}$$ We notice that if the first term in (\[cf45\]) were an integral instead of a series then the two terms in (\[cf45\]) would exactly cancel one another: $$\begin{aligned} \label{eq:intPrts}\label{3a} \frac{k_c}{2L}\int_{0}^{\infty}\varphi_{\alpha}(t)\,dt & = & \frac{k_c}{2L}\frac{k_cL}{\pi}\int_{0}^{\infty}\varphi_{\alpha}(t) \,\frac{\pi}{k_cL}\,dt \\ & = & \frac{k_c^2}{2\pi}\int_{0}^{\infty}dx \,x\,f^{\prime}(x)e^{-\alpha k_{c}f(x)}\left(1-\alpha k_{c}f(x)\right)\\ & = & \left .
\frac{k_c^2}{2\pi}~xf(x)e^{-\alpha k_{c}f(x)}\right|_{0}^{\infty}-\frac{k_c^2}{2\pi}\int_{0}^{\infty}dx \,f(x)e^{-\alpha k_{c}f(x)}\label{bt}\\ \label{3c} \label{hew} & = & 0-\frac{k_c^2}{2\pi}\int_{0}^{\infty}dx\,f(x)e^{-\alpha k_{c}f(x)}\,.\end{aligned}$$ In (\[bt\]), the boundary terms are zero because at $x=0$ the dispersion relation yields $f(0)=0$ and because for $x\rightarrow \infty$ the finiteness of (\[eq:EoutReg\]) implies that its integrand decays faster than $1/x$. In order to compute the Casimir force, let us now use the Euler-Maclaurin sum formula, see e.g. [@Ford-etal], to express the series of $\varphi_\alpha$ as an integral of $\varphi_\alpha$ plus corrections. As we just saw, the integral will then cancel in (\[cf45\]) and the correction terms will constitute the Casimir force. To this end, recall that if the $(k+1)$st derivative of a function $\xi$ is continuous, i.e., if $\xi\in \mathcal{C}^{k+1}$, then: $$\begin{aligned} \label{eq:em1} \sum_{a<n\le b}\xi(n)&=&\int_a^b \xi(t)\,dt + \sum_{r=0}^k \frac{(-1)^{r+1}B_{r+1}}{(r+1)!} \left(\xi^{(r)}(b)-\xi^{(r)}(a)\right) +\nonumber \\ & & +\frac{(-1)^k}{(k+1)!} \int_a^b B_{k+1}(t)\xi^{(k+1)}(t)\,dt\end{aligned}$$ Here, the superscript at $\xi^{(r)}$ denotes the $r$’th derivative of the function $\xi$, the $B_{s}$ are the Bernoulli numbers and $B_{s}(t)$ is the $s$’th Bernoulli periodic function, i.e. the periodic extension of the $s$’th Bernoulli polynomial from the interval $[0,1]$. We can now choose $\xi=\varphi_{\alpha}$, set $a=0$ and take the limit $b\to \infty$. Since the vacuum energy density, (\[eq:EoutReg\]), is finite it follows that (\[3c\]) is finite and therefore also (\[3a\]). This in turn implies that $\lim_{x\rightarrow \infty} \varphi_{\alpha}(x)= 0$ and $\lim_{x\to\infty}\varphi_{\alpha}^{(n)}(x)=0$ for all $n\geq1$. 
Hence, the series involving the Bernoulli numbers simplifies and we obtain for arbitrary $k \in \mathbb{N}$ this Euler-Maclaurin formula for $\varphi_\alpha$: $$\sum_{n=0}^{\infty}\varphi_{\alpha}(n)= \int_{0}^{\infty} \varphi_{\alpha}(t)\,dt-\sum_{r=0}^{k} \frac{(-1)^{r+1}B_{r+1}}{(r+1)!}~\varphi_{ \alpha}^{(r)}(0)+\Omega_k[\varphi_{\alpha}]\,$$ Here, $\Omega_k[\varphi_{\alpha}]$ represents the remainder integral: $$\Omega_k[\varphi_{\alpha}] = \frac{(-1)^k}{(k+1)!}\int_0^\infty B_{k+1}(t)~ \varphi_\alpha^{(k+1)}(t)~dt \label{rema}$$ Using $\varphi_{\alpha}(0)=0$ and the fact that, except for $B_{1}$, all Bernoulli numbers $B_s$ with odd indices $s$ are zero, we obtain: $$\sum_{n=0}^{\infty}\varphi_{\alpha}(n)= \int_{0}^{ \infty}\varphi_{\alpha}(t)\,dt-\sum_{r=1}^{k} \frac{B_{2r}}{(2r)!}~ \varphi_{\alpha}^{(2r-1)}(0)+\Omega_k[\varphi_{\alpha}]\label{drt6}$$ Equation (\[drt6\]) expresses the series as an integral plus corrections, as desired. Applied to the expression (\[cf45\]) for the regularized Casimir force, $\mathcal{F}_\alpha(L)$, the integrals then cancel and we obtain for the regularized Casimir force: $$\label{eq:Falpha} \mathcal{F}_{\alpha}(L)=-\frac{k_{c}}{2L}\sum_{r=1}^{k}\frac{B_{2r}}{(2r)!} ~ \varphi_{\alpha}^{(2r-1)}(0)+\frac{k_c}{2L}~ \Omega_k[\varphi_{\alpha}]$$ The actual Casimir force, $\mathcal{F}(L)$, is obtained by removing the regularization: $$\mathcal{F}(L)=\lim_{\alpha\to 0^+}\left\{ -\frac{k_{c}}{2L}\sum_{r=1}^{k}\frac{B_{2r}}{(2r)!} ~ \varphi_{\alpha}^{(2r-1)}(0)+\frac{k_c}{2L}~ \Omega_k[\varphi_{\alpha}]\right\}\label{cf45b}$$ The Casimir force for polynomial dispersion relations ===================================================== In order to further evaluate this expression for the Casimir force let us restrict attention to dispersion relations that are sufficiently well behaved so that $\varphi_\alpha(t)$ is $\mathcal{C}^\infty$ with respect to both $\alpha$ and $t$.
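As a sanity check of the Euler-Maclaurin formula (\[eq:em1\]) itself, the following Python sketch (ours, not part of the derivation) applies it to the test function $\xi(t)=e^{-t}$ with $a=0$, $b\to\infty$, where all boundary terms at infinity vanish; the sign pattern in (\[eq:em1\]) corresponds to the convention $B_1=-1/2$ of the generating function $t/(e^t-1)$:

```python
import math

# Bernoulli numbers B_0..B_12 in the B_1 = -1/2 convention of t/(e^t - 1)
B = [1.0, -0.5, 1/6, 0.0, -1/30, 0.0, 1/42, 0.0, -1/30, 0.0, 5/66, 0.0, -691/2730]

# xi(t) = e^{-t}: xi^{(r)}(0) = (-1)^r, and all derivatives vanish at infinity
exact = 1.0 / (math.e - 1.0)     # sum_{n >= 1} e^{-n}, summed in closed form
approx = 1.0                     # int_0^infinity e^{-t} dt
for r in range(0, 12):
    # boundary-term corrections of (eq:em1): the xi^{(r)}(b) pieces are zero
    approx += (-1)**(r + 1) * B[r + 1] / math.factorial(r + 1) * (0.0 - (-1)**r)
print(approx, exact)
```

For this test function the correction series converges (the remainder integral tends to zero), and already twelve terms reproduce $1/(e-1)$ to about ten decimal places.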
The simplest case is that of dispersion relations which are polynomial: $$f(x)=\sum_{s=0}^{n}\nu_{s}x^{s} \label{polone}$$ We are assuming that $\varphi_\alpha(t) \in \mathcal{C}^\infty$ which here allows us to take the limit $\alpha \to 0$ in $\varphi_\alpha(t)$ before differentiating it. From (\[tfg\]) we then have $\varphi_{0}(t) = \lim_{\alpha\to 0}\varphi_{\alpha}(t)=x(t) f^{\prime}(x(t))$ where $x(t)=\frac{t\pi}{k_cL}$ and where $'$ stands for $d/dx$. Thus, iterated differentiation yields $$\label{eq:diffPhi} \frac{d^{n}\varphi_{0}(t)}{dt^{n}} = n \left(\frac{\pi}{k_cL}\right)^n \,\frac{d^nf(x)}{dx^n}+t\left(\frac{\pi}{k_cL}\right)^{n+1}\, \frac{d^{n+1}f(x)}{dx^{n+1}}$$ and therefore the terms in the series in (\[cf45b\]) read: $$\varphi^{(n)}_{0}(t)\vert_{t=0}= n\left(\frac{\pi}{k_cL}\right)^n \,f^{(n)}(x)\vert_{x=0}\label{dfg}$$ We now show that the remainder term $\Omega_k[\varphi_\alpha]$ does not contribute. Assuming for the moment that the dispersion relation is polynomial, $\varphi_{\alpha}(t)$ is a polynomial times the exponential regularization function $e^{-\alpha k_c f}$ which tends to $1$ as $\alpha\to 0$. Therefore, after sufficiently many differentiations, i.e., when choosing $k$ large enough, $\varphi^{(k+1)}_{\alpha}(t)\to 0$ as $\alpha\to 0$ for all fixed $t$. In order to evaluate $\Omega_k[\varphi_\alpha]$, let us now split (\[rema\]) into two integrals: $\int_0^\infty=\int_0^b+\int_b^\infty$. For all finite $b>0$ the first integral commutes with the limit $\alpha \to 0$ to yield for large enough $k$: $$\lim_{\alpha\to 0}\int_0^b B_{k+1}(t)~\varphi_\alpha^{(k+1)}(t)~dt=\int_0^b\lim_{\alpha\to 0} B_{k+1}(t)~\varphi_\alpha^{(k+1)}(t)~dt=0$$ Further, we notice that, since $f$ is polynomial and the exponential regularization function is positive, $\varphi_\alpha(t)$ does not change sign for all $t>b$ if $b$ is chosen sufficiently large.
Since the periodic Bernoulli functions are bounded in absolute value by the corresponding Bernoulli numbers we therefore obtain: $$\begin{aligned} \left\vert\int_b^\infty B_{k+1}(t)~\varphi_\alpha^{(k+1)}(t)~dt\right\vert & \le & \left\vert B_{k+1}\right\vert~ \left\vert\int_b^\infty \varphi_\alpha^{(k+1)}(t)~dt\right\vert\nonumber\\ & \le & \left\vert B_{k+1}\right\vert~\left\vert\varphi_\alpha^{(k)}(t) \vert_b^\infty\right\vert\nonumber\\ & = & \left\vert B_{k+1}\right\vert~\left\vert\varphi_\alpha^{(k)}(b) \right\vert\nonumber\\ & \to & 0 ~~~\mbox{as}~~\alpha \to 0\end{aligned}$$ Thus, when choosing $k$ large enough, the remainder term disappears so that, using (\[dfg\]), we obtain for the Casimir force for arbitrary polynomial dispersion relations: $$\label{eq:force} \mathcal{F}(L)=-\frac{k_{c}}{2L}\sum_{r=1}^{k} \frac{(2r-1)B_{2r}}{(2r)!}\,f^{(2r-1)}(0)\left(\frac{\pi}{ k_{c}L}\right)^{2r-1}$$ Further, since $f^{(s)}(0)=s!\,\nu_{s}$, we obtain: $$\label{eq:forcePoly} \mathcal{F}(L) = -\frac{k_{c}}{2L}\sum_{r=1}^k \frac{(2r-1)B_{2r}}{2r}~\nu_{2r-1} \left(\frac{\pi}{k_{c}L}\right)^{2r-1}$$ We notice that, interestingly, the even powers in a nonlinear dispersion relation, i.e. the coefficients $\nu_{2r}$, do not contribute to the Casimir force. As a consistency check, let us now choose the usual linear dispersion relation $f(x)=x$. Since $B_{2}=\frac{1}{6}$, we obtain $$\mathcal{F}(L)=-\left(\frac{k_{c}}{2L}\right) \left(\frac{\pi}{k_{c}L}\right)\frac{1}{2\cdot 6}=-\frac{\pi}{24L^{2}}\,,$$ which is the well-known usual result for the Casimir force, as it should be.
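The finite sum (\[eq:forcePoly\]) is straightforward to evaluate. As an illustration (a Python sketch of ours, not from the paper), the following evaluates it for a linear and for a cubic dispersion relation; for $f(x)=x+\nu_3 x^3$ with $\nu_3=1$, the $r=2$ term with $B_4=-1/30$ gives the correction $+\pi^3/(80\,k_c^2L^4)$:

```python
import math

B2 = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30}   # even Bernoulli numbers B_{2r}

def casimir_force_poly(nu, L, kc):
    """Casimir force (eq:forcePoly) for a polynomial dispersion
    f(x) = sum_s nu[s] x**s; only odd coefficients nu_{2r-1} contribute."""
    F = 0.0
    for two_r, Bval in B2.items():
        s = two_r - 1                       # s = 2r - 1
        if s < len(nu):
            F -= (kc / (2 * L)) * (s * Bval / two_r) * nu[s] * (math.pi / (kc * L))**s
    return F

kc, L = 1.0, 10.0
# linear dispersion f(x) = x reproduces the standard result -pi/(24 L^2)
print(casimir_force_poly([0.0, 1.0], L, kc), -math.pi / (24 * L**2))
# a cubic term nu_3 = 1 adds the correction +pi^3/(80 kc^2 L^4)
print(casimir_force_poly([0.0, 1.0, 0.0, 1.0], L, kc))
```

Note that an even coefficient, e.g. $\nu_2$, would simply be skipped by the sum, in line with the observation above.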
Generic dispersion relations ============================ Considering our results for the Casimir force with polynomial dispersion relations, (\[eq:force\],\[eq:forcePoly\]), we notice that the addition of mode energies translates into the addition of the corresponding Casimir forces: if two dispersion relations are added, $f_{t}(x)=f_1(x)+f_2(x)$, then the two corresponding Casimir forces are added: $$\label{eq:additivity} \mathcal{F}_{t}=\mathcal{F}_1+\mathcal{F}_2$$ This shows that the operator, $\mathcal{K}$, that we have been looking for, namely the operator which maps arbitrary dispersion relations into their corresponding Casimir forces, $\mathcal{K}: f \mapsto \mathcal{F}$, is a linear operator: $$\mathcal{K}[f_1+f_2]=\mathcal{K}[f_1]+\mathcal{K}[f_2]$$ Because of its linearity, we can straightforwardly extend the action of $\mathcal{K}$ to arbitrary dispersion relations, $f$, which are given by power series in $x$: $$f(x)=\sum_{s=0}^{\infty}\nu_{s}x^{s}$$ The radius of convergence of the power series must be infinite since the dispersion relation needs to be evaluated for all $x$, i.e., $f$ is an entire function. The linearity of $\mathcal{K}$ yields the corresponding Casimir force function $\mathcal{F}$ as a power series in $1/L$: $$\label{foww} \mathcal{K}[f](L)=\mathcal{F}(L)=-\frac{k_{c}}{2L}\sum_{r=1}^{\infty} \frac{(2r-1)B_{2r}}{2r}\,\nu_{2r-1}\left(\frac{\pi}{ k_{c}L}\right)^{2r-1}\,$$ We need to determine under which conditions the resulting power series for the Casimir force function is convergent. Interestingly, as we will show in Sec.\[nese\], the convergence, i.e. the well-definedness of the Casimir force, generally depends on the plate separation $L$. When the power series possesses a finite radius of convergence, i.e. when there is a largest allowed value for $1/L$, this means that there is a smallest allowed value for the length $L$.
This is beautifully consistent with the expectation that dispersion relations that arise from an underlying quantum gravity theory can imply a finite minimum length scale. For analyzing the convergence properties of the series (\[foww\]) the presence of the Bernoulli numbers is somewhat cumbersome. It will be useful, therefore, to use the connection between the Bernoulli numbers and the Riemann zeta function, see [@Havil]: $$B_{n}=(-1)^{n+1}n\,\zeta(1-n)$$ Thus: $$\label{mres} \mathcal{F}(L)=\frac{k_{c}}{2L}\sum_{r=1}^{\infty} (2r-1)\zeta(1-2r)~\nu_{2r-1}\left(\frac{ \pi}{k_{c}L}\right)^{2r-1}\,.$$ We can now use the fact that, see [@Hardy99]: $$\zeta(1-s)=\frac{2}{(2\pi)^{s}}\cos\left(\frac{1}{2}\pi s\right)\Gamma(s)\zeta(s)$$ In our case, since $s$ is always an integer, the Euler gamma function reduces to a factorial, and the cosine is $\pm 1$. Thus: $$\label{eq:forceFactorial} \mathcal{F}(L)=\frac{k_{c}}{L}\sum_{r=1}^{\infty} \frac{(-1)^{r}}{(2\pi)^{2r}}~(2r-1)~(2r-1)!~\zeta(2r)~ \nu_{2r-1}\left(\frac{\pi}{k_{c}L} \right)^{2r-1}$$ Having replaced the Bernoulli numbers by the Riemann zeta function is advantageous because obviously $\zeta(r)\to1$ very quickly as $r\to\infty$. For example, for $r=6$, the difference is already at the one percent level. This means that for the purpose of analyzing the convergence properties of the power series we will be able to use that the Riemann zeta function for the arguments that occur is close to $1$ and essentially constant. Example with minimum length {#nese} =========================== Ultraviolet-modified nonlinear dispersion relations which approach the usual linear dispersion relation for small momenta are given, for example, by: $$\label{eq:dispExpSinh} f(x)=\exp(x)-1 \qquad\textrm{and}\qquad f(x)=\sinh(x)$$ The odd coefficients, $\nu_{2r-1}=1/(2r-1)!$ are the same for both the exponential and the $sinh$ dispersion relation, i.e. the two functions differ only by their even part. 
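As an aside, the conversion between the Bernoulli numbers and the Riemann zeta function used in passing from (\[mres\]) to (\[eq:forceFactorial\]) is easy to check numerically. The sketch below (ours, not from the paper) evaluates $\zeta(1-s)$ via the functional equation and recovers $B_n=(-1)^{n+1}n\,\zeta(1-n)$ for the first even Bernoulli numbers:

```python
import math

def zeta(s, terms=100000):
    # truncated Dirichlet series plus an integral tail estimate; fine for s >= 2
    return sum(1.0 / n**s for n in range(1, terms + 1)) + terms**(1 - s) / (s - 1)

def zeta_one_minus(s):
    # functional equation: zeta(1-s) = 2 (2 pi)^{-s} cos(pi s / 2) Gamma(s) zeta(s)
    return 2.0 / (2 * math.pi)**s * math.cos(math.pi * s / 2) * math.gamma(s) * zeta(s)

# B_n = (-1)^{n+1} n zeta(1-n): recover B_2, B_4, B_6
for n, Bn in [(2, 1/6), (4, -1/30), (6, 1/42)]:
    print(n, (-1)**(n + 1) * n * zeta_one_minus(n), Bn)
```

The same two identities, composed, give exactly the prefactors appearing in (\[eq:forceFactorial\]).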
But we know from (\[eq:forceFactorial\]) that the even components of the dispersion relations do not affect the Casimir force. The two dispersion relations therefore happen to lead to the same Casimir force. It is plotted with the usual Casimir force in Fig.\[fig:exponential\]. We see that the Casimir force matches the usual Casimir force at large $L$ but is weaker for small $L$. As the plot also shows, the Casimir force is well defined only for values of $L$ above a certain value $L_c$, corresponding to a finite radius of convergence of the power series in $1/L$ for the Casimir force. In order to calculate this minimum length $L_c$, we notice that all the coefficients $\nu_{2r-1}$ are non-negative, which implies that (\[eq:forceFactorial\]) is an alternating series. Since its terms are eventually monotonic in magnitude, such a series converges if and only if its terms tend to zero. Hence, for any such dispersion relation, the Casimir force is well defined for all $L$ which obey: $$\lim_{r\to\infty}\left[\frac{1}{(2\pi)^{2r}}~ (2r-1)~(2r-1)!~\zeta(2r)~\nu_{2r-1}\left(\frac{\pi}{k_{c}L} \right)^{2r-1}\right]=0$$ In the particular case of the two dispersion relations above, we have $\nu_{2r-1}=1/(2r-1)!$ and the condition that the Casimir force be well-defined therefore reads $$\lim_{r\to\infty}\left[\frac{\zeta(2r)}{(2\pi)}~ (2r-1)\left(\frac{1}{2k_{c}L} \right)^{2r-1}\right]=0$$ which means that $\frac{1}{2k_cL} < 1$. The minimum length implied by this dispersion relation is therefore: $$L_c = \frac{1}{2k_c} \label{edf}$$ This is an example of what we hinted at before, namely that a dispersion relation can in this way reveal an underlying short-distance cutoff. For general dispersion relations the coefficients $\nu_{2r-1}$ are not necessarily all positive, i.e., the Casimir force need not be given by an alternating series.
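The convergence criterion just derived is easy to probe numerically. The following Python sketch (ours, not from the paper; $\zeta(2r)$ approximated by $1$) evaluates the magnitude of the series terms for the exp/sinh dispersion relations just above and just below $L_c=1/(2k_c)$:

```python
import math

def term(r, L, kc):
    """Magnitude of the r-th term of the alternating force series for
    nu_{2r-1} = 1/(2r-1)!, i.e. (2r-1)/(2 pi) * (1/(2 kc L))**(2r-1),
    with zeta(2r) approximated by 1."""
    return (2 * r - 1) / (2 * math.pi) * (1.0 / (2 * kc * L))**(2 * r - 1)

kc = 1.0
Lc = 1.0 / (2 * kc)              # predicted minimum length
for L in (1.2 * Lc, 0.8 * Lc):   # just above and just below L_c
    print(L, [term(r, L, kc) for r in (5, 20, 80)])
```

For $L>L_c$ the terms decay to zero (the series converges), while for $L<L_c$ they grow without bound, in agreement with (\[edf\]).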
In this general case the minimum length can be determined by using the fact that the radius of convergence, $\mathcal{R}$, of an arbitrary power series $\sum c_{r}x^{r}$ is given by: $$\label{eq:RConv} \frac{1}{\mathcal{R}}=\limsup_{r\to\infty}\left|c_{r}\right|^{\frac{1}{r}}\,.$$ For example, in the case of the dispersion relations given in (\[eq:dispExpSinh\]), where $\nu_{2r-1}=1/(2r-1)!$, the Casimir force (\[eq:forceFactorial\]) can be written as a power series $\mathcal{F}(L)=\sum_{r=1}^\infty c_r \left(\frac{1}{L^2}\right)^r$ in $1/L^2$ with the coefficients: $$c_r = \frac{(-1)^r~k_c ~(2r-1)~\zeta(2r)}{2\pi}\left(\frac{1}{2 k_c }\right)^{2r-1}$$ Thus, the minimum length obeys $$\begin{aligned} L_c^2 & = & \limsup_{r\to\infty}\left[\frac{k_c ~(2r-1)~\zeta(2r)}{2\pi}\left(\frac{1}{2 k_c}\right)^{2r-1}\right]^{\frac{1}{r}}\\ & = & \lim_{r\to \infty}\left(\frac{1}{2 k_c}\right)^{\frac{2r-1}{r}}\\ & = & \left(\frac{1}{2k_c}\right)^2\end{aligned}$$ and therefore: $$L>L_c=\frac{1}{2k_{c}}$$ As expected, this agrees with the result (\[edf\]) which we obtained by using the alternating series test. Regularization-function independence {#indep} ==================================== It is known that the prediction for the Casimir force with the usual linear dispersion relation does not depend on the choice of regularization function, as long as the regularization function obeys certain smoothness conditions and is such that it does in fact regularize the integrals and series which occur in the calculation. In our calculation of the Casimir force for nonlinear dispersion relations we chose an exponential regularization function. We need to prove that our result (\[eq:force\]) does not depend on this choice. 
To see that this is indeed the case, assume that we use an arbitrary regularization function, $\gamma_{\alpha}(x)$, which is a positive function of $x$ that obeys $\lim_{\alpha\to 0^+}\gamma_{\alpha}(x)=1$ for all $x$ so that the original divergent series is recovered when the regulator $\alpha$ goes to zero. The regularized energy between the plates then reads: $$\label{eds} \tilde{E}_{in}^{reg}=\frac{1}{2} \sum_{n=0}^{\infty}k_{c}\,f\left(\frac{n\pi}{k_{c}L} \right)\,\gamma_{\alpha}\left[f\left(\frac{n\pi}{k_{c}L} \right)\right]$$ The regularization function, $\gamma_{\alpha}$, needs to be chosen such that (\[eds\]) as well as the energy density are finite, i.e. such that $\lim_{L\to\infty} \tilde{E}_{in}^{reg}(L)/L<\infty$, which means: $$\label{eq:finiteCut} \int_{0}^{\infty}dx\,f(x)\gamma_{\alpha}\left[f(x)\right]<\infty\,$$ Finally, in order to be able to use the Euler-Maclaurin sum formula and in it to interchange $d/dt$ and the limit $\alpha\to 0$, we require the regularization functions $\gamma_{\alpha}$ to be smooth enough so that $\gamma_{\alpha}\in\mathcal{C}^{\infty}$ as well as $\varphi_\alpha(t) \in \mathcal{C}^{\infty}$ as a function of $\alpha$ and $t$. The above derivation of the Casimir force can then be repeated point by point using the corresponding new definition of $\varphi_{\alpha}$. In particular, we apply the Euler-Maclaurin sum formula to the expression: $$\begin{aligned} \label{eq:GenGenRel} \lefteqn{\tilde{\mathcal{F}}_{\alpha}(L)=\frac{k_{c}}{2L}\left\{ \sum_{n=0}^{\infty}\left[ \frac{n\pi}{k_{c}L}~ f^{\prime}\left(\frac{n\pi}{k_{c}L}\right) \left\{\gamma_{\alpha}\left[f\left(\frac{n\pi}{k_{c}L}\right)\right]+{} \right.\right.\right.}\nonumber \\ & & {}\left.\left.\left.
+f\left(\frac{n\pi}{k_{c}L}\right) \gamma_{\alpha}^{\prime} \left[f\left(\frac{n\pi}{k_{c}L}\right)\right] \right\}\right]+\frac{k_{c}L}{\pi}\int_{0}^{ \infty}f(x)\gamma_{\alpha}[f(x)]\,dx\right\}\end{aligned}$$ An integration by parts as in (\[eq:intPrts\]) shows that the integrals cancel. Equation (\[eq:finiteCut\]) ensures that the boundary term vanishes, as before in (\[bt\]). Hence, we again arrive at (\[eq:Falpha\]). We now take the limit $\alpha\to 0$ term by term in the sum, and since $\varphi_{\alpha}$ is in $\mathcal{C}^{\infty}$, we can again do this before differentiating. Moreover, by the basic assumptions made on $\gamma_{\alpha}$, we know that $\gamma_{\alpha}^{\prime}(x)\to 0$ as $\alpha\to 0$, so that as before: $$\lim_{\alpha\to 0}\varphi_{\alpha}(t)=x(t)f^{\prime}(x(t))$$ The arguments given in the previous section to show that the remainder integral disappears for polynomial dispersion relations and that the coefficients in the Euler-Maclaurin sum are those given in (\[eq:forcePoly\]) apply unchanged. This proves that our results for the Casimir force are independent of the choice of regularization function, as they should be. The operator $\mathcal{K}$ which maps dispersion relations into Casimir force functions ======================================================================================= In preparation for our study of the transplanckian question for the Casimir effect in Sec.\[tps\], let us now calculate explicit representations of the operator $\mathcal{K}$ which maps dispersion relations $f$ into Casimir force functions $\mathcal{F}$: $$\mathcal{K}: ~f(x) \longmapsto \mathcal{F}(L)$$ We already saw that $\mathcal{K}$ is linear.
Indeed, from (\[eq:forceFactorial\]), it can be written as a differential operator: $$\mathcal{K}=\frac{k_{c}}{2\pi L}\sum_{r=1}^{\infty}(-1)^{r}(2r-1)\zeta(2r)\left(\frac{1}{2k_{c}L} \right)^{2r-1}\left .\frac{d^{2r-1}}{dx^{2r-1}}\right|_{x=0} \label{dop}$$ As we already mentioned, $\zeta(2r)$ converges to $1$ very rapidly as $r\to\infty$. Since the study of the transplanckian question involves large orders of magnitude, we will henceforth replace $\zeta(2r)$ by $1$. By this approximation we incur at most a numerical error of a pre-factor of order one, which will not affect our later analysis of the question of when ultraviolet modifications to the dispersion relations can or cannot affect the Casimir force in the infrared.

Representation of $\mathcal{K}$ as an integral operator {#sec9.1}
-------------------------------------------------------

For the purpose of studying the transplanckian question, the representation of $\mathcal{K}$ as a differential operator in (\[dop\]) is not as suitable as a representation as an integral operator would be. Indeed, as we now show, an equivalent representation of $\mathcal{K}$ is given by $$\label{eq:NiceIntOp} \mathcal{K}[f](L)=\mathcal{F}(L)=\frac{k_{c}^{2}}{\pi}~\text{Im}\int_{0}^{ \infty} f(ix)~(1-2k_cLx)~e^{-2k_cLx}\,dx$$ where Im stands for taking the imaginary part.
To verify that the action of this operator on any polynomial $f$ agrees with that given in (\[dop\]), let us begin by introducing variables $\Lambda=2k_{c}L$ and $\tilde{x}=2k_cLx$, to write: $$\label{eq:IntOpBeg} \mathcal{F}(L)=\frac{k_{c}^{2}}{\pi\Lambda}~\text{Im} \int_{0}^{\infty} f\left(i\frac{\tilde{x}}{\Lambda}\right)(1-\tilde{x})~ e^{-\tilde{x}}\,d\tilde{x}$$ We claim that iterated integrations by parts yield: $$\begin{aligned} \label{eq:IntOp} \mathcal{F}(L) &=&\frac{k_{c}^{2}}{\pi\Lambda}~\text{Im}\left\{ \sum_{s=0}^{n}\left.e^{-\tilde{x}}(\tilde{x}+s)\frac{d^{s}}{ d\tilde{x}^{s}}f\left(i\frac{\tilde{x}}{\Lambda} \right)\right|_{\tilde{x}=0}^{ \infty}\right. \nonumber \\ & & \qquad \quad \left. - \int_{0}^{\infty}e^{-\tilde{x}}(\tilde{x}+n) \frac{d^{n+1}}{d\tilde{x}^{n+1}}f \left(i\frac{\tilde{x}}{\Lambda}\right)\,d\tilde{x} \right\}\end{aligned}$$ Integrating (\[eq:IntOpBeg\]) by parts once shows that the equation holds for $n=0$. Assuming now that the formula is valid for $n-1$, integration by parts of the remaining integral yields: $$\begin{aligned} \mathcal{F}(L) &=& \frac{k_{c}^{2}}{\pi\Lambda}~\text{Im}\left\{ \sum_{s=0}^{n-1}\left.e^{-\tilde{x}}(\tilde{x}+s)\frac{d^{s}}{ d\tilde{x}^{s}}f\left(i\frac{\tilde{x}}{ \Lambda}\right)\right|_{\tilde{x}=0}^{ \infty}\right. \label{wed1} \\ & & \qquad \qquad + \left.e^{-\tilde{x}}(\tilde{x}+n)\frac{d^{n}}{d\tilde{x}^{n}}f\left( i\frac{\tilde{x}}{\Lambda}\right) \right|_{\tilde{x}=0}^{\infty}\label{wed2}\\ & & \qquad \qquad - \left. \int_{0}^{ \infty} e^{-\tilde{x}}(\tilde{x}+n)\frac{d^{n+1}}{d\tilde{x}^{n+1}}f\left(i \frac{\tilde{x}}{\Lambda}\right)\,d\tilde{x}\right\}\end{aligned}$$ The boundary term in (\[wed2\]) becomes the next term in the sum (\[wed1\]) and by induction this completes the proof of (\[eq:IntOp\]). In (\[eq:IntOp\]), since $f$ is polynomial, the integral vanishes if $n$ is chosen large enough. Also, the boundary terms clearly vanish at the upper limit.
Letting $n\to \infty$, we are left with: $$\begin{aligned} \label{eq:SumI} \mathcal{F}(L) & = & \frac{-k_c^2}{\pi\Lambda}~\text{Im}\sum_{s=0}^\infty \left.s\frac{d^{s}}{d\tilde{x}^{s}}f \left(i\frac{\tilde{x}}{\Lambda}\right)\right|_{\tilde{x}=0}\\ & = & \frac{k_c}{2\pi L} \label{la2} \sum_{r=1}^\infty ~(2r-1)~(-1)^r~\left(\frac{1}{2k_cL} \right)^{2r-1}\frac{d{\,}^{2r-1}}{dx^{2r-1}} ~f(x)\vert_{x=0}\end{aligned}$$ which agrees with (\[dop\]), up to the zeta function which we omitted since it is close to one. In the step from (\[eq:SumI\]) to (\[la2\]) we made use of the fact that the imaginary part selects for only the odd powers in the series. As a consistency check, let us apply the integral representation, (\[eq:NiceIntOp\]), of $\mathcal{K}$ to the usual linear dispersion relation $f(x)=x$. Carrying out the integration yields $\mathcal{F}(L)=-\frac{1}{4\pi L^{2}}$. As expected, this differs from the usual result only by the omitted $\zeta$ function pre-factor of $\zeta(2)=\frac{\pi^{2}}{6}$. Relation of $\mathcal{K}$ to the Laplace transform -------------------------------------------------- The representation of $\mathcal{K}$ as an integral operator came at the cost of complexifying the analysis by having to integrate the dispersion relation along the imaginary axis. Fortunately, it is possible to re-express $\mathcal{K}$ as a real integral operator, namely as a slightly modified Laplace transform. To this end, let us use our finding that even powers in the dispersion relations do not contribute to the Casimir force. This means that, without restricting generality, we can assume that the dispersion relation is odd, i.e. that it can be written in the form $$f(x)=x~g(x^{2})$$ for some function $g$. 
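The consistency check above is straightforward to reproduce numerically. Here is a minimal sketch (our own illustration, with $k_c=1$ and a simple trapezoidal quadrature) that evaluates the integral representation (\[eq:NiceIntOp\]) for the linear dispersion relation and for a pure cubic term; the latter reproduces the $\zeta$-omitted coefficient $9/(8\pi k_c^2L^4)$ implied by (\[dop\]):

```python
import math

def casimir_force(f, L, kc=1.0, N=200_000, xmax=40.0):
    """Trapezoidal evaluation on [0, xmax/(2 kc L)] of
       F(L) = (kc^2/pi) Im int_0^inf f(ix) (1 - 2 kc L x) e^{-2 kc L x} dx."""
    a = 2.0 * kc * L
    h = xmax / a / N
    s = 0.0
    for i in range(N + 1):
        x = i * h
        w = 0.5 if i in (0, N) else 1.0
        s += w * (f(1j * x)).imag * (1.0 - a * x) * math.exp(-a * x)
    return kc ** 2 / math.pi * s * h

F_lin = casimir_force(lambda z: z, 1.0)        # linear dispersion f(x) = x
F_cub = casimir_force(lambda z: z ** 3, 1.0)   # cubic term f(x) = x^3

print(F_lin, -1.0 / (4.0 * math.pi))           # both ~ -0.0795775
print(F_cub, 9.0 / (8.0 * math.pi))            # both ~  0.3580986
```

The imaginary part automatically discards even powers of the dispersion relation, so adding, e.g., a term $z^2$ to either lambda leaves both results unchanged.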
Thus, $f(ix) = i\,x\,g(-x^2)$, and therefore the integral representation (\[eq:NiceIntOp\]) of $\mathcal{K}$ now takes the form: $$\label{eq:OpIntRe} \mathcal{K}[f](L)=\mathcal{F}(L)=\frac{k_{c}^{2}}{\pi}\int_{0}^{ \infty}x~g(-x^{2})~(1-2k_cLx)~e^{-2k_cLx}\,dx$$ Using the properties of the Laplace transform with respect to differentiation, we can finally conclude that the operator $\mathcal{K}$ which maps dispersion relations into Casimir force functions can be written as a modified Laplace transform: $$\begin{aligned} \mathcal{K}[f](L) = \mathcal{F}(L) & = & \frac{k_{c}^{2}}{\pi}\nonumber \left(1+L\frac{d}{d L}\right) \int_0^\infty e^{-2k_cLx}x~g(-x^2)~dx \\ & = & \frac{k_{c}^{2}}{\pi} \left(1+L\frac{d}{d L}\right)\mathcal{L}_{\Lambda}[\tilde{f}]\label{cffi}\end{aligned}$$ In the last line, $\mathcal{L}_{\Lambda}[\tilde{f}]$ stands for the Laplace transform of $\tilde{f}(x) =x\,g(-x^{2})$ with respect to the variable $\Lambda = 2k_cL$. Let us test (\[cffi\]) by applying it to the linear dispersion relation, where $\tilde{f}(x)=x$. Then, $$\begin{aligned} \mathcal{F}(L) & = & \frac{k_c^2}{\pi}~\left(1+L~\frac{d}{d L}\right)\int_0^\infty e^{-2k_c L x}x~dx\\ & = & -\frac{1}{4\pi L^2}~,\end{aligned}$$ which indeed agrees with the expected result as obtained at the end of Sec.\[sec9.1\]. We notice that the representation of $\mathcal{K}$ through (\[cffi\]) involves the analytic extension of the function $g$ from positive arguments, where it encodes the dispersion relation through $f(x)=x\,g(x^2)$, to negative arguments where $g$ is evaluated by the Laplace transform in (\[cffi\]). This observation about $\mathcal{K}$ will be useful for answering the transplanckian question in Sec.\[tps\]: clearly, the dispersion relation $f(x)=x\,g(x^2)$ may be very close to linear, i.e. $g(y)$ may be close to one for $y>0$, while at the same time the unique analytic extension $g(y)$ for $y<0$ may be far from linear. 
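The Laplace-transform representation (\[cffi\]) can likewise be checked numerically. In the sketch below (again with $k_c=1$; the trapezoidal Laplace transform and the central finite difference for $L\,d/dL$ are our own choices) we take $g(y)=1+y$, i.e. $f(x)=x+x^3$, for which the exact answer, with the $\zeta$ factors set to one, is $-1/(4\pi L^2)+9/(8\pi k_c^2L^4)$:

```python
import math

def laplace(ft, Lam, N=200_000, xmax=60.0):
    """Trapezoidal Laplace transform  int_0^inf e^{-Lam*x} ft(x) dx."""
    h = xmax / Lam / N
    s = 0.0
    for i in range(N + 1):
        x = i * h
        w = 0.5 if i in (0, N) else 1.0
        s += w * math.exp(-Lam * x) * ft(x)
    return s * h

def force(g, L, kc=1.0, eps=1e-6):
    """F(L) = (kc^2/pi) (1 + L d/dL) Laplace_{Lam = 2 kc L}[x g(-x^2)],
    with the L-derivative taken by a central finite difference."""
    F = lambda LL: laplace(lambda x: x * g(-x * x), 2.0 * kc * LL)
    dF = (F(L * (1.0 + eps)) - F(L * (1.0 - eps))) / (2.0 * L * eps)
    return kc ** 2 / math.pi * (F(L) + L * dF)

# g(y) = 1 + y corresponds to the dispersion relation f(x) = x + x^3
F_num = force(lambda y: 1.0 + y, 1.0)
print(F_num, 7.0 / (8.0 * math.pi))   # ~ 0.2785, i.e. -1/(4 pi) + 9/(8 pi)
```

Note that, exactly as the text observes, the code evaluates $g$ only at negative arguments: the Casimir force probes the analytic continuation of $g$ to the negative half-axis.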
This already shows that ultraviolet-modified dispersion relations can easily lead to arbitrarily pronounced nontrivial Casimir forces even at infrared length scales.

The inverse of $\mathcal{K}$
----------------------------

Let us now calculate the inverse of the operator $\mathcal{K}$, i.e. the operator which maps Casimir force functions into the corresponding odd dispersion relations (recall that the even components of the dispersion relation do not contribute to the Casimir force). To this end, we need to solve for $\tilde{\mathcal{F}}(L)$: $$\frac{k_{c}^{2}}{\pi}\left(1+L\frac{d}{dL}\right) \tilde{\mathcal{F}}(L)=\mathcal{F}(L)\,.$$ The Green’s function for this differential operator satisfies the following equation: $$\frac{k_{c}^{2}}{\pi}\left(1+L\frac{d}{dL}\right) G_{\mathcal{F}}(L,L')=\delta(L-L')$$ Since the $\delta$-function is formally the derivative of the Heaviside step function $\theta$, an integration on both sides yields $$\int G_{\mathcal{F}}(L,L')\,dL+L G_{\mathcal{F}}(L,L')-\int G_{\mathcal{F}}(L,L')\,dL =\frac{\pi}{k_{c}^{2}}\,\theta(L-L')+\kappa(L')\,,$$ where $\kappa(L')$ is some arbitrary function. Hence, $$G_{\mathcal{F}}(L,L')=\frac{1}{L}\left[\frac{\pi}{ k_{c}^{2}}\,\theta(L-L')+\kappa(L')\right]\,,$$ and $$\label{eq:PreFTilde} \tilde{\mathcal{F}}(L)=\frac{1}{L}\int_{-\infty}^{\infty} \left[\frac{\pi}{k_{c}^{2}}\,\theta(L-L')+ \kappa(L')\right]\mathcal{F}(L')\,dL'\,.$$ For the boundary condition, we set $\tilde{\mathcal{F}}(L)\to0$ as $ L\to+\infty$, to ensure the correct behavior of $\mathcal{F}$.
Hence, $$\kappa(L')+\frac{\pi}{k_{c}^{2}}=0 \, \Longleftrightarrow \, \kappa(L')\equiv -\frac{\pi}{k_{c}^{2}}$$ Thus, the integral in (\[eq:PreFTilde\]) is effectively truncated and we have: $$\label{eq:FTilde} \tilde{\mathcal{F}}(L)=-\frac{\pi}{k_{c}^{2}L}\int_{L}^{ \infty}\mathcal{F}(L')\,dL'\,.$$ Eventually, we also need to invert the Laplace transform through a Fourier-Mellin integral, to obtain: $$\label{invfo} x\,g(-x^{2})=-\frac{1}{i k_c}\int_{\gamma}\frac{dL}{L}\,e^{2k_c x L}\int_{L}^{\infty}\mathcal{F}(L')\,dL'$$ Here, the integration path $\gamma$ is to be chosen parallel to the imaginary axis and to the right of all singularities of the integrand. Analytic continuation of $g$ to the positive reals finally yields the dispersion relation $\mathcal{K}^{-1}[\mathcal{F}](x)=f(x)=xg(x^2)$, modulo, of course, even components of the dispersion relation. We will here not go further into the functional analysis of (\[invfo\]) and the inverse of $\mathcal{K}$.

The transplanckian question {#tps}
===========================

Having calculated $\mathcal{K}$, we are now prepared to address the transplanckian question, namely the question of which types of Planck scale modified dispersion relations would significantly affect the predictions for the Casimir force at realistic plate separations. To this end, let us begin by investigating the lowest order corrections to the dispersion relation, $f$, namely by including quadratic and cubic correction terms: $f(x)=x+\nu_2x^2+\nu_3x^3$. The coefficients $\nu_2,\nu_3$ can be as large as order one, $\nu_2,\nu_3\approx 1$, without appreciably affecting the dispersion relation $\omega(k)=k_c\,f(k/k_c)$ at small momenta $k\ll k_c$. Using our result (\[eq:forceFactorial\]) for $\mathcal{K}$ we find the corresponding Casimir force function: $$\mathcal{F}(L) = -\frac{\pi}{24 L^2} \,+\nu_3\,\frac{\pi^3}{80\, k_c^2L^4}$$ The quadratic correction term $\nu_2 x^2$ is an even component of $f$ and therefore does not affect the Casimir force.
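As a check of the inversion formula (\[eq:FTilde\]), here is a sketch (our own, with $k_c=1$) that starts from the standard force $\mathcal{F}(L)=-1/(4\pi L^2)$: the truncated integral should give $\tilde{\mathcal{F}}(L)=1/(4L^2)$, which is precisely the Laplace transform of $\tilde f(x)=x$ at $\Lambda=2L$, and applying $(k_c^2/\pi)(1+L\,d/dL)$ should recover $\mathcal{F}$:

```python
import math

F_std = lambda L: -1.0 / (4.0 * math.pi * L ** 2)   # standard Casimir force, k_c = 1

def F_tilde(L, N=100_000):
    """Eq. (eq:FTilde): -(pi/L) int_L^inf F_std(L') dL', computed with the
    substitution L' = L/t, t in (0,1], so the infinite range becomes finite."""
    h = 1.0 / N
    s = 0.0
    for i in range(1, N + 1):           # t = 0 corresponds to L' = infinity
        t = i * h
        w = 0.5 if i == N else 1.0
        s += w * F_std(L / t) * L / (t * t)
    return -math.pi / L * s * h

L, eps = 1.0, 1e-5
print(F_tilde(L), 1.0 / (4.0 * L ** 2))             # Laplace transform of x at 2L

# applying (1/pi)(1 + L d/dL) must give back the force we started from
dFt = (F_tilde(L * (1.0 + eps)) - F_tilde(L * (1.0 - eps))) / (2.0 * L * eps)
print((F_tilde(L) + L * dFt) / math.pi, F_std(L))   # both ~ -0.0796
```

The remaining inverse Laplace transform then returns $\tilde f(x)=x$, closing the loop from force function back to dispersion relation.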
The cubic correction term does affect the Casimir force, changing the Casimir force from attractive to repulsive at very short distances, as shown in Fig. \[fig:Poly\]. However, as we can also see in Fig. \[fig:Poly\], the Casimir force function converges very rapidly towards the usual Casimir force function for plate separations that are significantly larger than $L_c=k_c^{-1}$. To be precise, we recall that the standard dispersion relation $f_\text{standard}(x)=x$ implies the standard Casimir force function $\mathcal{F}_\text{standard}(L) = -\frac{\pi}{24 L^2}$. The relative size of the correction to the Casimir force depends on the plate separation $L$ and reads: $$\label{relcorc} \frac{\mathcal{F}_\text{standard}(L)-\mathcal{F}(L)}{ \mathcal{F}_\text{standard}(L)}=\nu_3\frac{3\pi^2}{10\,L^2k_c^2}$$ Let us calculate the orders of magnitude. The dispersion relation $\omega(k)=k_c\,f(k/k_c)$ is expected to start to appreciably differ from linearity at the latest at the Planck scale, which in $3+1$ dimensional space-time means that the critical length, $L_c$, obeys $L_c =k_{c}^{-1}\approx 10^{-35}m$. Actual measurements of the Casimir force have been performed at about $L_m\approx 10^{-7}m$, see e.g. [@Lamoreaux:1999cu-etal]. Therefore, evaluating the relative correction of the Casimir force, (\[relcorc\]), at the measurable scale $L=L_m$ yields $$\frac{\mathcal{F}_\text{standard}(L_m)-\mathcal{F}(L_m)}{ \mathcal{F}_\text{standard}(L_m)}=\nu_3\,\frac{3\pi^2}{10}~\sigma^2$$ where $\sigma$ denotes the dimensionless ratio of the ultraviolet length scale $L_c$ and the infrared length scale $L_m$: $$\sigma=\frac{L_{c}}{L_{m}} \approx 10^{-28}$$ Thus, the effect of the lowest order corrections to the dispersion relation on the Casimir force is extremely small at measurable plate separations. Naively, one might expect that higher-order corrections to the dispersion relations contribute even less to the Casimir force.
If true, this would indicate that the physical processes that happen at these two length scales respectively are very effectively decoupled from one another. In fact, however, the two scales are not quite as decoupled. Roughly speaking, the reason is that higher order corrections to the dispersion relations contribute more rather than less to the Casimir force, as we will now show.

UV-IR coupling with polynomial dispersion relations {#sec:IRUV}
---------------------------------------------------

\[shfi\] Recall that we here need not be concerned with the even components of dispersion relations since they do not contribute to the Casimir force. Let us, therefore, consider higher order odd polynomial dispersion relations: $$f(x)=x+\sum_{r=2}^N\nu_{2r-1}\,x^{2r-1}$$ The coefficients $\nu_{2r-1}$ can be chosen as large as order one, $\nu_{2r-1}\approx 1$, and $f$ will still be modified only in the ultraviolet. We showed above that the contribution of the lowest order correction term, $\nu_3x^3$, to the Casimir force at the infrared length scale $L_m$ is proportional to $\sigma^2$, i.e. that it is completely negligible. One might expect that higher order terms $\nu_{2r-1}x^{2r-1}$ in the dispersion relation would contribute even less to the Casimir force. At first sight this expectation appears to be confirmed: $\mathcal{K}$ maps a dispersion relation term $\sim x^{2r-1}$ into a Casimir force term $\sim (k_cL)^{-2r}$. At the infrared scale, $L=L_m$, the latter term reads: $$\left(\frac{1}{k_cL}\right)^{2r}=\left(\frac{L_c}{L_m}\right)^{2r}= \sigma^{2r}$$ This indeed means that the size of this term decreases exponentially with increasing $r$. Upon closer inspection, however, we see that, nevertheless, a higher order term $x^{2r-1}$ in $f$ can give an arbitrarily large contribution to the Casimir force, in particular if $r$ is very large.
The reason is that $\mathcal{K}$ involves a factorial amplification of higher order terms which eventually overcomes the exponential suppression that we discussed above. Namely, as (\[eq:forceFactorial\]) shows, the precise action of $\mathcal{K}$ on the correction term $\nu_{2r-1}x^{2r-1}$ reads: $$\mathcal{K}: ~~\nu_{2r-1}\,x^{2r-1} ~~\longrightarrow~~ \nu_{2r-1}\,\frac{(-1)^r k_c^2}{\pi}\,(2r-1)(2r-1)!\, \zeta(2r)\left(\frac{1}{2k_cL}\right)^{2r} \label{kappa34}$$ Due to the presence of the factorial term $(2r-1)!$, the coefficients of the Casimir force function grow much faster than those of the dispersion relation. In particular, for the dispersion relation $f(x)=x+ \nu_{2r-1}x^{2r-1}$ the relative change in the Casimir force at the infrared scale $L_m$ reads: $$\frac{\mathcal{F}_\text{standard}(L_m)-\mathcal{F}(L_m)}{ \mathcal{F}_\text{standard}(L_m)}=\nu_{2r-1}\, \frac{(-1)^{r}\,6\,(2r-1)\,\zeta(2r)}{\pi^2}~(2r-1)!~ \left(\frac{\sigma}{2}\right)^{2r-2}$$ It is straightforward to apply Stirling’s formula for the factorial, $ n!\approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}$ for $n\gg 1$, in order to calculate how large $r$ needs to be for the factorial amplification to overcome the exponential suppression. We find that a correction term $\nu_{2r-1}x^{2r-1}$ with $\nu_{2r-1}\approx1$ in the dispersion relation leads to a relative change of order one in the Casimir force at the infrared scale $L_m$ if $r$ is of the order $\sigma^{-1}$, i.e. if $r\approx 10^{28}$. To summarize: We found that $\mathcal{K}$ is a well-defined but unbounded and therefore discontinuous operator (as are, e.g., the quantum mechanical position and momentum operators).
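The threshold value of $r$ is easy to locate numerically by working with logarithms (a sketch; we drop the sign and the order-one prefactors and use `math.lgamma` to keep the huge factorial from overflowing):

```python
import math

sigma = 1e-28   # L_c / L_m: Planck length over plate separation

def logA(r):
    """Logarithm of the relative change of the Casimir force caused by a
    dispersion term x^(2r-1) with coefficient one: ~ (2r-1)! (sigma/2)^(2r-2),
    order-one prefactors and signs dropped."""
    return math.lgamma(2.0 * r) + (2.0 * r - 2.0) * math.log(sigma / 2.0)

# bracket the unique sign change of logA and bisect for it
lo, hi = 1e20, 1e30
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if logA(mid) < 0.0:
        lo = mid
    else:
        hi = mid
r_star = 0.5 * (lo + hi)

print(r_star, math.e / sigma)   # r_star ~ e/sigma ~ 2.7e28, i.e. of order 1/sigma
```

The bisection confirms the Stirling estimate: the factorial amplification wins once $2r\,\sigma/2 \approx e$, i.e. at $r$ of order $\sigma^{-1}\approx 10^{28}$.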
Namely, a modified dispersion relation of the form $f(x) = x + \nu_{2r-1}x^{2r-1}$, say with $r\approx10^{28}$ and $\nu_{2r-1}\approx 1$, is virtually indistinguishable from the linear dispersion relation $f(x)=x$ at all scales up to the Planck scale, but does lead to a modification of the Casimir force which is very strong (the relative change is of order 100%) even at laboratory length scales. Thus, even though the first order terms contribute extremely little to the Casimir force, very high order corrections to the dispersion relations can contribute significantly to the Casimir force, and the more so the larger $r$ is. Realistic candidates for Planck-scale modified dispersion relations are given by a series $f(x)=x+\sum_{n=2}^\infty \nu_n x^n$ and such dispersion relations therefore contain terms $\nu_{2r-1}x^{2r-1}$ for arbitrarily large $r$. At the same time, the prefactors $\nu_n$ must of course obey $\nu_n\rightarrow 0$ as $n\rightarrow \infty$ because this is a necessary condition for the convergence of the series. We conclude that it is this competition between the decay of the coefficients $\nu_{2r-1}$ and the increasing Casimir effect of terms $x^{2r-1}$, for $r\rightarrow\infty$, which decides whether a given ultraviolet-modified dispersion relation leads to an appreciable effect on the Casimir force at infrared distances. In practice, to study this competition directly by using the complicated representation of $\mathcal{K}$ in (\[kappa34\]) would be a tedious approach to the transplanckian question because, for example, the coefficients of the Casimir force acquire alternating signs. Instead, as we will show in the next section, we will conveniently be able to study the transplanckian question by making use of our representation of $\mathcal{K}$ in terms of the Laplace transform.
UV-IR coupling with generic dispersion relations {#shsec}
------------------------------------------------

Let us write the dispersion relations again in the form $f(x)=x\,g(x^2)$ so that, e.g., $g\equiv 1$ yields the standard dispersion relation. This allows us to apply the integral representation (\[eq:OpIntRe\]) of $\mathcal{K}$. We begin by noticing that, since $x^2$ is positive, the evaluation of the dispersion relation $f$ involves evaluating $g(y)$ only for positive $y$. Now considering (\[eq:OpIntRe\]) we see that, curiously, the calculation of the Casimir force involves evaluating $g(y)$ only for negative values of $y$. This is surprising because if $g$ could be any arbitrary function, this would mean that the dispersion relation, which is determined by the behavior of $g$ on the positive half-axis, and the Casimir force function, which is determined by the behavior of $g$ on the negative half-axis, were unrelated. But of course our functions $g$ are not arbitrary; they are polynomials or power series with infinite radius of convergence, i.e. they are entire functions. Therefore, the behavior of $g$ on the positive half-axis fully determines its behavior also on the negative half-axis. The dispersion relations do determine the corresponding Casimir force. Of crucial importance for the transplanckian question, however, is the fact that there are entire functions $g$ which are arbitrarily close to one for $0<y<1$ and which nevertheless reach arbitrarily large values on the negative half-axis. Such functions do not noticeably affect the dispersion relation for momenta up to the Planck scale but do arbitrarily strongly affect the Casimir force. These are the dispersion relations $f(x)=x\,g(x^2)$ with $$g(y)=1+h(y),\label{def77}$$ where the function $h$ obeys $h(y)\approx 0$ for $y\in(0,1)$ while exhibiting large $\vert h(y)\vert$ in some range of negative values of $y$.
Let us now analyze which behavior of $h$ on the negative half-axis determines if the Casimir force is affected in the infrared. To this end, let us use (\[eq:OpIntRe\]) and (\[def77\]) to express the correction in the Casimir force, $\Delta \mathcal{F}=\mathcal{F}-\mathcal{F}_\text{standard}$, in terms of the correction $h$ to the dispersion relation: $$\Delta\mathcal{F}(L)=\frac{k_{c}^{2}}{\pi}\int_{0}^{ \infty}x~h(-x^{2})~(1-2k_cLx)~e^{-2k_cLx}\,dx$$ The integral kernel $$G(x, L)=(1-2k_cLx)~e^{-2k_cLx}$$ is positive for $x<(2k_cL)^{-1}$, negative for $x>(2k_cL)^{-1}$ and rapidly decreases to zero for $x\gg(2k_cL)^{-1}$. (We remark that the integral of the kernel over all $x\in[0, \infty)$ is $0$, which expresses the fact that the Casimir force does not depend on the absolute value of the energy.) Thus, for a fixed plate separation $L$, what matters most for the Casimir force is the behavior of $h(y)$ from $y=0$ to about $y\approx-(k_cL)^{-2}$. As we increase $L$, the interval $y\in(-(k_cL)^{-2},0)$ on which the integral kernel $G$ is mostly supported is shrinking, see Fig.\[fig:kernel\]. Thus, there is a significant effect on the Casimir force at realistically large plate separations, such as $L=L_m$, if the function $h$ is either of order one in this small interval close to the origin, or exponentially large (so as to compensate the exponential suppression in $G$) in some interval to the left of $-(k_cL)^{-2}$. Of course, both are possible. There are entire functions $h$ which possess either one of these behaviors on the negative half-axis and therefore do affect the Casimir force in the infrared, while being arbitrarily close to zero for $0<y<1$, so as to leave the dispersion relation virtually unchanged in the infrared.
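A small numerical experiment makes this concrete. The numbers below are illustrative rather than Planckian (our own choices: $k_c=1$, an "infrared" separation $L=100$, and a hypothetical Gaussian bump $h$ centred at $y_0=-10^{-5}$ with width $2\times10^{-6}$): the bump is utterly negligible on $(0,1)$, yet it shifts the Casimir force at $L$ at the percent level:

```python
import math

kc, L = 1.0, 100.0        # illustrative "infrared" plate separation, units k_c = 1
y0, wdt = -1e-5, 2e-6     # hypothetical Gaussian bump of h, centred at y0 < 0

def h(y):
    return math.exp(-((y - y0) ** 2) / (2.0 * wdt ** 2))

def delta_F(L, N=400_000, xmax=0.02):
    """Trapezoidal evaluation of
       (kc^2/pi) int_0^inf x h(-x^2) (1 - 2 kc L x) e^{-2 kc L x} dx."""
    step = xmax / N
    s = 0.0
    for i in range(N + 1):
        x = i * step
        w = 0.5 if i in (0, N) else 1.0
        s += w * x * h(-x * x) * (1.0 - 2.0 * kc * L * x) * math.exp(-2.0 * kc * L * x)
    return kc ** 2 / math.pi * s * step

F_std = -1.0 / (4.0 * math.pi * L ** 2)
print(h(0.0), h(1.0))                 # h is negligible on (0,1): dispersion untouched
print(delta_F(L) / abs(F_std))        # yet the force at L shifts by ~2 percent
```

Moving $y_0$ closer to the origin, or raising the amplitude of the bump, makes the relative shift correspondingly larger, as the discussion above predicts.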
There is even the extreme case of functions $h$ whose corresponding dispersion relation $f$ is arbitrarily little affected at *all* scales, while the Casimir force function is arbitrarily much affected at any scale we wish, say in the infrared. To see this, consider for example the case where $h$ is a Gaussian which is centred around a low negative value $y_0<0$ while being so sharply peaked that its tail into the positive half-axis is negligibly small. The function that enters into the calculation of the Casimir force, $\tilde{f}_1(x)=x\,g(-x^2)$, then features the low-$x$ spike of the Gaussian, implying by our above consideration that the Casimir force is affected in the infrared. At the same time, the dispersion relation itself, $\tilde{f}_2(x)=x\,g(x^2)$, is virtually unaffected for all $x$.

Conclusions
===========

We investigated the effect of ultraviolet corrections to the dispersion relation on the Casimir force. To this end, we calculated the operator $\mathcal{K}$ which maps generic dispersion relations, $\omega(k)=k_{c}f\left(k/k_{c}\right)$, into the corresponding Casimir force functions $\mathcal{F}(L)$. Here, $k_c$ is the Planck momentum, $f$ is a power series in $x=k/k_c$ and $L$ is the plate separation. The structure of $\mathcal{K}$ showed that the even components of dispersion relations do not contribute to the Casimir force. This implies, for example, that the dispersion relations defined through $f(x)=\sinh(x)$ and $f(x)=\exp(x)-1$ yield identical Casimir force functions. We also showed that a certain class of UV-modified dispersion relations, such as $f(x)=\sinh(x)$, leads to Casimir force functions that are well defined only down to a finite smallest distance between the plates. Physically, the existence of a finite lower bound for the plate separation, $L$, is indeed what should be expected if the ultraviolet-modified dispersion relation arises from an underlying theory of quantum gravity which possesses a notion of minimum length.
Technically, the phenomenon of a finite minimum $L$ arises because the Casimir force $\mathcal{F}(L)$ is always a polynomial or power series in $1/L$, depending on whether the dispersion relation is polynomial or a power series. Therefore, if $\mathcal{F}(L)$ is a power series then it can possess a finite radius of convergence, i.e. an upper bound on $1/L$, which then implies a lower bound on $L$. Of course, a finite radius of convergence can occur only for power series but not for polynomials. Interestingly, this means that the existence of a finite lower bound on $L$ cannot arise from polynomial dispersion relations of any degree. An important conclusion that we can draw from this is that if a candidate quantum gravity theory yields a non-polynomial dispersion relation then working with any finite degree polynomial approximation of this dispersion relation may be missing crucial qualitative features, such as the existence of a finite minimum length. There is a deeper reason for why it is important to apply a nontrivial dispersion relation in the exact form in which it arises from some proposed quantum gravity theory. The reason is that $\mathcal{K}$ is an unbounded and therefore also discontinuous operator, which means that arbitrarily small changes to the dispersion relation can lead to arbitrarily large changes to the Casimir force. On the other hand, the action of $\mathcal{K}$ is of course well-defined, which means that if a candidate quantum gravity theory implies a particular UV-modified dispersion relation then $\mathcal{K}$ can be used to precisely predict the corresponding Casimir force function. We proceeded by determining which ultraviolet modifications to the dispersion relation would appreciably affect the Casimir force function at a large length scale $L_m$. 
To this end, it was convenient to express dispersion relations, $f$, in the form $f(x)=x\,g(x^2)$ and $g(y)=1+h(y)$ where $h$ is an entire function (so that $h\equiv 0$ for the usual linear dispersion relation). Recall that $y$ is the momentum squared, in units of $k^2_c=L^{-2}_c$, i.e., $y=1$ is the Planck momentum squared. We are interested in dispersion relations which are essentially unchanged in the infrared, i.e., which obey $h(y)\approx 0$, up to unmeasurable deviations, for all $y$ in the interval $(0,1)$. Our analysis of $\mathcal{K}$ through the Laplace transform then showed that if the corresponding Casimir force is to be affected at an infrared scale, say $L_m$, then the dispersion relation must come from a function $h$ which obeys one or both of two conditions: (a) either $h$ obeys $\vert h(y)\vert =\mathcal{O}(1)$ for $y$ in parts of the interval $(-L^2_c/L^2_m,0)=(-\sigma^2,0)$, or (b) $h$ is exponentially large in a finite interval of more negative $y$ obeying $y<-\sigma^2$. In the case (a), an ultraviolet-modified dispersion relation induces an infrared modification of the Casimir force if the correction to the dispersion relation, $h(y)$, is essentially zero in all of $(0,1)$, while it rises very steeply towards the left to amplitudes of order one within the extremely short interval $(-\sigma^2,0)$, where we recall that $\sigma\approx 10^{-28}$. In the case (b), UV/IR coupling arises if $h$ is again essentially zero in the interval $(0,1)$, while now needing to reach exponentially large values for a finite stretch of more negative $y$ values, again resulting in the need for $h$ to rise extremely steeply towards the left. It is easy to give examples of such $h$, such as the Gaussian $h$ that we discussed. In fact, we can easily write down $h$ which would lead to no appreciable modification of the dispersion at low energies and yet to arbitrarily large changes to the Casimir force even at macroscopically large plate separations. 
Because of their large slope, however, such functions $h$ are severely fine-tuned and must therefore be considered unlikely to arise from an underlying quantum gravity theory. We can conclude, therefore, that the 28 orders of magnitude which separate the effective UV and IR scales do not suppress UV/IR coupling in strength but instead in likelihood, namely through the need for extreme fine tuning. This is interesting because, in inflation, the separation of the effective UV and IR scales is only about three to five orders of magnitude: Consider the operator $\mathcal{K}$ for inflation, namely the operator which maps arbitrary ultraviolet-modified dispersion relations into the function that describes the CMB’s tensor or scalar fluctuation spectrum. Let us assume that its properties are analogous to those of the operator $\mathcal{K}$ which we here found for the Casimir effect. This would mean that an ultraviolet-modified dispersion relation that arises from some underlying quantum gravity theory can lead to effects on the CMB spectrum which are not automatically limited in their strength by the separation of scales $\sigma\approx 10^{-5}$, or indeed by any power of $\sigma$. Instead, arbitrarily large effects on the CMB must be considered possible, while it is merely the a priori likelihood of large effects that is suppressed by the separation of scales. That this is indeed the case can of course only be confirmed by calculating an explicit expression for the operator $\mathcal{K}$ for inflation.

Outlook
=======

The task of finding the operator $\mathcal{K}$ for inflation will be more difficult than it was to calculate $\mathcal{K}$ for the Casimir effect. This is mainly because it is highly nontrivial to identify the comoving modes’ initial condition, i.e. their ingoing vacuum state. This problem needs to be solved because a misidentification of the vacuum could mask the infrared effects that one is looking for.
The reason is that the mode equations reduce to the mode equations with the usual linear dispersion at late times, namely at large length scales. Therefore, the mode solutions at late times live in the usual solution space. Thus, any effects of ultraviolet-modified dispersion relations in the IR could be masked by an incorrect choice of the initial condition for the mode equation. A further complication is that of possibly strong backreaction, although there are indications that this problem can be absorbed into a suitable redefinition of the inflaton potential, see [@greenenew]. Once these points are clarified, $\mathcal{K}$ for inflation can be calculated. A limitation of our investigation of the Casimir effect has been that we restricted attention to modelling the effects of Planck scale physics on quantum field theory exclusively through UV-modified dispersion relations. This assumes that fields can possess arbitrarily large $k$ and arbitrarily short wavelengths, an assumption which is likely too strong. Indeed, studies of quantum gravity and string theory strongly indicate the existence of a universal minimum length at the Planck or string scale. In particular, it has been suggested that, in terms of first quantization, this natural UV cutoff could possess an effective description through uncertainty relations of the form $\Delta x \Delta p \ge \frac{\hbar}{2}(1+\beta (\Delta p)^2 +...)$, see, e.g., [@Garay:1994en]. As is easily verified, such uncertainty relations encode the minimum length as a lower bound, $\Delta x_\text{min}=\hbar\sqrt{\beta}$, on the formal position uncertainty, $\Delta x$.
It has been shown that this type of uncertainty relations also implies a minimum wavelength and that, therefore, fields possess the sampling property, see [@ak-prl2000]: if a field’s (number or operator-valued) amplitudes are known only at discrete points then the field’s amplitudes everywhere are already determined, provided the average sample spacing is less than the critical spacing, which is given by the minimum length. As a consequence, any theory with this type of uncertainty relation can be written as a continuum theory or, fully equivalently, as a discrete theory on any lattice of sufficiently tight spacing. This UV cutoff can also be viewed as an information theoretic cutoff, and it possesses a covariant generalization, see [@ak-prl]. Indeed, nontrivial dispersion relations also raise the question of local Lorentz invariance. One possibility is that local Lorentz symmetry is broken, either hard or soft, and that, e.g., the CMB rest frame is the preferred frame. It has also been suggested that the Lorentz group might be deformed, or that it may be unchanged but represented nonlinearly. Various experimental bounds on Lorentz symmetry breaking are being discussed, e.g., from observations of gamma ray bursts. For the literature, see e.g. [@lorentz]. An application of the minimum length uncertainty principle to the Casimir effect has recently been attempted, see [@hossen]. There, the Casimir force was found to be a discontinuous function of the plate separation. This problem is due to the fact that, in [@hossen], the plate boundaries are implicitly treated as possessing sharp positions. This is not fully consistent with the assumption that all particles, including those that make up the plates, can be localized only up to the finite minimum position uncertainty. As a consequence, as the plate separation increases, new energy eigenvalues discontinuously enter the spectrum of the first quantized Hamiltonian.
It would be very interesting to extend these Casimir force calculations while applying the minimum length uncertainty relations to both the field and the plates. Finally, we note an additional analogy between the Casimir effect and inflation: in the Casimir effect with UV cutoff, as the distance between the plates is increased, new modes enter the space between the plates, thereby changing the vacuum energy. In cosmology, space itself expands and, in the presence of a UV cutoff, new comoving modes (recall that these are the independent degrees of freedom) are continually being created, similar to the Casimir effect. A priori, these new modes arise with vacuum energy. During the expansion, the modes’ vacuum energy becomes diluted, but if the dispersion is nonlinear then the balance of new vacuum energy creation and vacuum energy dilution is nontrivial. A paper which addresses this question is in progress, [@ak-ll]. [99]{} J. D. Bekenstein, Phys. Rev. [**D7**]{}, 2333 (1973), S. W. Hawking, Commun. Math. Phys. [**43**]{}, 199 (1975), W.G. Unruh, Phys. Rev. [**D51**]{}, 2827 (1995), R. Brout, S. Massar, R. Parentani, P. Spindel, Phys. Rept. [**260**]{}, 329 (1995), S. Corley, T. Jacobson, Phys. Rev. [**D54**]{}, 1568 (1996), R. Brout, C. Gabriel, M. Lubo, P. Spindel, Phys. Rev. [**D59**]{}, 044005 (1999). W. G. Unruh, Phys. Rev. [**D51**]{}, 2827 (1995), W. G. Unruh, R. Schutzhold, gr-qc/0408009, Phys. Rev. [**D71**]{}, 024028 (2005). J. Martin, R. Brandenberger, Phys. Rev. [**D63**]{}, 123501 (2001), J. C. Niemeyer, Phys. Rev. [**D63**]{}, 123502 (2001), A. Kempf, astro-ph/0009209, Phys. Rev. [**D63**]{}, 083514 (2001), A. Kempf, J. C. Niemeyer, astro-ph/0103225, Phys. Rev. [**D64**]{}, 103501 (2001), N. Kaloper, M. Kleban, A. E. Lawrence, S. Shenker, Phys. Rev. [**D66**]{}, 123510 (2002), L. Bergstrom, U. H. Danielsson, hep-th/0211006, JHEP [**0212**]{}, 038 (2002), O. Elgaroy, M. Gramann, O. Lahav, astro-ph/0111208, Mon. Not. Roy. Astron. Soc.
[**333**]{}, 93 (2002), G. F. Giudice, E. W. Kolb, J. Lesgourgues, A. Riotto, hep-ph/0207145, Phys. Rev. [**D66**]{}, 083512 (2002), J. Martin, R. Brandenberger, Phys. Rev. [**D65**]{}, 103514 (2002), Phys. Rev. [**D68**]{}, 063513 (2003), C.P. Burgess, J.M. Cline, F. Lemieux, R. Holman, hep-th/0210233, JHEP [**0302**]{}, 048 (2003), S. Cremonini, Phys. Rev. [**D68**]{}, 063514 (2003), M. Giovannini, hep-th/0308066, Class. Quant. Grav. [**20**]{}, 5455 (2003), K. Goldstein, D. A. Lowe, hep-th/0208167, Phys. Rev. [**D67**]{}, 063502 (2003), E. Di Grezia, G. Esposito, A. Funel, G. Mangano, G. Miele, gr-qc/0305050, Phys. Rev. [**D68**]{}, 105012 (2003), S. Hannestad, L. Mersini-Houghton, hep-ph/0405218, L. Sriramkumar, T. Padmanabhan, gr-qc/0408034, R. Easther, W. H. Kinney, H. Peiris, astro-ph/0412613, K. Schalm, G. Shiu, J. P. van der Schaar, hep-th/0412288, AIP Conf. Proc. [**743**]{}, 362 (2005), H. Collins, R. Holman, hep-th/0501158. H. B. G. Casimir, Kon. Ned. Akad. Wetensch. Proc. [**51**]{}, 793 (1948). R. Balian, Seminaire Poincare [**1**]{}, 55 (2002). Y. Aghababaie and C. P. Burgess, Phys. Rev. [**D70**]{}, 085003 (2004). M. Bordag, U. Mohideen, V.M. Mostapanenko, Phys. Rep. [**353**]{}, 1 (2001), K. A. Milton, hep-th/0406024, J. Phys. [**A37**]{}, R209 (2004), S. K. Lamoreaux, Rep. Prog. Phys. [**68**]{}, 201 (2005). S. K. Lamoreaux, Phys. Rev. Lett. [**78**]{}, 5 (1997), U. Mohideen, A. Roy, Phys. Rev. Lett. [**81**]{}, 4549 (1998). A. Lambrecht, S. Reynaud, Seminaire Poincare [**1**]{}, 79 (2002). A. Kempf, gr-qc/9907084, J. Math. Phys. [**41**]{}, 2360 (2000). W. B. Ford, *Studies on Divergent Series and Summability & The Asymptotic Developments of Functions Defined by Maclaurin Series*, Chelsea Pub. (1960), G. H. Hardy, *Divergent Series*, Oxford University Press (1956), M. Abramowitz and I. Stegun, *Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables*, Dover, New York (1972). J. Havil, *Gamma: Exploring Euler’s Constant*, Princeton University Press, Princeton (2003). G. H. Hardy, *Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work*, AMS Chelsea Pub., New York (1999). B. R. Greene, K. Schalm, G. Shiu, J. P. van der Schaar, JCAP [**0502**]{}, 001 (2005). D.J. Gross, P.F. Mende, Nucl. Phys. [**B303**]{}, 407 (1988), D. Amati, M. Ciafaloni, G. Veneziano, Phys. Lett. [**B216**]{}, 41 (1989), A. Kempf, hep-th/9311147, J. Math. Phys. [**35**]{}, 4483 (1994), M.-J. Jaeckel, S. Reynaud, Phys. Lett. [**A185**]{}, 143 (1994), D.V. Ahluwalia, Phys. Lett. [**B339**]{}, 301 (1994), L. J. Garay, Int. J. Mod. Phys. [**A10**]{}, 145 (1995), E. Witten, Phys. Today [**49**]{} (4), 24 (1996), A. Kempf, G. Mangano, hep-th/9612084, Phys. Rev. [**D55**]{}, 7909 (1997). G. Amelino-Camelia, J. Ellis, N.E. Mavromatos, D.V. Nanopoulos, Mod. Phys. Lett. [**A12**]{}, 2029 (1997). A. Kempf, hep-th/9905114, Phys. Rev. Lett. [**85**]{}, 2873 (2000). A. Kempf, gr-qc/0310035, Phys. Rev. Lett. [**92**]{}, 221301 (2004). J.W. Moffat, hep-th/0211167, Int. J. Mod. Phys. [**D12**]{}, 1279 (2003). G. Amelino-Camelia and T. Piran, astro-ph/0008107, Phys. Rev. [**D64**]{}, 036005 (2001), S. M. Carroll, J. A. Harvey, V.A. Kostelecky, C. D. Lane, T. Okamoto, Phys. Rev. Lett. [**87**]{}, 141601 (2001), J. Magueijo, L. Smolin, gr-qc/0207085, Phys. Rev. [**D67**]{}, 044017 (2003), D. Mattingly, T. Jacobson, S. Liberati, hep-ph/0211466, Phys. Rev. [**D67**]{}, 124012 (2003), T. A. Jacobson, S. Liberati, D. Mattingly, F.W. Stecker, astro-ph/0309681, Phys. Rev. Lett. [**93**]{}, 021101 (2004). U. Harbach, S. Hossenfelder, hep-th/0502142. A. Kempf, L. Lorenz, in preparation.
--- abstract: 'We study the mid-infrared plasmonic response in Bernal-stacked bilayer graphene. Unlike its monolayer counterpart, bilayer graphene accommodates optically active phonon modes and a resonant interband transition at infrared frequencies. These strongly modify the plasmonic properties of bilayer graphene, leading to Fano-type resonances, giant plasmonic enhancement of infrared phonon absorption, a narrow window of optical transparency, and a new plasmonic mode at higher energy than the classical plasmon.' author: - Tony Low - Francisco Guinea - Hugen Yan - Fengnian Xia - Phaedon Avouris title: 'Novel mid-infrared plasmonic properties of bilayer graphene' --- Plasmonics[@maier2007plasmonics] is an important subfield of photonics that deals with the excitation, manipulation, and utilization of plasmon-polaritons[@pines1999elementary]. It is a key element of nanophotonics[@gramotnev2010plasmonics] and of metamaterials with novel electromagnetic phenomena[@shalaev2007optical; @luk2010fano], and also has potential applications in biosensing[@kabashin2009plasmonic]. Recently, graphene has emerged as a promising platform for plasmonics[@grigorenko2012graphene]. It has many desirable properties such as gate-tunability, extreme light confinement, long plasmon lifetime, and plasmonic resonances in the terahertz to mid-infrared (IR) regime[@JSB11; @KCG11; @HSS07; @WSSG06; @NGGM12; @nikitin2011edge]. Spatially resolved propagating plasmons have been observed with scanning near-field optical microscopy[@FRABM12; @CBATH12]. Tunable plasmon resonances in the terahertz[@JGHGM11] to IR[@YLC12; @YLZW13] regime have been observed in graphene micro- and nano-ribbons, and the relative damping pathways have also been studied[@YLZW13].
Identified applications for graphene plasmonics range from notch filters[@YLC12], polarizers and modulators[@JGHGM11; @YLC12; @YLZW13] to beam reflectarrays[@carrasco2013tunable], biosensing[@wu2010highly] and IR photodetectors[@freitag2013photocurrent] via the bolometric effect[@freitag2012photoconductivity]. In this paper, we discuss why Bernal AB-stacked bilayer graphene is important and interesting in its own right as a plasmonic material. Apart from a few theoretical studies of plasmons in bilayer graphene[@sensarma2010dynamic; @gamayun2011dynamical; @gorbar2010dynamics; @borghi2009dynamical; @kusminskiy2009electron; @hwang2010plasmon], there are still no experimental studies of bilayer graphene plasmonics. A first indication that the plasmonic response in bilayer graphene might be very different from that of the monolayer is the presence of two prominent IR structures in its optical conductivity. IR optical measurements of bilayer graphene reveal a phonon peak at $\hbar\omega$$\,\approx\,$$0.2\,$eV, with a strong dependence of the peak intensity and Fano-type lineshape on the applied gate voltage[@tang2009tunable; @kuzmenko2009gate]. The interlayer coupling in bilayer graphene also results in two nested bands, which present a set of doping-dependent IR features[@nilsson2006electronic; @abergel2007optical; @NC08]. Interband transitions between these two nested bands produce a conductivity peak at $\hbar\omega$$\,\approx\,$$0.4\,$eV in optical IR measurements[@wang2008gate; @kuzmenko2009infrared; @li2009band]. The impact of these IR structures on the bilayer plasmonic response has not been studied. We find several novel plasmonic effects in bilayer graphene: (i) giant plasmonic enhancement of infrared phonon absorption, (ii) an extremely narrow optical transparency window, and (iii) a new plasmonic mode at higher energy than the classical plasmon.
Bilayer graphene arranged in the Bernal AB stacking order is considered, with basis atoms $A_1$, $B_1$ and $A_2$, $B_2$ in the top and bottom layers respectively. The intralayer coupling is $\gamma_0\approx 3\,$eV and the interlayer coupling between $A_2$ and $B_1$ is $\gamma_1\approx 0.39\,$eV, an average of values reported in optical IR and photoemission measurements[@kuzmenko2009infrared; @wang2008gate; @li2009band; @ohta2006controlling; @zhou2008origin]. We work within the $4\times 4$ atomic $p_z$ orbital basis, i.e. $a^{\dagger}_{1\bold{k}},b^{\dagger}_{1\bold{k}},a^{\dagger}_{2\bold{k}},b^{\dagger}_{2\bold{k}}$, where $a^{\dagger}_{i}$ and $b^{\dagger}_{i}$ are creation operators for the $i^{th}$ layer on the $A/B$ sublattices. Within this basis, the Hamiltonian near the $\bold{K}$ point can be written as: ${\cal H}_{k}=v_f \pi_{+}I\otimes\sigma_{-}+v_f \pi_{-}I\otimes\sigma_{+}+\tfrac{\Delta}{2}\sigma_z\otimes I+\gamma_1/2[\sigma_x\otimes\sigma_x+\sigma_y\otimes\sigma_y]$, where $\sigma_i$ and $I$ are the Pauli and identity matrices respectively. We defined $\sigma_{\pm}\equiv\tfrac{1}{2}(\sigma_x \pm i \sigma_y)$ and $\pi_{\pm}\equiv\hbar(k_x\pm ik_y)$. Here, $\Delta$ is the electrostatic potential difference between the two layers. Expressions for the non-interacting ground state electronic bands $\xi_{n}(\bold{k})$ ($n=1-4$, see inset of Fig.\[figure1\]) and wavefunctions $\Phi_n(\bold{k})$ are obtained by diagonalizing ${\cal H}_{k}$, see Suppl. Info. We consider the coupling of long wavelength longitudinal/transverse optical (LO/TO) phonons near the $\Gamma$ point with the graphene plasmons.
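The $4\times 4$ Hamiltonian ${\cal H}_k$ above is easy to check numerically. The sketch below (assuming $v_f\approx 10^6\,$m/s, a value not stated in the text, and taking $\Delta=0$) verifies that at $\bold{k}=0$ two bands sit at $\pm\gamma_1$ while the other two touch at zero energy, the origin of the interband resonance discussed later:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2   # sigma_+/- as defined in the text

hbar_vf = 6.58e-16 * 1e6   # hbar*v_f in eV*m, assuming v_f ~ 1e6 m/s
g1, Delta = 0.39, 0.0      # interlayer coupling and layer asymmetry, eV

def H(kx, ky):
    pi_p, pi_m = hbar_vf * (kx + 1j * ky), hbar_vf * (kx - 1j * ky)
    return (pi_p * np.kron(I2, sm) + pi_m * np.kron(I2, sp)
            + Delta / 2 * np.kron(sz, I2)
            + g1 / 2 * (np.kron(sx, sx) + np.kron(sy, sy)))

bands = np.linalg.eigvalsh(H(0.0, 0.0))   # eigenvalues in ascending order
# At k = 0 with Delta = 0, two bands sit at +/- gamma_1 and two touch at zero
assert np.allclose(bands, [-g1, 0.0, 0.0, g1], atol=1e-12)
```

Scanning $k$ with the same function reproduces the four bands $\xi_n(\bold{k})$ of the inset of Fig.\[figure1\].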
The relative displacement of the two sublattices in the top layer is given by, $$\begin{aligned} \bold{u}_T(\bold{r}) = \sqrt{\frac{\hbar}{2\rho_m \omega_{op}{\cal A}}}\sum_{\bold{p}\lambda} (\hat{b}_{\bold{p}}+\hat{b}_{\mbox{-}\bold{p}}^{\dagger}) \bold{e}_{\lambda}(\bold{p}) e^{i\bold{p}\cdot\bold{r}}\end{aligned}$$ where ${\cal A}$ is the area of the unit cell, $\rho_m$ is the mass density of graphene, $\bold{p}=(p_x,p_y)$ is the phonon wavevector, $\lambda$ denotes the LO/TO modes, $\hat{b}_{\bold{p}\lambda}^{\dagger}$ is the corresponding creation operator, and $\bold{e}_{\lambda}(\bold{p})$ are the polarization vectors, given by $\bold{e}_{LO}(\bold{p})=i(\mbox{cos}\varphi,\mbox{sin}\varphi)$ and $\bold{e}_{TO}(\bold{p})=i(-\mbox{sin}\varphi,\mbox{cos}\varphi)$ where $\varphi=\mbox{tan}^{-1}(p_y/p_x)$. Due to the two graphene layers, there are two possible vibrational modes, i.e. symmetric ($\bold{u}_B(\bold{r})$=$\bold{u}_T(\bold{r})$) and antisymmetric ($\bold{u}_B(\bold{r})$=$-\bold{u}_T(\bold{r})$). Hence, the electron-phonon coupling at the $\bold{K}$ valley for bilayer graphene is given by[@ando07; @NG07], $$\begin{aligned} H_{e-op}(\bold{r})=-\sqrt{2}\frac{2\beta\hbar v_F}{3 a^2} \boldsymbol{\sigma}^{\pm}\times \bold{u}(\bold{r})\end{aligned}$$ where $a\approx 1.4$ Å is the C-C distance, $\sigma_j^+$=$I\sigma_j$, $\sigma_j^{-}$=$\sigma_z\sigma_j$ and $\beta$=$-\partial \mbox{ln} \gamma_0/\partial a$ is a dimensionless parameter related to the deformation potential. Without loss of generality, we take the electric field polarization to be along $y$ and $\varphi=0$. Since only lattice vibrations along $y$ can couple to light, we consider only the TO lattice mode.
As a result, we can write the electron-phonon interaction for the $v$ mode in the following form, $$\begin{aligned} {\cal H}'_v = \frac{1}{\sqrt{{\cal A}}}\sum_{\bold{k}} \hat{a}_{\bold{k+p}}^{\dagger} {\cal V}_v(\bold{p}) \hat{a}_{\bold{k}} e^{i\bold{p}\cdot\bold{r}}(\hat{b}_{\bold{p}}+\hat{b}_{\bold{p}}^{\dagger})\end{aligned}$$ where $v=S,A$ denotes the symmetric and antisymmetric modes, with ${\cal V}_S(\bold{p}\rightarrow 0)=ig I\sigma_x$ and ${\cal V}_A(\bold{p}\rightarrow 0)=ig \sigma_z\sigma_x$, where, $$\begin{aligned} g\equiv \frac{\beta\hbar v_F}{L^2}\sqrt{\frac{\hbar}{2\rho_m\omega_{op} }}\approx 0.3\,eV\AA^{-1},\end{aligned}$$ since $\beta\approx 2$ and $\hbar\omega_{op} \approx 0.2\,$eV[@ando07]. The plasmonic response of bilayer graphene can be obtained from its dielectric function, $$\begin{aligned} \epsilon_T^{rpa}(q,\omega)=\kappa-v_{c}\Pi^0_{\rho,\rho}(q,\omega)-v_{c}\frac{q^2}{\omega^2}\delta\Pi_{j,j}(q,\omega),\end{aligned}$$ at arbitrary wave-vector $q$ and frequency $\omega$. Here $v_c=e^2/2q\epsilon_0$ is the $2D$ Coulomb interaction and $\Pi_{\rho,\rho}^0(q,\omega)$ is the non-interacting part (i.e. the pair bubble diagram) of the charge-charge correlation function, given by[@WSSG06; @HSS07], $$\begin{aligned} \nonumber \Pi_{\rho,\rho}^0(q,\omega)=-\frac{g_s g_v}{(2\pi)^2}\sum_{nn'}\int d\bold{k} \times\\ \frac{n_F(\xi_{n}(\bold{k}))-n_F(\xi_{n'}(\bold{k+q}))}{\xi_{n}(\bold{k})-\xi_{n'}(\bold{k+q})+\hbar\omega+i\hbar/\tau_e}\left|F_{nn'}(\bold{k},\bold{q})\right|^2\end{aligned}$$ where $n_F$ is the Fermi-Dirac distribution function, $F_{nn'}(\bold{k},\bold{q})$=$\left\langle \Phi_n(\bold{k}) \right.\left| \Phi_{n'}(\bold{k+q})\right\rangle$ is the band overlap, and $\tau_e$ is the electron lifetime, for which we assumed a typical experimental value of $\eta\equiv\hbar/\tau_e\approx 10\,$meV[@YLZW13].
The effect of the electron-phonon interaction is included within $\delta\Pi_{j,j}(q,\omega)$. Here, we employ a model for $\delta\Pi_{j,j}(q,\omega)$ which is consistent with the various electron-phonon selection rules for the symmetric/antisymmetric modes and with the Fano effect observed in optical spectroscopy experiments on bilayer graphene. The detailed implementation follows a formalism known as the charged-phonon theory[@rice1992charged; @CBK10; @CBMK12], $$\begin{aligned} \delta\Pi_{j,j}(q,\omega)=\sum_{vv'}\Gamma_{j,v}(q,\omega){\cal D}_{vv'}(\omega)\Gamma_{v'^{\dagger},j}(q,\omega)\end{aligned}$$ where $$\begin{aligned} \nonumber \Gamma_{j,v}(q,\omega) = -\frac{g_s g_v}{(2\pi)^2}\sum_{nn'}\int d\bold{k} \times\\ \frac{n_F(\xi_{n}(\bold{k}))-n_F(\xi_{n'}(\bold{k+q}))}{\xi_{n}(\bold{k})-\xi_{n'}(\bold{k+q})+\hbar\omega+i\hbar/\tau_e} [{\cal J}]_{nn'} [{\cal V}_v]_{n'n}\end{aligned}$$ where $\left[{\cal J}\right]_{nn'} = \left\langle \Phi_n(\bold{k}) \right|{\cal J} \left| \Phi_{n'}(\bold{k+q})\right\rangle$ and $\left[{\cal V}_v\right]_{nn'} = \left\langle \Phi_n(\bold{k}) \right|{\cal V}_v \left| \Phi_{n'}(\bold{k+q})\right\rangle$ with $v=A,S$, and the current operator is defined as ${\cal J}\equiv v_F I\sigma_y$, along the direction of the electric field. ${\cal D}$ is the phonon Green’s function, $$\begin{aligned} [{\cal D}^{-1}(\omega)]_{vv'} = \delta_{vv'}[{\cal D}_0^{-1}(\omega)]-\Gamma_{v^{\dagger},v'}(\omega)\end{aligned}$$ where ${\cal D}_0(\omega)=\frac{2\omega_{op}/\hbar}{(\omega+i/\tau_{op})^2-\omega_{op}^2}$ is the free phonon Green’s function and $\tau_{op}$ describes the phonon lifetime. In this calculation, we assumed $\tau_{op}\approx 1\,$ps[@bonini2007phonon]. \[0.45\][![ Real part of the bulk bilayer graphene conductivity (solid line) computed at $T=300\,$K, at a chemical potential of $\mu=0.3\,$eV, constant damping of $\eta=10\,$meV, zero gap (i.e. $\Delta=0\,$eV) and $q=0$. This is compared with the case where $\gamma_1=0\,$eV (dashed line).
$\sigma_0$ is the universal conductivity $e^2/2\hbar$. []{data-label="figure1"}](figure1.pdf "fig:")]{} Fig.\[figure1\] shows the optical conductivity of bilayer graphene calculated from the relation[@CBK10], $$\begin{aligned} \sigma(q,\omega) = \underbrace{i\frac{e^2\omega}{q^2}\Pi^0_{\rho,\rho}(q,\omega)}_{\bar{\sigma}} + \underbrace{i\frac{e^2}{\omega}\delta\Pi_{j,j}(q,\omega)}_{\delta\sigma}\end{aligned}$$ The calculation assumes $T=300\,$K, a chemical potential of $\mu=0.3\,$eV and $\Delta=0\,$eV. $\bar{\sigma}$ is the non-interacting optical conductivity, which accounts for a Drude peak at $\omega=0$ and a universal conductivity of $e^2/2\hbar$. The conductivity peak at $\hbar\omega=\gamma$ is due to interband transitions between two perfectly nested bands, e.g. $\xi_3$ and $\xi_4$, separated in energy by $\gamma$, see inset. These conductivity peaks at $\omega=0$ and $\hbar\omega=\gamma$ are phenomenologically broadened by $\omega\rightarrow\omega+i/\tau_e$ in the model. $\delta\sigma$ accounts for the electronic interaction with the IR phonon modes ($v=A,S$), and agrees well with experimentally measured optical spectra of bilayer graphene[@CBK10]. In our zero gap case, only the $A$ (antisymmetric) mode is IR active[@CBK10], see inset of Fig.\[figure1\]. This mode is responsible for the sharp resonance feature at $\omega=\omega_{op}$. \[0.45\][![ $\bold{(a)}$ shows the RPA electron loss function $L(q,\omega)$ for bilayer graphene computed at $T=300\,$K, at a chemical potential of $\mu=0.3\,$eV, constant damping of $\eta=10\,$meV, zero energy gap (i.e. $\Delta=0\,$eV) and a background dielectric constant of $\kappa=2.5$. Green lines are boundaries for the Landau damped regions. Spectra at different plasmon momenta $q$ are plotted in $\bold{(b)}$. []{data-label="figure2"}](figure2.pdf "fig:")]{} The longitudinal collective plasmonic dispersion is obtained by looking for the zeros of the real part of the dynamical dielectric function, i.e.
$\mbox{Re}[\epsilon_T^{rpa}(q,\omega)]=0$. For bilayer graphene, there are three solutions[@sensarma2010dynamic; @G11]: a ‘classical’ plasmon with $\sqrt{q}$ behavior, an acoustic plasmon with $\propto q$ behavior, and a high energy $\gamma$-plasmon residing near the interband resonance $\gamma$. Only the first has been found to be fully coherent; its dispersion in the long wavelength limit can be shown to follow, $$\begin{aligned} \omega_{pl}(q)=\frac{1}{\hbar}\sqrt{\frac{qe^2g}{4\pi\epsilon_0\kappa}\sum_j \frac{n_j(\mu)}{D_j(\mu)}} \label{om_pl}\end{aligned}$$ where $g=4$ is the degeneracy factor, and $n_j(\mu)$ and $D_j(\mu)$ are the carrier density and density of states of the $j$-th band respectively. The other two solutions are overdamped. The acoustic plasmon lies in the intraband continuum and is always overdamped, with insignificant spectral weight[@sensarma2010dynamic; @G11]. Under typical conditions, the high energy $\gamma$-plasmon is also overdamped, lying in the interband continuum (i.e. $\xi_1,\xi_2\rightarrow\xi_3,\xi_4$ transitions) when $2\mu<\gamma$ and in the low-energy interband continuum (i.e. $\xi_1\rightarrow\xi_2$ or $\xi_3\rightarrow\xi_4$ transitions) when $2\mu>\gamma$. We show later that, under certain conditions, this mode can become fully coherent. The electron loss function, defined as the imaginary part of the inverse dielectric function, i.e. $L(q,\omega)=-\mbox{Im}\left\{[\epsilon_{T}^{rpa}(q,\omega)]^{-1}\right\}$, is a quantity that can be probed in various spectroscopy experiments[@YLZW13; @eberlein2008plasmon; @abstreiter1984light]. Fig.\[figure2\]a shows the calculated $L(q,\omega)$ assuming typical experimental conditions: $\mu=0.3\,$eV, $\Delta=0\,$eV, $T=300\,$K, $\kappa=2.5$, and $\eta=10\,$meV. The single-particle continua are also indicated: (1) intraband, (2) electron-hole interband and (3) low-energy interband.
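Eq.\[om\_pl\] is straightforward to evaluate numerically. In the sketch below we assume, purely for illustration, a single occupied band with $n(\mu)/D(\mu)=\mu$ (exact for one 2D parabolic band at $T=0$), as a stand-in for the actual band sum:

```python
import numpy as np

e, eps0, hbar = 1.602e-19, 8.854e-12, 1.055e-34   # SI units
kappa, g = 2.5, 4          # background dielectric constant (Fig. 2); degeneracy
mu = 0.3 * e               # chemical potential, J

# Illustrative single-band assumption: sum_j n_j/D_j -> mu
def omega_pl(q):
    # Long-wavelength 'classical' plasmon of Eq. (om_pl)
    return np.sqrt(q * e**2 * g * mu / (4 * np.pi * eps0 * kappa)) / hbar

q = 5e7                           # 1/m, accessible by patterning nanostructures
E_pl = hbar * omega_pl(q) / e     # plasmon energy in eV
print(E_pl)                       # ~0.19 eV, i.e. close to hbar*omega_op
```

Even this crude estimate lands the plasmon near the phonon energy $\hbar\omega_{op}\approx 0.2\,$eV for experimentally accessible $q$ and doping, which is why the plasmon-phonon interplay discussed next is observable.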
The $\sqrt{q}$-plasmon lies above the intraband continuum and compares well with the long wavelength dispersion $\omega_{pl}(q)$, while the $\gamma$-plasmon is significantly broadened. The most important result is the appearance of a distinctively sharp structure near $\omega\approx\omega_{op}$, not seen in monolayer graphene[@WSSG06; @HSS07]. Fig.\[figure2\]b plots the loss spectra at different momenta $q$. We observe an enhancement in the IR activity of the phonon mode as the plasmon resonance approaches $\omega_{op}$. Renormalized by many-body interactions, this ‘dressed’ phonon exhibits pronounced IR activity and is also accompanied by an asymmetric Fano spectral line-shape. The Fano feature is acquired through interference between the discrete phonon mode and the ‘leaky’ plasmonic mode; the electronic lifetime is significantly shorter than that of the phonon, broadening the former into a quasi-continuum. The loss spectra show the evolution of the plasmonic and phonon resonances as they approach each other. They evolve from separate resonances at small $q$ to a Fano line-shape, and eventually to an induced narrow transparency at zero detuning. This very narrow transparency window emerges within the broadly opaque plasmonic absorption, a phenomenon analogous to electromagnetically-induced transparency[@luk2010fano], and should also be accompanied by novel electromagnetic effects such as slow light[@sandtke2007slow]. Transmission spectroscopy studies have proven to be very effective in probing the plasmonic properties of graphene, where finite plasmon momentum $q$ can be sampled by simply patterning graphene into nanostructures[@JGHGM11; @YLC12]. Graphene nanostructures with dimensions down to $100\,$nm would allow us to access these predicted mid-IR plasmonic features under experimentally accessible doping conditions[@YLC12].
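The interference mechanism can be illustrated with the textbook Fano profile; this is a generic sketch, not the paper's RPA calculation, and the resonance position, width and asymmetry parameter below are made-up values:

```python
import numpy as np

def fano(omega, omega0, gamma, q_fano):
    # Textbook Fano profile: interference between a discrete resonance
    # (the phonon) and a broad quasi-continuum (the lossy plasmon).
    eps = 2 * (omega - omega0) / gamma   # dimensionless detuning
    return (q_fano + eps)**2 / (1 + eps**2)

w = np.linspace(0.15, 0.25, 2001)                    # probe energies, eV
line = fano(w, omega0=0.20, gamma=0.01, q_fano=1.5)  # illustrative parameters

# The profile vanishes (a transparency dip) at eps = -q_fano,
# i.e. at omega0 - q_fano*gamma/2 = 0.1925 eV for these values.
w_dip = w[np.argmin(line)]
assert abs(w_dip - 0.1925) < 1e-3
```

The exact zero of the profile is the sharp-transparency analogue of the narrow transparent window seen in the computed loss spectra.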
The enhancement of IR phonon activity with decreased detuning between the phonon and plasmon resonances might lead to interesting applications. Indeed, analogous plasmon-enhanced IR absorption with noble metals has enabled an emerging field of surface-enhanced IR spectroscopy of surfaces and electrochemical systems[@aroca2004surface]. Tunable plasmonic resonances in graphene nanostructured surfaces might allow for the detection of molecules through the enhancement of their IR vibrational modes. Previously, we have seen that the $\gamma$-plasmon mode is overdamped. In the limit of small momenta, it has the following dispersion[@G11], $$\begin{aligned} \omega_{\gamma}(q)=\frac{1}{\hbar}\left[\gamma + \frac{qe^2}{8\pi\epsilon_0\kappa}\mbox{log}\left(1+2\frac{\mu}{\gamma}\right)\right]. \label{om_gam}\end{aligned}$$ If the $\gamma$-plasmon gains sufficient oscillator strength, e.g. by increasing the doping ($\uparrow$$\mu$) or reducing the dielectric environment ($\downarrow$$\kappa$), it can reside outside the low-energy interband continuum. This is shown in Fig.\[figure3\]a (dashed line), calculated using Eq.\[om\_gam\] assuming $\mu=0.6\,$eV and $\kappa=1$. The electron loss function in Fig.\[figure3\]a indicates several interesting features of this high energy $\gamma$-plasmon mode. First, its dispersion departs from the simple $\omega_{\gamma}-\gamma\propto q$ relation, acquiring an increasingly $q^2$-like behavior with increasing $q$. We find that the modified dispersion can be described within a model that accounts for the effective coupling between the classical and $\gamma$-plasmon as follows, $$\begin{aligned} \epsilon_{eff}\approx\kappa\left[1-\frac{\omega^2_{pl}}{\omega^2}-\frac{\alpha^2}{\omega^2-\omega^2_{\gamma}+\alpha^2}\right],\end{aligned}$$ where $\alpha$ is an effective coupling between the two modes. Using the long-wavelength expressions for these modes, i.e.
Eq.\[om\_pl\] and \[om\_gam\] (dashed white lines), and a coupling energy $\alpha=85\,$meV, the coupled mode solutions (solid white lines), obtained by solving $\epsilon_{eff}=0$, agree well with the dispersions observed in the loss function. Second, we observe a prominent spectral weight transfer from the conventional 2D plasmon to the $\gamma$-plasmon mode. Fig.\[figure3\]b plots the calculated $L(q,\omega)$ and $L(q,\omega)/\omega$ spectra at typical values of $q=2-10\times 10^{7}\,$m$^{-1}$. The integrated loss function $\int_{0}^{\infty} L(q,\omega) d\omega$ is related to the Coulomb energy stored in the electron fluid[@pines1966theory]. On the other hand, through the Kramers-Kronig relations, one can obtain the sum rule $\int_{0}^{\infty} L(q,\omega)/\omega d\omega=-1/\pi$[@marelarxiv], with conserved spectral weight at different $q$. We see that the $\gamma$-plasmon acquires a spectral weight an order of magnitude larger than the conventional plasmon as the latter enters the Landau damped region. Hence, it should be experimentally observable. The possibility of an ‘optical’-like high energy plasmonic mode, previously presumed to be overdamped with little spectral weight[@G11], might open up applications in the higher mid-IR spectral range. With high enough doping, e.g. with electrolyte gating, this mode can gain enough oscillator strength to be pushed out of the Landau damped region and become a coherent plasmonic mode. \[0.45\][![ $\bold{(a)}$ shows the RPA electron loss function $L(q,\omega)$ for bilayer graphene computed at $T=300\,$K, at a chemical potential of $\mu=0.6\,$eV, constant damping of $\eta=10\,$meV, zero energy gap (i.e. $\Delta=0\,$eV) and a background dielectric constant of $\kappa=1$. Spectra $L$ (solid lines) and $L/\omega$ (dashed lines) at different plasmon momenta $q$ are plotted in $\bold{(b)}$.
[]{data-label="figure3"}](figure3.pdf "fig:")]{} In summary, we have shown that bilayer graphene, as a new plasmonic material, is important and interesting in its own right. The above-mentioned new mid-IR plasmonic effects can also be generalized to more complex graphene stacks[@GNP06]. *Acknowledgement:* FG acknowledges financial support from the Spanish Ministry of Economy (MINECO) through Grant no. FIS2011-23713, from the European Research Council Advanced Grant, contract 290846, and from the European Commission under the Graphene Flagship contract CNECT-ICT-604391.
--- abstract: 'We present a filtering procedure based on singular value decomposition to remove artifacts arising from sample motion during dynamic full field OCT acquisitions. The presented method succeeded in removing artifacts created by environmental noise from data acquired in a clinical setting, including in vivo data. Moreover, we report on a new method based on using the cumulative sum to compute dynamic images from raw signals, leading to a higher signal to noise ratio, and thus enabling dynamic imaging deeper in tissues.' author: - | Jules Scholler[^1]\ Institut Langevin\ ESPCI Paris, CNRS, PSL University\ 1 rue Jussieu, 75005 Paris, France\ `jules.scholler@espci.fr`\ bibliography: - 'references.bib' title: Motion artifact removal and signal enhancement to achieve in vivo dynamic full field OCT --- Introduction ============ Optical coherence tomography (OCT) is routinely used for 3D imaging of microstructures in tissue and relies on the endogenous backscattering contrast [@Huang_91; @drexler_optical_2015]. Full-field optical coherence tomography (FFOCT) is an *en face*, high transverse resolution version of OCT [@Beaurepaire_98; @Dubois_04]. Using a camera and an incoherent light source, FFOCT acquires *en face* virtual sections of the sample at a given depth and has been used for biology [@Benarous] and medicine [@Jain]. Recently, a novel contrast mechanism has been exploited by measuring temporal fluctuations of the backscattered light in a technique called dynamic full field OCT (D-FFOCT) [@Apelian_16]. In ex vivo fresh tissues, these dynamic measurements reveal subcellular structures that are very weak backscatterers and provide contrast based on their intracellular motility [@Leroux_16; @Thouvenin_motility]. A similar technique is used in regular OCT for retinal angiography called OCTA where speckle variance is analyzed on several B-Scans (typically 8 frames) to produce binary images of the retinal vasculature [@Kashani_17]. 
Due to the high spatial resolution ($<1~\mu m$) and the number of frames used in D-FFOCT (typically 512), our method is not only sensitive to capillaries but also to intracellular motility signals, and produces a contrast that reveals living cells [@Scholler_19]. The penetration depth of D-FFOCT is typically ten times less than that of FFOCT due to the small cross-section of the moving scatterers leading to weak signals, limiting its use in thick samples. Up to now, using D-FFOCT for in vivo imaging has remained elusive as this technique is sensitive to nanometric axial displacements of the sample. The same problems arise for OCTA, and several approaches have been developed to remove bulk motion of the eye [@Jia_12; @deCarlo2015]. In this paper we propose two methods to overcome the aforementioned limitations. First, we introduce a framework based on the singular value decomposition (SVD) to separate the axial displacement of the sample from the local fluctuations linked to intracellular motility, enabling in vivo use of D-FFOCT. SVD based algorithms have been previously applied to OCT data, e.g. for smart OCT, where the SVD is applied to the reflection matrix in order to extend the penetration depth [@Badon_svd_16]. An SVD filtering method for D-FFOCT has been previously proposed for simulated data [@Ammari_svd] but does not work on our experimental data, mainly because the image formation model is different. Here we propose to find the eigenvectors associated with axial motion and filter them out. Similar SVD based algorithms for spatio-temporal filtering have been used effectively in acoustics for Doppler acquisitions [@SVD_doppler; @SVD_doppler2]. In each case the goal is to use the SVD to transform the initial data into a new basis that is more suitable for filtering and identifying outliers.
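The filtering idea can be sketched in a few lines; this is a minimal illustration with synthetic data, and the selection of the motion components (here simply the dominant singular component of a rank-one artifact) is a stand-in for the actual identification criterion:

```python
import numpy as np

def svd_motion_filter(stack, drop=(0,)):
    """Remove singular components attributed to global (axial) motion.

    stack: (ny, nx, nt) image sequence; `drop` lists the component indices
    identified as motion -- a stand-in for the actual selection criterion.
    """
    ny, nx, nt = stack.shape
    X = stack.reshape(ny * nx, nt)              # Casorati (space x time) matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[list(drop)] = 0.0                         # filter the motion eigenvectors
    return ((U * s) @ Vt).reshape(ny, nx, nt)   # back in the initial space

# Synthetic check: weak local fluctuations plus a strong global oscillation
rng = np.random.default_rng(0)
t = np.arange(512)
stack = 0.1 * rng.standard_normal((16, 16, 512))
stack += 5.0 * np.sin(2 * np.pi * t / 50)       # same artifact on every pixel

filtered = svd_motion_filter(stack)
assert stack.mean(axis=(0, 1)).std() > 1.0      # artifact dominates the raw mean
assert filtered.mean(axis=(0, 1)).std() < 0.1   # and is gone after filtering
```

Because the global oscillation is spatially coherent, it concentrates in a single singular component, whereas the local fluctuations spread over many; zeroing that component leaves the motility signal almost untouched.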
As opposed to [@SVD_doppler; @SVD_doppler2], our approach reconstructs the signals in the initial space before computing the dynamic image, rather than constructing the image in the SVD space. The main advantage of using the SVD rather than Fourier analysis here is that the filter adapts to each data set, which can exhibit different amounts of artifacts with random patterns. Second, we present a new operator to compute the local dynamics based on the cumulative sum, in order to enhance the non-stationary part of the signal, leading to a large increase in the signal to noise ratio (SNR). Finally, we report on the first D-FFOCT acquisition in vivo, imaging the mouse liver at $80~ \mu m$ depth, where the two proposed algorithms greatly improved the image quality by removing motion artifacts and increasing the SNR by a factor of 3. Removing artifacts using SVD ============================ In order to construct a D-FFOCT image, a stack of typically 512 direct images ($1440 \times 1440$ pixels) is acquired with a standard FFOCT setup using our custom software [@FFOCT_JS]. The FFOCT setup consists of a Linnik interferometer where both arms contain identical microscope objectives. The reference arm contains a silicon mirror mounted on a piezoelectric translation (PZT) stage used for phase modulation. In a typical FFOCT experiment, at least two images are acquired with different phase modulations and the FFOCT image is constructed by using appropriate phase-shifting algorithms [@Dubois_04; @Scholler_19]. For D-FFOCT experiments the PZT position is not modulated; fluctuations arise from the motion of scatterers inside the coherence volume. In this paper we used data acquired from two different setups. The first is a laboratory setup shown in Fig. \[fig1\] and the second is a commercial LightCT setup manufactured by LLTech SAS. The characteristics of both setups are summarized in Table \[table1\].
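The cumulative-sum operator mentioned above can be sketched as follows; this is an illustrative implementation with synthetic data, not the exact published pipeline. Integrating the mean-subtracted signal lets slow, correlated (motility-like) fluctuations build up in the running sum, while uncorrelated camera noise averages out:

```python
import numpy as np

def dynamic_std(stack):
    # Baseline metric: temporal standard deviation at each pixel.
    return stack.std(axis=-1)

def dynamic_cumsum(stack):
    # Cumulative sum of the mean-subtracted signal, then its spread in time:
    # non-stationary fluctuations accumulate, white noise does not.
    c = np.cumsum(stack - stack.mean(axis=-1, keepdims=True), axis=-1)
    return c.std(axis=-1)

# Toy comparison: pixel 0 holds pure noise, pixel 1 noise plus a slow drift
rng = np.random.default_rng(1)
nt = 512
stack = rng.standard_normal((2, 1, nt))
stack[1, 0] += 0.05 * np.cumsum(rng.standard_normal(nt))   # slow fluctuation

snr_std = dynamic_std(stack)[1, 0] / dynamic_std(stack)[0, 0]
snr_cum = dynamic_cumsum(stack)[1, 0] / dynamic_cumsum(stack)[0, 0]
assert snr_cum > snr_std   # the cumulative sum separates the pixels better
```

On this toy example the standard-deviation metric barely distinguishes the "active" pixel, while the cumulative-sum metric does so by a wide margin, which is the mechanism behind the reported SNR gain.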
In the first report on D-FFOCT, the level of the dynamic signal at each pixel was computed using a running standard deviation averaged over the whole acquisition [@Apelian_16], so that each pixel is processed independently. Calculating the standard deviation of the signal in time removes highly scattering stationary structures such as collagen or myelin fibers and reveals cells with a much better contrast. Indeed, strongly backscattering structures can dominate the signal even outside the coherence volume, thereby masking weakly scattering structures such as cells. In the laboratory, we succeeded in stabilizing the setup by mounting it on a sturdy optical bench, and carried out ex vivo experiments without motion artifacts. For real-life applications, however, D-FFOCT devices are currently being used by clinicians in hospitals for imaging biopsied tissues [@LLTech_spie_19] and by anatomo-pathologists in busy environments, with vibrations arising from vibrational modes of the building, from people walking around the device and from air conditioning. Mechanical vibrations can lead to sample arm motion or oscillations, creating strong signal fluctuations, especially from highly reflective structures such as collagen fibers.

![(a) FFOCT setup in inverted configuration *top view*. Microscope objectives are Olympus UPlanSApo 30x 1.05 NA. OCT Camera: ADIMEC Q-2A750-CXP. Light source: Thorlabs M660L3. PZT: piezoelectric translation. BS: 50/50 beam splitter. (b) Schematic of the sample axial oscillations around the coherence volume due to mechanical vibrations, simulated on a graph in the top right corner. The setup is illustrated with oil-immersed objectives where the probed volume depth is $1~\mu m$.[]{data-label="fig1"}](fig1.pdf){width="0.7\linewidth"}

  Setup                                  Lab. setup Fig. \[fig1\](a)   LightCT (LLTech SAS)
  -------------------------------------- ----------------------------- ----------------------
  Transverse resolution $[\mu m]$        0.4                           1.5
  Axial resolution $[\mu m]$             1                             1
  Field of view $[\mu m \times \mu m]$   $390\times 390$               $1260\times 1260$
  Framerate $[Hz]$                       150                           150

  : **Setup Characteristics.**[]{data-label="table1"}

Motion artifact model
---------------------

The intensity recorded by the camera is the sum of the backscattered light from both the sample and the reference arm [@Scholler_19]: $$\begin{aligned} I(\boldsymbol{r},t) = \eta \frac{I_0}{4} \left( R(\boldsymbol{r},t) + R_{inc} + R_{ref} + 2\sqrt{R(\boldsymbol{r},t) R_{ref}}\cos \left( \Delta \phi (\boldsymbol{r},t) \right) \right)\end{aligned}$$ where $I(\boldsymbol{r},t)$ is the intensity recorded at position $\boldsymbol{r}=(x,y)$ and time $t$, $\eta$ is the camera quantum efficiency, $I_0$ is the LED power impinging on the interferometer considering a 50/50 beam splitter, $R_{ref}$ is the reference mirror reflectivity (i.e. the power reflection coefficient), $R(\boldsymbol{r},t)$ is the sample reflectivity (i.e. the power reflection coefficient) at position $\boldsymbol{r}$ and time $t$, $\Delta \phi(\boldsymbol{r},t)$ is the phase difference between the reference and sample backscattered signals at position $\boldsymbol{r}$ and time $t$, and $I_{incoh} = R_{inc}I_0/4$ is the incoherent light backscattered by the sample onto the camera, mainly due to multiple scattering and reflections outside the coherence volume.
The dynamic signal is computed as the average of the running temporal standard deviation; the processed dynamic signal can be written: $$\begin{aligned} I_{dyn}(\boldsymbol{r}) =\frac{1}{N} \sum_i SD\left(\frac{\eta I_0}{2}\sqrt{R_s(\boldsymbol{r},t_{[i,i+\tau]}) R_{ref}}\cos\left(\Delta \phi_s(\boldsymbol{r},t_{[i,i+\tau]})\right)\right)\end{aligned}$$ where $SD$ is the standard deviation operator, $N$ is the total number of sub-windows, $\tau$ is the sub-window length so that $t_{[i,i+\tau]}$ is the time corresponding to one sub-window, and $R_s$ and $\Delta \phi_s$ are respectively the reflectivity and phase of the local scatterers that induce the temporal fluctuations that D-FFOCT aims to measure. In the event of small displacements of the entire sample, on the order of the depth of field or smaller, the processed signals will be the sum of the actual local fluctuations and the modulation created by the bulk sample motion, which creates a global phase shift. The resulting artifacts are proportional to the sample reflectivity, which is orders of magnitude higher than the reflectivity of the scatterers probed by D-FFOCT (e.g. mitochondria and vesicles), leading to strong artifacts on the dynamic image that mask the signal of interest. In the presence of mechanical noise, the measured fluctuation can be written: $$\begin{aligned} I_{mes}(\boldsymbol{r}) = I_{dyn}(\boldsymbol{r}) + I_{art}(\boldsymbol{r})\end{aligned}$$ where the artifactual signal can be expressed as: $$\begin{aligned} I_{art}(\boldsymbol{r}) = \frac{1}{N} \sum_i SD\left(\frac{\eta I_0}{2}\sqrt{R_s(\boldsymbol{r},t_{[i,i+\tau]}) R_{ref}}\cos\left(\frac{4\pi}{\lambda}z(t_{[i,i+\tau]})\right)\right)\end{aligned}$$ where $z(t_{[i,i+\tau]})$ is the sample axial displacement on the $i$-th sub-window. Here we neglected sample deformation for the sake of clarity; it could nonetheless be taken into account by processing the stack in spatial patches where deformations are negligible.
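As a concrete illustration, the averaged running standard deviation above can be sketched in a few lines of NumPy (a minimal sketch with non-overlapping sub-windows and illustrative sizes, not the actual acquisition parameters):

```python
import numpy as np

def dynamic_image_std(stack, tau=50):
    """Average of the temporal standard deviation over sub-windows.

    stack -- array of shape (nx, ny, nt) holding the direct camera frames
    tau   -- sub-window length (illustrative value)
    Returns an (nx, ny) dynamic image; pixels whose intensity fluctuates
    strongly in time (e.g. above active cells) get large values.
    """
    nx, ny, nt = stack.shape
    starts = range(0, nt - tau + 1, tau)  # non-overlapping sub-windows
    stds = [stack[:, :, i:i + tau].std(axis=2) for i in starts]
    return np.mean(stds, axis=0)

# Toy usage: one "active" pixel with stronger temporal fluctuations
rng = np.random.default_rng(0)
stack = rng.normal(0.0, 1.0, (4, 4, 512))
stack[0, 0] += rng.normal(0.0, 5.0, 512)
img = dynamic_image_std(stack)
```

Note that each pixel is processed independently, which is what makes the operator blind to whether fluctuations come from local motility or from bulk motion of the whole sample.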
For a highly reflective zone we have $I_{dyn}(\boldsymbol{r}) \ll I_{art}(\boldsymbol{r})$ and the dynamic signal is completely masked by artifacts that look like the corresponding static FFOCT image, as would be obtained by randomly sampling the path difference instead of using the standard PZT modulation mentioned before. Indeed, modulating the position of the sample around the coherence volume is equivalent to modulating the piezo position, which explains why the artifacts look like the standard FFOCT image.

Proposed algorithm
------------------

![(a) First few temporal eigenvectors. *Top:* for an acquisition with motion artifacts where $V_1$ and $V_2$ were detected as motion artifacts and removed. *Bottom:* for an acquisition without visible artifacts. (b) Zero crossing rate computed for each temporal eigenvector. (c) Absolute value of the derivative of the zero crossing rate computed for each temporal eigenvector. Artifacts were detected by thresholding the curve above 3 standard deviations. The baseline was arbitrarily raised for the red curves in (b) and (c) in order to increase readability.[]{data-label="fig2"}](fig2.pdf){width="0.8\linewidth"}

In order to remove motion artifacts we want to use the SVD as an adaptive filter that separates motility signals from motion-induced signals. The first step is to unfold the 3D data cube $M(x,y,t)$ into a 2D matrix $M_u(\boldsymbol{r},t)$ to perform the decomposition. Higher-dimensional SVDs exist but are not required here, as the horizontal $x$ and vertical $y$ dimensions do not differ when considering axial motion artifacts.
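The unfolding step is a plain reshape; a minimal sketch (sizes illustrative):

```python
import numpy as np

nx, ny, nt = 8, 8, 64                       # illustrative stack dimensions
M = np.random.default_rng(1).normal(size=(nx, ny, nt))

# Unfold the 3D cube M(x, y, t) into the 2D space-time matrix M_u(r, t):
# each row is the temporal trace of one pixel.
M_u = M.reshape(nx * ny, nt)

# After filtering, the stack is folded back to its original 3D shape.
M_folded = M_u.reshape(nx, ny, nt)
```

Because reshaping only changes the view of the data, the unfold/fold round trip is lossless and costs no copy in NumPy.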
The SVD is the generalization of the eigendecomposition of a positive semidefinite normal matrix; it can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices, which becomes handy when reconstructing the SVD-denoised signals: $$\begin{aligned} M_u = U\Sigma V^\star = \sum_i \sigma_i \mathbf{U}_i \otimes \mathbf{V}_i\end{aligned}$$ where $\otimes$ is the outer product operator, $U$ contains the spatial eigenvectors, $V$ contains the temporal eigenvectors and $\Sigma$ contains the eigenvalues associated with the spatial and temporal eigenvectors. Performing the SVD on an unfolded $M_u(1440\times 1440,512)$ dynamic stack takes around 30 seconds on a workstation computer (Intel i7-7820X CPU, 128 GByte of DDR4 2666 MHz RAM) and requires 45 GByte of available RAM, using the LAPACK routine for SVD computation without computing the full matrices. Investigating this decomposition, we found that the spatial eigenvectors related to motion artifacts have particular, easily identifiable associated temporal eigenvectors. Indeed, for artifact-free data sets the first temporal eigenvectors exhibit sinusoid-like patterns with increasing frequency, see Fig. \[fig2\](a). In the presence of motion artifacts, temporal eigenvectors appear with random, high-frequency components that are easy to detect with simple features. Here, the zero crossing rate (the number of times a function crosses $y=0$) is used to detect the temporal eigenvectors involved in motion artifacts. In the presence of motion artifacts, some of the first temporal eigenvectors present a high zero crossing rate, see Fig. \[fig2\](b) and \[fig2\](c).
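The detection-and-filtering procedure described here and formalized in the next paragraph can be sketched as follows (a sketch, not the production code; mapping a D-ZRC outlier at index $i$ to the eigenvector $i+1$, i.e. the one after the jump, is one possible reading of the thresholding rule and an assumption of this sketch):

```python
import numpy as np

def zero_crossing_rate(v):
    """Number of times a 1D signal crosses y = 0."""
    return int(np.count_nonzero(np.diff(np.signbit(v).astype(np.int8))))

def svd_motion_filter(M_u):
    """Zero out eigenvalues whose temporal eigenvector has an outlier
    zero-crossing rate, then reconstruct the denoised space-time matrix."""
    U, s, Vh = np.linalg.svd(M_u, full_matrices=False)
    zrc = np.array([zero_crossing_rate(v) for v in Vh])
    dzrc = np.abs(np.diff(zrc))                     # D-ZRC feature
    outliers = np.flatnonzero(dzrc > 3.0 * dzrc.std())
    s_hat = s.copy()
    s_hat[outliers + 1] = 0.0   # eigenvector after the jump (assumption)
    return (U * s_hat) @ Vh     # sum_i sigma_hat_i U_i (x) V_i

# Toy usage on a random unfolded stack (illustrative sizes)
rng = np.random.default_rng(2)
M_u = rng.normal(size=(256, 64))
M_hat = svd_motion_filter(M_u)
```

`full_matrices=False` corresponds to the economy-size decomposition mentioned in the text (the full matrices are never formed), which keeps the memory footprint manageable.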
In order to detect these outliers we computed the absolute value of the derivative of the zero crossing rate (D-ZRC $= |ZRC_{i+1}-ZRC_{i}|$) and applied a threshold: if the D-ZRC is higher than three times the D-ZRC standard deviation, the corresponding eigenvalue $\sigma_i$ is set to zero in $\hat{\Sigma}$ and the SVD-denoised stack $\hat{M}_u$ is reconstructed as: $$\begin{aligned} \hat{M}_u = \sum_i \hat{\sigma}_i \mathbf{U}_i \otimes \mathbf{V}_i\end{aligned}$$ The SVD-denoised stack $\hat{M}_u$ can then be folded back to its original 3D shape $\hat{M}$ and the dynamic computation can be performed. Interestingly, the automatic selection of eigenvectors allows a more reproducible analysis. For example, the SVD can be performed on spatial sub-regions without visible artifacts, something very hard to achieve with manual selection of eigenvectors. This can also improve the filtering procedure in the case of spatial deformation of the sample, or when the computation requires too much RAM.

Results
-------

![Lung biopsy for cancer detection taken on the LLTech clinical setup. Artifacts arise mainly from mechanical vibration and air conditioning. (a)(d) Original D-FFOCT images computed on the raw stack. (b)(e) Denoised images computed with SVD filtering. (c)(f) Sum of the absolute values of the spatial eigenvectors removed by the SVD filtering. Red arrows highlight cells that were partially masked by motion artifacts.[]{data-label="fig3"}](fig3.pdf){width="\linewidth"}

We tested the proposed SVD filtering on different acquisitions taken on different setups (the one presented in Fig. \[fig1\](a) and the LightCT system manufactured by LLTech SAS). When motion artifacts were present, image quality after denoising was greatly improved in each case. In Fig.
\[fig3\] we present lung biopsy images taken with the LightCT system in a clinic; the imaged tissues were waste tissues from biopsy procedures, destined to be destroyed, which we imaged just before destruction. The imaging was carried out according to the tenets of the Declaration of Helsinki and followed international ethical requirements for human tissues. SVD filtering effectively removes motion artifacts from collagen fibers and reveals cells in Fig. \[fig3\](b) and \[fig3\](e). D-FFOCT images also showed higher contrast after SVD denoising, allowing easier interpretation for clinical applications, e.g. lung tumor detection in the presented images. We also imaged fibroblasts with the setup presented in Fig. \[fig1\](a). The cells were very flat, leading to fringe patterns created by the specular reflection on their surface. These fringes were highly visible on the processed D-FFOCT image, preventing the visualization of subcellular structures, see Fig. \[fig4\](a). After SVD filtering it is possible to distinguish and track single subcellular entities, see Fig. \[fig4\](b), enabling biological studies without the need for a costly optical bench setup.

![Fibroblast image taken on the setup presented in Fig. \[fig1\]. Artifacts arise mainly from mechanical vibrations leading to fringe patterns. (a) Original D-FFOCT image computed on the raw stack. (b) Denoised image computed with SVD filtering. Subcellular features appear with a much better contrast, enabling segmentation and tracking. (c) Sum of the absolute values of the spatial eigenvectors removed by the SVD filtering. Red arrows highlight subcellular features; only the bottom one was visible on the original image.[]{data-label="fig4"}](fig4.pdf){width="0.45\linewidth"}

Extending penetration depth using non-stationarities
====================================================

In addition to motion artifacts, a drawback of D-FFOCT compared to standard static FFOCT is its reduced penetration depth.
While FFOCT can acquire images as deep as $1~mm$, D-FFOCT is limited to about $100~\mu m$ due to the weak signal level produced by the sample fluctuations we wish to measure. In order to enhance the dynamic signal strength, and so improve the penetration depth, we propose computing the dynamic image from the cumulative sum of the signal rather than from the raw signal. Indeed, the model for dynamic image formation is that of small scatterers moving in the coherence volume during the acquisition, leading to phase and amplitude fluctuations on the conjugate camera pixel. While pure Brownian motion is stationary, hyper-diffusive displacements are not, and we therefore propose to use the cumulative sum to enhance these non-stationarities. Intuitively, summing a centered noise gives a noisy trajectory that stays close to zero, whereas a small bias is summed at every sample, so that the cumulative sum acquires a slope equal to this bias.

Theoretical considerations
--------------------------

Let us consider an array of random values drawn from a zero-centered Gaussian distribution. If the number of samples is large, the mean of the array will be close to zero and, equivalently, the sum of all the samples will also be close to zero. Taking the cumulative sum of such an array gives a so-called *Brownian bridge* (the curve starts and ends close to zero, making a "bridge" between these two points). Theoretically, Brownian bridges are expected to reach their maximum close to the edges, as the probability distribution of the position of the maximum follows the third arcsine law, which has a typical U-shape [@Levy_brownian_bridge]. More importantly, the Brownian bridge maximum follows a Rayleigh distribution.
If we consider a Brownian bridge $W_s$, $s \in [0,1]$: $$\begin{aligned} W_M=\sup\{W_s : s\in [0,1]\} \\ \mathbb{P}[ W_M \leq u ] = 1 - e^{-2 u^{2}}\end{aligned}$$ where $W_M$ is the supremum of the bridge and $\mathbb{P}[ W_M \leq u ]$ is the probability of the supremum being smaller than $u$. According to this Rayleigh distribution, the maximum of the Brownian bridge scales as $\sqrt{t}$, with $t$ the number of frames. Now, if there is a bias in the distribution, which is the case if a scatterer moves with constant velocity in the coherence volume, the cumulative sum will scale as $\frac{t}{2}$ due to the slope introduced by the bias. It will also be either always positive or always negative, and the maximum will be reached around the center of the bridge. The cumulative sum therefore exhibits a completely different behavior for centered noise than for actual motility signals, leading to a better signal-to-noise ratio on dynamic images. Note that Brownian bridges typically change sign regularly (the probability of sign changes is also well established [@Levy_brownian_bridge]), which is not the case in the presence of non-stationarities. We simulated an experiment by introducing a linear bias of $\sigma/3$ on a centered Gaussian distribution; the bias is not perceptible on the signals presented in Fig. \[fig5\](a). Looking at the cumulative sum, the bias is much more obvious, as the maximum reached by the bridge is three times higher; hence motility signals are detected with a higher sensitivity using the cumulative sum.
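A minimal numerical sketch of this behaviour, together with the cumulative-sum operator that the next subsection defines (sub-windows simplified to non-overlapping ones; the bias handling, drift amplitude and sizes are illustrative assumptions, not the exact simulation of Fig. \[fig5\]):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 512, 1.0
noise = rng.normal(0.0, sigma, n)

# Centered noise: the cumulative sum stays close to zero (bridge-like walk).
walk_max = np.abs(np.cumsum(noise)).max()
# A small constant bias (sigma/3) is summed at every sample, so the
# cumulative sum acquires a slope and its maximum grows linearly with n.
biased_max = np.abs(np.cumsum(noise + sigma / 3.0)).max()

def dynamic_image_cumsum(stack, tau=50):
    """Average over sub-windows of max |cumulative sum| of the demeaned
    signal (non-overlapping sub-windows, an assumption for simplicity)."""
    nx, ny, nt = stack.shape
    starts = range(0, nt - tau + 1, tau)
    out = np.zeros((nx, ny))
    for i in starts:
        w = stack[:, :, i:i + tau]
        w = w - w.mean(axis=2, keepdims=True)  # subtract sub-window mean
        out += np.abs(np.cumsum(w, axis=2)).max(axis=2)
    return out / len(starts)

# A pixel with a slow drift (non-stationary) stands out from noise pixels
stack = rng.normal(0.0, sigma, (3, 3, 500))
stack[0, 0] += np.linspace(0.0, 20.0, 500)
img = dynamic_image_cumsum(stack)
```

After subtracting the sub-window mean, a pure-noise trace yields a bridge whose maximum grows only as $\sqrt{\tau}$, while a drifting trace leaves a residual trend whose cumulative sum grows much faster, which is what separates the two cases.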
Results
-------

The dynamic image is computed by taking the average of the maxima of the absolute values of the running cumulative sum: $$\begin{aligned} I'_{dyn}(\boldsymbol{r}) =\frac{1}{N} \sum_i \max\left(|CumSum\left(I(\boldsymbol{r},t_{[i, i+\tau]})-\bar{I}(\boldsymbol{r},t_{[i, i+\tau]})\right)|\right)\end{aligned}$$ where $CumSum$ is the cumulative sum operator, $N$ is the total number of sub-windows, $\tau$ is the sub-window length so that $t_{[i,i+\tau]}$ is the time corresponding to one sub-window, and $\bar{I}(\boldsymbol{r},t_{[i, i+\tau]})$ is the signal mean on the sub-window. We tested the proposed method with $\tau=50$ on the photoreceptor layer of an explanted macaque retina at $85~\mu m$ depth, which presents a horizontal gradient of SNR, see Fig. \[fig5\](b) and \[fig5\](c). In order to quantify the gain in SNR we segmented 192 single cells using Trainable Weka [@Weka_17] and computed the SNR for each cell (the SNR was computed as the mean intensity of the pixels inside the cell divided by the mean intensity of the background), see Fig. \[fig5\](d). In this case the SNR is doubled with the proposed method, and the camera column noise is almost completely removed. We tested the proposed algorithm on several acquisitions of tissues and cell cultures; the average SNR improvement factor was 1.9, allowing deeper imaging into tissues with D-FFOCT.

Applying both methods for in vivo dynamic imaging
=================================================

![In vivo mouse liver dynamic image taken on the LLTech clinical setup with a custom mount. Artifacts arise mainly from breathing and heartbeat. (a) In vivo D-FFOCT image of a mouse liver computed on the raw stack with the standard deviation. (b) In vivo D-FFOCT image of a mouse liver computed on the SVD-denoised stack with the standard deviation.
(c) In vivo D-FFOCT image of a mouse liver computed on the SVD-denoised stack with the cumulative sum.[]{data-label="fig6"}](fig6.pdf){width="0.65\linewidth"}

The proposed SVD filtering procedure is of great interest for applying D-FFOCT in vivo, as it removes sample motion such as eye motion in retinal imaging. In order to limit lateral drifts and to maintain contact, a custom head mount was adapted on the sample arm of a LightCT setup, combined with a pump creating a weak suction force. We acquired a stack of images of a living mouse liver at $80~\mu m$ depth. The animal manipulation protocol was approved by our local animal care committee. The mouse, a 4-week-old C57BL/6 (Janvier Lab, Le Genest Saint Isle, France), was anesthetized with isoflurane and sacrificed after the imaging procedure by $CO_2$ inhalation. The standard D-FFOCT images were very noisy, mainly due to the heartbeat and breathing of the mouse, leading to tissue motion that creates artifacts, see Fig. \[fig6\](a). On applying the proposed SVD filtering, we were able to remove the motion artifacts, see Fig. \[fig6\](b). Nonetheless, the signals were still very low due to the deep imaging in a strongly scattering organ, and applying the cumulative sum algorithm dramatically increased the SNR, by a factor of $3$, see Fig. \[fig6\](c). Some artifacts remain, produced by the axial drift of the coherence volume during the acquisition. Indeed, if the coherence volume shifts by more than its axial extension, even if motion artifacts are perfectly removed, the probed dynamics are averaged over several depths, leading to an axial blur. To overcome this issue, the position of the coherence gate inside the sample could be compensated by monitoring the breathing and moving the reference arm with a precision corresponding to the optical sectioning ($1~\mu m$ for the in vivo acquisition here).
Conclusion
==========

We proposed a filtering algorithm based on the SVD to effectively remove motion artifacts from dynamic images. The proposed method adds $\sim 40$ seconds of processing time for a $(1440,1440,512)$ stack; real-time applications will require GPU processing to speed this up. The method was applied to an in vivo data set and is promising as long as the axial motion is smaller than the coherence volume depth. Tracking and compensating methods are currently being investigated in order to acquire D-FFOCT stacks in a completely artifact-free manner for the cornea [@Mazlin_18] and retina [@Xiao_18; @Mece_spie_19]. We also proposed a method based on the cumulative sum to enhance non-stationarities in temporal signals, which increased the SNR by a factor of 1.9 on average for ex vivo samples and by a factor of 3 on our in vivo data set. These general techniques could be applied to any other imaging modality with sub-diffraction phase sensitivity.

Funding {#funding .unnumbered}
=======

HELMHOLTZ grant, European Research Council (ERC) (610110).

Acknowledgments {#acknowledgments .unnumbered}
===============

The author would like to thank LLTech SAS for sharing its raw data, and especially Émilie Benoit and Louis Dutheil for carrying out the in vivo and clinical experiments. The author is also grateful to Olivier Thouvenin, Pedro Mecê, Kassandra Groux, Viacheslav Mazlin, Mathias Fink, Claude Boccara and Kate Grieve for fruitful discussions and valuable comments regarding this paper. The data and algorithms used during the current study are available from the corresponding author upon reasonable request.

[^1]: <https://www.jscholler.com>
---
abstract: |
  In this work we study the two-dimensional ionization structure of the circumnuclear and extranuclear regions of a sample of six low-$z$ Ultraluminous Infrared Galaxies using Integral Field Spectroscopy. The ionization conditions in the extranuclear regions of these galaxies ($\sim 5 - 15$ kpc) are typical of LINERs, as obtained from the Veilleux-Osterbrock line-ratio diagnostic diagrams. The range of observed line ratios is best explained by the presence of fast shocks with velocities of 150 to 500 km s$^{-1}$, while ionization by an AGN or a nuclear starburst is in general less likely. The comparison of the two-dimensional ionization level and velocity dispersion in the extranuclear regions of these galaxies shows a positive correlation, further supporting the idea that shocks are indeed the main cause of the ionization. The origin of these shocks is also investigated. Despite the likely presence of superwinds in the circumnuclear regions of these systems, no signatures of superwinds such as double velocity components are found in the extended extranuclear regions. We consider a more likely explanation for the presence of shocks to be the existence of tidally induced large-scale gas flows caused by the merging process itself, as evidenced by the observed velocity fields, characterized by peak-to-peak velocities of 400 km s$^{-1}$ and velocity dispersions of up to 200 km s$^{-1}$.
author:
- 'A. Monreal-Ibero'
- 'S. Arribas'
- 'L. Colina'
title: 'LINER-like Extended Nebulae in ULIRGs: Shocks Generated by Merger Induced Flows'
---

INTRODUCTION
============

Ultraluminous Infrared Galaxies (ULIRGs), defined as objects with an infrared luminosity similar to that of optically selected quasars ($L_{bol} \approx L_{IR} \ga 10^{12} L_{\sun}$), may be the local counterparts of some high-$z$ galaxy populations [see @san96; @gen00; @fra03; @lef04]. Most (if not all) of them show signs of mergers and interactions [e.g.
@cle96; @sco00; @sur00b; @bor00] and it has been found that they could be the progenitors of intermediate-mass elliptical galaxies [@gen01; @tac02 and references therein]. They have large amounts of gas and dust and are undergoing intense starburst activity. Some of these objects may harbor an active galactic nucleus (AGN), although its importance as a source of energy in ULIRGs is still under debate. The origin of the gas ionization in these objects has mainly been studied in the innermost (nuclear) regions [e.g. @kim98]. These studies show that $\sim 35 \%$ of their nuclei have a LINER-like ionization independently of the luminosity, while the fraction of Seyfert-like spectra increases with luminosity. However, due to their complex structure, studies based on nuclear optical spectroscopy may lead to misclassifications. This can have several causes. For instance, the actual nucleus of the system may be obscured in the optical or, alternatively, the region dominating the line emission may not coincide with the nucleus. An example where both effects have been reported is IRAS 12112+0305 [@col00]. In addition, standard slit spectroscopic observations may be affected by other types of technical uncertainties such as misalignment of the slit, differential atmospheric refraction, etc. ULIRGs, as systems undergoing an intense starburst activity phase (with AGN activity in some cases), are good candidates for producing superwinds [see @vei05]. Evidence of superwinds has already been reported in several systems, on the basis of the properties of the emission [@hec90; @leh96; @arr01] and absorption [@hec00; @rup02; @rup05a; @rup05b; @mar05] lines. Although superwinds probably play a role in the ionization of the circumnuclear regions of ULIRGs, their importance in the extranuclear regions is unclear.
In these regions, tidally induced forces associated with the interaction process itself have also been proposed as the mechanism responsible for shocks [@mcd03; @col04]. The present article focuses on the study of the 2D structure and mechanisms of ionization of six ULIRGs based on the use of Integral Field Spectroscopy (IFS). This observational technique is well suited for this goal since it allows spectral information to be obtained simultaneously over a 2D field. The present work is part of a program aimed at studying the internal structure and kinematics of (U)LIRGs on the basis of this technique and of high-resolution images obtained with HST [see @col05 and references therein]. The paper is structured as follows: In section §2, we briefly describe the sample of galaxies analysed and summarize how the observations were performed. In section §3, the reduction process and data analysis are described. Section §4 presents the results obtained both in the external parts and in the nuclear regions and discusses the mechanisms responsible for the observed ionization. Section §5 summarizes the main conclusions. Throughout the paper, a Hubble constant of 70 km s$^{-1}$ Mpc$^{-1}$ is assumed. This implies a linear scale between 0.89 and 2.58 kpc arcsec$^{-1}$ for the systems analyzed.

SAMPLE AND OBSERVATIONS
=======================

The Sample of Galaxies
----------------------

The sample of galaxies consists of six low-[*z*]{} ULIRGs (see general properties in table \[misulirgs\]) covering a relatively wide range of dynamical states of the merging process. Three of the galaxies (IRAS 08572+3915, IRAS 12112+0305, and IRAS 14348$-$1447) are interacting pairs separated by projected distances of up to 6 kpc, while the rest of the galaxies (IRAS 15206+3342, IRAS 15250+3609, and IRAS 17208$-$0014) are more evolved, single-nucleus ULIRGs, some with a light profile and overall kinematics close to those of intermediate-mass ellipticals (e.g. IRAS 17208$-$0014, Genzel et al. 2001).
The two-dimensional kinematic properties (velocity field and velocity dispersion) have been studied in detail before using integral field optical spectroscopy (see Colina et al. 2005 and references therein). The complexity of the two-dimensional ionization field in these galaxies is such that previous long-slit spectroscopic studies have classified the nuclei of several of the galaxies differently. For example, IRAS 08572+3915 was originally classified as a Seyfert 2 [@san88], although both nuclei were later classified as LINERs [@kim98]; IRAS 14348$-$1447 has been classified either as a LINER [@kim98] or a Seyfert 2 [@san88]; IRAS 15206+3342 has been identified both as a Seyfert 2 [@san88; @sur00a] and as an <span style="font-variant:small-caps;">H ii</span> galaxy [@kim98]; and IRAS 15250+3609 is classified both as <span style="font-variant:small-caps;">H ii</span> and LINER [@kim95; @baa98]. The other two galaxies, IRAS 12112+0305 and IRAS 17208$-$0014, are classified as LINER [@vei99] and <span style="font-variant:small-caps;">H ii</span> [@kim95], respectively. Our IFS data disagree with some of the previous classifications; in particular, the two nuclei of IRAS 08572+3915 are classified as <span style="font-variant:small-caps;">H ii</span> (Arribas et al. 2000), and the true, optically hidden nucleus of IRAS 17208$-$0014 is classified as a LINER [@arr03].

Observations
------------

Data were obtained with the INTEGRAL system [@arr98] plus the WYFFOS spectrograph [@bin94] on the 4.2 m WHT at the Observatorio del Roque de los Muchachos (Canary Islands). Spectra were taken using the fiber bundle SB2 and a 600 lines mm$^{-1}$ grating with an effective resolution of $\sim$4.8 Å. Fibers in an INTEGRAL bundle are arranged in two sets: most of them (189 for SB2) form a rectangular area centered on the object, while the rest form a circle around it and simultaneously observe the sky. In the case of SB2 the covered field is 16.5$\times$12.3 arcsec$^2$.
Data were taken under photometric conditions; the seeing was $\sim$1.0–1.5 arcsec, except for the 1 April 1998 observing run, when it was about 2.0 arcsec. Table \[obs\] summarises the parameters of the observations. In addition, HST imaging in the I-band (WFPC2 F814W filter) is available for all of them and, with the exception of , also in the H-band (NICMOS F160W filter).

DATA REDUCTION AND ANALYSIS \[reduc\]
=====================================

The basic reduction process includes bias subtraction, scattered light removal, extraction of the apertures, wavelength calibration, throughput and flatfield correction, sky subtraction, cosmic ray rejection and relative flux calibration. Though not strictly necessary for the present paper, an absolute flux calibration was also performed [@mon04]. For the present analysis the strongest optical emission lines, including \[<span style="font-variant:small-caps;">O i</span>\]$\lambda6300$, H$\alpha$, \[<span style="font-variant:small-caps;">N ii</span>\]$\lambda\lambda6548,6584$ and \[<span style="font-variant:small-caps;">S ii</span>\]$\lambda\lambda6717,6730$, have been used. Each emission line profile was fitted with a single Gaussian function using the DIPSO package inside the STARLINK environment[^1]. The set of lines H$\alpha$+\[<span style="font-variant:small-caps;">N ii</span>\]$\lambda\lambda$6548,6584 was fitted simultaneously, fixing the wavelength separation between the three lines, assuming that all lines had the same width, and fixing the ratio between the nitrogen lines to 3. The sulfur lines were fitted fixing the distance between them and assuming the same width for both lines. In all cases, a constant value was assigned to the local continuum. A single (Gaussian) component is in general a good representation of the observed line profiles, with the exception of some nuclear regions, for which a two-component fit was necessary.
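The structure of such a constrained simultaneous fit can be sketched with `scipy.optimize.curve_fit` (a sketch on synthetic data; the shared-shift parametrization, wavelengths, and noise level are illustrative assumptions, not the DIPSO implementation used by the authors):

```python
import numpy as np
from scipy.optimize import curve_fit

HA, N2A, N2B = 6562.8, 6548.0, 6583.5   # rest wavelengths (Angstrom)

def ha_n2_model(lam, cont, a_ha, a_n2, z, width):
    """Halpha + [N II] doublet over a constant continuum: fixed line
    separations (one common shift z), a single shared Gaussian width,
    and the [N II]6584/[N II]6548 amplitude ratio fixed to 3."""
    def g(mu, a):
        return a * np.exp(-0.5 * ((lam - mu * (1.0 + z)) / width) ** 2)
    return cont + g(HA, a_ha) + g(N2B, a_n2) + g(N2A, a_n2 / 3.0)

# Synthetic spectrum (illustrative parameters)
lam = np.linspace(6500.0, 6650.0, 600)
rng = np.random.default_rng(3)
spec = (ha_n2_model(lam, 1.0, 5.0, 2.4, 0.002, 3.0)
        + rng.normal(0.0, 0.05, lam.size))

popt, _ = curve_fit(ha_n2_model, lam, spec, p0=[1.0, 4.0, 2.0, 0.002, 2.5])
```

Tying the line centers, widths, and the doublet ratio reduces the fit to five free parameters, which is what keeps the fit stable in low signal-to-noise spaxels.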
The ionized gas velocity dispersion was derived from the H$\alpha$ line width (after subtracting the instrumental profile in quadrature). To study the ionization state, the line ratios \[<span style="font-variant:small-caps;">O i</span>\]$\lambda$6300/H$\alpha$, \[<span style="font-variant:small-caps;">N ii</span>\]$\lambda$6584/H$\alpha$ and \[<span style="font-variant:small-caps;">S ii</span>\]$\lambda\lambda$6717,6731/H$\alpha$ were calculated for each spectrum. Since the H$\beta$ emission line was detected over an area substantially smaller than that of H$\alpha$, these line ratios were not corrected for extinction. For the \[<span style="font-variant:small-caps;">N ii</span>\]$\lambda$6584/H$\alpha$ ratio, the two lines involved are so close to each other that reddening is negligible. In the case of the \[<span style="font-variant:small-caps;">S ii</span>\]$\lambda\lambda$6717,6731/H$\alpha$ ratio, if the extinction is moderate, the value of the ratio may change slightly, while in the regions where it is higher ($E(B-V) \ga 1.0$), the extinction produces somewhat smaller ratios (typically by 0.1 dex). However, this small difference does not change the main conclusions of the present analysis. To better visualize the spatial distribution of the relevant magnitudes (e.g. H$\alpha$ flux, velocity dispersion, \[<span style="font-variant:small-caps;">N ii</span>\]$\lambda$6584/H$\alpha$), two-dimensional images (maps) were created using a Renka & Cline two-dimensional interpolation method (Fig. 1). All these images have 81$\times$81 pixels, with a scale of 0.21 arcsec pix$^{-1}$.

RESULTS AND DISCUSSION
======================

The kinematical properties of the galaxies under analysis have already been studied by @col05, who conclude that the global motions of the gas (i.e. the velocity fields) are dominated by merger-induced flows, showing peak-to-peak velocity differences of $\sim$ 400 km s$^{-1}$.
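The quadrature correction and the conversion of the H$\alpha$ line width to a velocity dispersion, as used above, can be written compactly (a sketch; the example FWHM value is illustrative):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def velocity_dispersion(fwhm_obs, lam0, fwhm_instr=4.8):
    """Ionized-gas velocity dispersion (km/s) from a fitted Halpha FWHM
    (Angstrom): subtract the instrumental profile (FWHM ~4.8 A here) in
    quadrature, convert FWHM to Gaussian sigma, then to velocity."""
    fwhm_true = np.sqrt(fwhm_obs**2 - fwhm_instr**2)
    sigma_lam = fwhm_true / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    return C_KMS * sigma_lam / lam0

# e.g. an observed FWHM of 8.0 A at the Halpha wavelength (illustrative)
sigma_v = velocity_dispersion(8.0, 6563.0)
```

With these numbers the intrinsic FWHM is $\sqrt{8.0^2 - 4.8^2} = 6.4$ Å, i.e. a velocity dispersion of roughly 124 km s$^{-1}$, within the 70–200 km s$^{-1}$ range quoted for these systems.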
Only one out of our six systems () shows clear evidence of ordered rotational motions, although there are also some hints of rotation in . The ionized gas velocity dispersion maps revealed high-velocity regions ($\sim$70-200 km s$^{-1}$) that do not trace any special mass concentration. In the following section, we discuss the results derived from Figure \[panel\], and in particular those from the H$\alpha$ and velocity dispersion maps.

Two-dimensional Ionization Structure of the Extranuclear Emission Line Nebulae
------------------------------------------------------------------------------

Typical values of <span style="font-variant:small-caps;">\[O$\;$iii\]</span>$\lambda$5007/H$\beta$ in the brightest regions of these galaxies are around 1–2.5. Assuming similar values for the fainter regions (where this ratio cannot be obtained due to the faintness of the lines), the <span style="font-variant:small-caps;">\[N$\;$ii\]</span>$\lambda$6584/H$\alpha$ ratio can be used to distinguish between LINER and H <span style="font-variant:small-caps;">ii</span>-like ionization [see the diagnostic diagrams of @vei87]. In general, the <span style="font-variant:small-caps;">\[N$\;$ii\]</span>$\lambda$6584/H$\alpha$ maps (see Fig. 1) show a complex ionization structure. According to this ratio, LINER-like emission is found in the extended extranuclear regions of three systems: , , and . In contrast, for , and this line ratio suggests a dominant H <span style="font-variant:small-caps;">ii</span>-like ionization. Similarly, the <span style="font-variant:small-caps;">\[S$\;$ii\]</span>$\lambda\lambda$6717,6731/H$\alpha$ and <span style="font-variant:small-caps;">\[O$\;$i\]</span>$\lambda$6300/H$\alpha$ ratios have also been obtained, although over a smaller field due to poorer signal (the maps are not shown, but individual values are presented in Figures 2 and 4).
As discussed by @dop95, these line ratios, and especially <span style="font-variant:small-caps;">\[O$\;$i\]</span>$\lambda$6300/H$\alpha$, are more reliable for distinguishing between <span style="font-variant:small-caps;">H$\;$ii</span>- and LINER-like ionization. It is interesting to note that in general these line ratios indicate an ionization state higher than that inferred from the <span style="font-variant:small-caps;">\[N$\;$ii\]</span>$\lambda$6584/H$\alpha$ ratio. This is shown in Figure \[cocicoci\], which plots \[<span style="font-variant:small-caps;">N$\;$ii</span>\]$\lambda$6584/H$\alpha$ vs. \[<span style="font-variant:small-caps;">S$\;$ii</span>\]$\lambda\lambda$6717,6731/H$\alpha$ for all the individual spectra/regions of the six systems of the sample (\[<span style="font-variant:small-caps;">S$\;$ii</span>\]/H$\alpha$ instead of <span style="font-variant:small-caps;">\[O$\;$i\]</span>/H$\alpha$ has been selected for this plot since it covers a larger 2D region). In this figure, the vertical and horizontal lines represent the frontier between <span style="font-variant:small-caps;">H$\;$ii</span> and LINER types of ionization. Many more spectra are classified as LINER according to the \[<span style="font-variant:small-caps;">S ii</span>\]$\lambda\lambda$6717,6731/H$\alpha$ ratio (i.e. points located to the right of the vertical line) than according to \[<span style="font-variant:small-caps;">N ii</span>\]$\lambda$6584/H$\alpha$ (i.e. points above the horizontal line). In addition, most of the data points are located either in the LINER-like region according to both line ratios (i.e. the upper right quadrant) or in the region where the \[<span style="font-variant:small-caps;">S ii</span>\]$\lambda\lambda$6717,6731/H$\alpha$ ratio is typical of LINERs but \[<span style="font-variant:small-caps;">N ii</span>\]$\lambda$6584/H$\alpha$ is typical of <span style="font-variant:small-caps;">H$\;$ii</span> regions (i.e. the bottom right quadrant).
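The quadrant bookkeeping described above can be made explicit with a small helper. The frontier values used here are assumed, illustrative H ii/LINER cuts in the spirit of the Veilleux & Osterbrock diagrams, not the exact boundaries of Figure 2:

```python
def classify(n2_ha, s2_ha, n2_cut=0.6, s2_cut=0.4):
    """Quadrant classification in the [N II]/Ha vs [S II]/Ha plane.
    n2_cut and s2_cut are illustrative frontier values (assumptions),
    not the boundaries actually adopted in the paper."""
    n2 = "LINER" if n2_ha > n2_cut else "HII"
    s2 = "LINER" if s2_ha > s2_cut else "HII"
    if n2 == s2:
        return n2                              # both ratios agree
    if s2 == "LINER":
        return "LINER([SII]) + HII([NII])"     # bottom right quadrant
    return "LINER([NII]) + HII([SII])"
```

With these cuts, a fiber with \[N ii\]/H$\alpha$ = 0.3 and \[S ii\]/H$\alpha$ = 0.6 lands in the bottom right quadrant, the mixed case the text singles out.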
For the sake of the following discussion we define circumnuclear regions as those confined within the central $\sim$ 3 arcsec (i.e. r $<$ 1.5 arcsec, covered by $\sim$ 6 fibers/spectra), and extranuclear regions as those that typically extend several kpc outwards of this region (i.e. r $>$ 1.5 arcsec). In Fig. 2 we represent the circumnuclear and extranuclear regions with solid and open symbols, respectively. Note that the circumnuclear region corresponds roughly to the areas studied previously via long-slit spectroscopy. The circumnuclear data in this plot are distributed in a more compact region, while the extranuclear data cover in general a wider range of these line ratios. In order to investigate the different ionization alternatives, the line ratios predicted by different mechanisms are shown in Figure \[cocicoci\]. Based on the apparent continuity between LINER and Seyfert spectra, along with the discovery of X-ray emission and the existence of a broad component in the H$\alpha$ emission line of some LINERs, photoionization by a power-law spectrum coming from an AGN has been proposed as a possible ionizing mechanism [e.g. @ho93; @gro04]. Although some of these models could in principle explain the line ratios measured in the circumnuclear regions, none of the nuclei of the sample is clearly located in the region identified by these models (see Figure \[cocicoci\], where one of these models is shown as an example). In general, the circumnuclear data show either an <span style="font-variant:small-caps;">H$\;$ii</span>-like spectrum (indicative of intense star formation) or a spectrum of composite nature (LINER+<span style="font-variant:small-caps;">H$\;$ii</span>). This agrees with the mid-infrared classification of these objects [@tan99; @rig99], which includes them in the *starburst* group on the basis of the line-to-continuum ratio of the PAH feature at $7.7$ $\mu$m.
Regarding the extranuclear regions, the AGN models [@ho93; @gro04] are not, in general, likely to be representative of these low-density ($n_e < 10^{3}$ cm$^{-3}$) regions, as also indicated by the relatively small fraction of data points located within the area defined by these models. However, it is interesting to note that IRAS 17208$-$0014 may represent an exception in this context (see discussion in 4.2). In short, although we cannot rule out a possible contribution of AGN-like ionization in some regions, this mechanism clearly cannot explain in general the observed line ratios represented in Figure 2. Ionization by young stars is an obvious alternative mechanism to explain the line ratios. @bar00 have shown that starburst models during the Wolf-Rayet-dominated phase can explain the spectra of some LINERs, but only under very restricted conditions. In Figure \[cocicoci\] we have plotted, as a red line, the Barth & Shields model that best fits the locus of our data. This corresponds to an instantaneous burst model of 4 Myr, $Z = Z_\odot$, an Initial Mass Function (IMF) power-law slope of $-$2.35, $M_{up} = 100$ M$_\odot$, and an interstellar medium characterized by an electron density ($n_e$) of 10$^3$ cm$^{-3}$. These conditions are very unlikely to be representative of the extranuclear regions of all these ULIRGs, especially taking into account the relatively young and short-lived population involved (i.e. for clusters younger than 3 Myr or older than 6 Myr, and for models with a constant star formation rate, the softer ionizing continuum results in an emission spectrum more typical of <span style="font-variant:small-caps;">H$\;$ii</span> regions). Furthermore, the fraction of ULIRGs with hints of WR signatures in their spectrum (i.e. the broad optical feature at 4660 Å) is less than 10% [@arm89]. The most likely mechanism to explain the observed ionization in the extended, extranuclear regions is the presence of large-scale shocks.
Figure 2 presents the predicted line ratios for a representative set of shock models [@dop95]. These ratios agree with the range of observed values for shock velocities of 150 km s$^{-1}$ to 500 km s$^{-1}$ in either a neutral (continuous lines) or a pre-ionized medium (dashed lines). Moreover, such high-speed flows are routinely detected in the extranuclear regions of ULIRGs, as shown by detailed two-dimensional kinematic studies [@col05]. Velocity fields inconsistent in general with ordered motions, and with typical peak-to-peak velocities of 200 to 400 km s$^{-1}$, are detected in the tidal tails and extranuclear regions of ULIRGs on scales of a few to several kpc away from the nucleus, almost independently of the dynamical phase of the merger [see @col04; @col05 and references therein]. Moreover, the presence of highly turbulent gas, as identified by large velocity dispersions of 70 to 200 km s$^{-1}$ (Colina et al. 2005), further supports the scenario of fast shocks as the main ionization mechanism in these regions. In summary, the ionization of the extranuclear regions in the ULIRGs studied here can hardly be explained by accretion-powered AGN or by young starbursts, but is consistent with fast, large-scale shocks.

Excitation and Velocity Dispersions: Further Evidence for Ionization by Shocks
-------------------------------------------------------------------------------

The positive correlation between velocity dispersion and ionization found by some authors using circumnuclear (slit) spectra of ULIRGs has been considered further evidence supporting the presence of shocks in these objects [@arm89; @dop95; @vei95]. The present study also supports the previously observed correlation. As shown in Figure 3, the <span style="font-variant:small-caps;">\[S$\;$ii\]</span>$\lambda\lambda$6717,6731/H$\alpha$ and velocity dispersion values derived from the integrated spectra – i.e.
combining the individual spectra for each object – are consistent with previous results [@arm89]. However, these spectra are not necessarily representative of the extranuclear regions. In Figure \[cociydisp\] we present similar plots, but now each data point represents the value for a specific spectrum (fiber), i.e. a different position in the extranuclear nebula, excluding the circumnuclear region, for each individual galaxy and using three different line ratios (<span style="font-variant:small-caps;">\[N$\;$ii\]</span>$\lambda$6584/H$\alpha$, <span style="font-variant:small-caps;">\[S$\;$ii\]</span>$\lambda\lambda$6717,6731/H$\alpha$, <span style="font-variant:small-caps;">\[O$\;$i\]</span>$\lambda$6300/H$\alpha$). The dashed horizontal lines indicate the borderline between <span style="font-variant:small-caps;">H$\;$ii</span> and LINER ionization. In the top panels the data for the individual galaxies (except IRAS 17208$-$0014, see below) are combined. These panels indicate that, while the correlation of velocity dispersion with line ratio is less well defined for <span style="font-variant:small-caps;">\[N$\;$ii\]</span>$\lambda$6584/H$\alpha$, for the other two line ratios, and especially <span style="font-variant:small-caps;">\[O$\;$i\]</span>$\lambda$6300/H$\alpha$ (i.e. the most reliable diagnostic ratio for detecting ionization by shocks according to the models of Dopita & Sutherland 1995), the correlation is clear. The fact that the extranuclear data of these five systems follow a well-defined relation between line ratio and velocity dispersion reinforces the idea that shocks are also the dominant ionization source at large scales ($>$ 2-3 kpc). Individually, two systems, IRAS 12112+0305 and IRAS 14348$-$1447, show a clear correlation in all three line ratios. For three of the remaining systems, IRAS 08572+3915, IRAS 15206+3342 and IRAS 15250+3609, the range in velocity dispersion is too small for the correlation to show.
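A trend of this kind can be quantified, for instance, with a rank correlation. The following is a sketch on synthetic fiber values, not the paper's actual data or analysis:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic "fiber" measurements with a shock-like trend plus scatter;
# the numbers are purely illustrative (assumed, not from the paper).
rng = np.random.default_rng(0)
sigma = rng.uniform(70.0, 200.0, 60)                   # velocity dispersion, km/s
oi_ha = 0.02 + 0.001 * sigma + rng.normal(0.0, 0.01, 60)  # [O I]/Halpha

# Spearman rank correlation: robust to the (unknown) functional form
rho, pval = spearmanr(sigma, oi_ha)
```

A rank statistic is a reasonable choice here because the shock models predict a monotonic, but not necessarily linear, relation between excitation and velocity dispersion.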
Finally, IRAS 17208$-$0014 does not follow the mean behavior observed in the other systems, showing a wider range of line ratio values. This galaxy has been studied in detail by @arr03 and @col05, and it is the only clear case in this sample showing rotation on scales of several kpc. This may be an indication that this system is in a different (probably more evolved) dynamical phase and/or has had a different merging history. In any event, the fact that the gas kinematics indicates a more relaxed and virialized system suggests that shocks are not the dominant ionization mechanism in this galaxy and, therefore, the above-mentioned correlation should not be expected. For this galaxy the origin of the LINER-like ionization in the extranuclear region should be different (note that the three line ratios shown in Fig. 4 are consistent with LINER-like ionization). A hint that this is the case comes from the detection of an extended ($\sim$ 4 kpc) hard X-ray nebula in this galaxy [@pta03], which would provide an ionizing spectrum similar to that of an AGN. This may explain the fact that the excitation of this object is higher than that of the rest of the galaxies and similar to that expected from a low-luminosity AGN [Fig. 2; @ho93].

Origin of the Shocks in the Circumnuclear and Extranuclear Regions: Superwinds and Merger Induced Flows
-------------------------------------------------------------------------------------------------------

In the previous sections shocks have been identified as the main ionization mechanism in the extended, extranuclear ionized regions. Moreover, the detection of a positive correlation between the ionization state of the gas, as best indicated by the shock tracer <span style="font-variant:small-caps;">\[O$\;$i\]</span>$\lambda$6300/H$\alpha$ ratio, and the velocity dispersion of the gas suggests a direct causal relation between the LINER ionization and the presence of shocks.
What is the origin of the shocks in the circumnuclear regions, and in the extranuclear regions extending to distances of up to 10-15 kpc from the nucleus? Some authors have found evidence supporting the existence of so-called superwinds, generated by the combined effect of massive stellar winds and supernova explosions in intense nuclear starbursts [@hec90]. These superwinds, identified by the presence of kinematically distinct components in the profiles of the emission [@hec90] or absorption lines [@mar05; @rup05a; @rup05b], generate shocks in the circumnuclear regions as the stellar winds move through the interstellar medium. Recent studies of a large sample of LIRGs and ULIRGs conclude that the presence of superwinds must be an almost universal phenomenon in the circumnuclear regions of ULIRGs (typical angular sizes of about 1$^{\prime\prime}$, or 1 to 2 kpc depending on redshift), as kinematically distinct components of the neutral interstellar NaD lines, blueshifted by a median velocity of 350 km s$^{-1}$, are detected in at least 70% of ULIRGs [@mar05; @rup05a; @rup05b]. These velocity components are also detected in our integral field spectra for some systems. Out of the six galaxies in the sample, our data show the presence of double H$\alpha$ line profiles in the circumnuclear regions of the northern and southern nuclei of the interacting pairs IRAS 12112+0305 and IRAS 14348$-$1447 (see Figure 5). These secondary velocity components are blueshifted with respect to the systemic velocity by 150 km s$^{-1}$ and 300 km s$^{-1}$, respectively. In addition to these galaxies, similar signatures have been identified in the circumnuclear regions of more evolved ULIRGs such as IRAS 15250+3609 ($V-V_{sys} =-170$ km s$^{-1}$, Monreal-Ibero 2004) and Arp 220 [peak-to-peak velocity of 1000 km s$^{-1}$, @arr01]. However, our IFS data show that the presence of double components, when detected, is always confined to the nuclear and circumnuclear regions, i.e.
distances of 1 to 2 kpc from the nucleus. The lack of detection of double components in the extranuclear regions, at distances of several kpc from the nucleus, can be interpreted as indicating that the high-velocity outflows associated with the nuclear superwinds are not present at these distances, or that they are of much lower amplitude (less than 100 km s$^{-1}$) and therefore not detected as kinematically distinct components at the present spectral resolution. On the other hand, the complex two-dimensional velocity field and velocity dispersion structure of the extranuclear ionized regions of ULIRGs [@col04; @col05] shows in general large velocity gradients, with peak-to-peak velocities of a few to several hundred km s$^{-1}$, associated with tidal tails and extranuclear regions at distances of several kpc from the massive circumnuclear starbursts. Moreover, the largest values of the velocity dispersion in many ULIRGs (up to 200 km s$^{-1}$) are detected not in the nucleus but in the extranuclear regions [@col05], implying the presence of an extended, highly turbulent medium on kpc-size scales. As shown by specific models of the nearest ULIRG, Arp 220, tidally induced flows lead to relative gas velocities that are much larger than the original impact velocities of the galaxies [@mcd03], and therefore high-speed flows of several hundred km s$^{-1}$ are a natural consequence of the merging process associated with ULIRGs. The presence of these tidally induced, high-velocity flows and of a highly turbulent medium will generate shocks that in turn will heat and ionize the interstellar medium, producing the observed LINER-type spectra, as in Arp 220 [@mcd03; @col04]. In short, the lack of superwind signatures and the kinematic properties of the gas in the extranuclear regions support the idea that merger-induced flows are the origin of the fast shocks producing the LINER-like excitation in these extended regions.
CONCLUSIONS
===========

Integral Field Spectroscopy with the INTEGRAL fiber system has been used to analyze the circumnuclear and extranuclear ionization structure of six low-$z$ ULIRGs. The main results can be summarized as follows:

1. The two-dimensional ionization characteristics of the extranuclear regions of these galaxies correspond to those of LINERs. This is clearly indicated by the <span style="font-variant:small-caps;">\[S$\;$ii\]</span>$\lambda\lambda$6717,6731/H$\alpha$ and especially the <span style="font-variant:small-caps;">\[O$\;$i\]</span>$\lambda$6300/H$\alpha$ line ratios, which make it possible to discriminate reliably between <span style="font-variant:small-caps;">H$\;$ii</span> and LINER ionization in low-excitation conditions (i.e. <span style="font-variant:small-caps;">\[O$\;$iii\]</span>$\lambda$5007/H$\beta$ $\leq$ 2.5).

2. The observed LINER-type line ratios in the extranuclear regions are in general better explained by ionization by fast shocks, with velocities of 150 to 500 km s$^{-1}$, than by AGN or starburst photoionization. Further evidence pointing to shocks as the dominant source of ionization comes from a positive correlation between the ionization state and the velocity dispersion of the ionized gas. The present two-dimensional data show that this correlation holds especially if the <span style="font-variant:small-caps;">\[O$\;$i\]</span>$\lambda$6300/H$\alpha$ line ratio is used.

3. Although signatures of superwinds are observed in the circumnuclear regions of some systems, no kinematic evidence for such a mechanism is found in the extranuclear regions. Alternatively, the shocks that produce the observed LINER-type ionization in the extranuclear regions could be due to a different phenomenon.
Taking into account the general 2D kinematic characteristics of the extranuclear regions in these objects, which indicate disordered motions with peak-to-peak velocities of about 400 km s$^{-1}$ and velocity dispersions of up to 200 km s$^{-1}$, the shocks are most likely caused by tidally induced large-scale flows produced during the merging process.

4. The galaxy IRAS 17208$-$0014 presents a peculiar kinematical and ionization structure. For this galaxy the origin of the LINER-type ionization in the extranuclear region is most likely explained by the recently detected extended (4 kpc) hard X-ray emission, which would produce an ionizing spectrum similar to that of an AGN. This may explain the fact that the excitation of this object is higher than that of the rest of the galaxies, and compatible with that expected in low-luminosity AGNs.

AMI acknowledges support from the Euro3D Research Training Network, funded by the EC (HPRN-CT-2002-00305). Financial support was provided by the Spanish Ministry for Education and Science through grant AYA2002-01055. Work based on observations with the William Herschel Telescope operated on the island of La Palma by the ING in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.

Armus, L., Heckman, T. M., & Miley, G. K. 1989, , 347, 727
Arribas, S., & Colina, L. 2003, , 591, 791
Arribas, S., Colina, L., & Clements, D. 2001, , 560, 160
Arribas, S., et al. 1998, , 3355, 821
Baan, W. A., Salzer, J. J., & Lewinter, R. D. 1998, , 509, 633
Barth, A. J., & Shields, J. C. 2000, , 112, 753
Bingham, R. G., Gellatly, D. W., Jenkins, C. R., & Worswick, S. P. 1994, , 2198, 56
Borne, K. D., Bushouse, H., Lucas, R. A., & Colina, L. 2000, , 529, 77
Clements, D. L., Sutherland, W. J., McMahon, R. G., & Saunders, W. 1996, , 279, 477
Colina, L., Arribas, S., & Monreal-Ibero, A. 2005, , 621, 725
Colina, L., Arribas, S., & Clements, D.
2004, , 602, 181
Colina, L., Arribas, S., Borne, K. D., & Monreal, A. 2000, , 533, L9
Dopita, M. A., & Sutherland, R. S. 1995, , 455, 468
Frayer, D. T., Armus, L., Scoville, N. Z., Blain, A. W., Reddy, N. A., Ivison, R. J., & Smail, I. 2003, , 126, 73
Genzel, R., & Cesarsky, C. J. 2000, , 38, 761
Genzel, R., Tacconi, L. J., Rigopoulou, D., Lutz, D., & Tecza, M. 2001, , 563, 527
Groves, B. A., Dopita, M. A., & Sutherland, R. S. 2004, , 153, 9
Heckman, T. M., Armus, L., & Miley, G. K. 1990, , 74, 833
Heckman, T. M., Lehnert, M. D., Strickland, D. K., & Armus, L. 2000, , 129, 493
Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1993, , 417, 63
Kim, D.-C., & Sanders, D. B. 1998, , 119, 41
Kim, D.-C., Sanders, D. B., Veilleux, S., Mazzarella, J. M., & Soifer, B. T. 1995, , 98, 129
Le Floc’h, E., et al. 2004, , 154, 170
Lehnert, M. D., & Heckman, T. M. 1996, , 462, 651
Martin, C. L. 2005, , 621, 227
McDowell, J. C., et al. 2003, , 591, 154
Monreal-Ibero, A. 2004, Ph.D. Thesis, University of La Laguna
Moshir, M., et al. 1993, VizieR Online Data Catalog, 2156, 0
Ptak, A., Heckman, T., Levenson, N. A., Weaver, K., & Strickland, D. 2003, , 592, 782
Rigopoulou, D., Spoon, H. W. W., Genzel, R., Lutz, D., Moorwood, A. F. M., & Tran, Q. D. 1999, , 118, 2625
Rupke, D. S., Veilleux, S., & Sanders, D. B. 2002, , 570, 588
Rupke, D. S., Veilleux, S., & Sanders, D. B. 2005, astro-ph/0506611
Rupke, D. S., Veilleux, S., & Sanders, D. B. 2005, astro-ph/0506610
Sanders, D. B., & Mirabel, I. F. 1996, , 34, 749
Sanders, D. B., Soifer, B. T., Elias, J. H., Madore, B. F., Matthews, K., Neugebauer, G., & Scoville, N. Z. 1988, , 325, 74
Scoville, N. Z., et al. 2000, , 119, 991
Surace, J. A., & Sanders, D. B. 2000a, , 120, 604
Surace, J. A., Sanders, D. B., & Evans, A. S. 2000b, , 529, 170
Tacconi, L. J., Genzel, R., Lutz, D., Rigopoulou, D., Baker, A. J., Iserlohe, C., & Tecza, M. 2002, , 580, 73
Taniguchi, Y., Yoshino, A., Ohyama, Y., & Nishiura, S. 1999, , 514, 660
Veilleux, S.
& Osterbrock, D. E. 1987, , 63, 295
Veilleux, S., Kim, D.-C., & Sanders, D. B. 1999, , 522, 113
Veilleux, S., Cecil, G., & Bland-Hawthorn, J. 2005, astro-ph/0504435
Veilleux, S., Kim, D.-C., Sanders, D. B., Mazzarella, J. M., & Soifer, B. T. 1995, , 98, 171

[cccccccc]{}
IRAS 08572+3915 & 0.058 & 1.20 & 12.15 & 0.32 & 1.70 & 7.43 & 4.59\
IRAS 12112+0305 & 0.073 & 1.52 & 12.30 & 0.11 & 0.51 & 8.50 & 9.98\
IRAS 14348$-$1447 & 0.083 & 1.72 & 12.31 & 0.14 & 0.49 & 6.87 & 7.07\
IRAS 15206+3342 & 0.125 & 2.60 & 12.18 & 0.08 & 0.35 & 1.77 & 1.89\
IRAS 15250+3609 & 0.054 & 1.12 & 12.03 & 0.20 & 1.32 & 7.29 & 5.91\
IRAS 17208$-$0014 & 0.043 & 0.89 & 12.40 & 0.19 & 1.66 & 3.11 & 3.49\

[cccccccc]{}
IRAS 08572+3915 & 09:00:25.4 & +39:03:54.1 & 5200$-$8100 & 1800$\times$6 & 1.093 & 0.0 & 1998 Apr 01\
IRAS 12112+0305 & 12:13:46.0 & +02:48:41.0 & 4900$-$7900 & 1800$\times$5 & 1.178 & 0.0 & 1998 Apr 02\
 & 12:13:46.3 & +02:48:29.7 & 5100$-$8100 & 1500$\times$4 & 1.143 & 180.0 & 2001 Apr 14\
IRAS 14348$-$1447 & 14:37:38.4 & $-$15:00:22.8 & 5200$-$8200 & 1800$\times$4 & 1.438 & 0.0 & 1998 Apr 01\
IRAS 15206+3342 & 15:22:38.0 & +33:31:36.6 & 5000$-$8100 & 1800$\times$4 & 1.095 & 0.0 & 1998 Apr 03\
IRAS 15250+3609 & 15:26:59.4 & +35:58:37.6 & 4900$-$7900 & 1800$\times$5 & 1.031 & 0.0 & 1998 Apr 02\
IRAS 17208$-$0014 & 17:23:22.0 & $-$00:17:00.1 & 5000$-$8100 & 1800$\times$4 & 1.243 & 180.0 & 1998 Apr 03\

[^1]: http://www.starlink.rl.ac.uk/
---
abstract: 'We consider the concept of a rotating reference frame with the axis of rotation at each point and the applicability of this concept to different areas of physics. The transformation for the transition from the resting to the rotating frame is assumed to be non-Galilean. This transformation must contain a constant with the dimension of time. We analyze different possibilities of experimentally testing this constant in optics, as the most suitable field for measurements at present, and also in general relativity and quantum mechanics.'
author:
- 'Boris V. Gisin'
title: Optical measurement of a fundamental constant with the dimension of time
---

Introduction
============

The concept of a “point rotation reference frame”, i.e., a frame with the axis of rotation at every point, arises in optics. However, this concept is also applicable to other areas of physics. An example of such a frame is the optical indicatrix (index ellipsoid) [@Born]. Any rotating field, including spinor and gravitational fields, is an object of the point rotation. The coordinates of the frame are the angle, the time and the axis of rotation. The radial coordinate is not used in manipulations with the frames. Centrifugal forces do not exist in such frames. Optically, a rotating half-wave plate is equivalent to a resting electrooptical crystal with a rotating indicatrix [@Bur] but, physically, they are different because the plate has only one axis of rotation. The frames are not compatible with Cartesian frames. The main question in such a concept is: what is the transformation for point rotation reference frames? Is the transformation Galilean or not? From the viewpoint of contemporary physics a non-Galilean transformation, with different time for frames rotating at different frequencies, is much preferable to the Galilean one, where time is the same.
Moreover, such a transformation must contain a constant with the dimension of time, similarly to the Lorentz transformation and the speed of light. This constant should define the limits of applicability of basic physical laws. In contrast to mechanics, where the relativity principle is used to deduce the transformation for rectilinear motion, such a general principle does not exist for the point rotation. Therefore, this transformation cannot be explicitly determined. It is known that an electric field, rotating perpendicular to the optical axis of a 3-fold electrooptic crystal, causes rotation of the optical indicatrix at a frequency equal to half the frequency of the field. It means that the optical indicatrix of such a crystal possesses some properties of a two-component spinor. The sense of rotation of a circularly polarized optical wave propagating through this crystal is reversed, and its frequency is shifted, if the amplitude of the applied electric field is equal to the half-wave value. The device for such a shifting is the electrooptical single-sideband modulator [@pat]. The use of the transformation makes the description of the light propagation in the electrooptical single-sideband modulator simpler and more comprehensible. To describe the phenomenon, we transit to a rotating reference frame associated with the axes of the indicatrix. As a result of such a transition, the frequency of the wave is shifted by half the frequency of the modulating electric field. This shift is doubled at the modulator output due to the polarization reversal and the transition back to the initial reference frame [@pat]. In this paper we study the general form of the two-dimensional non-Galilean transformation and the possibility of its experimental verification. We emphasize that an experiment always involves the direct and reverse transformations, because an observer rotating at each point does not exist.
The transformation
==================

The general form of the normalized non-Galilean transformation may be written as follows: $$\tilde{\varphi}=\varphi -\Omega t,\text{ \ }\tilde{t}=-\tau \varphi +t, \label{trn1}$$ where the tilde corresponds to the rotating frame, $\tilde{\varphi},\varphi$ and $\tilde{t},t$ are the normalized angle and time, $\Omega$ is the frequency of the rotating frame (the modulating frequency is $2\Omega$), and $\tau (\Omega )$ is a parameter with the dimension of time. The reverse transformation follows from (\[trn1\]): $$(1-\Omega \tau )\varphi =\tilde{\varphi}+\Omega \tilde{t},\text{ \ }(1-\Omega \tau )t=\tau \tilde{\varphi}+\tilde{t}. \label{tr1}$$ Consider a plane circularly polarized light wave propagating through the modulator, and transit into the rotating frame. The optical frequency in this frame is $$\tilde{\omega}=\frac{\omega -\Omega }{-\tau \omega +1}, \label{f1}$$ where the frequency in the resting and rotating frame is defined as $\omega =\varphi /t$ and $\tilde{\omega}=\tilde{\varphi}/\tilde{t}$, respectively. If the half-wave condition is fulfilled, the reversal of rotation occurs at the modulator output. For the circularly polarized wave the negative sign of the frequency corresponds to the opposite sense of rotation. Making the transition into the resting frame and changing the sign of $\tilde{\omega}$, we obtain the output frequency as a function of $\omega$ and $\Omega$: $$\omega ^{\prime }=\frac{-\omega (1+\tau \Omega )+2\Omega }{-2\tau \omega +\tau \Omega +1}. \label{fr1}$$ In fact, in this approach we consider the single-sideband modulator as a black box. This box changes the sense of rotation of the circularly polarized light wave and shifts its frequency.

Optics
======

For the evaluation of the parameter $\tau$ we use the results of optical measurements from the work [@jpc].
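Returning to the transformation formulas above: the algebra is easy to verify mechanically. The following sketch checks, in exact rational arithmetic and for arbitrary illustrative values, that (\[tr1\]) inverts (\[trn1\]), that the rotating-frame frequency obeys (\[f1\]), and that reversing the sign of $\tilde{\omega}$ and transforming back reproduces the closed form (\[fr1\]):

```python
from fractions import Fraction as F

# Arbitrary illustrative values (exact rationals, so equalities are exact)
tau, Omega = F(1, 1000), F(3)
phi, t = F(5), F(2)

phi_t = phi - Omega * t        # eq. (trn1), angle in the rotating frame
t_t = -tau * phi + t           # eq. (trn1), time in the rotating frame

omega = phi / t                # frequency = angle / time, resting frame
omega_t = phi_t / t_t          # rotating frame; should match eq. (f1)

# Output frequency: reverse the sign of omega_t (polarization reversal)
# and transform back to the resting frame using eq. (tr1) ...
via_frame = (-omega_t + Omega) / (tau * (-omega_t) + 1)
# ... which should reproduce the closed form of eq. (fr1)
direct = (-omega * (1 + tau * Omega) + 2 * Omega) / (
    -2 * tau * omega + tau * Omega + 1)
```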
In this work the principle of single-sideband modulation was checked, and the Galilean transformation was used for the theoretical description of the process. Circularly polarized light from a Helium-Neon laser was modulated by a Lithium Niobate single-sideband modulator at the frequency 110 MHz. The experiment showed an asymmetry of the frequency shift for the two opposite polarizations. The extra shift was of the order of a few MHz. W. H. Steier, one of the authors of the work [@jpc], kindly answered my question about the origin of this asymmetry: “Your are correct about the apparent asymmetry. We never noticed it earlier. I do not know if this is a property of the scanning mirror interferometer. It has been many many years since we did that work and all of the equipment has now been replaced. It would not be possible for us to redo any work or start the experiments again”. Possibly, the origin of this extra shift is a defect of the equipment. In any case this shift can be used for an approximate estimate of the upper boundary of the parameter $\tau$. From this an important conclusion follows: the parameter $\tau$ is very small. For small $\tau$ and $|\Omega |\ll |\omega |$ the output frequency (\[fr1\]) may be written as $$\omega ^{\prime }\approx -\omega +2\Omega -2\tau \omega ^{2}. \label{frs}$$ The magnitude of the extra shift equals $2\tau \omega ^{2}$. The exact form of the dependence $\tau (\Omega )$ is unknown. Therefore, assume that $\tau$ may be expanded in a power series in $\Omega$: $$\tau =\tau _{0}+\tau _{1}^{2}\Omega +\tau _{2}^{3}\Omega ^{2}+\ldots \label{tau}$$ In such a form all the coefficients $\tau _{n}$ have the dimension of time. Since $\tau$ is very small and, usually, $\Omega \ll \omega$, we can restrict ourselves to the first non-zero term in the expansion (\[tau\]).

### The case $\tau _{0}\neq 0$

This case is the most favorable for optical measurements from the viewpoint of simplicity.
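The size of the extra shift can also be checked numerically against the exact expression (\[fr1\]); the values below are order-of-magnitude stand-ins (a He-Ne optical angular frequency, half of a $\sim$110 MHz modulating frequency, and $\tau$ near the bound quoted below), not measured quantities:

```python
def omega_prime(omega, Omega, tau):
    """Exact output frequency of eq. (fr1)."""
    return (-omega * (1 + tau * Omega) + 2 * Omega) / (
        -2 * tau * omega + tau * Omega + 1)

# Order-of-magnitude stand-ins (assumed values for illustration only)
omega, Omega, tau = 3.0e15, 3.5e8, 1.0e-23

# Deviation of the exact result from the Galilean prediction -omega + 2*Omega
extra = omega_prime(omega, Omega, tau) - (-omega + 2.0 * Omega)
ratio = abs(extra) / (2.0 * tau * omega**2)   # ~ 1 when |Omega| << |omega|
```

For these values $\tau\omega \sim 3\cdot 10^{-8}$, so the expansion is deep in its regime of validity and the extra shift magnitude tracks $2\tau\omega^{2}$ very closely.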
The constant $\tau _{0}$ may be called the “quantum of time”. The value of the extra shift defines the upper boundary of the quantum of time, $\sim 10^{-23}\sec $. That corresponds to a distance of the order of the proton size. An experiment similar to that of [@jpc] would provide an excellent opportunity to measure the parameter $\tau _{0}$. The accuracy of the measurement can be increased by several orders of magnitude by using modern technology. The advent of laser cooling has underpinned the development of cold $Cs$ fountain clocks, which now achieve frequency uncertainties of approximately $5\cdot 10^{-16}$ and even less [@NPL]. These can be used in the measurement. The best accuracy may be achieved in a ring schematic, similar to measurements of the anomalous magnetic moment of the electron. ### The case $\tau _{0}=0$ If $\tau _{0}$ is exactly equal to zero, the accuracy should be increased by a factor of $1/(\tau _{1}\Omega )$. According to the results of [@jpc], the upper boundary of $\tau _{1}$ is $\sim 10^{-16}\sec $. Using the optical range for the modulation is connected with the problem of phase matching [@pat]. Below we briefly summarize the results of the analysis and the possibilities of measurements in other areas of physics. General relativity ================== Consider the case $\tau _{0}=0$ in application to general relativity. We now restrict ourselves to the second term of $\tau (\Omega )$ and consider (\[trn1\]) as the Lorentz transformation. Usually this name refers to rectilinear motion in mechanics; here the roles of the coordinate and velocity are played by the angle and frequency, respectively.
After the normalization $$(\tilde{\varphi},\tilde{t})\rightarrow \frac{(\tilde{\varphi},\tilde{t})}{\sqrt{1+\tau _{1}\Omega }},\text{ \ }(\varphi ,t)\rightarrow (\varphi ,t)\sqrt{1-\tau _{1}\Omega }, \label{norm}$$ we obtain$$\tilde{\varphi}=\frac{\varphi -\Omega t}{\sqrt{1-\tau _{1}^{2}\Omega ^{2}}},\text{ \ }\tilde{t}=\frac{-\tau _{1}^{2}\Omega \varphi +t}{\sqrt{1-\tau _{1}^{2}\Omega ^{2}}}. \label{rL}$$Analogously to mechanics, $\tau _{1}$ can be regarded as the minimum possible time interval and $1/\tau _{1}$ as the maximum possible frequency. The form $(\tau _{1}^{2}\varphi ^{2}-t^{2})$ is invariant under the transformation (\[rL\]). Despite the fact that the Cartesian reference frames are not compatible with the point rotation reference frames, there exists a solution of Einstein’s equation invariant under the transformation (\[rL\]). Consider an exact solution with cylindrical symmetry [@Mar]$$ds^{2}=Ar^{a+b}dr^{2}+r^{2}d\varphi ^{2}+r^{b}dz^{2}+Cr^{a}dt^{2}, \label{mM}$$where $A,C,a,b$ are constants. This solution is invariant under the transformation (\[rL\]) provided $a=2$. Moreover, at $a=b=2$, after a normalization of $r$ and $t$, the metric can be reduced to the form $$ds^{2}=(1+\frac{1}{L}r)[dr^{2}+l^{2}d\varphi ^{2}+dz^{2}-c^{2}dt^{2}], \label{mp}$$where $l\equiv c\tau _{1}$ and $L$ are constants with the dimension of length. At the “center”, $r=0$, this metric looks like the “Euclidean metric” for the point rotations. Non-stationary solutions of Einstein’s equation invariant under the transformation (\[rL\]) also exist. The existence of such metrics opens the way for applying the concept of the point rotation reference frames to general relativity. In this sense suitable solutions of Einstein’s equation are possible, but searching for consequences of such solutions applicable to measurements of $\tau _{1}$ or $l$ is not a simple problem.
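As a numerical illustration (a sketch with arbitrary values; the coefficient $\tau _{1}^{2}\Omega $ multiplying $\varphi $ in $\tilde{t}$ follows from substituting $\tau =\tau _{1}^{2}\Omega $ into (\[trn1\])), the invariance of the form $\tau _{1}^{2}\varphi ^{2}-t^{2}$ can be checked directly:

```python
import numpy as np

# Boost-like point-rotation transformation with tau = tau1**2 * Omega;
# the quadratic form tau1**2*phi**2 - t**2 must be preserved.
tau1, Omega = 0.3, 0.7                  # illustrative; need |tau1*Omega| < 1
gamma = 1/np.sqrt(1 - tau1**2*Omega**2)

phi, t = 1.7, -0.4                      # an arbitrary event
phi_r = gamma*(phi - Omega*t)
t_r = gamma*(t - tau1**2*Omega*phi)

assert abs((tau1**2*phi_r**2 - t_r**2) - (tau1**2*phi**2 - t**2)) < 1e-12
```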
Quantum mechanics ================= Initially, quantum mechanics was considered as the most suitable area of physics for the measurement of the parameter $\tau $. However, the hope to find in quantum mechanics a consequence of the transformation (\[trn1\]) applicable to measurements proved to be illusory. Quantum states in rotating magnetic or electromagnetic fields are not stationary. The problem becomes stationary upon the transition to the rotating frame. The main role in this transition is played by the phase transformation of the spinor, which is defined by the first equation in (\[trn1\]). The second equation, containing the parameter $\tau ,$ plays a minor role. The transition was used for finding a new class of exact localized solutions of the Dirac equation in the rotating electromagnetic field [@arx]. However, further study showed that the parameter $\tau $ vanishes from the final results due to the reverse transition into the resting frame. This also allows us to conclude that the non-Galilean transformation is not related to the problem of the anomalous magnetic moment, at least for the above exact localized solutions. Conclusion ========== We have considered the concept of the point rotating frame and the non-Galilean transformation for such frames. The concept is applicable to optics, general relativity and quantum mechanics. The parameter $\tau $ with the dimension of time is a distinguishing feature of the non-Galilean transformation. This parameter is very small. Presently, optics can be considered as the main area of physics for measurements of the parameter $\tau $. The nonzero term $\tau _{0}$ in the expansion (\[tau\]) is the most favorable case for optical measurements. However, in the case $\tau _{0}=0$, measurements are also possible. The experiment would be similar to [@jpc], but on the basis of modern technology. The best accuracy may be achieved in a ring schematic, similar to measurements of the anomalous magnetic moment of the electron.
This schematic, regardless of the results of the experiments (positive or negative), can also be used for high-precision manipulations with the laser frequency in a variety of applications, in particular, for standards of length and time. A fundamental constant with the dimension of time ought to be on the list of basic physical constants; however, such a constant is absent from this list. The parameter $\tau $ contains this constant, and it should be a basic physical constant because it is determined by such a basic physical process as rotation. The investigation of this problem is very important, since the constant defines the limits of applicability of the basic physical laws at very small intervals of time and length. Moreover, this constant might determine the minimum possible values of such intervals, as well as the minimum possible value of energy. The above opinion of Prof. W. H. Steier about the origin of the asymmetry is an argument against funding the high-precision measurements. Nevertheless, the problem of “to be or not to be” (in the sense of Galilean or not) must be solved. [9]{} M. Born, E. Wolf, *Principles of Optics*, Pergamon Press (Oxford, London, 1963). C. F. Buhrer, D. H. Baird, and E. M. Conwell, *Optical Frequency Shifting by Electro-Optic Effect*, Appl. Phys. Lett. **1**, 46-49 (1962). D. H. Baird and C. F. Buhrer, *Single-Sideband Light Modulator*, U.S. Patent 3204104 (1965). J. P. Campbell and W. H. Steier, *Rotating-Waveplate Optical-Frequency Shifting in Lithium Niobate*, IEEE J. Quantum Electron. **QE-7,** 450-457 (1971). P. Gill, *When Should We Change the Definition of the Second?* Phil. Trans. R. Soc. **A** **369** no.1953, 4109-4130 (2011). L. Marder, *Gravitational Waves in General Relativity. I. Cylindrical Waves*, Proc. R. Soc. Lond. **A 244** no.1239, 524-537 (1958). B. V.
Gisin, *Magnetic moment of relativistic fermions*, arXiv: 1105.3832v1 \[math-ph\] 17 May 2011; *Singular states of relativistic fermions in the field of a circularly polarized electromagnetic wave and constant magnetic field*, arXiv: 1203.2600v1 \[physics.gen-ph\] 9 Mar 2012.
--- author: - | Haiye Huo\ Department of Mathematics, Nanchang University,\ Nanchang 330031, China\ \ Email: hyhuo@ncu.edu.cn title: A new convolution theorem associated with the linear canonical transform --- *Abstract*. In this paper, we first introduce a new notion of canonical convolution operator, and show that it satisfies the commutative, associative, and distributive properties, which may be quite useful in signal processing. Moreover, it is proved that the generalized convolution theorem and the generalized Young’s inequality also hold for the new canonical convolution operator associated with the LCT. Finally, we investigate the sufficient and necessary conditions for solving a class of convolution equations associated with the LCT. *Keywords.* Convolution operator; Convolution theorem; Linear canonical transform; Young’s inequality; Convolution equations Introduction {#sec I} ============ The linear canonical transform (LCT) [@BKO1997; @Bernardo1996; @MQ1971; @WWL2016] is a class of linear integral transforms characterized by a parameter $A=(a,b,c,d)$. It is well known that the Fourier transform, the Fresnel transform, the fractional Fourier transform [@Wei2016], and scaling operations are all special cases of the LCT, obtained by choosing specific values of $A$. Therefore, the LCT has recently drawn much attention as a powerful mathematical tool in the fields of signal processing, communications, and optics [@OZK2001; @QLL2013]. So far, many classical results in the Fourier transform domain have been extended to the LCT domain, for instance, sampling theorems [@HS2015; @SSS2015; @WL2014; @WL2016; @XS2013; @XTZ2017], uncertainty principles [@HZCX2016; @SHZ2016; @Stern2008; @Zhang2016; @ZTLW2009], convolution theorems [@DTW2006; @PD2001; @SLZ2014B; @WRL2012; @XQ2014], etc. In this paper, we revisit the convolution theorem for the LCT.
As is well known, the classical convolution theorem for the Fourier transform states that the Fourier transform of the convolution of two functions equals the pointwise product of their Fourier transforms. That is, in the Fourier transform domain, the classical convolution theorem is expressed as $$\mathcal{F}(f*g)(u)=\mathcal{F}f(u)\mathcal{F}g(u),$$ where the convolution operator $*$ is defined by $$\label{Fourier1} f*g(t)=\int_{-\infty}^{+\infty}f(\tau)g(t-\tau){\rm d}\tau.$$ Unfortunately, this claim is not true for the LCT. Therefore, by defining different forms of convolution operators (called canonical convolution operators in order to distinguish them from the aforementioned convolution operator associated with the Fourier transform), a variety of convolution theorems for the LCT have been derived; see, for example, Pei and Ding [@PD2001], Deng et al. [@DTW2006], Wei et al. [@WRL2012; @WRL2009], and Shi et al. [@SLZ2014B; @SSZZ2012]. Pei and Ding [@PD2001] introduced a canonical convolution operator $O_{conv}^A$, which is defined as $$\begin{aligned} &O_{conv}^A(f(t),g(t))=\mathcal{L}_{A^{-1}}\big\{\mathcal{L}_{A}f(u)\mathcal{L}_{A}g(u)\big\}(t)\nonumber\\ =&\sqrt{\frac{1}{j8\pi^3b^3}}\int_{\mathbb{R}^3}e^{j\frac{a}{2b}(v^2+u^2+\tau^2-t^2)-j\frac{u}{b}(t-\tau-v)}\nonumber\\ &\times f(v)g(\tau){\rm d}v {\rm d}u{\rm d}\tau.\label{Pei1} \end{aligned}$$ Then, the convolution theorem associated with the LCT can be expressed as follows $$\mathcal{L}_{A}\{O_{conv}^A(f(t),g(t))\}(u)=\mathcal{L}_{A}f(u)\mathcal{L}_{A}g(u),$$ which is similar to the traditional convolution theorem in the Fourier transform domain. However, we can see from $(\ref{Pei1})$ that it is difficult to reduce $O_{conv}^A(f(t),g(t))$ to a single integral form like the traditional convolution formula (\[Fourier1\]). Deng et al.
[@DTW2006] proposed another canonical convolution operator $\Theta$, which is defined by $$\begin{aligned} (f\Theta g)(t) =&\sqrt{\frac{1}{j2\pi b}}e^{-j\frac{a}{2b}t^2}\bigg(\Big(e^{j\frac{a}{2b}t^2}f(t)\Big)*\Big(e^{j\frac{a}{2b}t^2} g(t)\Big)\bigg)(t)\nonumber\\ =&\sqrt{\frac{1}{j2\pi b}}\int_{-\infty}^{+\infty}f(\tau)g(t-\tau)e^{-j\frac{a}{b}\tau(t-\tau)}{\rm d}\tau.\label{DW1} \end{aligned}$$ Thus, the convolution theorem in the LCT domain can be represented as $$\label{eq:Q1} \mathcal{L}_{A}(f\Theta g)(u)=\mathcal{L}_{A}f(u)\mathcal{L}_{A}g(u)e^{-j\frac{d}{2b}u^2}.$$ Later, Wei et al. [@WRL2012] independently investigated the convolution theorem (\[eq:Q1\]) and obtained some extended results. Moreover, Wei et al. [@WRL2009] proposed a new canonical convolution operator $\overset{A}{\Theta}$ as follows $$\label{Wei1} \Big(f\overset{A}{\Theta} g\Big)(t)=\int_{-\infty}^{+\infty}f(\tau)g(t\theta\tau){\rm{d}}\tau.$$ Here, $g(t\theta\tau)$ is the $\tau$-generalized translation of $g(t)$, which is defined by $$\begin{aligned} g(t\theta\tau) =&\sqrt{\frac{1}{j2\pi b}}\sqrt{\frac{1}{-jb}}e^{-j\frac{a}{2b}(t^2-\tau^2)}\nonumber\\ &\times\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\mathcal{L}_{A}g(u)e^{j\frac{1}{b}(t-\tau)u}{\rm{d}}u.\label{theta} \end{aligned}$$ Hence, the convolution theorem associated with the LCT becomes $$\mathcal{L}_{A}\Big(f\overset{A}{\Theta} g\Big)(u)=\mathcal{L}_{A}f(u)\mathcal{L}_{A}g(u).$$ The form (\[Wei1\]) is comparable in simplicity to that of the Fourier transform. Furthermore, Shi et al.
[@SSZZ2012] introduced a new canonical convolution structure for the LCT, in which the canonical convolution operator $\Theta_M$ is defined as $$\label{Shi11} (f\Theta_M g)(t)=\int_{-\infty}^{+\infty}f(\tau)g(t-\tau)e^{-j\frac{a}{b}\tau(t-\frac{\tau}{2})}{\rm d}\tau.$$ The corresponding convolution theorem then has the following form $$\mathcal{L}_{A}(f\Theta_M g)(u)=\sqrt{2\pi}\mathcal{F}f\big(\frac{u}{b}\big)\mathcal{L}_{A}g(u).$$ It is shown in [@SSZZ2012] that this canonical convolution operator is quite useful for signal processing. Later, Shi et al. [@SLZ2014B] proposed another canonical convolution operator in the following $$\begin{aligned} &(f\Xi_{A_1,A_2,A_3} g)(t)\nonumber\\ =&\int_{-\infty}^{+\infty}(\mathrm{T}_{\tau}^{A_1}f)(t)g(\tau)\rho_{a_1,a_2,a_3}(t,\tau){\rm d}\tau.\label{Shi22} \end{aligned}$$ Here, $$\label{Tau} (\mathrm{T}_{\tau}^{A_1}f)(t)=f(t-\tau)e^{-j\frac{a_1}{b_1}\tau(t-\frac{\tau}{2})},$$ and $$\rho_{a_1,a_2,a_3}(t,\tau)=e^{j\frac{a_2}{b_2}\tau^2+j\big(\frac{a_1}{2b_1}-\frac{a_3}{2b_3}\big)t^2}.$$ Then, the convolution theorem for the LCT has the form $$\begin{aligned} &\mathcal{L}_{A_3}(f\Xi_{A_1,A_2,A_3} g)(u)\nonumber\\ =&\epsilon_{d_1,d_2,d_3}(u)\mathcal{L}_{A_1}f\Big(\frac{b_1}{b_3}u\Big)\mathcal{L}_{A_2}g\Big(\frac{b_2}{b_3}u\Big),\label{ShiA} \end{aligned}$$ where $$\epsilon_{d_1,d_2,d_3}(u)=\sqrt{\frac{j2\pi b_1 b_2}{b_3}}e^{ju^2\Big(\frac{d_3}{2b_3}-\frac{d_1b_1^2}{2b_1b_3^2}-\frac{d_2b_2^2}{2b_2b_3^2}\Big)}.$$ It follows from [@SLZ2014B] that the classical convolution theorem for the Fourier transform, the generalized convolution theorem for the fractional Fourier transform, and some existing canonical convolution theorems associated with the LCT can be regarded as special cases of (\[ShiA\]).
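As a baseline for all of these generalizations (an illustrative aside, not part of any of the cited works), the classical identity $\mathcal{F}(f*g)=\mathcal{F}f\,\mathcal{F}g$ can be observed exactly in its discrete, circular form, where the DFT plays the role of the Fourier transform:

```python
import numpy as np

# DFT of a circular convolution = pointwise product of the DFTs.
# This is the discrete analogue of the classical Fourier convolution
# theorem; it is precisely the property that fails to carry over
# verbatim to the LCT.
rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# circular convolution via the explicit sum
conv = np.array([sum(f[k]*g[(n - k) % N] for k in range(N)) for n in range(N)])

assert np.allclose(np.fft.fft(conv), np.fft.fft(f)*np.fft.fft(g))
```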
The canonical convolution operators for the LCT introduced in the six papers mentioned above are very interesting, and can be applied to solving many theoretical and practical problems, since they can be considered as extensions of the classical convolution operator for the Fourier transform. In this paper, our goal is to introduce a new canonical convolution operator for the LCT, and then derive generalized versions of the classical convolution theorem and Young’s inequality associated with the Fourier transform. Furthermore, we discuss the solvability of a class of convolution equations associated with the new canonical convolution operator. In Sec \[sec:New\], we verify that our newly defined canonical convolution operator can be implemented in two different ways in filter design. This fact may give it some advantages in filter design over, for example, the canonical convolution operators introduced in references [@DTW2006; @SLZ2014B; @SSZZ2012; @WRL2012; @WRL2009], each of which admits only one realization. In fact, by considering the computational complexity or the input conditions, we have two options when choosing a filtering scheme, since in some cases the first one may perform better than the second one, or vice versa. Therefore, when applied to solving some specific problems, the canonical convolution introduced in this paper is much more flexible than the existing ones for the LCT mentioned in [@DTW2006; @SLZ2014B; @SSZZ2012; @WRL2012; @WRL2009]. The rest of the paper is organized as follows. In Sec \[sec:Prel\], we briefly recall the definition of the LCT. In Sec \[sec:New\], we introduce a new canonical convolution operator, and prove that it satisfies the generalized convolution theorem for the LCT. In Sec \[sec:Some\], we present two applications of the new canonical convolution operator. First, we derive a generalized Young’s inequality for the new canonical convolution operator associated with the LCT.
Second, we give some sufficient and necessary conditions for the solvability of a class of convolution equations associated with our newly defined canonical convolution operator. Finally, we conclude the paper. The Linear Canonical Transform {#sec:Prel} ============================== The LCT of a signal $f(t)\in L^1(\mathbb{R})$ is defined by [@Stern2006B]: $$\begin{aligned} &\mathcal{L}_{A}f(u):=\mathcal{L}_{A}\{f(t)\}(u)\nonumber\\ =& \begin{cases} \displaystyle{\sqrt{\frac{1}{j2\pi b}}\int_{-\infty}^{+\infty}f(t)e^{j\frac{a}{2b}t^{2}-j\frac{1}{b}ut+j\frac{d}{2b}u^{2}}{\rm{d}}t}, & b\ne0,\\ \sqrt{d}e^{j\frac{cd}{2}u^{2}}f(du), & b=0,\\ \end{cases}\label{def:LCT:L0}\end{aligned}$$ where $A=(a,b,c,d)$, and the parameters $a,\;b,\;c,\;d\in \mathbb{R}$ satisfy $ad-bc=1$. For $b=0$, the LCT reduces to a chirp multiplication operator. Hence, without loss of generality, we assume that $b\neq0$ in the rest of the paper. As mentioned above, the LCT includes many linear integral transforms as special cases. For instance, let $A=(0,1,-1,0)$; then the LCT (\[def:LCT:L0\]) reduces to the Fourier transform. Let $A=(\cos\alpha,\sin\alpha,-\sin\alpha,\cos\alpha)$; then the LCT (\[def:LCT:L0\]) becomes the fractional Fourier transform. A New Generalized Convolution Theorem for the LCT {#sec:New} ================================================= In this section, we first introduce a new canonical convolution operator which is quite different from the existing ones. It is shown that our new canonical convolution operator is much more flexible and useful in certain cases. Then, we study the corresponding generalized convolution theorem associated with the LCT. Finally, we give several properties that the new canonical convolution operator satisfies. First, we introduce a new notion of canonical convolution operator, which is related to the LCT parameter $A$. Our new definition is a generalized version of [@ACTT2017A Definition 1].
\[def:convo\] Given two functions $f,\;g\in L^1(\mathbb{R})$, the canonical convolution operator $\otimes_A$ is defined as $$\begin{aligned} (f\otimes_A g)(t)&=&\sqrt{\frac{1}{j2\pi b}}\int_{-\infty}^{+\infty}f(u)g(t-u+b){}\nonumber\\ {}&&\times e^{j\frac{a}{b}u^{2}-j\frac{a}{b}ut+jat-jau}{\rm{d}}u,\label{def:convo:1}\end{aligned}$$ where $A=(a,b,c,d)$ is the same as the LCT parameter. The new canonical convolution expression (\[def:convo:1\]) can be rewritten in two different forms in terms of the classical convolution operator $*$. First, it can be represented as $$\begin{aligned} h(t):=&(f\otimes_A g)(t)\nonumber\\ =&(e^{j\frac{a}{2b}s^2}\cdot f(s))*(e^{jas}\cdot e^{j\frac{a}{2b}s^2}\cdot g(s+b))(t)\nonumber\\ &\times \sqrt{\frac{1}{j2\pi b}}e^{-j\frac{a}{2b}t^2}.\label{relat:1}\end{aligned}$$ Second, it can also be written as $$\begin{aligned} h(t):=&(f\otimes_A g)(t)\nonumber\\ =&(e^{-jas}\cdot e^{j\frac{a}{2b}s^2}\cdot f(s))*(e^{j\frac{a}{2b}s^2}\cdot g(s+b))(t)\nonumber\\ &\times \sqrt{\frac{1}{j2\pi b}}e^{-j\frac{a}{2b}t^2+jat}.\label{relat:2}\end{aligned}$$ Thanks to (\[relat:1\]) and (\[relat:2\]), we give two realizations of the new canonical convolution operator $\otimes_A$ in Fig. \[Fig.1\] and Fig. \[Fig.2\], respectively. ![image](Fig1.eps){width="75.00000%"} ![image](Fig2.eps){width="76.00000%"} Compared to the canonical convolution operator defined in (\[Pei1\]), our new canonical convolution is a single integral, which is much simpler than the triple integral in (\[Pei1\]). Furthermore, the definition of the canonical convolution operator in (\[Shi22\]) is so complicated that it is not very useful in filter design. Although the form of the newly defined canonical convolution operator (\[def:convo:1\]) is similar to those of (\[DW1\]), (\[Wei1\]) and (\[Shi11\]), it follows from (\[relat:1\]) and (\[relat:2\]) that the new canonical convolution operator $\otimes_A$ has two realizations in filter design.
Therefore, in certain cases, our new canonical convolution operator is much more flexible and useful than those in [@DTW2006; @PD2001; @SLZ2014B; @SSZZ2012; @WRL2012; @WRL2009]. Based on the new canonical convolution operator $\otimes_A$, we derive the generalized convolution theorem associated with the LCT as follows. \[Thm:A\] Let $f,\;g\in L^1(\mathbb{R})$, and let $\Phi(u):=e^{ju-j\frac{d}{2b}u^2-j\frac{ab}{2}}$. Then, we have $$\label{ThmA:1} \|f\otimes_A g\|_1\le\sqrt{\frac{1}{2\pi|b|}}\|f\|_1\|g\|_1,$$ and $$\label{ThmA:2} \mathcal{L}_{A}(f\otimes_A g)(u)=\Phi(u)\mathcal{L}_{A}f(u)\mathcal{L}_{A}g(u).$$ First, we prove (\[ThmA:1\]). Let $s=t-u+b$. By the definition of the canonical convolution operator $\otimes_A$ (\[def:convo:1\]), we have $$\begin{aligned} \|f\otimes_A g\|_1 &=&\int_{-\infty}^{+\infty}|(f\otimes_A g)(t)|{\rm{d}}t\nonumber\\ &\le&\sqrt{\frac{1}{2\pi|b|}}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}|f(u)g(t-u+b)|{\rm{d}}u{\rm{d}}t\nonumber\\ &=&\sqrt{\frac{1}{2\pi|b|}}\int_{-\infty}^{+\infty}|f(u)|{\rm{d}}u\int_{-\infty}^{+\infty}|g(s)|{\rm{d}}s\nonumber\\ &=&\sqrt{\frac{1}{2\pi|b|}}\|f\|_1\|g\|_1.\label{ThmA:4}\end{aligned}$$ Next, we prove (\[ThmA:2\]).
By using the definition of the LCT, making change of variable $v=s-t+b$, and then utilizing the definition of canonical convolution operator (\[def:convo:1\]), we have $$\begin{aligned} &\Phi(u)\mathcal{L}_{A}f(u)\mathcal{L}_{A}g(u)\nonumber\\ =&e^{ju-j\frac{d}{2b}u^2-j\frac{ab}{2}}\displaystyle{\frac{1}{j2\pi b}\int_{-\infty}^{+\infty}f(t)e^{j\frac{a}{2b}t^{2}-j\frac{1}{b}ut+j\frac{d}{2b}u^{2}}{\rm{d}}t}\nonumber\\ &\quad\times\displaystyle{\int_{-\infty}^{+\infty}g(v)e^{j\frac{a}{2b}v^{2}-j\frac{1}{b}uv+j\frac{d}{2b}u^{2}}{\rm{d}}v}\nonumber\\ =&e^{ju-j\frac{d}{2b}u^2-j\frac{ab}{2}}\frac{1}{j2\pi b}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(t)g(v)\nonumber\\ &\quad \times e^{j\frac{a}{2b}(t^{2}+v^2)-j\frac{1}{b}u(t+v)+j\frac{d}{b}u^{2}}{\rm{d}}t{\rm{d}}v\nonumber\\ =&\frac{1}{j2\pi b}e^{-j\frac{ab}{2}}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(t)g(v)\nonumber\\ &\quad \times e^{j\frac{a}{2b}(t^{2}+v^2)-j\frac{1}{b}u(t+v-b)+j\frac{d}{2b}u^{2}}{\rm{d}}t{\rm{d}}v\nonumber\\ =&\frac{1}{j2\pi b}e^{-j\frac{ab}{2}}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(t)g(s-t+b)\nonumber\\ &\quad \times e^{j\frac{a}{2b}[t^{2}+(s-t+b)^2]-j\frac{1}{b}us+j\frac{d}{2b}u^{2}}{\rm{d}}s{\rm{d}}t\nonumber\\ =&\frac{1}{j2\pi b}e^{-j\frac{ab}{2}}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(t)g(s-t+b)\nonumber\\ &\quad\times e^{j\frac{a}{2b}[2t^{2}-2st+s^2+2bs-2bt+b^2]-j\frac{1}{b}us+j\frac{d}{2b}u^{2}}{\rm{d}}s{\rm{d}}t\nonumber\\ =&\sqrt{\frac{1}{j2\pi b}}\int_{-\infty}^{+\infty} e^{j\frac{a}{2b}s^{2}-j\frac{1}{b}us+j\frac{d}{2b}u^{2}}\nonumber\\ &\;\times \left\{\sqrt{\frac{1}{j2\pi b}}\int_{-\infty}^{+\infty}f(t)g(s-t+b) e^{j\frac{a}{b}t^{2}-j\frac{a}{b}st+jas-jat}{\rm{d}}t\right\}{\rm{d}}s\nonumber\\ =&\sqrt{\frac{1}{j2\pi b}}\int_{-\infty}^{+\infty} e^{j\frac{a}{2b}s^{2}-j\frac{1}{b}us+j\frac{d}{2b}u^{2}}(f\otimes_A g)(s){\rm{d}}s\nonumber\\ =&\mathcal{L}_{A}(f\otimes_A g)(u).\label{ThmA:5}\end{aligned}$$ This completes the proof. 
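Theorem \[Thm:A\] can also be checked numerically by direct quadrature (an illustrative sketch; the parameters $a=0.6$, $b=1.3$, $d=0.9$ and the Gaussian test signals are arbitrary admissible choices, not prescribed by the theorem):

```python
import numpy as np

# Numerical check of L_A(f (x)_A g)(u) = Phi(u) * L_A f(u) * L_A g(u)
# for Gaussian test signals, using plain Riemann-sum quadrature.
j = 1j
a, b, d = 0.6, 1.3, 0.9              # any reals with b != 0; c = (a*d - 1)/b

t = np.linspace(-9.0, 9.0, 901)
dt = t[1] - t[0]
f = np.exp(-t**2)
g = np.exp(-(t - 0.5)**2)
c0 = np.sqrt(1/(j*2*np.pi*b))

def lct(h, u):
    # b != 0 branch of the LCT definition (def:LCT:L0)
    return c0*np.sum(h*np.exp(j*a/(2*b)*t**2 - j*u*t/b + j*d/(2*b)*u**2))*dt

# (f (x)_A g) on the grid, straight from the single-integral definition
T, U = np.meshgrid(t, t, indexing='ij')
conv = c0*np.sum(np.exp(-U**2)*np.exp(-((T - U + b) - 0.5)**2)
                 *np.exp(j*a/b*U**2 - j*a/b*U*T + j*a*T - j*a*U), axis=1)*dt

u0 = 0.8
Phi = np.exp(j*u0 - j*d/(2*b)*u0**2 - j*a*b/2)
lhs = lct(conv, u0)
rhs = Phi*lct(f, u0)*lct(g, u0)
assert abs(lhs - rhs) < 1e-6
```

With $A=(0,1,-1,0)$ the same `lct` routine reduces to the Fourier transform up to the constant $\sqrt{1/j}$, matching the special case noted in Sec \[sec:Prel\].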
After a simple computation, it follows from Theorem \[Thm:A\] that the canonical convolution operator $\otimes_A$ satisfies three properties: the commutative property, the associative property, and the distributive property. More precisely, the following three equalities hold for any $f,\;g,\;h\in L^1(\mathbb{R})$: 1. Commutativity: $f\otimes_A g=g\otimes_A f.$ 2. Associativity: $(f\otimes_A g)\otimes_A h=f\otimes_A(g\otimes_A h)$. 3. Distributivity: $f\otimes_A(g+h)=f\otimes_A g+f\otimes_A h$. Two Applications for the New Canonical Convolution Operator {#sec:Some} =========================================================== Generalized Young’s Inequality {#Sec:Generalized} ------------------------------ In this subsection, we investigate the generalized Young’s inequality for the new canonical convolution operator $\otimes_A$. First, let us recall the classical Young’s inequality as follows. \[Prop:A\] Let $f\in L^p(\mathbb{R}),\; g\in L^q(\mathbb{R})$, $\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}$, and $\frac{1}{r}+\frac{1}{r^{\prime}}=1$. Then, $$\label{Prop:A1} \|f*g\|_r\le A_pA_qA_{r^{\prime}}\|f\|_p\|g\|_q,$$ where $$\label{Prop:A2} A_p=\Big({\frac{p^{1/p}}{{p^{\prime}}^{1/{p^{\prime}}}}}\Big)^{1/2},$$ and $1/p+1/p^{\prime}=1$. Next, we show that our new canonical convolution operator $\otimes_A$ also satisfies a Young-type inequality. \[Thm:C\] Let $f\in L^p(\mathbb{R}),\; g\in L^q(\mathbb{R})$, $\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}$, and $\frac{1}{r}+\frac{1}{r^{\prime}}=1$. Then, $$\label{ThmC:1} \|f\otimes_A g\|_r\le \sqrt{\frac{1}{2\pi |b|}}A_p A_q A_{r^{\prime}} \|f\|_p\|g\|_q,$$ where $A_p$ is defined the same as in (\[Prop:A2\]).
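A numerical spot check of (\[ThmC:1\]) (illustrative only: Gaussian signals and the exponents $p=q=4/3$, $r=r^{\prime }=2$ are arbitrary admissible choices):

```python
import numpy as np

# Spot check of ||f (x)_A g||_r <= sqrt(1/(2 pi |b|)) A_p A_q A_{r'} ||f||_p ||g||_q
# with p = q = 4/3 and r = r' = 2.  The unimodular chirps in (relat:1) leave
# every L^s norm unchanged, so ||f (x)_A g||_r = sqrt(1/(2 pi |b|)) ||f~ * g~||_r.
j = 1j
a, b = 0.6, 1.3
t = np.linspace(-12.0, 12.0, 2401)
dt = t[1] - t[0]

f = np.exp(-t**2)
g = np.exp(-(t - 0.5)**2)
p = q = 4/3
r = 2.0

def lp_norm(h, s):
    return (np.sum(np.abs(h)**s)*dt)**(1/s)

def A(s):
    sp = s/(s - 1)                          # conjugate exponent s'
    return np.sqrt(s**(1/s)/sp**(1/sp))

ft = np.exp(j*a/(2*b)*t**2)*f                                      # f~ from (relat:1)
gt = np.exp(j*a*t + j*a/(2*b)*t**2)*np.exp(-((t + b) - 0.5)**2)    # g~(t) = chirp * g(t+b)
conv = np.convolve(ft, gt)*dt                                      # classical f~ * g~

lhs = np.sqrt(1/(2*np.pi*abs(b)))*(np.sum(np.abs(conv)**r)*dt)**(1/r)
rhs = np.sqrt(1/(2*np.pi*abs(b)))*A(p)*A(q)*A(r/(r - 1))*lp_norm(f, p)*lp_norm(g, q)

# for Gaussians the classical bound is nearly attained; the chirps give extra slack
assert lhs <= rhs
```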
By (\[relat:1\]), we obtain $$\begin{aligned} &\|f\otimes_A g\|_r\nonumber\\ =&\bigg(\int_{-\infty}^{+\infty}\Big|(e^{j\frac{a}{2b}s^2}\cdot f(s))*(e^{jas}\cdot e^{j\frac{a}{2b}s^2}\cdot g(s+b))(t)\nonumber\\ &\times \sqrt{\frac{1}{j2\pi b}}e^{-j\frac{a}{2b}t^2}\Big|^r{\rm{d}}t\bigg)^{\frac{1}{r}}\nonumber\\ =& \sqrt{\frac{1}{2\pi |b|}}\bigg(\int_{-\infty}^{+\infty}\bigg|(e^{j\frac{a}{2b}s^2}f(s))*(e^{jas+j\frac{a}{2b}s^2} g(s+b))(t)\bigg|^r{\rm{d}}t\bigg)^{\frac{1}{r}}\nonumber\\ =&\sqrt{\frac{1}{2\pi |b|}}\left\|(e^{j\frac{a}{2b}(\cdot)^2} f(\cdot))*(e^{ja(\cdot)+j\frac{a}{2b}(\cdot)^2} g(\cdot+b))\right\|_r\nonumber\\ \triangleq&\sqrt{\frac{1}{2\pi |b|}}\|\tilde{f}* \tilde{g}\|_r,\label{ThmC:2}\end{aligned}$$ where $\tilde{f}(\cdot)\triangleq e^{j\frac{a}{2b}(\cdot)^2} f(\cdot)$, $\tilde{g}(\cdot)\triangleq e^{ja(\cdot)+j\frac{a}{2b}(\cdot)^2} g(\cdot+b)$. Note that $\tilde{f}\in L^p(\mathbb{R}),\; \tilde{g}\in L^q(\mathbb{R})$. Applying the classical Young’s inequality (\[Prop:A1\]) for the functions $\tilde{f}$ and $\tilde{g}$, we have $$\label{ThmC:3} \|\tilde{f}*\tilde{g}\|_r\le A_pA_qA_{r^{\prime}}\|\tilde{f}\|_p\|\tilde{g}\|_q,$$ Note that $\|\tilde{f}\|_p=\|f\|_p,\;\|\tilde{g}\|_q=\|g\|_q$. Substituting (\[ThmC:3\]) into (\[ThmC:2\]), we get $$\begin{aligned} \|f\otimes_A g\|_r &=&\sqrt{\frac{1}{2\pi |b|}}\|\tilde{f}* \tilde{g}\|_r\nonumber\\ &\le&\sqrt{\frac{1}{2\pi |b|}}A_p A_q A_{r^{\prime}}\|\tilde{f}\|_p\|\tilde{g}\|_q\nonumber\\ &=&\sqrt{\frac{1}{2\pi |b|}}A_p A_q A_{r^{\prime}}\|f\|_p\|g\|_q,\end{aligned}$$ which completes the proof. Solvability for One Class of Convolution Equations {#Sec:Equations} -------------------------------------------------- In this subsection, we mainly discuss the solution for a class of convolution equations associated with the canonical convolution operator $\otimes_A$. 
Assume that $\lambda\in \mathbb{C}$ and $f,\;g\in L^1(\mathbb{R})$ are given, and that $\phi$ is unknown; consider the following canonical convolution equation: $$\label{Eq:Cov1} \lambda \phi(t)+(g\otimes_A \phi)(t)=f(t).$$ In the sequel, we will determine $\phi$. Before presenting our main result, we give a lemma, which is very important for proving our theorem. \[Eq:L1\] Let $\Lambda(u):=\lambda+\mathcal{L}_{A}g(u)\Phi(u)$; then the following two statements hold: 1. If $\lambda\ne 0$, then there exists a constant $C$ such that $\Lambda(u)\neq 0$ for every $|u|> C$. 2. If $\Lambda(u)\neq 0$ for all $u\in \mathbb{R}$, then $\frac{1}{\Lambda(u)}$ is continuous and bounded on $\mathbb{R}$. The proof of Lemma \[Eq:L1\] is similar to those of [@ACTT2017B Proposition 7] and [@ACTT2017A Proposition 1]; hence, we omit it. \[Thm:B\] Let $\Lambda(u)\neq 0$ for all $u\in \mathbb{R}$. Suppose that one of the following two conditions holds: 1. $\lambda\ne0$, and $\mathcal{L}_{A}f\in L^1(\mathbb{R})$; 2. $\lambda=0$, and $\frac{\mathcal{L}_{A}f}{\mathcal{L}_{A}g}\in L^1(\mathbb{R})$. Then equation (\[Eq:Cov1\]) has a solution in $L^1(\mathbb{R})$ if and only if $\mathcal{L}_{A^{-1}}\Big(\frac{\mathcal{L}_{A}f}{\Lambda}\Big)\in L^1(\mathbb{R})$. Furthermore, the solution has the form $\phi=\mathcal{L}_{A^{-1}}\Big(\frac{\mathcal{L}_{A}f}{\Lambda}\Big)$. We only consider the case when condition (1) is satisfied. Since $\Phi(u)=e^{ju-j\frac{d}{2b}u^2-j\frac{ab}{2}}$ and $|\Phi(u)|=1$, we know that $\frac{1}{\Phi}$ is continuous and bounded on $\mathbb{R}$. Hence, $\frac{\mathcal{L}_{A}f}{\mathcal{L}_{A}g}\in L^1(\mathbb{R})$ if and only if $\frac{\mathcal{L}_{A}f}{\Phi\mathcal{L}_{A}g}\in L^1(\mathbb{R})$. Therefore, case (2) reduces to case (1). Necessity: Suppose that equation (\[Eq:Cov1\]) has a solution $\phi\in L^1(\mathbb{R})$.
Applying the operator $\mathcal{L}_{A}$ to both sides of the equation (\[Eq:Cov1\]), we have $$\label{ThmB:B1} \lambda \mathcal{L}_{A}\phi(u)+\mathcal{L}_{A}(g\otimes_A \phi)(u)=\mathcal{L}_{A}f(u).$$ By using (\[ThmA:2\]), we obtain $$\label{ThmB:B2} \lambda \mathcal{L}_{A}\phi(u)+\Phi(u)\mathcal{L}_{A}g(u) \mathcal{L}_{A}\phi(u)=\mathcal{L}_{A}f(u),$$ i.e., $$\label{ThmB:B3} (\lambda+\Phi(u)\mathcal{L}_{A}g(u))\mathcal{L}_{A}\phi(u)=\mathcal{L}_{A}f(u).$$ By hypothesis, $$\Lambda(u)=\lambda+\Phi(u)\mathcal{L}_{A}g(u)\neq 0$$ for all $u\in \mathbb{R}$. Therefore, the equation (\[ThmB:B3\]) becomes $$\label{ThmB:B4} \mathcal{L}_{A}\phi(u)=\frac{\mathcal{L}_{A}f(u)}{\Lambda(u)}.$$ By Lemma \[Eq:L1\], we know that $\frac{1}{\Lambda(u)}$ is continuous and bounded on $\mathbb{R}$. Since $\mathcal{L}_{A}f\in L^1(\mathbb{R})$, we have $\frac{\mathcal{L}_{A}f(u)}{\Lambda(u)}\in L^1(\mathbb{R})$. Applying the inverse LCT to both sides of equation (\[ThmB:B4\]), we obtain the solution $$\phi(t)=\mathcal{L}_{A^{-1}}\Big(\frac{\mathcal{L}_{A}f(u)}{\Lambda(u)}\Big)(t).$$ Since $\phi\in L^1(\mathbb{R})$, then $\mathcal{L}_{A^{-1}}\Big(\frac{\mathcal{L}_{A}f}{\Lambda}\Big)\in L^1(\mathbb{R}).$ Sufficiency: Let $$\phi(t):=\mathcal{L}_{A^{-1}}\Big(\frac{\mathcal{L}_{A}f(u)}{\Lambda(u)}\Big)(t).$$ Then, we have $\phi\in L^1(\mathbb{R})$. Applying the LCT to $\phi$, we get $$\mathcal{L}_{A}\phi(u)=\frac{\mathcal{L}_{A}f(u)}{\Lambda(u)}.$$ That is to say, $$(\lambda+\Phi(u)\mathcal{L}_{A}g(u))\mathcal{L}_{A}\phi(u)=\mathcal{L}_{A}f(u).$$ By using (\[ThmA:2\]) again, we obtain $$\mathcal{L}_{A}\left\{\lambda \phi(t)+(g\otimes_A \phi)(t)\right\}(u)=\mathcal{L}_{A}f(u).$$ Due to the injectivity of the LCT operator $\mathcal{L}_{A}$, $\phi$ satisfies the equation (\[Eq:Cov1\]) for almost every $t\in \mathbb{R}$, which means that equation (\[Eq:Cov1\]) has a solution. This completes the proof.
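The solution formula of Theorem \[Thm:B\] can also be exercised numerically (an illustrative sketch with $\lambda =1$ and Gaussian data; the discretization details below are ours, not part of the theorem):

```python
import numpy as np

# Solve lambda*phi + g (x)_A phi = f via phi = L_{A^{-1}}(L_A f / Lambda),
# then verify the equation pointwise.  Here A^{-1} = (d, -b, -c, a).
j = 1j
lam = 1.0
a, b, d = 0.6, 1.3, 0.9                  # c = (a*d - 1)/b ensures ad - bc = 1

t = np.linspace(-9.0, 9.0, 901)          # "time" grid for f, g
dt = t[1] - t[0]
u = np.linspace(-14.0, 14.0, 1401)       # LCT-domain grid
du = u[1] - u[0]

f = np.exp(-t**2)
g = np.exp(-(t - 0.5)**2)
c0 = np.sqrt(1/(j*2*np.pi*b))

def lct_on_grid(h):
    # L_A h sampled on the u-grid (h lives on the t-grid)
    K = np.exp(j*a/(2*b)*t[None, :]**2 - j*u[:, None]*t[None, :]/b
               + j*d/(2*b)*u[:, None]**2)
    return c0*(K @ h)*dt

Phi = np.exp(j*u - j*d/(2*b)*u**2 - j*a*b/2)
Lam = lam + lct_on_grid(g)*Phi           # Lambda(u); stays away from zero here
Fhat = lct_on_grid(f)/Lam                # L_A f / Lambda

def phi(x):
    # inverse LCT L_{A^{-1}} of Fhat, evaluated at arbitrary points x
    x = np.atleast_1d(np.asarray(x, dtype=float))
    k = np.sqrt(1/(j*2*np.pi*(-b)))
    K = np.exp(j*d/(2*(-b))*u[None, :]**2 - j*x[:, None]*u[None, :]/(-b)
               + j*a/(2*(-b))*x[:, None]**2)
    return k*(K @ Fhat)*du

t0 = 0.3
# (g (x)_A phi)(t0) from the single-integral definition (def:convo:1)
val = c0*np.sum(g*phi(t0 - t + b)
                *np.exp(j*a/b*t**2 - j*a/b*t*t0 + j*a*t0 - j*a*t))*dt

residual = lam*phi(t0)[0] + val - np.exp(-t0**2)
assert abs(residual) < 1e-4
```

For this data $\sup_u|\mathcal{L}_{A}g(u)|<\lambda $, so $\Lambda (u)\neq 0$ and the hypothesis of the theorem is met.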
Let $A=(\cos\alpha,\sin\alpha,-\sin\alpha,\cos\alpha)$; then Theorem \[Thm:B\] reduces to Theorem 3 of [@ACTT2017A]. Similarly to Definition \[def:convo\], we can define another canonical convolution operator $\odot_A$ by $$\begin{aligned} \Big(f\odot_A g\Big)(t)&=&\sqrt{\frac{1}{j2\pi b}}\int_{-\infty}^{+\infty}f(u)g(t-u-b){}\nonumber\\ {}&&\times e^{j\frac{a}{b}u^{2}-j\frac{a}{b}ut-jat+jau}{\rm{d}}u.\label{def:Rem1}\end{aligned}$$ Then, the new canonical convolution operator $\odot_A$ also has the three properties of commutativity, associativity, and distributivity. In addition, the statements in Theorem \[Thm:A\], Theorem \[Thm:C\], and Theorem \[Thm:B\] also hold for the operator $\odot_A$ with some minor adjustments. Due to the similarity, we omit the proof of this claim. Conclusion {#Sec:Conclusion} ========== In this paper, we first define a new canonical convolution operator, which is much more flexible and simple than the existing ones. Then, we show that it satisfies the generalized convolution theorem and Young’s inequality. Finally, we investigate the solvability of a class of convolution equations associated with the new canonical convolution operator. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks the referees very much for carefully reading the paper and for their elaborate and valuable suggestions. [99]{} P. K. Anh, L. P. Castro, P. T. Thao, and N. M. Tuan. Inequalities and consequences of new convolutions for the fractional [Fourier]{} transform with [Hermite]{} weights. In [*AIP Conf Proc*]{}, volume 1798, page 020006. AIP Publishing, 2017. P. K. Anh, L. P. Castro, P. T. Thao, and N. M. Tuan. Two new convolutions for the fractional [Fourier]{} transform. , 92(2):623–637, 2017. B. Barshan, M. A. Kutay, and H. M. Ozaktas. Optimal filtering with linear canonical transformations. , 135(1):32–36, 1997. L. M. Bernardo. Matrix formalism of fractional [Fourier]{} optics. , 35(3):732–740, 1996. B.
Deng, R. Tao, and Y. Wang. Convolution theorems for the linear canonical transform and their applications. , 49(5):592–603, 2006. K. Gr[ö]{}chenig. . Birkh[ä]{}user, New York, 2001. L. Huang, K. Zhang, Y. Chai, and S. Xu. Uncertainty principle and orthogonal condition for the short-time linear canonical transform. , 10(6):1177–1181, 2016. H. Huo and W. Sun. Sampling theorems and error estimates for random signals in the linear canonical transform domain. , 111:31–38, 2015. M. Moshinsky and C. Quesne. Linear canonical transformations and their unitary representations. , 12(8):1772–1780, 1971. H. M. Ozaktas, Z. Zalevsky, and M. A. Kutay. . Wiley, New York, 2001. S.-C. Pei and J.-J. Ding. Relations between fractional operations and time-frequency distributions, and their applications. , 49(8):1638–1655, 2001. W. Qiu, B.-Z. Li, and X.-W. Li. Speech recovery based on the linear canonical transform. , 55(1):40–50, 2013. K. K. Sharma, L. Sharma, and S. Sharma. On bandlimitedness of signals in the [2D]{}-nonseparable linear canonical transform domains. , 9(4):941–946, 2015. J. Shi, M. Han, and N. Zhang. Uncertainty principles for discrete signals associated with the fractional [Fourier]{} and linear canonical transforms. , 10(8):1–7, 2016. J. Shi, X. Liu, and N. Zhang. Generalized convolution and product theorems associated with linear canonical transform. , 8:967–974, 2014. J. Shi, X. Sha, Q. Zhang, and N. Zhang. Extrapolation of bandlimited signals in linear canonical transform domain. , 60(3):1502–1508, 2012. A. Stern. Why is the linear canonical transform so little known? In [*AIP Conf Proc Ser*]{}, volume 860, pages 225–234, 2006. A. Stern. Uncertainty principles in linear canonical transform domains and some of their implications in optics. , 25(3):647–652, 2008. D. Wei. Image super-resolution reconstruction using the high-order derivative interpolation associated with fractional filter functions. , 10(9):1052–1061, 2016. D. Wei and Y. Li. 
Reconstruction of multidimensional bandlimited signals from multichannel samples in linear canonical transform domain. , 8(6):647–657, 2014. D. Wei and Y. M. Li. Generalized sampling expansions with multiple sampling rates for lowpass and bandpass signals in the fractional [Fourier]{} transform domain. , 64(18):4861–4874, 2016. D. Wei, Q. Ran, and Y. Li. A convolution and correlation theorem for the linear canonical transform and its application. , 31(1):301–312, 2012. D. Wei, Q. Ran, Y. Li, J. Ma, and L. Tan. A convolution and product theorem for the linear canonical transform. , 16(10):853–856, 2009. D. Wei, R. Wang, and Y.-M. Li. Random discrete linear canonical transform. , 33(12):2470–2476, 2016. Q. Xiang and K. Qin. Convolution, correlation, and sampling theorems for the offset linear canonical transform. , 8(3):433–442, 2014. L. Xiao and W. Sun. Sampling theorems for signals periodic in the linear canonical transform domain. , 290:14–18, 2013. L. Xu, R. Tao, and F. Zhang. Multichannel consistent sampling and reconstruction associated with linear canonical transform. , 24(5):658–662, 2017. Q. Zhang. Zak transform and uncertainty principles associated with the linear canonical transform. , 10(7):791–797, 2016. J. Zhao, R. Tao, Y.-L. Li, and Y. Wang. Uncertainty principles for linear canonical transform. , 57(7):2856–2858, 2009.
--- abstract: 'Training population selection for genomic selection has captured a great deal of interest in animal and plant breeding. In this article, we derive a computationally efficient statistic to measure the reliability of estimates of genetic breeding values for a fixed set of genotypes based on a given training set of genotypes and phenotypes. We adopt a genetic algorithm scheme to find a training set of a certain size from a larger set of candidate genotypes that optimizes this reliability measure. Our results show that, compared to a random sample of the same size, phenotyping individuals selected by our method results in models with better accuracies. We implement the proposed training selection methodology on four data sets, namely, the Arabidopsis, wheat, rice, and maize data sets. Our results indicate that a dynamic model building process, which takes the genotypes of the individuals in the test sample into account while selecting the training individuals, improves the performance of GS models.' address: ' , , ' author: - bibliography: - 'PEVmean.bib' title: 'Training population selection for (breeding value) prediction' --- Introduction {#introduction .unnumbered} ============ Breeding through genomic selection (GS) in animal or plant breeding is based on estimates of genetic breeding values (GEBVs). Prediction of the GEBVs usually involves fitting a whole-genome regression model in which the known phenotypes are regressed on the markers. In GS, first a set of genotypes to be phenotyped (a training population) is identified and phenotyped. Once the phenotypes are measured for the training set of individuals, a regression model is trained to predict GEBVs for individuals which were not phenotyped. Finally, these GEBVs are used for the evaluation of individuals. Since phenotyping is a time-consuming and costly process, selecting a good training population is essential for the success of GS.
In this article we concentrate on the first step of GS, i.e., the selection of the training population, to address the accuracy of GS models. We imagine a scenario in which we are given two sets of individuals and their markers. The first set includes the candidate individuals from which a training set is to be selected for phenotyping in order to predict the GEBVs of the individuals in the second (test) set. It will be shown that a model building process which takes the genotypes of the individuals in the test sample into account while selecting the training individuals improves the performance of prediction models. Various regression models have been successfully used for predicting breeding values in plants and animals. In both simulation studies and empirical studies of dairy cattle, mice, and bi-parental populations of maize, barley, and *Arabidopsis*, marker-based GEBVs have been quite accurate. However, it has also been shown that as the training and testing populations diverge, the accuracies of the GEBVs decrease. As breeding populations tend to change over time, the result is that the accuracies of the GEBVs obtained from the training population decrease over time. Similarly, in the presence of strong population structure, the GEBVs obtained by using sub-populations are usually not accurate for individuals in other sub-populations. In breeding, the problem of training population selection has captured some attention.
For example, the reliability measure of VanRaden ([@vanraden2008]) is expressed as $$\label{VanRaden}K_{21}(K_{11}+\delta I)^{-1}K'_{21}$$ where $K_{21}$ is the matrix of genomic relationships between the individuals in the test set and each of the individuals in the training set, $K_{11}$ measures the genomic relationships within the training set, and the parameter $\delta$ is related to the heritability ($h$) of the trait by $\delta=(1-h^2)/h^2.$ This reliability measure is related to Henderson’s prediction error variance (PEV) ([@henderson1975best]) and to the more recent coefficient of determination (CD) of Laloe ([@laloe1996considerations]), both of which were utilized in ([@rincent2012maximizing]) for the training population selection problem. The optimization of the reliability measure in (\[VanRaden\]) and of the related PEV and CD requires expensive evaluations (inversions of large matrices) many times; therefore, they are not computationally feasible for large applications. In the next sections, we derive a computationally efficient approximation to the PEV and use this measure for training population selection. Another novelty of our method, compared to the optimization schemes recommended in ([@rincent2012maximizing]), is that we calculate the prediction error variance for the individuals in the test set instead of evaluating it within the candidate set; i.e., we use information about the test data while building the estimation model, by selecting individuals into the training set so as to minimize the PEV in the test set. The methods developed here can be used for dynamic model building; in other words, different test sets will lead to different individuals being selected from the candidate set and hence to different estimation models. Methods {#methods .unnumbered} ======= Traditionally, the breeder is interested in the total additive genetic effects as opposed to the total genetic value.
Therefore, a linear model is assumed between the markers and the phenotypes. This is expressed by writing $$y=\beta_0+{\boldsymbol{m}}'\beta+e$$ where $y$ stands for the phenotype, $\beta_0$ is the mean parameter, ${\boldsymbol{m}}$ is the $m-$vector of marker values, $\beta$ is the $m-$vector of marker effects, and $e,$ the difference between the observed and the fitted linear relationship, has a normal distribution with zero mean and variance $\sigma_e^2.$ In order to estimate the parameters of this model, we will acquire $n_{Train}$ individuals from a larger candidate population. The model will then be used to estimate the phenotypes of a fixed set of $n_{Test}$ individuals. Let $M$ be the matrix of markers partitioned as $$M=\left[ \begin{array}{c} M_{Candidate}\\ \hline M_{Test} \end{array} \right]$$ where $M_{Candidate}$ is the $n\times m$ matrix of markers for the individuals in the candidate set and $M_{Test}$ is the matrix of markers for the individuals in the test set. We would like to identify $n_{Train}$ training set individuals from the candidate set (and therefore a matrix $M_{Train}$) for which the average prediction variance for the individuals in the test set is minimized. Given that we have determined $M_{Train}$ and observed their phenotypes ${\boldsymbol{y}}_{Train},$ we can write $${\boldsymbol{y}}_{Train}=({\boldsymbol{1}}, M_{Train})(\beta_0, \beta')'+{\boldsymbol{e}}.$$ Under the assumptions of this model, the uniformly minimum variance estimator of the phenotypes in the test data is expressed as $$\widehat{{\boldsymbol{y}}}_{Test}=({\boldsymbol{1}}, M_{Test})(({\boldsymbol{1}},M_{Train})'({\boldsymbol{1}},M_{Train}))^{-}({\boldsymbol{1}},M_{Train})'{\boldsymbol{y}}_{Train}$$ where $^{-}$ denotes the pseudo-inverse of a matrix.
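As a concrete illustration of the estimator above, the following sketch simulates a toy data set and computes $\widehat{{\boldsymbol{y}}}_{Test}$ via the pseudo-inverse; the sizes, seed, and variable names are assumptions for illustration, not the paper's data or software (which is in R).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: m markers, more training individuals than parameters.
m, n_train, n_test = 30, 50, 10
beta0, beta = 1.0, rng.normal(size=m)      # "true" mean and marker effects

M_train = rng.choice([-1.0, 1.0], size=(n_train, m))  # training genotypes
M_test = rng.choice([-1.0, 1.0], size=(n_test, m))    # test genotypes
y_train = beta0 + M_train @ beta + rng.normal(scale=0.1, size=n_train)

# Augment with the intercept column (1, M) as in the text.
X_train = np.column_stack([np.ones(n_train), M_train])
X_test = np.column_stack([np.ones(n_test), M_test])

# y_hat_Test = (1,M_Test) ((1,M_Train)'(1,M_Train))^- (1,M_Train)' y_Train
y_hat = X_test @ np.linalg.pinv(X_train.T @ X_train) @ X_train.T @ y_train
print(y_hat.shape)
```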
Ignoring the constant term $\sigma_e^2,$ the covariance matrix (prediction error variance (PEV)) of $\widehat{{\boldsymbol{y}}}_{Test}$ is $$PEV(M_{Test})=({\boldsymbol{1}}, M_{Test})(({\boldsymbol{1}},M_{Train})'({\boldsymbol{1}},M_{Train}))^{-}({\boldsymbol{1}},M_{Test})'.$$ With the emergence of modern genotyping technologies, the number of markers can vastly exceed the number of individuals. To overcome the problems arising in these large $m$, small $n$ regressions, estimation procedures performing variable selection, shrinkage of estimates, or a combination of both are commonly used when estimating the effects of markers. These methods accept an increase in bias, due to the shrinkage of individual marker effects, in exchange for a decrease in variance, in order to obtain better overall prediction performance. Since the variance of these selection-shrinkage methods is smaller than that of the least squares estimators, $PEV(M_{Test})$ is an upper bound on the PEV covariance matrix of these models. To see this, consider the PEV from ridge regression: $$\label{PEVRIDGE}PEV^{Ridge}(M_{Test})=({\boldsymbol{1}}, M_{Test})(({\boldsymbol{1}},M_{Train})'({\boldsymbol{1}},M_{Train})+\lambda I)^{-1}({\boldsymbol{1}},M_{Test})'.$$ Clearly, $PEV^{Ridge}(M_{Test})\leq PEV(M_{Test})$ for any $\lambda\geq 0.$ We would like to obtain minimum variance for our predictions in the test data set. Therefore, we recommend minimizing $$tr(PEV(M_{Test}))$$ with respect to $M_{Train}$ when selecting individuals into the training set. The training data evaluation criterion $PEV$ is related to the integrated average prediction variance (IV), $$IV=\frac{1}{A}\int_\chi{\boldsymbol{x}}'(X'_{Train}X_{Train})^{-1}{\boldsymbol{x}}d{\boldsymbol{x}}$$ where $A$ is the volume of the space of interest $\chi.$ See Box and Draper ([@box1959basis]) for a detailed discussion of this criterion. A design that minimizes IV is referred to as IV-optimal.
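The trace criterion above is straightforward to evaluate; a minimal sketch with simulated marker matrices follows (the sizes and the value of $\lambda$ are illustrative assumptions). It also illustrates the bound $PEV^{Ridge}\leq PEV$ by letting $\lambda\to 0$.

```python
import numpy as np

rng = np.random.default_rng(1)

def pev_ridge_trace(M_train, M_test, lam):
    # tr[ (1,M_Test) ((1,M_Train)'(1,M_Train) + lam I)^{-1} (1,M_Test)' ]
    X_train = np.column_stack([np.ones(M_train.shape[0]), M_train])
    X_test = np.column_stack([np.ones(M_test.shape[0]), M_test])
    A = X_train.T @ X_train + lam * np.eye(X_train.shape[1])
    return np.trace(X_test @ np.linalg.solve(A, X_test.T))

M_train = rng.normal(size=(40, 15))   # 40 training individuals, 15 markers
M_test = rng.normal(size=(10, 15))    # 10 test individuals

# The ridge PEV trace is bounded above by the least squares PEV trace
# (recovered here as lam -> 0), in line with PEV^Ridge <= PEV.
print(pev_ridge_trace(M_train, M_test, 1.0)
      <= pev_ridge_trace(M_train, M_test, 1e-8))
```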
However, since we are dealing with a large number of markers and any optimization scheme would involve numerous evaluations of this objective function, the formula for $PEV(M_{Test})$ is not practically applicable. A more suitable, numerically efficient approximation to $PEV(M_{Test})$ can be obtained by using the first few principal components (PCs) of the marker matrix $M$ instead of $M$ in the training population selection stage. Let $P$ be the matrix of the first $k\leq min(m, n)$ PCs partitioned as $$P=\left[ \begin{array}{c} P_{Candidate}\\ \hline P_{Test} \end{array} \right]$$ where $P_{Candidate}$ is the matrix of PCs for the individuals in the candidate set and $P_{Test}$ is the matrix of PCs for the individuals in the test set. Now, $PEV^{Ridge}(M_{Test})$ can be approximated by $$\label{PEVRIDGEAPPROX}PEV^{Ridge}(M_{Test})\approx ({\boldsymbol{1}}, P_{Test})(({\boldsymbol{1}},P_{Train})'({\boldsymbol{1}},P_{Train})+\lambda I)^{-1}({\boldsymbol{1}},P_{Test})'.$$ Finally, we would like to note that $PEV(M_{Test})$ is related to the reliability measure in (\[VanRaden\]).
To see this, write $$(M'_{Train}M_{Train}+\lambda I)^{-1}=\frac{1}{\lambda}\left(I-M'_{Train}(M_{Train}M'_{Train}+\lambda I)^{-1}M_{Train}\right).$$ Letting $\delta=\lambda/m,$ $K_{21}= M_{Test}M'_{Train}/m,$ $K_{11}=$ $ M_{Train}M'_{Train}/m$ and $K_{22}=$ $M_{Test}M'_{Test}/m$ and using the Woodbury matrix identity at the third step ([@petersen2008matrix]), we have $$\begin{aligned} PEV(M_{Test})&= M_{Test}(M'_{Train}M_{Train}+\lambda I)^{-1} M'_{Test}\\ &=M_{Test}(\lambda(\frac{M'_{Train}M_{Train}}{\lambda}+I))^{-1}M'_{Test}\\ &=\frac{1}{\lambda}M_{Test}(I-M'_{Train}(M_{Train}M'_{Train}+\lambda I)^{-1}M_{Train})M'_{Test}\\ &= \frac{1}{\lambda}\left[M_{Test}M'_{Test}-M_{Test}M'_{Train}(M_{Train}M'_{Train}+\lambda I)^{-1}M_{Train}M'_{Test}\right]\\ &\propto K_{22} -K_{21}(K_{11}+\delta I)^{-1}K'_{21}.\end{aligned}$$ Therefore, maximizing the average reliability is equivalent to minimizing the total $PEV^{Ridge}$ in (\[PEVRIDGE\]); however, since we need to evaluate many candidate training sets in the course of the optimization, we prefer the computationally efficient approximation in (\[PEVRIDGEAPPROX\]). The scalar measure obtained by taking the trace of (\[PEVRIDGEAPPROX\]) will be used to evaluate training populations subsequently. The training set selection is a combinatorial optimization problem. In genetic algorithms, a population of candidate solutions, represented as binary strings of 0s and 1s, is evolved toward better solutions. At each iteration of the algorithm, a fitness function is used to evaluate and select the elite individuals, and subsequently the next population is formed from the elites by genetically motivated operations like crossover and mutation. Genetic algorithms are particularly suitable for combinatorial optimization problems; therefore, they are our choice here.
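The genetic algorithm described above can be sketched as follows, with the trace of the PC-based criterion (\[PEVRIDGEAPPROX\]) as the fitness function. The matrices `P_cand`/`P_test` are simulated stand-ins for the PC scores, and the population size, elite count, and mutation rate are illustrative assumptions; this is not the paper's R implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

P = rng.normal(size=(60, 8))          # stand-in for PC scores, k = 8
P_cand, P_test = P[:50], P[50:]       # candidate set and test set
n_train, lam = 20, 1.0

def pev(subset):
    # Trace of the approximate ridge PEV for one candidate training set.
    X_tr = np.column_stack([np.ones(subset.sum()), P_cand[subset]])
    X_te = np.column_stack([np.ones(len(P_test)), P_test])
    A = X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1])
    return np.trace(X_te @ np.linalg.solve(A, X_te.T))

def random_subset():
    s = np.zeros(len(P_cand), dtype=bool)
    s[rng.choice(len(P_cand), n_train, replace=False)] = True
    return s

def repair(s):
    # Restore exactly n_train selected individuals after crossover/mutation.
    on, off = np.flatnonzero(s), np.flatnonzero(~s)
    if len(on) > n_train:
        s[rng.choice(on, len(on) - n_train, replace=False)] = False
    elif len(on) < n_train:
        s[rng.choice(off, n_train - len(on), replace=False)] = True
    return s

pop = [random_subset() for _ in range(30)]
init_best = min(pev(s) for s in pop)
for _ in range(40):
    pop.sort(key=pev)                 # smaller PEV trace = fitter
    elites = pop[:10]                 # elitism: keep the 10 best strings
    children = []
    while len(children) < 20:
        a, b = rng.choice(10, size=2, replace=False)
        mask = rng.random(len(P_cand)) < 0.5      # uniform crossover
        child = np.where(mask, elites[a], elites[b])
        flip = rng.random(len(P_cand)) < 0.02     # bit-flip mutation
        children.append(repair(np.logical_xor(child, flip)))
    pop = elites + children

best = min(pop, key=pev)
print(best.sum(), pev(best) <= init_best)
```

With elitism, the best fitness found never deteriorates across generations, though, as noted below, the final subset is generally only sub-optimal.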
It should be noted that the solutions obtained by the genetic algorithm will usually be sub-optimal, and different solutions can be obtained when different starting points are used. In the following section, our training population selection scheme will be evaluated by fitting a semi-parametric mixed model (SPMM) ([@de2010semi; @gianola2008reproducing]) using the genotypes and phenotypes in the training set and calculating the correlation of the test set phenotypes with the estimates based on this model. In these mixed models, genetic information in the form of a pedigree or markers is used to construct an additive relationship matrix that describes the similarity of line-specific additive genetic effects. These models have been successfully used for predicting breeding values in plants and animals. A SPMM for the $n\times 1$ response vector ${\boldsymbol{y}}$ is expressed as $$\label{eq:spmm} {\boldsymbol{y}}=X\beta+Z{\boldsymbol{g}}+{\boldsymbol{e}}$$ where $X$ is the $n\times p$ design matrix for the fixed effects, $\beta$ is a $p\times 1$ vector of fixed effects coefficients, $Z$ is the $n\times q$ design matrix for the random effects; the random effects $({\boldsymbol{g}}',{\boldsymbol{e}}')'$ are assumed to follow a multivariate normal distribution with mean ${\boldsymbol{0}}$ and covariance $$\left( \begin{array}{cc} \sigma^{2}_{g} K & {\boldsymbol{0}}\\ {\boldsymbol{0}}& \sigma^{2}_{e} I_{n} \end{array} \right)$$ where $K$ is a $q\times q$ relationship matrix. For fitting the mixed models, we have developed and utilized the EMMREML package ([@EMMREMLpackage]), which is available in R ([@team2005r]). The rest of the software was also programmed in R and is available in the supplementary files.
An additive relationship matrix can be calculated from the centered and scaled markers $M$ as $K=MM' /m.$ Given a similarity matrix $K$, the principal components used in our algorithm can be calculated from this matrix; therefore, the statistic in (\[PEVRIDGEAPPROX\]) can also be used in these cases. Results {#results .unnumbered} ======= Data sets of different origins are used for illustration in this section. The Arabidopsis data set was published by Atwell et al. (2010) and is available at <https://cynin.gmi.oeaw.ac.at/home/resources/atpolydb/>. The wheat data was downloaded from [triticeaetoolbox.org](triticeaetoolbox.org). The rice data was published in [@zhao2011genome] and was downloaded from <http://www.ricediversity.org/data/>. These data sets are also available for download in the supplementary files. In order to evaluate the performance of the selection algorithm, we have devised the following illustrations. The Arabidopsis data set consists of genotypes of $199$ inbred lines along with observations on $107$ traits. Here we report results for 50 of these traits. For each trait, first a test sample of size $n_{Test}=50$ was identified. From the remaining genotypes, $n_{Train}=25,50,80$ individuals were selected into the training population by random sampling or by the optimization method described in the previous section. The accuracies of the models were calculated by comparing the GEBVs with the observed phenotypes. This was repeated 30 times, and the results are summarized in Figure \[fig:Arabidopsis\]. At all sample sizes and for the vast majority of the traits, the optimized samples improve accuracies as compared to the random samples. The difference is in general larger for smaller sample sizes and seems to decrease as the sample size increases. The accuracies of genomic selection models tend to decrease as the training and test populations diverge.
We claim that this can be partially remedied by optimizing training populations for the target population where the estimates are needed. The results from the next examples justify this claim. 5087 markers for 3975 elite wheat lines in the National Small Grains Collection (NSGC) were used for this example. In this experiment, the thousand kernel weights were observed for non-overlapping subsets of the genotypes over five years (108 genotypes in year 2005, 416 in 2006, 281 in 2007, 1358 in 2008 and 1896 in 2009). We want to obtain the GEBVs of the genotypes for each of the years 2007 to 2009 from the genotypes that were observed before that year. The GEBVs for a random sample of $n_{Test}=200$ genotypes in the current year are estimated using first a random sample and then an optimized sample of size $n_{Train}=100,300$ genotypes and phenotypes from the years preceding the test year. The experiment was repeated 30 times, and the results are summarized with the box plots in Figure \[fig:WheatDataFiveYears\]. The results are similar: models from optimized samples outperform the models from random samples of the same size, and this difference decreases as the training sample size increases. In the next example, we use a highly structured population and apply our population selection method in two different scenarios. A diverse collection of 395 O. sativa (rice) accessions, including both land races and elite varieties, which represents the range of geographic and genetic diversity of the species, was used in this example. In addition to measurements for 36 continuous traits, genetic data on 40K SNPs were available for these 395 accessions. This data was first presented in [@zhao2011genome] and was also analyzed in [@wimmer2013genome]. We have selected six of these traits for our analysis, namely florets per panicle (FP), panicle fertility, seed length (SL), seed weight (SW), seed surface area (SSA), and straighthead susceptibility (SHS).
For each of these traits, a different subset of genotypes had the trait values. In the first scenario, for each trait, first a test sample of size $n_{Test}=100$ was identified. From the remaining genotypes, $n_{Train}=25,50,100$ individuals were selected into the training population by random sampling or by the optimization method described in the previous section. The accuracies of the models were calculated by comparing the GEBVs with the observed phenotypes. This was repeated 30 times, and the results are summarized in Figure \[fig:Rice1\]. Our last example evaluates the ability to estimate across clusters in a highly structured maize data set. This data is given in [@romay2013comprehensive] and was also analyzed in [@wimmer2013genome]. The data set comprises 68120 markers on 2279 USA national inbred maize lines and their phenotypic means for degree days to silking. We first clustered the data into five clusters using the Euclidean distance matrix and Ward’s criterion for hierarchical clustering. The numbers of individuals in the resulting clusters were 1317 genotypes in the first cluster, 184 in the second, 552 in the third, 95 in the fourth, and 131 in the fifth. From each of these clusters, a test data set of size $n_{Test}=50$ was selected at random, and a training population of $n_{Train}=50, 100, 200$ genotypes from the remaining clusters was selected by random sampling or with the optimization scheme recommended in this article. The accuracies for estimating the observed trait values in each of these clusters were calculated for 30 independent replications, and they are summarized in Figure \[fig:AmesData\]. Once again, the optimized training sets outperform the random samples of the same size. Conclusions {#conclusions .unnumbered} =========== In this article we have taken on the training set selection problem and have shown by examples that incorporating information about the test set, when available, can improve the accuracies of prediction models.
The approach we developed here is also computationally efficient. As seen from the examples in the previous section, the accuracy of the prediction models can be improved if the genotypes in the training population are selected using our scheme, especially when the required training sample size is small. By preventing irrelevant, outlier, or influential individuals from entering the model, and by ensuring a diverse training data set that adequately represents the test data set, optimized training populations attain highly accurate models even when the training and test sets are not sampled from the same populations. In the examples in the previous section, we have selected the training populations separately for each trait. This was mainly because a different subset of genotypes was observed for each trait in the data sets. In practice, however, it would be satisfactory to select a single training population for all the traits with similar heritabilities, because in a real setting phenotyping will follow this step and the procedure is robust to the choice of the shrinkage parameter $\lambda.$ We have discussed the training population problem in the context of the regression of continuous traits on the genotypes based on SPMMs. However, this approach can be used to obtain more accurate prediction models in different domains, i.e., in the general statistical learning domain. Our methods are useful for all high dimensional prediction problems where the per-individual cost of observing / analyzing the response variable is too high, so that a small number of training examples is sought, and where the candidate data set is not representative of the test data set. Our results also indicate that the genetic algorithm scheme adopted in this article is very efficient at finding a good solution to the training population selection problem. However, there is no guarantee that the solutions found by this algorithm are globally optimal.
Since the purpose of the article was to evaluate the overall improvement over many replications of the same experiments, it was not feasible for us to start the genetic algorithm at different starting points, but when it is affordable it would be safer to do so. A dynamic model building approach might be more suitable when the genotypes in the test set are highly structured. It might be possible to improve accuracies by using a different model for different parts of the test set, built on only the genotypes selected by the training population selection algorithm. Another approach we have not tried, but which is worth additional inquiry, is to estimate each test point with a different model. Competing interests {#competing-interests .unnumbered} =================== The authors declare that they have no competing interests. Author’s contributions {#authors-contributions .unnumbered} ====================== Deniz Akdemir (Corresponding Author): Idea, text & programs.\ Acknowledgments {#acknowledgments .unnumbered} =============== This research was supported by the USDA-NIFA-AFRI Triticeae Coordinated Agricultural Project, award number 2011-68002-30029. Figures {#figures .unnumbered} ======= ![The difference between the accuracies of the models trained on optimized populations versus random samples. Positive values indicate the cases where the optimized population performed better than the random sample. The median accuracies of the optimized sample for the traits are also given by the corresponding bar.[]{data-label="fig:Arabidopsis"}](ArabidopsisFigure.eps){width=".8"} ![The comparisons of the mean accuracies (measured by correlation) when the test data set is selected from years 2007 through 2009 for different training sample sizes.
For each of these cases the training set was selected from the genotypes in the years preceding the test year.[]{data-label="fig:WheatDataFiveYears"}](WheatDataOverYears.eps){width=".8"} ![The comparisons of mean accuracies (measured by correlation) for the traits florets per panicle (FP), panicle fertility, seed length (SL), seed weight (SW), seed surface area (SSA), and straighthead susceptibility (SHS) for different training sample sizes. Optimized samples outperform random samples almost exclusively.[]{data-label="fig:Rice1"}](RICEDATAALLTRAITS.eps){width=".8"} ![The comparisons of the accuracies for prediction across clusters in the highly structured maize data set. A test data set of size $n_{Test}=50$ was selected at random in a particular cluster, and a training population of $n_{Train}=50, 100, 200$ genotypes was selected from the remaining clusters. The accuracies vary significantly from cluster to cluster; however, the optimized training set performs better on average.[]{data-label="fig:AmesData"}](AMESDATACLUSTERED.eps){width=".8"} Additional Files {#additional-files .unnumbered} ================ Additional file — R programs and the data sets. {#additional-file-r-programs-and-the-data-sets. .unnumbered} ------------------------------------------------ trainingpopulationselection.zip
--- abstract: 'The billiard dynamics inside an ellipse is integrable. It has zero topological entropy, four separatrices in the phase space, and a continuous family of convex caustics: the confocal ellipses. We prove that the curvature flow destroys the integrability, increases the topological entropy, splits the separatrices in a transverse way, and breaks all resonant convex caustics.' address: - | Departamento de Matemática\ Universidade Federal de Ouro Preto\ 35.400–000, Ouro Preto, Brazil - | Departamento de Matemática\ ICEx, Universidade Federal de Minas Gerais\ 30.123–970, Belo Horizonte, Brazil - | Departament de Matemàtiques\ Universitat Politècnica de Catalunya\ Diagonal 647, 08028 Barcelona, Spain author: - Josué Damasceno - 'Mario J. Dias Carneiro' - 'Rafael Ramírez-Ros' title: The billiard inside an ellipse deformed by the curvature flow --- [^1] Introduction ============ One can shorten a smooth plane curve by moving it in the direction of its normal vector at a speed given by its curvature. This evolution generates a flow (called *curvature flow* or *curve shortening flow*) in the space of smooth plane curves that coincides with the negative $L^2$-gradient flow of the length of the curve. That is, the curve is shrinking as fast as it can using only local information. M. Gage and R. Hamilton [@GageHamilton1986] described the long time behavior of smooth convex plane curves under the curvature flow. They proved that convex curves stay convex and shrink to a point as they become more circular. This convergence to a “limit” circle takes place in the $C^\infty$-norm after a suitable normalization. M. Grayson proved that any embedded planar curve becomes convex before it shrinks to a point [@Grayson1987]. The length, the enclosed area, the maximal curvature, the number of inflection points, and other geometric quantities never increase along the curvature flow [@ChouZhu2001]. 
On the contrary, we present an example of how the curvature flow can increase the topological entropy of the billiard dynamics inside convex curves. The *topological entropy* of a dynamical system is a nonnegative extended real number that is a measure of the complexity of the system [@KatokH1995]. To be precise, the topological entropy represents the exponential growth rate of the number of distinguishable orbits as the system evolves. Therefore, increasing entropy means a more complex billiard dynamics, which is a bit surprising since the curvature flow rounds any convex smooth curve and circles are the curves with the simplest billiard dynamics. Birkhoff [@Birkhoff1927] introduced the problem of *convex billiard tables* almost 90 years ago as a way to describe the motion of a free particle inside a closed convex smooth curve. The particle is reflected at the boundary according to the law “angle of incidence equals angle of reflection”. If the boundary is an ellipse, then the billiard dynamics is *integrable* [@ChangFriedberg1988; @KozlovTreshchev1991; @Tabachnikov1995]. In particular, billiards inside ellipses have zero topological entropy. The motion along the major axis of the ellipse corresponds to a hyperbolic two-periodic orbit whose unstable and stable invariant curves coincide, forming four *separatrices*. The points on these separatrices correspond to the billiard trajectories passing through the foci of the ellipse. The interior of an ellipse is foliated with a continuous family of convex caustics: its confocal ellipses. A *caustic* is a curve inside the billiard table with the property that a billiard trajectory, once tangent to it, stays tangent after every reflection. 
Caustics with Diophantine rotation numbers persist under small smooth perturbations of the boundary [@Lazutkin1973], but *resonant caustics* —the ones whose tangent trajectories are closed polygons, so that their rotation numbers are rational— are fragile structures that generically break up [@RamirezRos2006; @PintodeCarvalhoRamirezRos2013]. All these dynamical and geometric manifestations of the integrability of billiards inside ellipses disappear when the ellipse is slightly deformed by the curvature flow. \[thm:MainTheorem\] The curvature flow breaks all resonant convex caustics, splits the separatrices in a transverse way, increases the topological entropy, and destroys the integrability of the billiard inside an ellipse. The proof of this theorem has two steps. First, we introduce the subharmonic and homoclinic Melnikov potentials associated to the perturbation of the ellipse under the curvature flow following the theory developed in [@DelshamsRamirez1996; @DelshamsRamirez1997; @RamirezRos2006; @PintodeCarvalhoRamirezRos2013]. In order to study these Melnikov potentials, we need several explicit formulas for the unperturbed billiard dynamics that can be found in [@ChangFriedberg1988; @DelshamsRamirez1996]. Second, we check that none of these Melnikov potentials is constant, which implies that the separatrices split and all resonant convex caustics break up. The loss of integrability follows directly from a theorem of Cushman [@Cushman1978], whereas the increase of the topological entropy follows from a theorem of Burns and Weiss [@BurnsWeiss1995]. We also find all the critical points of the Melnikov potentials, so we can locate all primary homoclinic points and all Birkhoff periodic trajectories, at least for small enough perturbations. Finally, we relate the homoclinic Melnikov potential to the limit of the subharmonic Melnikov potential when the resonant caustic tends to the separatrices. 
To the best of our knowledge, this is the first time that such a relation has been explicitly established in a discrete system. Similar relations in continuous systems (that is, for ODEs) have been known since the eighties, see [@GuckemheimerHolmes1989 §4.6]. Our perturbed ellipses are static; we do not deal with time-dependent billiards. This paper is strongly inspired by Dan Jane’s example [@Jane2007] of a Riemannian surface for which the Ricci flow increases the topological entropy of the geodesic flow. His example is also based on a Melnikov computation, although the final step of his argument requires the numerical evaluation of some Melnikov function. In contrast, our result is purely analytic, since we characterize our Melnikov potentials in a quite explicit way using the theory of elliptic functions. We complete this introduction with a note on the organization of the article. In Section \[sec:BilliardEllipse\] we review some known results concerning billiards inside ellipses. The first order deformation of the ellipse under the curvature flow is given in Section \[sec:EllipseCurvatureFlow\]. We review the Melnikov theory for area preserving twist maps in the framework of billiards inside perturbed ellipses in Section \[sec:MelnikovPotentials\]. Finally, we check that these Melnikov potentials are not constant by analyzing their complex singularities in Section \[sec:Computations\]. The billiard inside an ellipse {#sec:BilliardEllipse} ============================== We consider the billiard dynamics inside the unperturbed ellipse $$\label{eq:Ellipse} Q_0 = \left\{ (x,y) \in {\mathbb{R}}^2 : x^2/a^2 + y^2/b^2 = 1 \right\},\qquad 0 < b < a.$$ Let $c = \sqrt{a^2-b^2}$ be the semi-focal distance of $Q_0$, so the foci of $Q_0$ are the points $(\pm c,0)$. We recall a geometric property of ellipses [@Tabachnikov1995].
Let $$C_\lambda = \left\{ (x,y) \in {\mathbb{R}}^2 : \frac{x^2}{a^2-\lambda^2} + \frac{y^2}{b^2-\lambda^2} = 1 \right\},\qquad \lambda \not \in \{a, b\},$$ be the family of *confocal conics* to the ellipse $Q_0$. It is clear that $C_\lambda$ is an ellipse for $0 < \lambda < b$ and a hyperbola for $b < \lambda < a$. No real conic exists for $\lambda > a$. The fundamental property of the billiard inside $Q_0$ is that any segment (or its prolongation) of a billiard trajectory is tangent to $C_\lambda$ for some fixed caustic parameter $\lambda > 0$. The notion of tangency in the degenerate case $\lambda = b$ is the following. A line is tangent to $C_b$ when it passes alternately through the foci. We refer to [@AbramowitzS72; @WhittakerW27] for a general background on Jacobian elliptic functions. Let us recall some basic facts about them. Given a quantity $k\in(0,1)$, called the *modulus*, the *complete elliptic integral of the first kind* is $$K = K(k) = \int_{0}^{\pi/2}(1-k^2 \sin^2 \phi)^{-1/2} {{\rm d}}\phi.$$ We also write $K' = K'(k) = K(\sqrt{1-k^2})$. The *amplitude* function $\varphi = \operatorname{am}t = \operatorname{am}(t,k)$ is defined through the inversion of the integral $$t = \int_{0}^{\varphi}(1-k^{2} \sin^2 \phi)^{-1/2}{{\rm d}}\phi.$$ The *elliptic sine* and the *elliptic cosine* are defined by the trigonometric relations $$\label{eq:PeriodicVariable} \operatorname{sn}t = \operatorname{sn}(t,k) = \sin \varphi,\qquad \operatorname{cn}t = \operatorname{cn}(t,k) = \cos \varphi.$$ If the angular variable $\varphi$ changes by $2\pi$, then the angular variable $t$ changes by $4K$. Thus, any $2\pi$-periodic function in $\varphi$ becomes a $4K$-periodic function in $t$. We will usually denote the functions in $t$ by putting a tilde above the name of the function in $\varphi$.
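These definitions are easy to experiment with numerically. As a sanity check (our own sketch, not part of the paper's argument), one can compare the defining integral of $K(k)$ with the classical arithmetic–geometric mean identity $K(k) = \pi/(2\operatorname{AGM}(1,\sqrt{1-k^2}))$:

```python
import math

def K_quadrature(k, n=20000):
    # direct midpoint-rule evaluation of the defining integral of K(k)
    h = (math.pi / 2) / n
    return h * sum(1.0 / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(n))

def K_agm(k):
    # classical identity: K(k) = pi / (2 * AGM(1, k')), with k' = sqrt(1 - k^2)
    x, y = 1.0, math.sqrt(1.0 - k * k)
    while abs(x - y) > 1e-15:
        x, y = 0.5 * (x + y), math.sqrt(x * y)
    return math.pi / (2.0 * x)

assert abs(K_agm(0.0) - math.pi / 2) < 1e-12     # K(0) = pi/2, as used later
for k in (0.3, 0.6, 0.9):
    assert abs(K_quadrature(k) - K_agm(k)) < 1e-6
```

The AGM iteration converges quadratically, which makes it convenient for high-precision evaluation of complete elliptic integrals.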
For instance, the $4K$-periodic parameterization of the ellipse $$\label{eq:SubharmonicParameterization} \tilde{q}_0: {\mathbb{R}}\to Q_0,\qquad \tilde{q}_0(t) = (a\operatorname{sn}t, b\operatorname{cn}t),$$ is obtained from the $2\pi$-periodic parameterization $$\label{eq:ClassicalParameterization} q_0:{\mathbb{R}}\to Q_0,\qquad q_0(\varphi) = (a\sin\varphi, b\cos\varphi).$$ Clearly, $\tilde{q}_0(t) = q_0(\varphi)$. The billiard dynamics associated to the convex caustic $C_\lambda$ becomes a rigid rotation $t \mapsto t + \delta$ in the angular variable $t$. It suffices to find the modulus $k$ and shift $\delta$ associated to each convex caustic $C_\lambda$. \[lem:ChangFriedberg\] Fix a caustic parameter $\lambda \in (0,b)$, and define the modulus $k \in (0,1)$ and the shift $\delta \in (0,2K)$ by the formulas $$\label{eq:ModulusShift} k^2 = (a^2-b^2)/(a^2 - \lambda^2),\qquad \operatorname{sn}(\delta/2) = \lambda/b.$$ Then the segment joining $\tilde{q}_0(t)$ and $\tilde{q}_0(t+\delta)$ is tangent to the caustic $C_\lambda$ for all $t \in {\mathbb{R}}$. Let $m$ and $n$ be two relatively prime integers such that $1 \le m < n/2$. Let $\rho(\lambda)$ be the rotation number of the convex caustic $C_\lambda$. We want to characterize the convex caustic $C_\lambda$ whose tangent billiard trajectories form closed polygons with $n$ sides that make $m$ turns inside $Q_0$ or, equivalently, the caustic parameter $\lambda \in (0,b)$ such that $\rho(\lambda) = m/n$. Such a caustic parameter is unique because $\rho:(0,b) \to {\mathbb{R}}$ is an increasing analytic function such that $\rho(0) = 0$ and $\rho(b) = 1/2$, see [@CasasRamirez2010]. Any $(m,n)$-periodic billiard trajectory gives rise to an $(n-m,n)$-periodic one by inverting the direction of motion. Hence, a convex caustic is $(m,n)$-resonant if and only if it is also $(n-m,n)$-resonant. This explains why we can assume that $m < n/2$.
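Lemma \[lem:ChangFriedberg\] and the monotonicity of the rotation number can be verified numerically. In the sketch below (our own illustration; the semi-axes $a=2$, $b=1$ and the caustic parameter $\lambda=0.5$ are our choices) the elliptic functions are computed by elementary quadrature and ODE integration, the chord from $\tilde{q}_0(t)$ to $\tilde{q}_0(t+\delta)$ is checked to be tangent to $C_\lambda$, and the rotation number, identified with $\delta/(4K)$ as suggested by the rigid-rotation picture, is checked to be increasing in $\lambda$:

```python
import math

a, b, lam = 2.0, 1.0, 0.5                       # illustrative choices
k = math.sqrt((a*a - b*b) / (a*a - lam*lam))    # modulus, eq. (ModulusShift)

def F(phi, k, n=4000):
    # incomplete elliptic integral of the first kind (midpoint rule)
    h = phi / n
    return h * sum(1.0 / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(n))

def am(t, k, n=4000):
    # amplitude: RK4 integration of dphi/dt = sqrt(1 - k^2 sin^2 phi), phi(0)=0
    f = lambda p: math.sqrt(1.0 - (k * math.sin(p)) ** 2)
    h, phi = t / n, 0.0
    for _ in range(n):
        k1 = f(phi); k2 = f(phi + h*k1/2); k3 = f(phi + h*k2/2); k4 = f(phi + h*k3)
        phi += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return phi

delta = 2.0 * F(math.asin(lam / b), k)          # shift: sn(delta/2) = lam/b

def tangency_defect(t):
    # chord from q0(t) to q0(t+delta) written as A x + B y = C; it is tangent
    # to C_lam: x^2/p + y^2/q = 1 exactly when p A^2 + q B^2 = C^2
    phi1, phi2 = am(t, k), am(t + delta, k)
    x1, y1 = a * math.sin(phi1), b * math.cos(phi1)
    x2, y2 = a * math.sin(phi2), b * math.cos(phi2)
    A, B, C = y2 - y1, x1 - x2, x1 * y2 - x2 * y1
    p, q = a*a - lam*lam, b*b - lam*lam
    return p*A*A + q*B*B - C*C

assert all(abs(tangency_defect(t)) < 1e-6 for t in (0.0, 0.3, 0.9, 1.7))

def rho(l):
    # rotation number of C_l: delta / (4K), cf. the resonant condition below
    k_ = math.sqrt((a*a - b*b) / (a*a - l*l))
    return 2.0 * F(math.asin(l / b), k_) / (4.0 * F(math.pi / 2, k_))

vals = [rho(0.1 * i) for i in range(1, 10)]
assert all(u < v for u, v in zip(vals, vals[1:]))   # rho is increasing
```

The tangency defect vanishes up to the quadrature and integration errors, in agreement with the lemma.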
The caustic $C_\lambda$ is the $(m,n)$-resonant convex caustic if and only if $$\label{eq:ResonantCondition} n \delta = 4 K m.$$ This identity has the following geometric interpretation. When a billiard trajectory makes one turn around $C_\lambda$, the old angular variable $\varphi$ changes by $2\pi$, so the new angular variable $t$ changes by $4K$. On the other hand, we have seen that the variable $t$ changes by $\delta$ when a billiard trajectory bounces once. Hence, a billiard trajectory inscribed in $Q_0$ and circumscribed around $C_\lambda$ makes exactly $m$ turns around $C_\lambda$ after $n$ bounces if and only if (\[eq:ResonantCondition\]) holds. From now on, $k$ and $\delta$ will denote the modulus and the shift defined in (\[eq:ModulusShift\]). We will also assume that relation (\[eq:ResonantCondition\]) holds, since we only deal with resonant caustics. We will skip the dependence of the Jacobian elliptic functions on the modulus. The billiard dynamics through the foci of the ellipse can also be simplified by using a suitable variable $s \in {\mathbb{R}}$. If a billiard trajectory passes alternately through the foci, its segments tend to the major axis of the ellipse both in future and past. We consider the change of variables $(-\pi/2,\pi/2) \ni \varphi \mapsto s \in {\mathbb{R}}$ given by $$\label{eq:NonperiodicVariable} \tanh s = \sin \varphi,\qquad \operatorname{sech}s = \cos \varphi,$$ in order to give explicit formulas for this dynamics. If $\varphi$ moves from $-\pi/2$ to $\pi/2$, then $s$ moves from $-\infty$ to $+\infty$. Thus, any $2\pi$-periodic function in $\varphi$ generates a non-periodic function in $s$. We will usually denote the function in $s$ by putting a hat above the name of the function in $\varphi$.
For instance, the parametrization of the upper semi-ellipse $Q_0^+ = Q_0 \cap \{ y > 0\}$ given by $$\label{eq:HomoclinicParametrization} \hat{q}_0: {\mathbb{R}}\to Q_0^+,\qquad \hat{q}_0(s) = (a \tanh s, b \operatorname{sech}s),$$ is obtained from parameterization (\[eq:ClassicalParameterization\]). Clearly, $\hat{q}_0(s) = q_0(\varphi)$. The billiard dynamics through the foci becomes a constant shift $s \mapsto s+h$ in the variable $s \in {\mathbb{R}}$ for a suitable shift $h > 0$. \[lem:DelshamsRamirez\] Fix the semi-axis lengths $0 < b < a$, let $c = \sqrt{a^2 - b^2}$ be the semi-focal distance, and let $h > 0$ be the quantity determined by $$\label{eq:CharacteristicExponent} \sinh(h/2) = c/b,\qquad \cosh(h/2) = a/b, \qquad \tanh(h/2) = c/a.$$ The segment from $\hat{q}_0(s)$ to $-\hat{q}_0(s+h)$ passes through the focus $(-c,0)$ for all $s \in {\mathbb{R}}$. Note that $\lim_{s \to \pm \infty} \hat{q}_0(s) = (\pm a,0)$, which shows that the trajectories through the foci tend to bounce between the vertices of the major axis of the ellipse. It is known that these vertices form a two-periodic hyperbolic trajectory whose *characteristic exponent* is $h$. That is, the eigenvalues of the differential of the billiard map at the two-periodic hyperbolic points are $\lambda = {{\rm e}}^h$ and $\lambda^{-1} = {{\rm e}}^{-h}$. Following a standard terminology in problems of splitting of separatrices, we will say that the parameterizations (\[eq:SubharmonicParameterization\]) and (\[eq:HomoclinicParametrization\]) are *natural parameterizations* of the billiard dynamics tangent to the convex caustic $C_\lambda$ and through the foci, respectively. \[rem:One2Four\] We can associate four different billiard trajectories to each $s \in {\mathbb{R}}$.
The first two are $\big( (-1)^n\hat{q}_0(s+nh) \big)_{n \in {\mathbb{Z}}}$ and $\big( (-1)^n\hat{q}_0(s-nh) \big)_{n \in {\mathbb{Z}}}$, which have the same starting point $\hat{q}_0(s) \in Q^+_0$ but are traveled in opposite directions. The last two are their symmetric images with respect to the origin: $\big( (-1)^{n+1}\hat{q}_0(s+nh) \big)_{n \in {\mathbb{Z}}}$ and $\big( (-1)^{n+1}\hat{q}_0(s-nh) \big)_{n \in {\mathbb{Z}}}$, which start at a point on the lower semi-ellipse $Q^-_0$. Hence, there is a one-to-four correspondence between $s$ and the homoclinic billiard trajectories inside the ellipse $Q_0$. Indeed, we should consider $s$ defined modulo $h$, because $s$ and $s+h$ give rise to the same set of four homoclinic trajectories. The billiard dynamics through the foci corresponds to the caustic parameter $\lambda = b$, so it should be obtained as the limit of the billiard dynamics tangent to the convex caustic $C_\lambda$ when $\lambda \to b^-$. See Figure \[fig:AlmostFlatCaustic\]. Note that $C_\lambda$ flattens into the segment of the $x$-axis enclosed by the foci of the ellipse when $\lambda \to b^-$. We confirm this idea in the following lemma. We also stress that $\lim_{\lambda \to b^-} \delta \neq h$. This has to do with the minus sign that appears in Lemma \[lem:DelshamsRamirez\] in front of the point $\hat{q}_0(s+h)$. ![A billiard trajectory (dashed line) tangent to the ellipse $C_\lambda$ tends to a billiard trajectory (dotted line) through the foci (the two solid squares) as $\lambda \to b^-$.
The values of $t$ and $s$ are chosen in such a way that $\tilde{q}_0(t) = \hat{q}_0(s)$.[]{data-label="fig:AlmostFlatCaustic"}](AlmostFlatCaustic.eps){height="3in"} \[lem:SingularLimits\] Let $k \in (0,1)$, $K=K(k) > 0$, $K'=K'(k) = K(\sqrt{1-k^2})>0$, and $\delta \in (0,2K)$ be the modulus, the complete elliptic integral of the first kind, the complete elliptic integral of the first kind of the complementary modulus, and the constant shift associated to a convex caustic $C_\lambda$. Set $\zeta = 2K - \delta \in (0,2K)$. Then: $$\lim_{\lambda \to b^-} k = 1,\qquad \lim_{\lambda \to b^-} K = +\infty,\qquad \lim_{\lambda \to b^-} K' = \pi/2,\qquad \lim_{\lambda \to b^-} \zeta = h,$$ where $h > 0$ is the characteristic exponent defined in (\[eq:CharacteristicExponent\]). Besides, $$\lim_{\lambda \to b^-} \tilde{q}_0(t) = \hat{q}_0(t),\qquad \lim_{\lambda \to b^-} \tilde{q}_0(t \pm \delta) = -\hat{q}_0(t \mp h),$$ and both limits are uniform on compact sets of ${\mathbb{R}}$, but not on ${\mathbb{R}}$. The first limit follows from the definition $k^2 = (a^2 - b^2)/(a^2 - \lambda^2)$. We know that $\lim_{k \to 1^-} K(k) = +\infty$ and $K'(1) = K(0) = \pi/2$, which gives the second and third limits. The fourth limit is a tedious computation using properties of elliptic functions. The property $\lim_{\lambda \to b^-} \tilde{q}_0(t) = \hat{q}_0(t)$ is a direct consequence of the limits $$\lim_{k \to 1^-} \operatorname{sn}(t,k) = \tanh t,\qquad \lim_{k \to 1^-} \operatorname{cn}(t,k) = \operatorname{sech}t,$$ which can be found in [@AbramowitzS72]. Finally, $$\lim_{\lambda \to b^-} \tilde{q}_0(t \pm \delta) = -\lim_{\lambda \to b^-} \tilde{q}_0(t \pm \delta \mp 2K) = -\hat{q}_0(t \mp h),$$ where we have used that $\tilde{q}_0(t)$ is $2K$-antiperiodic and $\lim_{\lambda \to b^-} \zeta = h$. An ellipse under the curvature flow {#sec:EllipseCurvatureFlow} =================================== Let ${\mathbb{T}}= {\mathbb{R}}/2\pi{\mathbb{Z}}$.
Let $Q_0 = q_0({\mathbb{T}})$, $q_0:{\mathbb{T}}\to {\mathbb{R}}^2$, be a closed smooth embedded curve in the plane. This curve need not be an ellipse. The $t$-time curvature flow of $Q_0$ is the curve $Q_t = q_t({\mathbb{T}}) = q({\mathbb{T}};t)$ where the map $q: {\mathbb{T}}\times [0,\tau) \to {\mathbb{R}}^2$, $q=q(\varphi;t)$, satisfies the initial value problem $$\label{eq:CurvatureFlow} \frac{\partial q}{\partial t} = \kappa N,\qquad q(\cdot,0) = q_0.$$ Here, $\kappa$ and $N$ are the curvature and the unit inward normal vector, respectively. Observe that $\varphi$ is not, in general, the arc-length parameter. M. Gage and R. Hamilton [@GageHamilton1986] showed that if $Q_0$ is strictly convex, then the curvature flow is defined for $t \in [0,\tau)$, where $\tau = A_0/2\pi$ and $A_0$ is the area enclosed by $Q_0$. Besides, $Q_t$ shrinks to a point and becomes more circular as $t \to \tau^-$. Let $Q_0$ be the ellipse (\[eq:Ellipse\]). We want to study a small deformation of $Q_0$ under the curvature flow. Henceforth, in order to emphasize that we are only interested in infinitesimal deformations of $Q_0$, we will denote the infinitesimally deformed ellipse by the symbol $Q_\epsilon$, instead of $Q_t$. We consider the elliptic coordinates $(\mu,\varphi)$ associated to the ellipse $Q_0$. That is, $(\mu,\varphi)$ are defined by relations $$\label{eq:EllipticCoordinates} x = c \cosh \mu \sin \varphi,\qquad y = c \sinh \mu \cos \varphi,$$ where $c = \sqrt{a^2-b^2}$ is the semi-focal distance of $Q_0$. The ellipse $Q_0$ in these elliptic coordinates reads as $\mu \equiv \mu_0$, where $\cosh \mu_0 = a/c$ and $\sinh \mu_0 = b/c$. Therefore, the deformation $Q_\epsilon$ of the ellipse $Q_0$ can be written in elliptic coordinates as $$\label{eq:EllipticPerturbation} \mu = \mu_\epsilon(\varphi) = \mu_0 + \epsilon \mu_1(\varphi) + \operatorname{O}(\epsilon^2),$$ for some $2\pi$-periodic smooth function $\mu_\epsilon: {\mathbb{R}}\to {\mathbb{R}}$.
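The elliptic coordinates just introduced can be checked with a short computation (our own sketch; the semi-axes $a=2$, $b=1$ are an illustrative choice): $\mu \equiv \mu_0$ recovers the ellipse $Q_0$, and an infinitesimal increment of $\mu$ displaces each point along the vector $(b\sin\varphi, a\cos\varphi)$:

```python
import math

a, b = 2.0, 1.0
c = math.sqrt(a*a - b*b)
mu0 = math.asinh(b / c)              # sinh(mu0) = b/c, hence cosh(mu0) = a/c

assert abs(math.cosh(mu0) - a / c) < 1e-12

def q(mu, phi):
    # elliptic coordinates, eq. (EllipticCoordinates)
    return (c * math.cosh(mu) * math.sin(phi), c * math.sinh(mu) * math.cos(phi))

for phi in (0.0, 0.5, 1.2, 3.0):
    x, y = q(mu0, phi)
    # mu = mu0 recovers the classical parameterization of Q0
    assert abs(x - a * math.sin(phi)) < 1e-12 and abs(y - b * math.cos(phi)) < 1e-12
    # a small increment of mu moves the point along (b sin phi, a cos phi)
    eps = 1e-6
    xp, yp = q(mu0 + eps, phi)
    assert abs((xp - x) / eps - b * math.sin(phi)) < 1e-5
    assert abs((yp - y) / eps - a * math.cos(phi)) < 1e-5
```

The displacement direction $(b\sin\varphi, a\cos\varphi)$ is exactly the vector that the first-order term multiplies in the expansion of the deformed ellipse below.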
If a curve is symmetric with respect to a line, so is its curvature flow deformation, as long as it exists. Thus, the deformation $Q_\epsilon$ has the axial symmetries of the ellipse $Q_0$ with respect to both coordinate axes. This means that $\mu_\epsilon(\varphi)$ is even and $\pi$-periodic. Next, we compute the first order term of this function. That is, we compute the function $\mu_1(\varphi)$. \[lem:CurvatureFlow\] Let $Q_\epsilon$ be the deformation under the $\epsilon$-time curvature flow of the ellipse (\[eq:Ellipse\]). If we write the deformed ellipse $Q_\epsilon$ as in equation (\[eq:EllipticPerturbation\]), then $$\label{eq:mu1} \mu_1(\varphi) = \frac{-ab}{(a^2 \cos^2 \varphi + b^2 \sin^2 \varphi)^2}.$$ Let $q:{\mathbb{T}}\times [0,\tau) \to {\mathbb{R}}^2$, $q=q_t(\varphi) = q(\varphi;t)$, be the solution of the initial value problem (\[eq:CurvatureFlow\]), where $q_0(\varphi) = (a\sin\varphi,b\cos\varphi)$. On the one hand, we obtain from (\[eq:CurvatureFlow\]) that $q_\epsilon(\varphi) = q_0(\varphi) + \epsilon q_1(\varphi) + \operatorname{O}(\epsilon^2)$, where $q_1(\varphi) = \kappa_0(\varphi) N_0(\varphi)$, $$\kappa_0(\varphi) = \frac{ab}{\sqrt{(a^2 \cos^2 \varphi + b^2 \sin^2 \varphi)^3}}$$ is the curvature of the ellipse $Q_0$ at the point $q_0(\varphi)$, and $$N_0(\varphi) = \frac{-1}{\sqrt{a^2 \cos^2 \varphi + b^2 \sin^2 \varphi}} (b\sin \varphi, a\cos\varphi)$$ is the inward unit normal vector of the ellipse $Q_0$ at the point $q_0(\varphi)$.
On the other hand, we deduce from the elliptic coordinates (\[eq:EllipticCoordinates\]) that $$\begin{aligned} q_\epsilon(\varphi) & = (c \cosh \mu_\epsilon(\varphi) \sin\varphi,c\sinh \mu_\epsilon(\varphi) \cos\varphi) \\ & = (a\sin \varphi,b\cos\varphi) + \epsilon \mu_1(\varphi) (b \sin \varphi, a \cos \varphi) + \operatorname{O}(\epsilon^2).\end{aligned}$$ By combining these two results, we get that $$\frac{-ab}{(a^2 \cos^2 \varphi + b^2 \sin^2 \varphi)^2} (b \sin \varphi, a \cos\varphi) = \mu_1(\varphi) (b \sin \varphi, a \cos \varphi),$$ which implies formula (\[eq:mu1\]). Subharmonic and homoclinic Melnikov potentials {#sec:MelnikovPotentials} ============================================== Let us introduce the Melnikov potentials associated to the billiard dynamics inside a perturbed ellipse that has the form (\[eq:EllipticPerturbation\]) in the elliptic coordinates (\[eq:EllipticCoordinates\]). We do not assume now that this perturbed ellipse is obtained through the curvature flow, but we still assume that the perturbation preserves the axial symmetries of the unperturbed ellipse (\[eq:Ellipse\]). This means that $\mu_\epsilon(\varphi) = \mu_0 + \epsilon \mu_1(\varphi) + \operatorname{O}(\epsilon^2)$ is an even $\pi$-periodic smooth function. We define the Melnikov potentials in a way already adapted to our specific billiard setting and then we list their main properties. See [@PintodeCarvalhoRamirezRos2013] (respectively, [@DelshamsRamirez1996; @DelshamsRamirez1997]) for a more detailed description of subharmonic (respectively, homoclinic) Melnikov potentials and their relation with the break up of resonant invariant curves (respectively, splitting of separatrices) of area-preserving twist maps. \[def:SubharmonicMelnikovPotential\] Let $m$ and $n$ be relatively prime integers such that $1 \le m < n/2$. 
The *$(m,n)$-subharmonic Melnikov potential* for the billiard dynamics inside the perturbed ellipse (\[eq:EllipticPerturbation\]) is $$\label{eq:SubharmonicMelnikovPotential} \tilde{L}^{(m,n)}_1: {\mathbb{R}}\to {\mathbb{R}},\qquad \tilde{L}^{(m,n)}_1(t) = 2 \lambda \sum_{j=0}^{n-1} \tilde{\mu}^{(m,n)}_1(t+j \delta),$$ where $C_\lambda$ is the $(m,n)$-resonant convex caustic inside $Q_0$, the modulus $k \in (0,1)$ and the shift $\delta \in (0,2K)$ are defined in (\[eq:ModulusShift\]), and $\tilde{\mu}_1^{(m,n)}(t) = \mu_1(\varphi)$. Variables $\varphi$ and $t$ are related through the change (\[eq:PeriodicVariable\]). \[pro:SunharmonicPotential\] The Melnikov potential (\[eq:SubharmonicMelnikovPotential\]) satisfies the following properties: 1. It is an even $\zeta$-periodic smooth function, where $\zeta = 2K - \delta$; 2. It has critical points at $t = 0$ and $t = \zeta/2$; 3. If it is not constant, the caustic $C_\lambda$ does not persist under the perturbation (\[eq:EllipticPerturbation\]); 4. If it does not have degenerate critical points and $\epsilon > 0$ is small enough, then there is a one-to-one correspondence between its critical points (modulo its $\zeta$-periodicity) and the $(m,n)$-periodic Birkhoff billiard trajectories inside the deformed ellipse (\[eq:EllipticPerturbation\]). The last two claims follow directly from results contained in [@PintodeCarvalhoRamirezRos2013]. Next, we prove the first two claims. On the one hand, $\tilde{L}^{(m,n)}_1(t)$ is $2K$-periodic because $\tilde{\mu}^{(m,n)}_1(t)$ is so. On the other hand, we get from the resonant condition (\[eq:ResonantCondition\]) that $\tilde{\mu}^{(m,n)}_1(t+n\delta) = \tilde{\mu}^{(m,n)}_1(t + 4Km) = \tilde{\mu}^{(m,n)}_1(t)$, so $$\tilde{L}^{(m,n)}_1(t+\delta) = 2 \lambda \sum_{j=0}^{n-1} \tilde{\mu}^{(m,n)}_1(t + \delta + j \delta) = 2 \lambda \sum_{j=0}^{n-1} \tilde{\mu}^{(m,n)}_1(t + j \delta) = \tilde{L}^{(m,n)}_1(t).$$ This means that $\tilde{L}^{(m,n)}_1(t)$ is also $\delta$-periodic.
Hence, any linear combination with integer coefficients of the periods $2K$ and $\delta$ is also a period. We focus on the integer combination $\zeta = 2K - \delta$ due to the result presented in Lemma \[lem:SingularLimits\]. Finally, any even $\zeta$-periodic smooth function has critical points at $t = 0$ and $t = \zeta/2$. Since $\mu_\epsilon(\varphi) = \mu_0 + \epsilon \mu_1(\varphi) + \operatorname{O}(\epsilon^2)$ is even and $\pi$-periodic, we know that $$\breve{\mu}_\infty := \mu_1(-\pi/2) = \mu_1(\pi/2).$$ \[def:HomoclinicMelnikovPotential\] The *homoclinic Melnikov potential* for the billiard dynamics inside the perturbed ellipse (\[eq:EllipticPerturbation\]) is $$\label{eq:HomoclinicMelnikovPotential} \hat{L}_1: {\mathbb{R}}\to {\mathbb{R}},\qquad \hat{L}_1(s) = 2 b \sum_{j \in {\mathbb{Z}}} \hat{\mu}_1(s+j h),$$ where $\hat{\mu}_1(s) = \mu_1(\varphi) - \breve{\mu}_\infty$. Variables $\varphi$ and $s$ are related through the change (\[eq:NonperiodicVariable\]). The series $\sum_{j \in {\mathbb{Z}}} \hat{\mu}_1(s + j h)$ converges uniformly on compact subsets of ${\mathbb{R}}$ because $\lim_{s \to \pm \infty} \hat{\mu}_1(s) = \mu_1(\pm \pi/2) - \breve{\mu}_\infty = 0$ and the variable $\varphi$ tends geometrically fast to $\pm \pi/2$ when $s \to \pm \infty$. We subtract the constant $\breve{\mu}_\infty$ for this reason. \[pro:HomoclinicPotential\] The Melnikov potential (\[eq:HomoclinicMelnikovPotential\]) satisfies the following properties: 1. It is an even $h$-periodic smooth function; 2. It has critical points at $s = 0$ and $s = h/2$; 3. If it is not constant, then the separatrices of the unperturbed billiard map do not persist under the perturbation (\[eq:EllipticPerturbation\]); 4. If it does not have degenerate critical points and $\epsilon > 0$ is small enough, then there is a one-to-four correspondence between its critical points (modulo its $h$-periodicity) and the transverse primary homoclinic billiard trajectories inside the deformed ellipse (\[eq:EllipticPerturbation\]).
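For the concrete curvature-flow term $\mu_1$ of (\[eq:mu1\]), these properties can be observed numerically. The following sketch is our own illustration; the semi-axes $a=2$, $b=1$ and the truncation level of the series are our choices, not from the paper. It checks evenness, $h$-periodicity, and the critical points at $s=0$ and $s=h/2$:

```python
import math

a, b = 2.0, 1.0
c = math.sqrt(a*a - b*b)
h = 2.0 * math.asinh(c / b)          # characteristic exponent: sinh(h/2) = c/b

def mu1hat(s):
    # mu1(phi) - mu1(pi/2) written in the variable s via
    # sin(phi) = tanh(s), cos(phi) = sech(s); it decays like e^{-2|s|}
    d = a*a / math.cosh(s)**2 + b*b * math.tanh(s)**2
    return a / b**3 - a*b / d**2

def L1hat(s, N=40):
    # truncated homoclinic Melnikov potential 2b * sum_j mu1hat(s + j h)
    return 2.0 * b * sum(mu1hat(s + j * h) for j in range(-N, N + 1))

for s in (0.2, 0.7, 1.3):
    assert abs(L1hat(s) - L1hat(-s)) < 1e-9       # even
    assert abs(L1hat(s + h) - L1hat(s)) < 1e-9    # h-periodic (up to truncation)
eps = 1e-5
for s0 in (0.0, h / 2):                           # critical points
    assert abs((L1hat(s0 + eps) - L1hat(s0 - eps)) / (2 * eps)) < 1e-6
```

The fast decay of $\hat{\mu}_1$ makes the truncation error of the series entirely negligible here.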
It follows from results contained in [@DelshamsRamirez1996; @DelshamsRamirez1997]. The correspondence is one-to-four because each critical point gives rise to two different homoclinic “paths” (mirrored by the central symmetry with respect to the origin) and each “path” can be traveled in two directions. See Remark \[rem:One2Four\]. Computations with elliptic functions {#sec:Computations} ==================================== Let us assume that the perturbed ellipse (\[eq:EllipticPerturbation\]) is the $\epsilon$-time curvature flow of the ellipse (\[eq:Ellipse\]), so that the first order term $\mu_1(\varphi)$ has the form (\[eq:mu1\]). We cannot apply the result about non-persistence of resonant convex caustics established in [@PintodeCarvalhoRamirezRos2013] or the result about splitting of separatrices established in [@DelshamsRamirez1996] to this curvature flow setting, because the function (\[eq:mu1\]) is not entire in the variable $\varphi$. Nevertheless, many of the ideas developed in [@DelshamsRamirez1996; @PintodeCarvalhoRamirezRos2013] are still useful. Let $\tilde{\mu}^{(m,n)}_1: {\mathbb{R}}\to {\mathbb{R}}$ be the function defined by $\tilde{\mu}^{(m,n)}_1(t) = \mu_1(\varphi)$, so $$\label{eq:tildemu1} \tilde{\mu}^{(m,n)}_1(t) = \frac{-ab}{(a^2 \operatorname{cn}^2 t + b^2 \operatorname{sn}^2 t)^2}.$$ Here, $C_\lambda$ is the $(m,n)$-resonant convex caustic inside $Q_0$, the modulus $k \in (0,1)$ and the shift $\delta \in (0,2K)$ are defined in (\[eq:ModulusShift\]), and variables $\varphi$ and $t$ are related through the change (\[eq:PeriodicVariable\]). We skip the dependence of the Jacobian elliptic functions on the modulus $k$.
Analogously, let $\hat{\mu}_1:{\mathbb{R}}\to {\mathbb{R}}$ be the function defined by $\hat{\mu}_1(s) = \mu_1(\varphi) - \breve{\mu}_\infty$, so $$\label{eq:hatmu1} \hat{\mu}_1(s) = \frac{a}{b^3} - \frac{ab}{(a^2 \operatorname{sech}^2 s + b^2 \tanh^2 s)^2}.$$ The key observation in what follows is that (\[eq:tildemu1\]) can be analytically extended to an elliptic function defined over ${\mathbb{C}}$, whereas (\[eq:hatmu1\]) can be analytically extended to a meromorphic function over ${\mathbb{C}}$. We list below the main properties of these extensions. \[lem:tildemu1\] Let $m$ and $n$ be two relatively prime integers such that $1 \le m < n/2$. Let $C_\lambda$ be the $(m,n)$-resonant elliptical caustic of the ellipse $Q_0$. Let $\delta \in (0,2K)$ be the shift defined by $\operatorname{sn}(\delta/2) = \lambda/b$, so relation (\[eq:ResonantCondition\]) holds. Set $\zeta = 2K-\delta \in (0,2K)$. The function (\[eq:tildemu1\]) is an even elliptic function of order four with periods $2K$ and $2K'{\mskip2mu{\rm i}\mskip1mu}$ and double poles in the set $$T = T_- \cup T_+, \qquad T_\pm = t_\pm + 2K {\mathbb{Z}}+ 2K'{\mskip2mu{\rm i}\mskip1mu}{\mathbb{Z}},\qquad t_\pm = \pm \zeta/2 + K'{\mskip2mu{\rm i}\mskip1mu}.$$ It has no other poles. There exist two Laurent coefficients $\alpha_2,\alpha_1 \in {\mathbb{C}}$, with $\alpha_2 \neq 0$, such that $$\tilde{\mu}^{(m,n)}_1(t_\pm + \tau) = \frac{\alpha_2}{\tau^2} \pm \frac{\alpha_1}{\tau} + \operatorname{O}(1),\qquad \tau \to 0.$$ We know that the square of the elliptic cosine is an even elliptic function of order two and periods $2K$ and $2K'{\mskip2mu{\rm i}\mskip1mu}$. Thus, the function $$f(t) = a^2 \operatorname{cn}^2 t + b^2 \operatorname{sn}^2 t = b^2 + (a^2-b^2) \operatorname{cn}^2 t$$ has the same properties. (We have used the identity $\operatorname{sn}^2 + \operatorname{cn}^2 \equiv 1$.)
Hence, the function $f(t)$ has exactly two roots (counted with multiplicity) in the complex cell $$C = \left \{ t \in {\mathbb{C}}: -K \le \Re t < K,\ 0 \le \Im t < 2K' \right\}.$$ Let us find them. On the one hand, the values of the Jacobian elliptic functions at $t = K$ are $$\operatorname{sn}K = 1, \qquad \operatorname{cn}K = 0,\qquad \operatorname{dn}K = \sqrt{1-k^2} = \sqrt{(b^2 - \lambda^2)/(a^2 - \lambda^2)}.$$ On the other hand, the values of the Jacobian elliptic functions at $t = \delta/2$ are $\operatorname{sn}(\delta/2) = \lambda/b$, $$\operatorname{cn}(\delta/2) = b^{-1}\sqrt{b^2-\lambda^2},\qquad \operatorname{dn}(\delta/2) = a b^{-1} \sqrt{(b^2 - \lambda^2)/(a^2 - \lambda^2)}.$$ Therefore, the addition formula for the elliptic sine implies that $$\operatorname{sn}(\zeta/2) = \operatorname{sn}(K - \delta/2) = \frac{\operatorname{sn}K \operatorname{cn}(\delta/2) \operatorname{dn}(\delta/2) - \operatorname{sn}(\delta/2) \operatorname{cn}K \operatorname{dn}K} {1-k^2 \operatorname{sn}^2 K \operatorname{sn}^2(\delta/2)} = \sqrt{a^2 - \lambda^2}/a.$$ Next, we check that the function $f(t)$ vanishes at the points $t = t_\pm$: $$\begin{aligned} f(t_\pm) & = b^2 + (a^2-b^2) \operatorname{cn}^2(\pm \zeta/2 + K'{\mskip2mu{\rm i}\mskip1mu}) \\ & = b^2 + (a^2-b^2) \left(1 - k^{-2} \operatorname{sn}^{-2}(\pm \zeta/2) \right) = 0.\end{aligned}$$ We note that $t_\pm = \pm \zeta/2 + K'{\mskip2mu{\rm i}\mskip1mu}\in C$, so these are the two roots we were looking for and, in addition, they are simple roots. From the parity and periodicity of $f(t)$, we also deduce that $$f'(t_-) = f'(-\zeta/2 + K'{\mskip2mu{\rm i}\mskip1mu}) = -f'(\zeta/2 - K'{\mskip2mu{\rm i}\mskip1mu}) = -f'(\zeta/2 + K'{\mskip2mu{\rm i}\mskip1mu}) = -f'(t_+).$$ Finally, all the properties of the function (\[eq:tildemu1\]) follow directly from the fact that $\tilde{\mu}^{(m,n)}_1 = -ab/f^2$. It suffices to take $\alpha_2 = -ab/(f'(t_+))^2 = -ab/(f'(t_-))^2 \neq 0$. 
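The value $\operatorname{sn}(\zeta/2)$ obtained from the addition formula can be cross-checked numerically. In the sketch below (our own illustration; the parameters $a=2$, $b=1$, $\lambda=0.5$ are illustrative, and $K$ is evaluated through the arithmetic–geometric mean):

```python
import math

a, b, lam = 2.0, 1.0, 0.5                       # our illustrative choice
k = math.sqrt((a*a - b*b) / (a*a - lam*lam))    # modulus from eq. (ModulusShift)

def K_agm(k):
    # complete elliptic integral of the first kind via the AGM
    x, y = 1.0, math.sqrt(1.0 - k * k)
    while abs(x - y) > 1e-15:
        x, y = 0.5 * (x + y), math.sqrt(x * y)
    return math.pi / (2.0 * x)

def F(phi, n=20000):
    # incomplete elliptic integral of the first kind (midpoint rule)
    h = phi / n
    return h * sum(1.0 / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(n))

def sn(t, n=20000):
    # elliptic sine: RK4 on dphi/dt = sqrt(1 - k^2 sin^2 phi), then sn t = sin phi
    f = lambda p: math.sqrt(1.0 - (k * math.sin(p)) ** 2)
    h, phi = t / n, 0.0
    for _ in range(n):
        k1 = f(phi); k2 = f(phi + h*k1/2); k3 = f(phi + h*k2/2); k4 = f(phi + h*k3)
        phi += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return math.sin(phi)

K = K_agm(k)
delta = 2.0 * F(math.asin(lam / b))   # shift: sn(delta/2) = lam/b
zeta = 2.0 * K - delta
# the value obtained in the proof: sn(zeta/2) = sqrt(a^2 - lam^2)/a
assert abs(sn(zeta / 2) - math.sqrt(a*a - lam*lam) / a) < 1e-6
```

The agreement confirms the addition-formula computation used to locate the roots $t_\pm$ of $f$.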
\[lem:hatmu1\] Let $h > 0$ be the characteristic exponent (\[eq:CharacteristicExponent\]). The function (\[eq:hatmu1\]) is an even meromorphic $\pi{\mskip2mu{\rm i}\mskip1mu}$-periodic function with double poles in the set $$S = S_- \cup S_+, \qquad S_\pm = s_\pm + \pi{\mskip2mu{\rm i}\mskip1mu}{\mathbb{Z}},\qquad s_\pm = \pm h/2 + \pi{\mskip2mu{\rm i}\mskip1mu}/2.$$ It has no other poles. There exist two Laurent coefficients $\beta_2,\beta_1 \in {\mathbb{C}}$, with $\beta_2 \neq 0$, such that $$\hat{\mu}_1(s_\pm + \sigma) = \frac{\beta_2}{\sigma^2} \pm \frac{\beta_1}{\sigma} + \operatorname{O}(1),\qquad \sigma \to 0.$$ The square of the hyperbolic secant is an even meromorphic $\pi{\mskip2mu{\rm i}\mskip1mu}$-periodic function. Thus, the function $$g(s) = a^2 \operatorname{sech}^2 s + b^2 \tanh^2 s = b^2 + c^2 \operatorname{sech}^2 s$$ has the same properties. We have used the identities $\operatorname{sech}^2 + \tanh^2 \equiv 1$ and $c^2 = a^2 - b^2$. Next, we look for all the roots of $g(s)$. We note that $$g(s) = 0 \Leftrightarrow \cosh^2 s = - c^2/b^2 = -\sinh^2(h/2) = \cosh^2(h/2 + \pi{\mskip2mu{\rm i}\mskip1mu}/2) \Leftrightarrow s \in S.$$ We have used that $\cosh^2 s = \cosh^2 r$ if and only if $s - r \in \pi{\mskip2mu{\rm i}\mskip1mu}{\mathbb{Z}}$ or $s + r \in \pi{\mskip2mu{\rm i}\mskip1mu}{\mathbb{Z}}$. These roots are simple. In fact, if $s_* \in S$, then $\cosh^2 s_* = -c^2/b^2$ and $\sinh^2 s_* = - a^2/b^2$, so $g(s_*) = 0$ and $g'(s_*) = -2c^2 \sinh s_* /\cosh^3 s_* \neq 0$. From the parity and periodicity of $g(s)$, we deduce that $g'(s_-) = -g'(h/2-\pi{\mskip2mu{\rm i}\mskip1mu}/2) = -g'(s_+)$. Finally, all the properties of $\hat{\mu}_1(s)$ follow directly from the fact that $\hat{\mu}_1 = a/b^3 -ab/g^2$. It suffices to take $\beta_2 = -ab/(g'(s_+))^2 = -ab/(g'(s_-))^2 \neq 0$. \[pro:SubharmonicNotConstant\] Let $\alpha_2 \neq 0$ be the dominant Laurent coefficient introduced in Lemma \[lem:tildemu1\].
The $(m,n)$-subharmonic Melnikov potential $$\tilde{L}^{(m,n)}_1(t) = 2 \lambda \sum_{j=0}^{n-1} \tilde{\mu}^{(m,n)}_1(t+j \delta),\qquad \tilde{\mu}^{(m,n)}_1(t) = \frac{-ab}{(a^2 \operatorname{cn}^2 t + b^2 \operatorname{sn}^2 t)^2},$$ is an even elliptic function of order two with periods $\zeta$ and $2K' {\mskip2mu{\rm i}\mskip1mu}$, poles in the set $$\label{eq:PolesMelnikovPotential} \mathcal{T} = t_\star + \zeta {\mathbb{Z}}+ 2K'{\mskip2mu{\rm i}\mskip1mu}{\mathbb{Z}},\qquad t_\star = \zeta/2 + K'{\mskip2mu{\rm i}\mskip1mu},$$ and principal parts $$\tilde{L}^{(m,n)}_1(t_\star + \tau) = \left\{ \begin{array}{ll} 4\lambda \alpha_2 \tau^{-2} + \operatorname{O}(1) \mbox{ as $\tau \to 0$}, & \mbox{ if $n$ is odd}, \\ 8\lambda \alpha_2 \tau^{-2} + \operatorname{O}(1) \mbox{ as $\tau \to 0$}, & \mbox{ if $n$ is even}. \end{array} \right.$$ In particular, it is not constant. Besides, its only real critical points are the points of the set $\zeta {\mathbb{Z}}/2$, and all of them are nondegenerate. We skip the dependence of $\tilde{\mu}^{(m,n)}_1(t)$ and $\tilde{L}^{(m,n)}_1(t)$ on $(m,n)$ for simplicity. The finite sum $\tilde{L}_1(t) = 2 \lambda \sum_{j=0}^{n-1} \tilde{\mu}_1(t+j \delta)$ can be analytically extended to an elliptic function $\tilde{L}_1: {\mathbb{C}}\to {\mathbb{C}}$ defined over the whole complex plane, see Lemma \[lem:tildemu1\]. The point $t_+ \in {\mathbb{C}}$ is a singularity of $\tilde{\mu}_1(t + j \delta)$ if and only if $t_+ + j\delta \in T = T_- \cup T_+$. Besides, $$\begin{aligned} t_+ + j \delta \in T_+ & \Leftrightarrow j\delta \in 2K{\mathbb{Z}}\Leftrightarrow 2jm \in n {\mathbb{Z}}\Leftrightarrow j \in \{0,n/2\},\\ t_+ + j \delta \in T_- & \Leftrightarrow (j-1)\delta \in 2K{\mathbb{Z}}\Leftrightarrow 2(j-1)m \in n {\mathbb{Z}}\Leftrightarrow j-1 \in \{0,n/2\}.\end{aligned}$$ We have used that $\delta = 4Km/n$, $t_- = t_+ + \delta - 2K$, and $\gcd(m,n) =1$. The equalities $j=n/2$ and $j-1 = n/2$ can only take place when $n$ is even.
Hence, we distinguish two cases: - If $n$ is odd, then $\tilde{\mu}_1(t)$ and $\tilde{\mu}_1(t+\delta)$ are the only terms in the sum that have a singularity at $t = t_+$, so that $$\begin{aligned} \tilde{L}_1(t_+ + \tau) & = 2\lambda\tilde{\mu}_1(t_+ + \tau) + 2\lambda\tilde{\mu}_1(t_+ + \delta + \tau) + \operatorname{O}(1) \\ & = 2\lambda\tilde{\mu}_1(t_+ + \tau) + 2\lambda\tilde{\mu}_1(t_- + \tau) + \operatorname{O}(1) \\ & = 4\lambda \alpha_2 \tau^{-2} + \operatorname{O}(1) \quad \mbox{as $\tau \to 0$}.\end{aligned}$$ - If $n$ is even, then $n \delta /2 = 2Km$ and $\tilde{L}_1(t) = 4 \lambda \sum_{j=0}^{n/2-1} \tilde{\mu}_1(t+j \delta)$. We note that $\tilde{\mu}_1(t)$ and $\tilde{\mu}_1(t+\delta)$ are the only terms in this new sum that have a singularity at $t = t_+$, so $$\begin{aligned} \tilde{L}_1(t_+ + \tau) & = 4\lambda\tilde{\mu}_1(t_+ + \tau) + 4\lambda\tilde{\mu}_1(t_+ + \delta + \tau) + \operatorname{O}(1) \\ & = 4\lambda\tilde{\mu}_1(t_+ + \tau) + 4\lambda\tilde{\mu}_1(t_- + \tau) + \operatorname{O}(1) \\ & = 8\lambda \alpha_2 \tau^{-2} + \operatorname{O}(1) \quad \mbox{as $\tau \to 0$}.\end{aligned}$$ Thus, the analytic extension $\tilde{L}_1:{\mathbb{C}}\to {\mathbb{C}}$ has a double pole at $t = t_+$ in both cases, which implies that $\tilde{L}_1:{\mathbb{R}}\to {\mathbb{R}}$ is not constant. Next, let us prove that the points in the set $\zeta {\mathbb{Z}}/2$ are the only real critical points of $\tilde{L}_1(t)$, and all of them are nondegenerate. The derivative $\tilde{L}'_1(t)$ is odd, has periods $\zeta$ and $2K'{\mskip2mu{\rm i}\mskip1mu}$, has triple poles in the set (\[eq:PolesMelnikovPotential\]), and vanishes at the points in the set $\{0, \zeta/2, K'{\mskip2mu{\rm i}\mskip1mu}\} + \zeta {\mathbb{Z}}+ 2K'{\mskip2mu{\rm i}\mskip1mu}{\mathbb{Z}}$ due to its symmetry and periodicities. These critical points are nondegenerate and they are the only critical points because $\tilde{L}'_1(t)$ is an elliptic function of order three. 
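The real part of this statement can be illustrated numerically. The sketch below is our own check; the $(1,3)$-resonance, the semi-axes $a=2$, $b=1$, the bisection on the rotation number, and all tolerances are our choices. It locates the resonant caustic parameter and verifies the $\zeta$-periodicity, the critical points on $\zeta{\mathbb{Z}}/2$, and the non-constancy of the restriction of $\tilde{L}_1$ to the reals:

```python
import math

a, b = 2.0, 1.0
m_, n_ = 1, 3                        # the (1,3)-resonance, our choice

def K_agm(k):
    # complete elliptic integral of the first kind via the AGM
    x, y = 1.0, math.sqrt(1.0 - k * k)
    while abs(x - y) > 1e-15:
        x, y = 0.5 * (x + y), math.sqrt(x * y)
    return math.pi / (2.0 * x)

def F(phi, k, N=2000):
    # incomplete elliptic integral of the first kind (midpoint rule)
    h = phi / N
    return h * sum(1.0 / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(N))

def am(t, k, N=4000):
    # amplitude: RK4 on dphi/dt = sqrt(1 - k^2 sin^2 phi), phi(0) = 0
    f = lambda p: math.sqrt(1.0 - (k * math.sin(p)) ** 2)
    h, phi = t / N, 0.0
    for _ in range(N):
        k1 = f(phi); k2 = f(phi + h*k1/2); k3 = f(phi + h*k2/2); k4 = f(phi + h*k3)
        phi += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return phi

def rho(lam):
    # rotation number delta/(4K) of the caustic C_lam
    k = math.sqrt((a*a - b*b) / (a*a - lam*lam))
    return 2.0 * F(math.asin(lam / b), k) / (4.0 * K_agm(k))

lo, hi = 1e-9, b - 1e-9              # bisection: solve rho(lam) = m/n
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rho(mid) < m_ / n_ else (lo, mid)
lam = 0.5 * (lo + hi)
k = math.sqrt((a*a - b*b) / (a*a - lam*lam))
K = K_agm(k)
delta = 4.0 * K * m_ / n_            # resonant shift: n delta = 4 K m
zeta = 2.0 * K - delta

def mu1t(t):
    phi = am(t, k)
    s, c_ = math.sin(phi), math.cos(phi)
    return -a * b / (a*a * c_*c_ + b*b * s*s) ** 2

def L1(t):
    return 2.0 * lam * sum(mu1t(t + j * delta) for j in range(n_))

assert abs(L1(0.4 + zeta) - L1(0.4)) < 1e-5          # zeta-periodic
eps = 1e-2
for t0 in (0.0, zeta / 2):                           # critical points
    assert abs(L1(t0 + eps) - L1(t0 - eps)) / (2 * eps) < 1e-3
assert abs(L1(0.0) - L1(zeta / 4)) > 1e-3            # visibly not constant
```

The resonance $n\delta = 4Km$ is imposed exactly by construction, so the periodicity checks depend only on the accuracy of the elliptic-function evaluations.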
\[pro:HomoclinicNotConstant\] Let $\beta_2 \neq 0$ be the dominant Laurent coefficient introduced in Lemma \[lem:hatmu1\]. The homoclinic Melnikov potential $$\hat{L}_1(s) = 2 b \sum_{j \in {\mathbb{Z}}} \hat{\mu}_1(s+jh),\qquad \hat{\mu}_1(s) = \frac{a}{b^3} - \frac{ab}{(a^2 \operatorname{sech}^2 s + b^2 \tanh^2 s)^2},$$ is an even elliptic function of order two with periods $h$ and $\pi{\mskip2mu{\rm i}\mskip1mu}$, poles in the set $$\mathcal{S} = s_\star + h{\mathbb{Z}}+ \pi{\mskip2mu{\rm i}\mskip1mu}{\mathbb{Z}},\qquad s_\star = h/2 + \pi{\mskip2mu{\rm i}\mskip1mu}/2,$$ and principal parts $$\hat{L}_1(s_\star + \sigma) = 4 b \beta_2 \sigma^{-2} + \operatorname{O}(1), \qquad \sigma \to 0.$$ In particular, it is not constant. Besides, its only real critical points are the points of the set $h {\mathbb{Z}}/2$, and all of them are nondegenerate. The series $\hat{L}_1(s) = 2 b \sum_{j \in {\mathbb{Z}}} \hat{\mu}_1(s+jh)$ can be analytically extended to a meromorphic function $\hat{L}_1: {\mathbb{C}}\to {\mathbb{C}}$ defined over the whole complex plane, see Lemma \[lem:hatmu1\]. The point $s_+ \in {\mathbb{C}}$ is a singularity of the $j$-th term $\hat{\mu}_1(s+jh)$ if and only if $s_+ + jh \in S = S_- \cup S_+$. Besides, $$s_+ + j h \in S_+ \Leftrightarrow j = 0,\qquad s_+ + j h \in S_- \Leftrightarrow j = -1.$$ Here, we have used that $h \in {\mathbb{R}}$ and $s_- = s_+ - h$.
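The doubly infinite sum defining $\hat{L}_1(s)$ converges away from the singularities because $\hat{\mu}_1(s)$ is even and decays exponentially as $|s|\to\infty$. A quick numerical check, as a Python sketch with hypothetical semiaxes $a > b > 0$:

```python
import math

a, b = 2.0, 1.0  # hypothetical semiaxes of the unperturbed ellipse, a > b > 0

def mu_hat_1(s):
    sech2 = 1.0 / math.cosh(s) ** 2
    tanh2 = math.tanh(s) ** 2
    return a / b**3 - a * b / (a**2 * sech2 + b**2 * tanh2) ** 2

# evenness, and exponential decay at infinity (so the sum over j converges)
assert abs(mu_hat_1(1.5) - mu_hat_1(-1.5)) < 1e-12
assert abs(mu_hat_1(20.0)) < 1e-8
```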
Hence, $\hat{\mu}_1(s-h)$ and $\hat{\mu}_1(s)$ are the only terms in the sum that have a singularity at $s = s_+$, so that $$\begin{aligned} \hat{L}_1(s_+ + \sigma) & = 2 b \hat{\mu}_1(s_+ - h + \sigma) + 2 b \hat{\mu}_1(s_+ + \sigma) + \operatorname{O}(1) \\ & = 2 b \hat{\mu}_1(s_- + \sigma) + 2 b \hat{\mu}_1(s_+ + \sigma) + \operatorname{O}(1) \\ & = 4 b \beta_2 \sigma^{-2} + \operatorname{O}(1) \quad \mbox{as $\sigma \to 0$}.\end{aligned}$$ Therefore, the analytic extension $\hat{L}_1:{\mathbb{C}}\to {\mathbb{C}}$ has a double pole at $s = s_+$, which implies that the homoclinic Melnikov potential $\hat{L}_1:{\mathbb{R}}\to {\mathbb{R}}$ is not constant. Finally, the points in the set $h {\mathbb{Z}}/2$ are the only real critical points of $\hat{L}_1(s)$, and all of them are nondegenerate. This is proved following the same argument as at the end of the proof of the previous proposition. The first claims of Theorem \[thm:MainTheorem\] —the break-up of all resonant convex caustics and the splitting of the separatrices in a transverse way— are a direct consequence of the results above. K. Burns and H. Weiss [@BurnsWeiss1995] proved that such transverse intersection of separatrices implies that the perturbed system has positive topological entropy, which gives the third claim of Theorem \[thm:MainTheorem\]. R. Cushman [@Cushman1978] established that an analytic area-preserving map with a transverse intersection of stable and unstable invariant curves cannot be integrable, which proves the last claim of Theorem \[thm:MainTheorem\]. We just note that the billiard dynamics inside an analytic convex curve is analytic and that the curvature flow preserves the analyticity of the unperturbed ellipse. Finally, we establish the relation between the homoclinic Melnikov potential and the limit of the $(m,n)$-subharmonic Melnikov potential when $m/n \to 1/2$ or, equivalently, when $\lambda \to b^-$.
We still assume that the perturbed ellipse is obtained by using the curvature flow, so this is a very specific result. The relation depends on the parity of the period $n$, which is a phenomenon that, to our knowledge, never takes place in continuous systems. This is the reason for our interest in it. If $m$ and $n$ are relatively prime integers such that $1 \le m < n/2$, $$\lim_{\frac{m}{n} \to \frac{1}{2}} \tilde{L}_1^{(m,n)}(t) = \mbox{constant} + \left\{ \begin{array}{rl} \hat{L}_1(t), & \mbox{ if $n$ is odd}, \\ 2 \hat{L}_1(t), & \mbox{ if $n$ is even}, \end{array} \right.$$ uniformly on compact subsets of ${\mathbb{R}}$. The proof is based on the fact that any elliptic function is determined, up to an additive constant, by its periods, its poles, and the principal parts of its poles. The periods, poles, and principal parts of the subharmonic and homoclinic Melnikov potentials $\tilde{L}^{(m,n)}_1(t)$ and $\hat{L}_1(s)$ are listed in Propositions \[pro:SubharmonicNotConstant\] and \[pro:HomoclinicNotConstant\], respectively. We only have to check that the former tend to the latter. Let $\lambda \in (0,b)$ be the caustic parameter such that $C_\lambda$ is an $(m,n)$-resonant caustic. It is known that if $m/n \to 1/2$, then $\lambda \to b^-$. See [@CasasRamirez2010 Proposition 10]. Besides, we have seen in Lemma \[lem:SingularLimits\] that $\lim_{\lambda \to b^-} K' = \pi/2$ and $\lim_{\lambda \to b^-} \zeta = h$. Thus, it suffices to check that $\lim_{\lambda \to b^-} \alpha_2 = \beta_2$, where $\alpha_2$ and $\beta_2$ are the Laurent coefficients introduced in Lemmas \[lem:tildemu1\] and \[lem:hatmu1\]. This limit is a straightforward computation. [99]{} M. Abramowitz and I. Stegun. *Handbook of Mathematical Functions*. Dover, New York, 1972. G. D. Birkhoff. *Dynamical Systems*. A. M. S. Colloquium Publications, Providence, RI, 1966 (Original ed. 1927). K. Burns and H. Weiss. A geometric criterion for positive topological entropy. *Comm. Math.
Phys.* **172** (1995), 95–118. P. S. Casas and R. Ramírez-Ros. The frequency map for elliptic billiards. *SIAM J. Appl. Dyn. Syst.* **10** (2011), 278–324. S.-J. Chang and R. Friedberg. Elliptical billiards and Poncelet’s theorem. *J. Math. Phys.* **29** (1988), 1537–1550. K.-S. Chou and X.-P. Zhu. *The Curve Shortening Problem*. Chapman & Hall/CRC, Boca Raton, 2001. R. Cushman. Examples of nonintegrable analytic Hamiltonian vectorfields with no small divisors. *Trans. Amer. Math. Soc.* **238** (1978), 45–55. A. Delshams and R. Ramírez-Ros. Poincaré-Melnikov-Arnold method for analytic planar maps. *Nonlinearity* **9** (1996), 1–26. A. Delshams and R. Ramírez-Ros. Melnikov potential for exact symplectic maps. *Comm. Math. Phys.* **190** (1997), 213–245. M. Gage and R. S. Hamilton. The heat equation shrinking convex plane curves. *J. Differential Geom.* **23** (1986), 69–96. M. A. Grayson. The heat equation shrinks embedded plane curves to round points. *J. Differential Geom.* **26** (1987), 285–314. J. Guckenheimer and P. Holmes. *Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields*, volume 42 of [*Applied Mathematical Sciences*]{}. Springer-Verlag, New York, Heidelberg, Berlin, 1983. V. V. Kozlov and D. Treshchëv. *Billiards: a Genetic Introduction to the Dynamics of Systems with Impacts*. Transl. Math. Monographs **89**, AMS, Providence, RI, 1991. D. Jane. An example of how the Ricci flow can increase topological entropy. *Ergodic Theory Dynam. Systems* **27** (2007), 1919–1932. A. Katok and B. Hasselblatt. *Introduction to the Modern Theory of Dynamical Systems*. Cambridge Univ. Press, Cambridge, UK, 1995. V. F. Lazutkin. The existence of caustics for a billiard problem in a convex domain. *Math. USSR Izvestija* **7** (1973), 185–214. S. Pinto-de-Carvalho and R. Ramírez-Ros. Non-persistence of resonant caustics in perturbed elliptic billiards. *Ergodic Theory Dynam. Systems* **33** (2013), 1876–1890. R. Ramírez-Ros.
Break-up of resonant invariant curves in billiards and dual billiards associated to perturbed circular tables. *Phys. D* **214** (2006), 78–87. S. Tabachnikov. *Billiards*. Panoramas et Synthèses, SMF, Paris, 1995. E. T. Whittaker and G. N. Watson. *A Course of Modern Analysis*. Cambridge University Press, Cambridge, UK, 1927. [^1]: R. R.-R. is supported in part by CUR-DIUE Grant 2014SGR504 (Catalonia) and MINECO-FEDER Grant MTM2012-31714 (Spain).
--- abstract: 'Dynamical Dark Matter (DDM) is an alternative framework for dark-matter physics in which the dark sector comprises a vast ensemble of particle species whose Standard-Model decay widths are balanced against their cosmological abundances. Previous studies of this framework have focused on a particular class of DDM ensembles — motivated primarily by Kaluza-Klein towers in theories with extra dimensions — in which the density of dark states scales roughly as a polynomial of the mass. In this paper, by contrast, we study the properties of a different class of DDM ensembles in which the density of dark states grows [*exponentially*]{} with mass. Ensembles with this Hagedorn-like property arise naturally as the “hadronic” resonances associated with the confining phase of a strongly-coupled dark sector; they also arise naturally as the gauge-neutral bulk states of Type I string theories. We study the dynamical properties of such ensembles, and demonstrate that an appropriate DDM-like balancing between decay widths and abundances can emerge naturally — even with an exponentially rising density of states. We also study the effective equations of state for such ensembles, and investigate some of the model-independent observational constraints on such ensembles that follow directly from these equations of state. In general, we find that such constraints tend to introduce correlations between various properties of these DDM ensembles such as their associated mass scales, lifetimes, and abundance distributions. For example, we find that these constraints allow DDM ensembles with energy scales ranging from the GeV scale all the way to the Planck scale, but that the total present-day cosmological abundance of the dark sector must be spread across an increasing number of different states in the ensemble as these energy scales are dialed from the Planck scale down to the GeV scale. Numerous other correlations and constraints are also discussed.' author: - 'Keith R. 
Dienes$^{1,2}$[^1], Fei Huang$^{1}$[^2], Shufang Su$^{1}$[^3], Brooks Thomas$^{3}$[^4]' title: 'Dynamical Dark Matter from Strongly-Coupled Dark Sectors' --- Introduction\[sec:Intro\] ========================= Dynamical Dark Matter (DDM) [@DDM1; @DDM2] is an alternative framework for dark-matter physics in which dark-matter stability is not required. Instead, the dark sector within the DDM framework comprises a vast ensemble of individual constituent particles exhibiting a variety of different masses, lifetimes, and cosmological abundances. The phenomenological viability of such a dark sector is then ensured through a non-trivial [*balancing*]{} between cosmological abundances and Standard-Model (SM) decay widths across the ensemble. Indeed, under this balancing, those ensemble constituents with shorter lifetimes must have smaller cosmological abundances, while states with longer lifetimes may have larger cosmological abundances. As a result, the dark sector in such a scenario is [*dynamic*]{}: states in the dark sector are continually decaying into visible-sector states throughout the evolution of the universe — not just in previous epochs but even at the present time and into the future. Quantities such as the total energy density $\Omega_{\rm CDM}$ and the effective equation-of-state parameter $w_{\rm eff}$ are thus time-dependent quantities, and it is only an accident that these quantities happen to take particular values at the present time. Many methods have been developed for testing this framework, spanning from collider signatures [@DDMcolliders; @DDMcolliders2] to signatures in direct-detection [@DDMdirect] and indirect-detection [@DDMindirect; @DDMboxes1; @DDMboxes2] experiments.
Of course, many of the constraints on such DDM ensembles depend on model-specific details associated with the ensemble in question, such as the specific particle nature of the individual dark constituent fields and the precise form of their decays into SM states. By contrast, other phenomenological properties of (and constraints on) these DDM ensembles depend simply on the manner in which the lifetimes and abundances of ensemble constituents scale with respect to each other, and thus have a greater degree of model-independence. For example, the effective equations of state for these ensembles are governed in large part solely by these scaling relations. As a result, all phenomenological/observational constraints on the equations of state of the dark sector are essentially constraints on the types of balancing relations that DDM ensembles may exhibit. These are thus model-independent constraints which can be placed on such ensembles simply as a result of their inherent scaling relations. One general class of DDM ensembles consisting of large numbers of dark particle species exhibiting suitable scaling relations between lifetimes and cosmological abundances are those whose constituents are the Kaluza-Klein (KK) modes of a gauge-neutral bulk field in a theory with extra spacetime dimensions in which cosmological abundances are established through misalignment production [@DDM1]. Indeed, explicit realizations of DDM ensembles of this type have been constructed [@DDM2; @DDMAxion]. Although many aspects of these ensembles depend on the details of the particular fields under study, certain general properties are common across all such ensembles in this class. One of these is that the cosmological abundance of each component scales as a power of the lifetime of that component. Likewise, the density of states within such ensembles is either insensitive to mass or scales roughly as a polynomial function of mass across the ensemble. 
For these reasons, most phenomenological studies of the DDM framework have focused on ensembles exhibiting polynomial scaling relationships. Polynomial scaling relations also emerge in other (purely four-dimensional) contexts as well. For example, under certain circumstances, thermal freeze-out mechanisms for abundance generation can also lead to appropriate polynomial inverse scaling relations between lifetimes and abundances [@DesigningDDM]. In fact, such inverse scaling relations can even emerge [*statistically*]{} in contexts in which the dynamics underlying the dark sector is essentially random [@RandomDDM]. There are, however, other well-motivated theoretical constructions which do not give rise to dark sectors with polynomial scaling relations. One example is a dark sector consisting of a set of fermions (dark “quarks”) charged under a non-Abelian gauge group $G$ which becomes confining below some critical temperature $T_c$. At temperatures $T \lesssim T_c$, when the theory is in the confining phase, the physical degrees of freedom are composite states (dark “hadrons”). Another well-motivated type of DDM ensemble consists of the bulk (i.e., closed-string) states in Type I string theories. Such bulk states are typically neutral with respect to all brane gauge symmetries, and interact with those brane states only gravitationally. As such, from the perspective of brane-localized observers, these bulk states too are dark matter. At first glance, these two latter types of ensembles may seem to have little in common with each other. Indeed, many aspects of the detailed phenomenologies associated with these ensembles will be completely different. However, they nevertheless exhibit certain underlying model-independent commonalities which are relevant for their viability as DDM ensembles.
Indeed, these features are identical to those which characterize the “visible” sector of ordinary hadrons, namely - mass distributions which follow linear Regge trajectories (i.e., $\alpha' M^2_n\sim {n}$ where $\alpha'$ is a corresponding Regge slope), and - exponentially growing (“Hagedorn-like”) degeneracies of states (i.e., $g_n\sim e^{\sqrt{n}} \sim e^{\sqrt{\alpha'} M_n}$). These features — especially the appearance of an [*exponential*]{} scaling of the state degeneracies with mass — represent a behavior which is markedly different from that exhibited by DDM ensembles with polynomial scaling relations. For example, as a result of their exponentially growing densities of states, such ensembles have a critical temperature [@Hagedorn] beyond which their partition functions diverge. In this paper, we shall study the generic properties of DDM ensembles which exhibit the two features itemized above. We shall calculate the effective equations of state $w_{\rm eff}(t)$ for such ensembles, and subject these ensembles to those immediate model-independent observational constraints that follow directly from these equations of state. We shall therefore be able to place zeroth-order model-independent bounds on some of the quantities that parametrize these features, such as the effective Regge slope as well as the rate of exponential growth in the state degeneracies. Our primary motivation is to understand the phenomenology that might apply to strongly-coupled dark sectors in their confined (“hadronic”) phase, imagining nothing more than that our DDM ensemble resembles the visible hadronic sector in the two respects itemized above. However, the results of such analyses might also be useful in constraining the bulk sector of various classes of string theories, since these bulk sectors also give rise to ensembles of dark-matter states which share these two grossest features.
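The divergence of the partition function above a critical (Hagedorn) temperature can be seen schematically: with $M_n \sim \sqrt{n}\,M_s$ and $g_n \sim e^{C\sqrt{n}}$, the Boltzmann-weighted terms $g_n\, e^{-M_n/T}$ decay for $T < M_s/C$ but grow without bound for $T > M_s/C$. A Python sketch (hypothetical scales, power-law prefactors suppressed):

```python
import math

Ms, C = 1.0, 3.0   # hypothetical mass scale and degeneracy-growth rate
T_H = Ms / C       # schematic Hagedorn temperature

def term(n, T):
    # Boltzmann-weighted degeneracy g_n * exp(-M_n/T), keeping only the
    # exponentials: exp(C*sqrt(n)) * exp(-sqrt(n)*Ms/T)
    return math.exp(math.sqrt(n) * (C - Ms / T))

assert term(10**4, 0.9 * T_H) < 1e-10   # below T_H: terms (and the sum) converge
assert term(10**4, 1.1 * T_H) > 1e10    # above T_H: terms diverge
```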
We shall therefore aim to keep our discussion as model-independent as possible, subject to our assumption of the two properties itemized above. In this way, our analysis and the constraints we obtain can serve as useful phenomenological guides in eventually building realistic dark-matter models of this type. This paper is organized as follows. In Sect. \[sec:DensityOfStates\], we begin by reviewing the properties that we shall assume for the mass spectrum and density of states of our DDM dark “hadron” ensemble. We shall also discuss the physical interpretations of these properties in terms of a variety of underlying flux-tube models and string theories. This section will also serve to establish our conventions and notation. Then, in Sect. \[sec:Balancing\], we discuss how the required balancing between lifetimes and abundances naturally arises for such DDM ensembles. In particular, we examine the mechanism through which primordial abundances for these hadron resonances are generated, and we determine how these abundances scale across the ensemble as a function of the hadron mass. We also discuss the scaling behavior of the decay widths that characterize the decays of the hadronic ensemble constituents to SM states, as well as the assumptions that enter into such calculations. In Sect. \[sec:OmegaEtaWeff\], we then derive expressions for the total abundance $\Omegatot(t)$, the tower fraction $\eta(t)$, and the effective equation-of-state parameter $\weff(t)$ for these DDM ensembles as functions of time. As discussed in Refs. [@DDM1; @DDM2] and reviewed in Sect. \[sec:OmegaEtaWeff\], these three functions characterize the time-evolution of DDM ensembles and allow us to place a variety of general, model-independent constraints on such ensembles. In Sect.
\[sec:Results\], we then present the results of our analysis of the phenomenological viability of such DDM ensembles, identifying those regions of the corresponding parameter space which lead to the most promising ensembles and uncovering generic phenomenological behaviors and correlations across this space. One of our key findings is that these DDM ensembles can satisfy our constraints across a broad range of energy scales ranging from the GeV scale all the way to the Planck scale, but that the present-day cosmological abundance of the dark sector must be distributed across an increasing number of different states in the ensemble as the fundamental mass scales associated with the ensemble are dialed from the Planck scale down to the GeV scale. Finally, in Sect. \[sec:Conclusion\], we summarize our results and discuss possible avenues for future work. DDM ensembles of dark hadrons: Fundamental assumptions \[sec:DensityOfStates\] ============================================================================== As discussed in the Introduction, in this paper we are primarily concerned with the properties of DDM ensembles whose constituents are the “hadronic” composite states or resonances of a strongly-coupled dark sector. As has been well known since the 1960’s, many of the attributes of such an ensemble can be successfully modeled by strings. These attributes include linear Regge trajectories, linear confinement, an exponential rise in hadron-state degeneracies, and $s$- and $t$-channel duality. It is not a complete surprise that there is a deep connection between hadronic spectroscopy and the spectra of string theory. Hadronic resonances (particularly mesons) can be viewed as configurations of dark “quarks” linked together by flux tubes. The spectrum of excitations in such a theory therefore corresponds to the spectrum of fluctuations of these flux tubes. However, it is well known that these flux tubes can be modeled as non-critical strings. 
Thus string theory can provide insight into the properties of such collections of composite states. In what follows, we shall use this analogy between hadronic physics and string theory to motivate our parametrization for the mass spectrum and for the density of states of our dark-“hadronic” DDM ensembles. We shall also make recourse to modern string technology, when needed, for refinements of our basic picture. Throughout, however, we shall attempt to keep our parametrizations as general as possible so that they might apply to the widest possible set of DDM ensembles sharing these properties. As discussed in the Introduction, this will allow our analysis and eventual constraints to serve as useful guides in future attempts to build realistic models exhibiting these features. The mass spectrum:  Regge trajectories -------------------------------------- The first feature that we shall assume of our hadronic dark sector is a mass spectrum consistent with the existence of Regge trajectories. The existence of such trajectories follows directly from nothing more than our assumption that our dark-sector bound states can be modeled by dark quarks connected by the confining flux tube associated with a strong, attractive, dark-sector interaction. Taking meson-like configurations as our guide and temporarily assuming massless quarks, it can easily be shown that the mass $M_n$ associated with a relativistic rotating flux tube scales with the corresponding total angular momentum $n$ as $n \sim \alpha' M_n^2$, where $\alpha'$ is the so-called Regge slope. In the [*visible*]{} sector, this successfully describes the so-called leading Regge trajectory of the observed mesons, with a value of $\alpha'$ appropriate for QCD.  Moreover, there also exist subleading (parallel) Regge trajectories of observed mesons which have the same Regge slope but different intercepts: $n\sim \alpha' M_n^2 +\alpha_0$. Regge trajectories of this form, both leading and subleading, also emerge in string theory.
For example, the perturbative states of a quantized open bosonic string have masses $M$ and spins $J=0,1,...,J_{\rm max}$ which satisfy $J_{\rm max}= \alpha' M^2 + 1$ where $\alpha'$ is now the Regge slope associated with string theory \[typically assumed to be $\sim (M_{\rm Planck})^{-2}$\]. The states with $J=J_{\rm max}$ thus sit along the leading Regge trajectory, while those with smaller values of $J$ sit along the subleading Regge trajectories. Similar results also hold for superstrings and heterotic strings. Given these observations, in this paper we shall assume that the states of our dark “hadronic” DDM ensemble have discrete positive masses $M_n$ of the general form $$M_n^2 ~=~ n \mstring^2 + M_0^2~, \label{eq:MassSpectrum}$$ where $n$ is an index labeling our states in order of increasing mass. Here $M_s\equiv 1/\sqrt{\alpha'}$ is the corresponding “string scale”, while $M_0$ represents the mass of the lightest “hadronic” constituent in the DDM ensemble. Indeed, since we do not expect to have any tachyonic states in our DDM ensemble, we shall assume throughout this paper that $M_0^2\geq 0$. We shall avoid making any further assumptions about the nature of the dark sector by treating both $M_s$ and $M_0$ as free parameters to be eventually constrained by cosmological data. Our choice of sign for $M_0^2$ perhaps deserves further comment. For the visible sector, most hadrons lie along Regge trajectories with $M_0^2 \geq 0$. While there do exist Regge trajectories with $M_0^2 <0$, the lowest states in such trajectories are of course absent. In string theory, by contrast, all Regge trajectories have $M_0^2 <0$. However, just as in the hadronic case, all tachyonic states which might result for small $n$ are ultimately removed from the string spectrum by certain “projections” which are ultimately required for the self-consistency of the string.
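For concreteness, the spectrum in Eq. (\[eq:MassSpectrum\]) is easy to tabulate; the following minimal Python sketch (with hypothetical values of $M_s$ and $M_0$) also illustrates that the level splittings $M_{n+1}-M_n \approx M_s^2/(2M_n)$ shrink as $n$ grows:

```python
import math

Ms, M0 = 1.0, 0.5   # hypothetical "string scale" and lightest-state mass

def M(n):
    # Regge-like spectrum M_n^2 = n*Ms^2 + M0^2
    return math.sqrt(n * Ms**2 + M0**2)

masses = [M(n) for n in range(6)]
assert masses[0] == M0                                  # lightest constituent
assert all(x < y for x, y in zip(masses, masses[1:]))   # strictly increasing
# splittings shrink with n, since M_{n+1} - M_n ~ Ms^2 / (2*M_n)
assert M(101) - M(100) < M(2) - M(1)
```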
In other words, for Regge trajectories with $M_0^2 <0$, one could equivalently relabel our remaining states by shifting $n\to n-1$ and thereby obtain an “effective” $M_0^2 \geq 0$. This is not normally done in string theory because in string theory the index $n$ is correlated with other physical quantities such as the spin of the state. However we are making no such assumption for the states of our dark sector, and are treating the index $n$ as a mere labelling parameter. Our assumption of a tachyon-free dark sector then leads us to take $M_0^2 \geq 0$. There is also another motivation for taking $M_0^2 \geq 0$. All of the above results which treat $n$ as an angular momentum assume massless quarks at the endpoints of the flux tube. However, while such an approximation holds well for the lightest states in the visible sector, we do not wish to make such an approximation for our unknown dark sector. We shall therefore assume $M_0^2 \geq 0$ in what follows, recognizing that this parameter may in principle also implicitly include the positive contributions from dark quark masses as well. Degeneracy of states:  Exponential behavior ------------------------------------------- The second generic feature associated with hadronic spectroscopy is the well-known exponential rise in the degeneracies of hadrons as a function of mass: $g_n \sim e^{\sqrt{\alpha'}\, M_n}$. This behavior was first predicted and observed for hadrons (both mesons and baryons) in Ref. [@Hagedorn], and also holds as a generic feature for both bosonic and fermionic states in string theory [@StringReviews]. In general, we can understand this behavior as follows.
If we model our hadrons as quarks connected by flux tubes, the degeneracy $g_n$ of hadronic states at any mass level $n$ can be written as the product of two contributions: one factor $\kappa$ representing a multiplicity of states due to the degrees of freedom associated with the quarks (such as the different possible configurations of quantities like spin and flavor), and a second factor $\hat g_n$ representing the multiplicity of states due to the degrees of freedom associated with the flux tube. We thus have $$g_n ~\approx~ \kappa \, \hat g_n~. \label{eq:gnInTermsOffn}$$ While $\kappa$ is a constant which is independent of the particular mass level $n$, the remaining degeneracy factor $\hat g_n$ counts the rapidly increasing number of ways in which a state of given total energy $n$ can be realized as a combination of the vibrational, rotational, and internal excitations of the different harmonic oscillators which together comprise a quantized string. It is this quantity which grows exponentially with mass, and in string theory the leading behavior of $\hat g_n$ for large $n$ generally takes the form [@StringReviews] $$\hat g_n ~\approx~ A\, n^{-B}\, e^{C\sqrt{n}} \qquad {\rm as}~~ n\to \infty~, \label{eq:fnForLargen}$$ where $A,B,C$ are all positive quantities which depend on the particular type of string model under study. Indeed, for any $B$ and $C$, it turns out that the proper normalization for $\hat g_n$ in string theory is given by $$A ~=~ \frac{1}{\sqrt{2}}\, \left(\frac{C}{4\pi}\right)^{2B-1}~.$$ Thus our asymptotic degeneracy of states is parametrized by two independent quantities $B$ and $C$, and we shall assume that this continues to be true in our dark sector as well. The most salient property of the expression in Eq. (\[eq:fnForLargen\]) is that it rises exponentially with $\sqrt{n}$, or equivalently with the mass $M_n$ of the corresponding state. This represents a crucial difference relative to the KK-inspired DDM ensembles previously considered in Refs.
[@DDM1; @DDM2; @DDMAxion] (or even the purely four-dimensional DDM ensembles considered in Refs. [@DesigningDDM; @RandomDDM]). For example, the KK states corresponding to a single flat extra spacetime dimension have degeneracies $\hat g_n$ which are constant, or which become so above the $n=0$ level. The key difference here is that the degrees of freedom associated with our flux tube consist of not only KK excitations (if the flux tube happens to be situated within a spacetime with a compactified dimension), but also so-called [*oscillator*]{} excitations representing the internal fluctuations of the flux tube itself. It is these oscillator excitations which give rise to the exponentially growing degeneracies and which are a direct consequence of the non-zero spatial extent of the flux tube. As such, they are intrinsically stringy and would not arise in theories involving fundamental point particles. Unfortunately, the asymptotic form in Eq. (\[eq:fnForLargen\]) is not sufficient for our purposes. Although we are interested in the behavior of all states across the DDM ensemble, it is the lighter states rather than the heavier states which are most likely to have longer lifetimes and therefore greater cosmological abundances. Thus, even though we want to keep track of all of the states in our ensemble, we need to be particularly sensitive to the degeneracies of the lighter states, i.e., the states with smaller values of $n$. This poses a problem because the asymptotic expression in Eq. (\[eq:fnForLargen\]) is fairly accurate in the large-$n$ limit but is not especially accurate in the small-$n$ limit. Fortunately, for values of $B$ and $C$ which correspond to self-consistent strings (to be discussed below), the tools of modern string technology (specifically conformal field theory and modular invariance) furnish us with a more precise approximation for $\hat g_n$ which remains accurate even for very small values of $n$.
This expression is given by [@Cudell; @HR; @KV; @missusy] $$\hat g_n ~\approx~ 2\pi \left(\frac{16\pi^2 n}{C^2} - 1\right)^{\frac{1}{4} - B} I_{|2B - \frac{1}{2}|}\!\left(C\sqrt{n - \frac{C^2}{16\pi^2}}\,\right)~, \label{eq:fnBetterApprox}$$ where $I_\nu (z)$ denotes the modified Bessel function of the first kind of order $\nu$. Use of the approximation $I_\nu(z)\approx e^z /\sqrt{2\pi z}$ for $z\gg1$ then reproduces the result in Eq. (\[eq:fnForLargen\]). However, the expression in Eq. (\[eq:fnBetterApprox\]) remains valid to within only a few percent all the way down to $n=1$, assuming $C\leq 4\pi$ (so that the argument of the Bessel function remains real even for $n=1$). In what follows, we therefore shall adopt the expression in Eq. (\[eq:fnBetterApprox\]) as our general parametrization for the degeneracy of states $\hat g_n$ for arbitrary values of $B$ and $C\leq 4\pi$ and for all $n\geq 1$. For values of $B$ and $C$ corresponding to bona-fide string theories, this expression yields results for the state degeneracies which, though not necessarily integral, are highly accurate for all values of $n\geq 1$. An explicit example of this will be provided below. More generally, however, this expression is smooth and well-behaved for all values of the $B$ and $C$ parameters, and in all cases exhibits the exponential Hagedorn-like behavior whose primary effects we seek to analyze in this paper. For $n=0$, by contrast, we shall define $\hat g_0\equiv 1$, representing the unique ground state of our flux tube. Physical interpretation of ensemble parameters ---------------------------------------------- Thus far we have introduced four parameters to describe our dark “hadron” DDM ensemble: $M_s$, $M_0$, $B$, and $C$.  The first two parameters have immediate interpretations: $M_0$ is the mass of the lightest state in the DDM ensemble, while $M_s$ parametrizes the splitting between the states. We would now like to develop analogous physical interpretations of $B$ and $C$.  Clearly $B$ and $C$ describe the dynamics of the flux tube.
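As a numerical cross-check (a sketch, not tied to any particular string model: the values of $B$ and $C$ below are hypothetical), one can evaluate the Bessel-function form of $\hat g_n$ with $I_\nu$ computed from its power series in log-space, and confirm that it approaches the Hagedorn asymptotics $A\,n^{-B}e^{C\sqrt{n}}$ at large $n$:

```python
import math

def log_iv(nu, z, terms=2000):
    # log of the modified Bessel function I_nu(z), summed in log-space
    # from its power series sum_k (z/2)^(2k+nu) / (k! * Gamma(k+nu+1))
    logs = [(2 * k + nu) * math.log(z / 2.0)
            - math.lgamma(k + 1) - math.lgamma(k + nu + 1)
            for k in range(terms)]
    m = max(logs)
    return m + math.log(math.fsum(math.exp(l - m) for l in logs))

B, C = 0.75, 2.0    # hypothetical ensemble parameters (note C <= 4*pi)
A = (C / (4 * math.pi)) ** (2 * B - 1) / math.sqrt(2.0)

def log_g_bessel(n):   # log of the Bessel-function expression for g^_n
    x = 16 * math.pi**2 * n / C**2 - 1.0
    z = C * math.sqrt(n - C**2 / (16 * math.pi**2))
    return math.log(2 * math.pi) + (0.25 - B) * math.log(x) \
        + log_iv(abs(2 * B - 0.5), z)

def log_g_asym(n):     # log of A * n^(-B) * exp(C*sqrt(n))
    return math.log(A) - B * math.log(n) + C * math.sqrt(n)

ratio = math.exp(log_g_bessel(10**4) - log_g_asym(10**4))
assert abs(ratio - 1.0) < 0.01   # the two expressions agree at large n
```

The tolerance here reflects the $\operatorname{O}(1/z)$ correction to the approximation $I_\nu(z)\approx e^z/\sqrt{2\pi z}$ at finite $n$.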
However, in the case of the ordinary strong interaction, many possible theories governing this dynamics have been proposed. These range from early examples such as the scalar (Nambu) string [@Nambu], the Ramond string [@Ramond], and the Neveu-Schwarz (NS) string [@NS] to more modern examples such as Polyakov’s “rigid string” [@Polyakov], Green’s “Dirichlet string” [@Green], and the Polchinski-Strominger “effective string” [@PolchinskiStrominger]. Many other possibilities and variants have also been proposed. All of these theories begin by imagining a one-dimensional line of flux energy (i.e., a string) which sweeps out a two-dimensional flux-sheet (or worldsheet) as it moves through an external $D$-dimensional spacetime. Here $D$ is the number of spacetime dimensions which are effectively uncompactified with respect to the fundamental energy scale $M_s$ associated with the flux tube. As such, as it propagates, our string/flux tube is free to fluctuate into any of the spatial dimensions transverse to the string. We can describe such fluctuations by specifying $D_\perp$ embedding functions $X^i(\sigma_1,\sigma_2)$, which are nothing but the transverse spacetime locations of any point on the flux-tube worldsheet with coordinates $(\sigma_1,\sigma_2)$. As such, these embedding functions may be regarded as fields on the two-dimensional flux-tube worldsheet. The dynamics of this system is then governed by the Polyakov action $$S ~\sim~ M_s^2 \int d^2\sigma\, \sum_{i=1}^{D_\perp} \left(\partial^{\alpha} X^i\right) \left(\partial_{\alpha} X^i\right)~. \label{scalarstring}$$ Minimizing this action is classically equivalent to minimizing the area of the flux-tube worldsheet. By itself, the expression in Eq. (\[scalarstring\]) describes the action of the so-called $D_\perp$-dimensional “scalar” string. In some sense this theory provides the simplest possible description of a strongly-interacting flux tube, with the term in Eq. (\[scalarstring\]) representing the bare minimum that must always be present for any flux-tube description.
The various possible refinements of this basic theory then differ in the extra terms that might be added to this action. Some of the theories mentioned above introduce extra terms which correspond to additional, purely internal degrees of freedom \[i.e., additional fields analogous to $X^i(\sigma_1,\sigma_2)$ but without interpretations as the coordinates of uncompactified spacetime dimensions\] on the flux-tube worldsheet. By contrast, other theories introduce extra interaction terms for the $X^i$-fields which alter their short-distance behavior. The action in Eq. (\[scalarstring\]) can be interpreted as that of a two-dimensional (2D) field theory (where the two dimensions are those of the flux-tube worldsheet), and we immediately see that it is endowed with a 2D conformal symmetry. There are good reasons to expect that the long-distance limit of any self-consistent flux-tube theory should exhibit such a symmetry, since we expect the physics of this system to be invariant under reparametrizations of our flux-tube worldsheet coordinates. As a result, those flux-tube theories that augment the scalar string by introducing extra purely internal degrees of freedom on the flux-tube worldsheet must not break this conformal symmetry; this requirement constrains what kinds of terms can be added. By contrast, the theories that introduce extra interaction terms for the $X^i$ fields do break this conformal symmetry, but they do so only in the short-distance limit. The 2D conformal symmetry of the long-distance limit is then preserved as an effective symmetry. In any 2D conformal field theory, either exact or effective, the total number of degrees of freedom is encoded within the so-called [*central charge*]{} $c$. Each $X^i$ field contributes a central charge $c=1$, and thus the minimal scalar-string action in Eq. (\[scalarstring\]) describes a theory with central charge $c=D_\perp$.
However, the introduction of additional degrees of freedom on the flux-tube worldsheet will necessarily increase the central charge, producing a theory with $c>D_\perp$. Given a particular action for our flux-tube dynamics, it is straightforward to quantize the fields in question. In this way, we can determine the corresponding spectrum of the theory at all mass levels. These calculations are standard in string theory (see, e.g., Ref. [@StringReviews]), and ultimately one obtains [@HR; @KV; @missusy] asymptotic state degeneracies $\hat g_n$ of the forms given in Eq. (\[eq:fnForLargen\]) or Eq. (\[eq:fnBetterApprox\]). Remarkably, one finds a relatively straightforward connection between the parameters $(B,C)$ appearing in our state degeneracies and the parameters $(D_\perp,c)$ of our underlying flux-tube theory [@Cudell; @HR; @KV; @missusy]: $$B ~=~ \frac{1}{4}\left(3+D_\perp\right)~, \qquad\quad C ~=~ 2\pi\sqrt{\frac{c}{6}}~. \label{stringvars}$$ Indeed, for any value of $B$ and $C$, we may regard the total central charge $c$ as having two contributions: one contribution $c_{\rm fluc}= D_\perp$ associated with the degrees of freedom associated with the transverse uncompactified spacetime fluctuations of the flux tube, and a remaining contribution $$c_{\rm int} ~\equiv~ c - D_\perp ~=~ \frac{3\, C^2}{2\pi^2} - 4B + 3 \label{stringvars2}$$ associated with those additional, purely internal degrees of freedom which might also exist within the full flux-tube theory (including those associated with any [*compactified*]{} spacetime dimensions which may also exist). At first glance, it might seem that our dark sector must have $D_\perp=2$, just as does our visible sector. This would certainly be true if our dark-sector flux tube were to experience the same spacetime geometry as does the visible sector. However, we emphasize that in a string-theoretic or “braneworld” context, the dark sector could correspond to physics in the “bulk” — i.e., physics perpendicular to the brane on which the visible sector resides.
The degrees of freedom in the bulk would then be able to interact with those on the brane at most gravitationally, and would thus constitute dark matter by construction. However, the geometric properties of the bulk will generally differ from those of the brane — the bulk might contain not only extra spacetime dimensions which are effectively large (i.e., uncompactified) with respect to the fundamental string scale, but also extra spacetime dimensions which are small (i.e., compactified). The bulk may also be populated by additional fields with no spacetime interpretations at all. It is for this reason that we make no assumptions about the values of $c$ or $D_\perp$ associated with the dark sector. Once our flux-tube theory is specified and the corresponding values of $B$ and $C$ determined, we may calculate the corresponding effective static-quark potential $V(R)$ between two quarks a distance $R$ apart. We find [@Cudell] $$V(R) ~=~ \left(\frac{M_s^2}{2}\right) R\, \sqrt{1 - \frac{C^2}{4 M_s^2 R^2}} ~\approx~ \frac{M_s^2 R}{2} - \frac{C^2}{16}\, \frac{1}{R} + \ldots \quad\quad {\rm for}~~ R\gg M_s^{-1}~.$$ The first term in the final expression indicates a linear confinement potential, as expected; this is nothing but the classical energy in the flux tube. By contrast, the second term resembles a Coulomb term but is actually an attractive universal quantum correction (or Casimir energy) which arises due to the transverse zero-point vibrations of the flux tube. For visible-sector hadrons, it is natural to take $D=4$. As a result, the $D_\perp=2$ scalar string with $c_{\rm int}=0$ (corresponding to $B=5/4$ and $C=2\pi/\sqrt{3}\approx 3.63$) is the “minimal” string that we expect to underlie all descriptions of the actual visible-sector QCD flux tube. In fact, it has been shown in Ref.
[@Cudell] that this minimal $D_\perp=2$ scalar string with $\kappa=36$ provides an excellent fit to hadronic data, both for low energies (which are sensitive to the Casimir energy within the confinement potential) as well as high energies (which are governed by the asymptotic degeneracy of hadronic states and the corresponding Hagedorn temperature). As discussed in Ref. [@Cudell], this success — coupled with the appearance of the same quantity $C$ in both places — provides a highly non-trivial test of the classical conformal invariance of the QCD string. In this paper, we shall imagine that our DDM ensemble of dark-sector hadrons mimics that of the visible-sector hadrons to the extent that it corresponds to a set of masses $M_n$ and state degeneracies $\hat g_n$ parametrized by the functional forms given in Eqs. (\[eq:MassSpectrum\]) and (\[eq:fnBetterApprox\]). However, we shall not insist on an actual string interpretation governing our dark-sector confinement dynamics, and as discussed above we shall therefore regard $B$ and $C$ as free parameters which may be adjusted at will (subject to certain constraints to be discussed below). Nevertheless it is only when $B$ and $C$ correspond to appropriate values of $D_\perp$ and $c$ via the relations in Eq. (\[stringvars\]) that we may describe our resulting spectrum as corresponding to that of a classically self-consistent string moving in a specific geometry. Moreover, motivated by our experience with visible-sector hadrons, we shall continue to regard the special scalar-string case with $B=5/4$ and $C=2\pi/\sqrt{3}$ as our “minimal” theory, corresponding to the action in Eq. (\[scalarstring\]) with $c_{\rm int}=0$.
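The dictionary between the flux-tube data $(D_\perp, c)$ and the degeneracy parameters $(B,C)$ in Eq. (\[stringvars\]) is simple enough to verify numerically. A minimal sketch (function names ours), confirming that the “minimal” values quoted above indeed follow from $D_\perp=2$ and $c=2$:

```python
import math

def BC_from_string(D_perp, c):
    # Degeneracy parameters (B, C) from flux-tube data (D_perp, c), per Eq. (stringvars).
    B = (3.0 + D_perp) / 4.0
    C = 2.0 * math.pi * math.sqrt(c / 6.0)
    return B, C

def c_int(B, C):
    # Internal central charge c_int = c - D_perp, per Eq. (stringvars2).
    return 3.0 * C**2 / (2.0 * math.pi**2) - 4.0 * B + 3.0

# Minimal scalar string: D_perp = 2 transverse fluctuations and nothing else (c = 2).
B, C = BC_from_string(2, 2)
assert B == 1.25
assert abs(C - 2.0 * math.pi / math.sqrt(3.0)) < 1e-12   # C = 2*pi/sqrt(3) ≈ 3.63
assert abs(c_int(B, C)) < 1e-12                          # c_int = 0, as quoted in the text
```

Note also that $c=24$ maps to $C=4\pi$, the boundary value below which the Bessel-function parametrization remains well-defined at $n=1$.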
Adjusting the value of $B$ above or below $5/4$ can then be interpreted as changing the effective number of uncompactified spacetime dimensions felt by our dark-sector flux tube (i.e., the number of uncompactified spacetime dimensions into which it can experience fluctuations), while increasing the value of $C$ beyond $2\pi/\sqrt{3}$ corresponds to introducing additional purely internal degrees of freedom with central charge $c_{\rm int}$ into our flux-tube theory. Note, in this regard, that the degrees of freedom associated with fluctuations into extra [*compactified*]{} spacetime dimensions count towards $c_{\rm int}$ rather than $D_\perp$. Thus, in terms of its effects on the dark sector, the act of compactifying a spacetime dimension to a radius below the associated string scale preserves the central charge $c$ (and thus the coefficient $C$) and merely shifts the associated degrees of freedom from $D_\perp$ to $c_{\rm int}$. The resulting change in the asymptotic state degeneracies $\hat g_n$ due to the change in $B$ then reflects the appearance of new Kaluza-Klein resonances in the total flux-tube spectrum. Constraints on parameters ------------------------- Even though $M_s$, $M_0$, $B$, and $C$ are henceforth to be viewed as unrestricted quantities parametrizing our hadron-like DDM ensemble, they are nevertheless subject to certain self-consistency constraints. First, we note that while the asymptotic form for $\hat g_n$ in Eq. (\[eq:fnBetterApprox\]) is remarkably accurate within those regions of $(B,C)$ parameter space for which actual string realizations exist, there are other regions of $(B,C)$ parameter space within which this approximation provides unphysical results. For example, given that the expression for $\hat g_n$ in Eq.
(\[eq:fnBetterApprox\]) multiplies a growing Bessel function against a falling monomial, for any given value of $B$ it is in principle possible for there to exist a critical value of $C$ below which $\hat g_n$ is not always monotonically increasing for all $n\geq 0$. Such a situation is clearly unphysical, implying that the number of accessible flux-tube states fails to grow with the total energy in the flux tube. We therefore demand that $$\hat g_{n+1} ~>~ \hat g_{n} \quad\quad {\rm for~~all}~~ n\geq 0~. \label{strongconstraint}$$ Given that we have taken $\hat g_0=1$, it turns out throughout the parameter range of interest that this requirement is tantamount to demanding $$\hat g_1 ~>~ 1~. \label{weakconstraint}$$ If we further wish to demand that our ensemble of dark “hadrons” admit a string-theoretic description, then certain additional consistency conditions on the parameters $B$ and $C$ must be satisfied as well. For example, since $D_\perp \in \mathbb{Z} > 0$ in any self-consistent string construction, we must have $$B ~\in~ \mathbb{Z}/4 ~>~ 3/4~. \label{eq:BStringConsistCond}$$ Likewise, as discussed above, any self-consistent string theory will also have $c \geq D_\perp$ (or $c_{\rm int}\geq 0$), which in turn implies $$C^2 ~\geq~ \frac{2\pi^2}{3}(4B - 3)~. \label{eq:CStringConsistCond}$$ There are, of course, further string-derived constraints that might be imposed. For example, the allowed set of worldsheet central charges $c$ that can be realized in such non-critical string theories depends crucially on the types of string models under study and the types of conformal field theories used in their constructions. However, the constraints in Eqs. (\[eq:BStringConsistCond\]) and (\[eq:CStringConsistCond\]) can be taken as a minimal model-independent set of constraints that must be satisfied as a prerequisite to any possible string interpretation.
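These constraints are easy to scan numerically. The sketch below uses our reading of the Bessel-function parametrization in Eq. (\[eq:fnBetterApprox\]) for $\hat g_n$ (helper names ours; only the standard library is assumed):

```python
import math

def bessel_i(nu, z, terms=80):
    # Power series for the modified Bessel function of the first kind, I_nu(z).
    return sum((z / 2.0) ** (2 * k + nu) / (math.gamma(k + 1) * math.gamma(k + nu + 1))
               for k in range(terms))

def g_hat(n, B, C):
    # Degeneracy parametrization (our reading of Eq. (fnBetterApprox)), with g_hat(0) = 1.
    if n == 0:
        return 1.0
    H = (C / (4.0 * math.pi)) ** 2           # keeps the argument C*sqrt(n - H) real for C < 4*pi
    return 2.0 * math.pi * (n / H - 1.0) ** (0.25 - B) * bessel_i(abs(2 * B - 0.5), C * math.sqrt(n - H))

def monotonic(B, C, n_max=40):
    # Physical-growth condition of Eq. (strongconstraint).
    g = [g_hat(n, B, C) for n in range(n_max + 1)]
    return all(b > a for a, b in zip(g, g[1:]))

def string_interpretable(B, C):
    # Minimal string-consistency conditions, Eqs. (BStringConsistCond) and (CStringConsistCond).
    return 4 * B == int(4 * B) and B > 0.75 and C * C >= (2.0 * math.pi**2 / 3.0) * (4 * B - 3)

B, C = 1.25, 2.0 * math.pi / math.sqrt(3.0)  # the minimal scalar-string point
assert g_hat(1, B, C) > 1 and monotonic(B, C) and string_interpretable(B, C)
assert not string_interpretable(1.25, 1.0)   # violates Eq. (CStringConsistCond)
```

Consistent with the discussion below, points satisfying the two string-motivated checks also pass the monotonicity check in this scan.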
![The region of $(B,C)$ parameter space of interest for a DDM ensemble of dark “hadrons.” The red shaded region is excluded by the theoretical self-consistency condition $\hat g_1 \geq 1$. By contrast, the blue shaded regions are excluded by the constraint $B>3/4$ as well as by the constraint in Eq. (\[eq:CStringConsistCond\]), and thus correspond to regions in which it would not be possible to interpret the ensemble constituents as the states of a quantized string. Note that locations for which $B\not\in \mathbb{Z}/4$ would also suffer from this difficulty. Within the (unshaded) string-allowed region, we have indicated contours of $D_\perp$, $c$, and $c_{\rm int}$, as defined in Eqs. (\[stringvars\]) and (\[stringvars2\]). The black dot indicates the point in parameter space corresponding to the minimal scalar string with $c_{\rm int}=0$. As demonstrated in Ref. [@Cudell], this model provides the best fit to the visible hadron spectrum.[]{data-label="fig:g1BCExclusion"}](gof1BCExclusionPlotAlt){width="45.00000%"} In Fig. \[fig:g1BCExclusion\], we indicate the region of $(B,C)$ parameter space which is consistent with the constraints in Eqs. (\[weakconstraint\]), (\[eq:BStringConsistCond\]), and (\[eq:CStringConsistCond\]). We emphasize that the first of these constraints must always be satisfied as a matter of internal self-consistency. By contrast, as discussed above, the latter two conditions need to be satisfied only if one imposes the additional stipulation that our ensemble of dark “hadrons” admit a string-theory description. We observe in this connection that the first constraint is always weaker than the remaining string-motivated constraints. In other words, a string-based description with $B\in \mathbb{Z}/4$, $B\geq 1$, is always guaranteed to have monotonically growing degeneracies $\hat g_n$. In Fig. \[fig:g1BCExclusion\] we also highlight the point $(B,C) = (5/4, 2\pi/\sqrt{3})$ corresponding to the “minimal” scalar string.
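At this minimal point, the accuracy of the smooth parametrization can be checked directly against an exact spectrum. The sketch below adopts two assumptions of our own for illustration: (i) that the exact degeneracies of the $D_\perp=2$ scalar string are the expansion coefficients of $\prod_{k\geq 1}(1-q^k)^{-2}$, and (ii) our reading of the Bessel-function form in Eq. (\[eq:fnBetterApprox\]):

```python
import math

def exact_degeneracies(n_max):
    # Coefficients of prod_{k>=1} (1 - q^k)^(-2): apply the 1/(1-q^k)
    # prefix recurrence c[i] += c[i-k] once per factor of the square.
    c = [1] + [0] * n_max
    for _ in range(2):
        for k in range(1, n_max + 1):
            for i in range(k, n_max + 1):
                c[i] += c[i - k]
    return c                                 # [1, 2, 5, 10, 20, 36, ...]

def bessel_i(nu, z, terms=60):
    # Power series for the modified Bessel function I_nu(z).
    return sum((z / 2.0) ** (2 * k + nu) / (math.gamma(k + 1) * math.gamma(k + nu + 1))
               for k in range(terms))

def g_hat(n, B, C):
    # Our reading of the smooth parametrization in Eq. (fnBetterApprox), for n >= 1.
    H = (C / (4.0 * math.pi)) ** 2
    return 2.0 * math.pi * (n / H - 1.0) ** (0.25 - B) * bessel_i(abs(2 * B - 0.5), C * math.sqrt(n - H))

B, C = 1.25, 2.0 * math.pi / math.sqrt(3.0)  # minimal scalar string
exact = exact_degeneracies(8)
for n in range(1, 9):
    print(n, exact[n], round(g_hat(n, B, C), 2))
```

Even at the lowest levels the two columns track each other at the few-percent level, in line with the comparison reported in the text.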
While this theory need not necessarily provide the best-fit description for our dark hadrons (as it does for the visible hadrons), its minimality nevertheless provides a useful benchmark for exploring the parameter space of our DDM model. Finally, we observe from Fig. \[fig:g1BCExclusion\] that our combined constraints imply that $$C ~\gtrsim~ 1.693~. \label{Cmin}$$ Indeed, this is the allowed range in $C$ for which $\hat g_1 > 1$ when $B=3/4$. As an illustration of the results of this section, let us focus further on this “minimal” scalar string. As noted above, the action for this string is given in Eq. (\[scalarstring\]). Quantizing this theory then gives rise to a discrete spectrum of states whose exact degeneracies are[^5] the coefficients $\hat g_n$ in the product expansion $\prod_{k=1}^{\infty}\left(1-q^k\right)^{-2} = \sum_{n=0}^{\infty} \hat g_n\, q^n$. Indeed it is only because of the existence of a quantized string formulation that we are even able to calculate the degeneracies of the corresponding ensemble from first principles. However, as we have asserted, these degeneracies are extremely well approximated by the expression in Eq. (\[eq:fnBetterApprox\]) with $(B,C)=(5/4,\, 2\pi/\sqrt{3})$. This is shown in Fig. \[comparison\], where we plot both the discrete exact degeneracies $\hat g_n$ and the approximate functional form in Eq. (\[eq:fnBetterApprox\]). As evident from Fig. \[comparison\], our functional form matches these discrete values of $\hat g_n$ extremely well for all values of $n\geq 0$ — even though the degeneracies $\hat g_n$ are necessarily integers and even though our functional form was originally designed to be accurate only in the asymptotic $n\to\infty$ limit! Indeed, as claimed above, this functional form is accurate to within two percent over the entire range of $n$. This demonstrates the power of the functional form we have adopted, as well as the utility of an underlying string formulation for our flux tube. ![State degeneracies $\hat g_n$ for the scalar-string flux-tube model of Eq. (\[scalarstring\]) (red circles), with the asymptotic functional form in Eq.
(\[eq:fnBetterApprox\]) superimposed (blue line). It is clear that our asymptotic functional form succeeds in modelling the state degeneracies extremely accurately all the way down to the ground state, as we shall require for our analysis.[]{data-label="comparison"}](bessel4){width="42.50000%"} Lifetimes and cosmological abundances for hadronic DDM ensembles {#sec:Balancing} ================================================================ In the previous section, we discussed the spectra of our dark “hadronic” DDM ensembles. Our next step, then, is to consider the lifetimes and cosmological abundances of the individual states within these ensembles. Cosmological abundances ----------------------- As we have seen, the degeneracy of states $g_n$ for our ensemble of dark “hadrons” grows exponentially with the mass of the state, with asymptotic behavior $g_n\sim e^{C\sqrt{n}} \sim e^{C M_n/M_s}$. This exponential rise in the state degeneracies places severe constraints on the possible, physically consistent cosmological production mechanisms by which the corresponding abundances $\Omega_n$ might be established. Indeed, unless the corresponding abundances $\Omega_n$ fall sufficiently rapidly with $n$, our ensemble is likely to encounter severe phenomenological difficulties. Fortunately, our interpretation of the individual components of such an ensemble as dark hadrons suggests a natural mechanism through which the corresponding abundances $\Omega_n$ are generated with an exponential suppression factor capable of overcoming this exponential rise in $g_n$. As we have discussed, we have been imagining that these dark “hadrons” emerge as the result of a dark-sector confining phase transition triggered by the strong interactions of some dark-sector gauge group $G$. This phase transition occurs when the temperature $T$ in the dark sector drops below the critical temperature $T_c$ associated with this phase transition.
This event marks the time $t_c$ at which the primordial abundances of our individual hadrons are established. Moreover, it is reasonable to assume that residual $G$ interactions establish thermal equilibrium among these hadrons at $T \sim T_c$. Thus, the primordial abundances $\Omega_n$ of our hadrons can be assumed to follow a Boltzmann distribution at $t=t_c$: $$\Omega_n(t_c) ~=~ \frac{1}{3 \widetilde M_P^2 H(t_c)^2} \int \frac{d^3{\bf p}}{(2\pi)^3}\; E_{\bf p}\, e^{-E_{\bf p}/T_c}~, \label{eq:OmeganPrimordial}$$ where $E_{\bf p}\equiv \sqrt{ {\bf p}\cdot{\bf p} + M_n^2 }$ and $\rho_{\rm crit}(t) \equiv 3 \widetilde M_P^2 H(t)^2$ where $\widetilde M_P\equiv M_P/\sqrt{8\pi}=1/\sqrt{8\pi G_N}$ is the reduced Planck mass and $H(t)$ the Hubble parameter. Indeed, we may equivalently regard these abundances as emerging from an infinitely rapid succession of thermal freeze-outs. Evaluating Eq. (\[eq:OmeganPrimordial\]) explicitly, we find $$\Omega_n(t_c) ~=~ X \left[ (M_n T_c)^2\, K_2(M_n/T_c) + \frac{M_n^3\, T_c}{2} \Big( K_1(M_n/T_c) + K_3(M_n/T_c)\Big)\right] \label{Besselexact}$$ where $K_\nu(z)$ are modified Bessel functions of the second kind and where $X\equiv [6\pi^2 \widetilde M_P^2 H(t_c)^2]^{-1}$ is a common overall multiplicative factor. In general, a given state with mass $M$ produced at temperature $T_c$ will be non-relativistic (behaving like massive matter) if $T_c\lesssim M$ and relativistic (behaving like radiation) otherwise. In such limiting cases, the abundances in Eqs. (\[eq:OmeganPrimordial\]) and (\[Besselexact\]) take the simplified forms $$\Omega_n(t_c) ~\approx~ \begin{cases} \sqrt{\dfrac{\pi}{2}}\; X\, M_n\, (M_n T_c)^{3/2}\, e^{-M_n/T_c} & {\rm for}~~T_c\lesssim M_n\\[8pt] 6\, X\, T_c^4 & {\rm for}~~T_c\gtrsim M_n~. \end{cases} \label{reducedforms}$$ At first glance, it may seem that any value for $T_c$ might be phenomenologically permissible. However, this production mechanism can only be self-consistent if it injects a finite total energy density into our system. In other words, as a bare minimum, we must require that $$\Omega_{\rm tot}(t_c) ~\equiv~ \sum_{n=0}^\infty\, g_n\, \Omega_n(t_c) ~<~ \infty~. \label{finitetotal}$$
However, this condition is sensitive to the behavior of the abundances $\Omega_n(t_c)$ for extremely large $n$, corresponding to states which are non-relativistic. For such states, we see from Eq. (\[reducedforms\]) that $\Omega_n(t_c)\sim e^{-M_n/T_c}$. With $g_n\sim n^{-B} e^{C\sqrt{n}}$ as $n\to\infty$, we find using Eqs. (\[eq:MassSpectrum\]) and (\[eq:OmeganPrimordial\]) that Eq. (\[finitetotal\]) can only hold if $$\frac{T_c}{M_s} ~\leq~ \frac{1}{C}~. \label{Tclimit}$$ This then becomes a hard bound on the allowed values of $T_c$, one which ensures that the Boltzmann exponential suppression factor in Eq. (\[eq:OmeganPrimordial\]) ultimately overcomes the exponential rise in the degeneracy of states $g_n$. Indeed, Eq. (\[Tclimit\]) reflects nothing more than the statement that $T_c\leq T_H$, where $T_H\equiv M_s/C$ is the Hagedorn temperature of our dark ensemble. For the [*visible*]{} hadronic sector, one often assumes that $T_c$ and $T_H$ are related to each other parametrically, with $T_c$ either directly identified as $T_H$ or positioned not too far below $T_H$. We shall implicitly make the same assumption for the dynamics of our dark sector as well. The next task is to determine which of our ensemble components are produced relativistically or non-relativistically at $T=T_c$. To do this, we shall henceforth assume that $T_c, M_s, M_0 > T_{\rm MRE}$, where $t_{\rm MRE}$ and $T_{\rm MRE}$ are the time and temperature associated with matter-radiation equality. This assumption, which parallels what occurs for the hadrons of the visible sector, ensures that our abundances $\Omega_n(t)$ are established during the radiation-dominated era prior to matter-radiation equality and that all ensemble constituents have become effectively non-relativistic by $t_{\rm MRE}$.
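The origin of this bound is easy to see numerically: up to powers of $n$, the summand $g_n\Omega_n(t_c)$ behaves as $e^{(C - M_s/T_c)\sqrt{n}}$ at large $n$. A sketch in units of $M_s=1$, with overall prefactors dropped (so only the convergence behavior, not the normalization, is meaningful):

```python
import math

def summand(n, B, C, Tc):
    # Large-n shape of g_n * Omega_n(t_c): the degeneracy n^(-B) exp(C sqrt(n)) against the
    # non-relativistic Boltzmann weight M_n^(5/2) exp(-M_n/Tc), with M_n ≈ sqrt(n) (M_s = 1).
    Mn = math.sqrt(n)
    return n ** (-B) * math.exp(C * math.sqrt(n)) * Mn ** 2.5 * math.exp(-Mn / Tc)

B, C = 1.25, 2.0 * math.pi / math.sqrt(3.0)
TH = 1.0 / C                                   # Hagedorn temperature T_H = M_s / C

total = sum(summand(n, B, C, 0.9 * TH) for n in range(1, 20001))
assert summand(20000, B, C, 0.9 * TH) < 1e-20  # T_c < T_H: terms die off, the sum converges
assert summand(20000, B, C, 1.1 * TH) > 1e10   # T_c > T_H: terms blow up, no finite total
print(round(total, 3))
```

For $T_c$ below the Hagedorn temperature the Boltzmann suppression wins and the partial sums saturate; for $T_c$ above it the individual terms themselves diverge.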
Note that the assumption that $T_c>T_{\rm MRE}$ follows from our expectation that our dark degrees of freedom prior to $t_c$ (i.e., prior to “hadronization” in the dark sector) are likely to be relativistic, thereby reinforcing the radiation-dominated nature of the era prior to $T_{\rm MRE}$ and making matter-radiation equality impossible to achieve using only visible-sector matter, as would have been required had we taken $T_c<T_{\rm MRE}$. Similarly, the assertion that $M_s>T_c$ follows directly from our assumption that $T_c>T_{\rm MRE}$, given the constraints in Eqs. (\[Cmin\]) and (\[Tclimit\]). Finally, although it is not impossible to imagine self-consistent scenarios in which $M_0<T_{\rm MRE}$, taking $M_0>T_{\rm MRE}$ also helps to preserve $t_{\rm MRE}$ at its standard cosmological value. We shall nevertheless make no assertion regarding the relative sizes of $M_0$ and $T_c$. The above assumptions enable us to determine which of the components of our ensemble are relativistic or non-relativistic at $T=T_c$. To do this, we simply compare $T_c$ against the ensemble masses $M_n$ given in Eq. (\[eq:MassSpectrum\]). Given the constraint in Eq. (\[Tclimit\]), it is straightforward to demonstrate that $$T_c ~\leq~ \frac{M_s}{C} ~\leq~ \frac{M_1}{C}~.$$ Since $C>1$ \[as follows from Eq. (\[Cmin\])\], we conclude that [*all of our ensemble components with $n\geq 1$ are necessarily non-relativistic at $t=t_c$. By contrast, the $n=0$ component will be relativistic at $t=t_c$ if $T_c \gtrsim M_0$, and non-relativistic otherwise.*]{} Eq. (\[eq:OmeganPrimordial\]) describes the abundances of our dark-sector hadrons at the time $t_c$ when these hadrons come into existence as the result of a dark-sector confining transition. However, once established, these abundances then evolve non-trivially with time as a result of two effects. The first of these is Hubble expansion; the second is particle decay. We shall treat each of these effects separately.
In order to evaluate the effect of Hubble expansion on the abundances $\Omega_n(t)$, we shall assume a standard cosmological history in which the universe remains radiation-dominated (RD) from very early times up to the time $t_{\rm MRE}$ of matter-radiation equality. We shall also approximate the universe as matter-dominated (MD) throughout the subsequent epoch. In general, we recall that the abundance $\Omega(t)$ of non-relativistic matter scales as $t^{1/2}$ during an RD epoch but remains constant in an MD epoch; by contrast, the abundance of relativistic matter remains constant during an RD epoch but scales as $t^{-2/3}$ during an MD epoch. Likewise, we recall that the temperature $T$ of the universe scales as $T\sim t^{-1/2}$ during RD but $T\sim t^{-2/3}$ during MD. Thus any ensemble component of mass $M$ which is “born” relativistic at $T=T_c\gg M$ will eventually transition to non-relativistic behavior as the temperature ultimately drops below $T\sim M$. Collecting these observations, we then find that the net effect of Hubble expansion is to rescale the original abundance of a given state of mass $M$ by a factor which depends on whether that state was non-relativistic or relativistic at the time $t_c$ of its production: $$\Omega(t) ~=~ \Omega(t_c)\,\times~ \begin{cases} \left(t_{\rm MRE}/t_c\right)^{1/2} & {\rm non\mbox{-}relativistic~at~} t_c\\[4pt] \left(t_{\rm MRE}/t_M\right)^{1/2} & {\rm relativistic~at~} t_c \end{cases} \label{magnifications}$$ where $t_M$ denotes the time at which $T=M$. Note that this result is valid for any time $t\geq t_{\rm MRE}$. Since it follows from our assumptions that $t_c,t_M < t_{\rm MRE}$, we see that the abundances of all of our ensemble states are necessarily [*enhanced*]{} before reaching the current MD era. However, as evident from Eq. (\[magnifications\]), these abundances are not enhanced equally: the abundance of a non-relativistic component is enhanced more than that of any relativistic component of mass $M$, by a factor $\sqrt{t_M/t_c}$.
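The two enhancement factors differ only through the epoch at which the $t^{1/2}$ growth begins; a minimal sketch (time arguments in arbitrary units, names ours):

```python
import math

def hubble_enhancement(t_c, t_M, t_MRE, relativistic_at_birth):
    # Net rescaling of Omega between t_c and any t >= t_MRE, per Eq. (magnifications):
    # non-relativistic matter grows as t^(1/2) throughout the RD era, giving sqrt(t_MRE/t_c);
    # a state born relativistic only starts growing at t_M, when T has dropped to ~M.
    t_start = t_M if relativistic_at_birth else t_c
    return math.sqrt(t_MRE / t_start)

t_c, t_M, t_MRE = 1.0, 25.0, 1.0e6
nr = hubble_enhancement(t_c, t_M, t_MRE, False)
rel = hubble_enhancement(t_c, t_M, t_MRE, True)
assert math.isclose(nr / rel, math.sqrt(t_M / t_c))   # relative enhancement sqrt(t_M/t_c)
```

Both factors exceed unity (all abundances are enhanced), but the non-relativistic one is larger by exactly $\sqrt{t_M/t_c}$.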
We have already seen that the states with $n\geq 1$ are all non-relativistic, while the $n=0$ ground state is either relativistic or non-relativistic depending on the value of $M_0/T_c$. Thus, putting all of the pieces together, we find for all $n\geq 1$ that $$\begin{aligned} \Omega_{n}(t) &=& \sqrt{\frac{\pi}{2}}\; X\, M_n\, (M_n T_c)^{3/2}\, e^{-M_n/T_c}\, \sqrt{\frac{t_{\rm MRE}}{t_c}}\nonumber\\ &=& \sqrt{\frac{\pi}{2}}\; X \left( \frac{g_c}{g_{\rm MRE}}\right)^{1/4} \frac{(M_n T_c)^{5/2}}{T_{\rm MRE}}\; e^{-M_n/T_c}\nonumber\\ &=& \sqrt{\frac{\pi}{2}}\; \frac{1}{6\, g_c^{3/4}\, g_{\rm MRE}^{1/4}}\; \frac{M_n^{5/2}}{T_c^{3/2}\, T_{\rm MRE}}\; e^{-M_n/T_c}~. \label{abundances_n}\end{aligned}$$ Note that in passing to the second line we have exploited the standard time/temperature relationship suitable for an RD epoch, specifically[^6] $$t ~=~ \frac{\pi}{2}\; g_\ast(T)^{-1/2}\, \frac{\widetilde M_P}{T^2}~, \label{timetemp}$$ where $g_\ast(T)$ tallies the number of effectively relativistic degrees of freedom driving the Hubble expansion at any temperature $T$, with $g_\alpha\equiv g_\ast(T_\alpha)$. Likewise, in passing to the final line of Eq. (\[abundances_n\]) we have recognized that $H=1/(2t)$ for an RD epoch, from which it follows that $X=1/(6 g_c T_c^4)$. For $n=0$, however, the corresponding cosmological abundance is given by $$\Omega_0(t) ~=~ \begin{cases} \sqrt{\dfrac{\pi}{2}}\; \dfrac{1}{6\, g_c^{3/4}\, g_{\rm MRE}^{1/4}}\; \dfrac{M_0^{5/2}}{T_c^{3/2}\, T_{\rm MRE}}\; e^{-M_0/T_c} & {\rm for}~~T_c\lesssim M_0\\[10pt] \dfrac{1}{g_c} \left(\dfrac{g_{M_0}}{g_{\rm MRE}}\right)^{1/4} \dfrac{M_0}{T_{\rm MRE}} & {\rm for}~~T_c\gtrsim M_0~. \end{cases} \label{abundances_0}$$ As expected, the cosmological abundances in Eqs. (\[abundances_n\]) and (\[abundances_0\]) depend non-trivially on the three mass scales which parametrize our dark-hadron mass spectrum, namely $M_0$, $T_c$, and $M_s$ (the latter appearing implicitly through $M_n$). They also depend on the fixed mass scale $T_{\rm MRE}$. However, if we disregard the numerical $g$-factors which appear in these results and which only serve to parametrize the external time/temperature relationship, we see that the [*ratios*]{} between these abundances depend only on the [*ratios*]{} between our input mass scales. In particular, such abundance ratios are no longer anchored to a fixed external mass scale such as $T_{\rm MRE}$.
To make this point explicit, let us define the dimensionless quantities $$r ~\equiv~ \frac{M_0}{M_s} \quad\quad {\rm and} \quad\quad s ~\equiv~ \frac{T_c}{M_s}~,$$ and imagine that $g_\ast(T)^{1/4}$ does not change significantly between $T_c$ and $M_0$. (Note, indeed, that $g^{1/4}$ varies much more slowly than $g$.) We then find from Eqs. (\[abundances_n\]) and (\[abundances_0\]) that $$\frac{\Omega_n}{\Omega_0} ~=~ \begin{cases} \left(\dfrac{n+r^2}{r^2}\right)^{5/4} e^{-\left(\sqrt{n+r^2}\,-\,r\right)/s} & {\rm for}~~s\lesssim r\\[10pt] \sqrt{\dfrac{\pi}{72}}\; \dfrac{\left(n+r^2\right)^{5/4}}{r\, s^{3/2}}\; e^{-\sqrt{n+r^2}/s} & {\rm for}~~s\gtrsim r~. \end{cases}$$ Thus, up to an overall rescaling factor $\Omega_0$, we see that all of our abundances $\Omega_n$ depend purely on the dimensionless ratios $r$ and $s$. It then follows that the cosmological abundance of each state in our dark-hadron ensemble is determined once $\Omega_0$ is anchored to a particular numerical value and specific values of $r$ and $s$ are chosen. This observation will be important in what follows. Lifetimes and decays -------------------- As indicated above, our derivation of the dark-sector cosmological abundances $\Omega_n(t)$ has thus far disregarded the effects of particle decays. In other words, we have implicitly assumed that each ensemble component is absolutely stable once produced at $T_c$. As our final step, we shall therefore now incorporate the effects of such decays into our analysis. In doing so, we shall make several simplifying assumptions. First, we shall assume that the net injection of energy density in the form of radiation from these decays has a negligible effect on the total radiation-energy density of the universe. Hence, this effect decouples from the effect of Hubble expansion. Second, we shall further assume that the contribution to the total decay width $\Gamma_n$ of each ensemble constituent from intra-ensemble decays is negligible. In other words, we shall assume that $\Gamma_n$ is dominated by decays to visible-sector final states which do not include lighter ensemble constituents. We shall discuss the consequences of relaxing this assumption in Sect. \[sec:Conclusion\].
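The claim that the abundance ratios depend only on $(r,s)$ can be verified directly for the non-relativistic branch. The sketch below assumes the reduced Boltzmann form $\Omega_n \propto M_n^{5/2}\, e^{-M_n/T_c}$ of Eq. (\[reducedforms\]) together with $M_n = M_s\sqrt{n+r^2}$; names are ours:

```python
import math

def mass(n, r):
    # M_n / M_s = sqrt(n + r^2), with r = M_0 / M_s.
    return math.sqrt(n + r * r)

def ratio_NR(n, r, s):
    # Omega_n / Omega_0 for s <~ r (ground state non-relativistic), from the reduced
    # Boltzmann form Omega_n ∝ M_n^(5/2) exp(-M_n/T_c), expressed via r and s = T_c/M_s.
    return ((n + r * r) / (r * r)) ** 1.25 * math.exp(-(mass(n, r) - r) / s)

# The ratio depends only on the dimensionless pair (r, s): rescaling all three mass
# scales (M_0, M_s, T_c) by a common factor leaves every ratio unchanged.
r, s = 2.0, 0.5
direct = (mass(3, r) / mass(0, r)) ** 2.5 * math.exp(-(mass(3, r) - mass(0, r)) / s)
assert math.isclose(ratio_NR(3, r, s), direct)
```

Raising $s$ (i.e., a hotter confining transition) weakens the Boltzmann suppression of the higher levels, as expected.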
Third, we shall assume that all states at a given mass level $n$ share a common decay width $\Gamma_n$, and that this width scales with $n$ across our dark-hadron ensemble according to $$\Gamma_n ~=~ \Gamma_0 \left( \frac{M_n}{M_0} \right)^{\xi}~ \label{eq:Widths}$$ where $M_n$ are the dark-hadron masses in Eq. (\[eq:MassSpectrum\]) and where $\Gamma_0$ (or, equivalently, the corresponding lifetime $\tau_0$) and the scaling exponent $\xi > 0$ are taken to be additional free parameters of our model. Thus each state in our dark-sector ensemble has a lifetime $\tau_n\equiv 1/\Gamma_n$ given by $$\tau_n ~=~ \tau_0 \left( \frac{n}{r^2} + 1 \right)^{-\xi/2}~. \label{eq:Lifetimes}$$ Finally, for simplicity, we shall imagine that all states with lifetimes $\tau_n$ indeed actually decay at $t=\tau_n$. Under these assumptions, the abundance $\Omega_n(t)$ of any ensemble constituent at any time $t \geq t_c$ is given by the expressions quoted above, but now multiplied by an additional decay factor $$e^{-(t-t_c)/\tau_n} ~\approx~ e^{-\left(\sqrt{n+r^2}/r\right)^{\xi}\, t/\tau_0}~, \label{expfactor}$$ where we have approximated $t \gg t_c$. For $s\lesssim r$, we thus have $$\Omega_{n\geq1}(t) ~=~ \Omega^{\rm (NR)}_0(t)\; \frac{(n+r^2)^{5/4}}{r^{5/2}}\; {\cal E}_n^{\rm (NR)}(t)~, \label{omeganNR}$$ where $${\cal E}_n^{\rm (NR)}(t) ~\equiv~ e^{-\left(\sqrt{n+r^2}-r\right)/s \;-\; \left[\left(\sqrt{n+r^2}/r\right)^{\xi}-1\right] t/\tau_0}$$ and where $$\Omega_0^{\rm (NR)}(t) ~=~ \sqrt{\frac{\pi}{2}}\; \frac{1}{6\, g_c^{3/4}\, g_{\rm MRE}^{1/4}}\, \left(\frac{r}{s}\right)^{3/2} \left( \frac{M_0}{T_{\rm MRE}}\right) e^{-r/s - t/\tau_0}~. \label{omega0NR}$$ By contrast, for $s\gtrsim r$, we have $$\Omega_{n\geq1}(t) ~=~ \Omega_0^{\rm (R)}(t)\; \sqrt{\frac{\pi}{72}}\; \frac{(n+r^2)^{5/4}}{r\, s^{3/2}}\; {\cal E}_n^{\rm (R)}(t)~, \label{omeganR}$$ where $${\cal E}_n^{\rm (R)}(t) ~\equiv~ e^{-\sqrt{n+r^2}/s \;-\; \left[\left(\sqrt{n+r^2}/r\right)^{\xi}-1\right] t/\tau_0}$$ and where $$\Omega_0^{\rm (R)}(t) ~=~ \frac{1}{g_c} \left( \frac{g_{M_0}}{g_{\rm MRE}}\right)^{1/4} \left(\frac{M_0}{T_{\rm MRE}}\right) e^{-t/\tau_0}~. \label{omega0R}$$
Cosmological constraints on the dark-hadron ensemble {#sec:OmegaEtaWeff} ===================================================== Having determined the abundances and lifetimes of each of the individual components of our dark-hadron DDM ensemble, we now proceed to study the overall properties of our ensemble and its behavior as a function of time. However, as we shall see, many of the phenomenological properties and constraints that apply to such an ensemble do not rest upon the properties of the individual ensemble components [*per se*]{}, but rather upon various aggregate quantities that collectively describe the ensemble as a whole. Accordingly, in this section we shall begin by describing three aggregate quantities which ultimately play the most important roles in characterizing and constraining such dark-hadron DDM ensembles. We shall then discuss some of the most immediate cosmological constraints that can be placed upon these quantities. Total abundance, tower fraction, and effective equation of state \[quantities\] ------------------------------------------------------------------------------- Perhaps not surprisingly, the first aggregate property of a given dark-hadron DDM ensemble that shall concern us is its total abundance $$\Omega_{\rm tot}(t) ~\equiv~ \sum_{n=0}^\infty\, g_n\, \Omega_n(t)~. \label{omegatot}$$ Given our results in Eqs. (\[omeganNR\]) and (\[omeganR\]), this total abundance takes the form $$\Omega_{\rm tot}(t) ~=~ \begin{cases} \Omega_0^{\rm (NR)}(t) \left[\, 1 + \displaystyle\sum_{n=1}^{\infty} g_n\, \frac{(n+r^2)^{5/4}}{r^{5/2}}\; {\cal E}_n^{\rm (NR)}(t)\right] & {\rm for}~~s\lesssim r\\[14pt] \Omega_0^{\rm (R)}(t) \left[\, 1 + \sqrt{\dfrac{\pi}{72}}\, \displaystyle\sum_{n=1}^{\infty} g_n\, \frac{(n+r^2)^{5/4}}{r\, s^{3/2}}\; {\cal E}_n^{\rm (R)}(t)\right] & {\rm for}~~s\gtrsim r \end{cases} \label{Omegatot}$$ where $\Omega_0^{\rm (NR,R)}(t)$ are given in Eqs. (\[omega0NR\]) and (\[omega0R\]). Indeed, we further note from Eqs. (\[omega0NR\]) and (\[omega0R\]) that $$\Omega^{\rm (NR,R)}_0(t) ~=~ e^{-(t-t_{\rm now})/\tau_0}\; \Omega^{\rm (NR,R)}_0(t_{\rm now})~, \label{omega0timenow}$$ where $t_{\rm now}\approx 4\times 10^{17}$ s denotes the current age of the universe. We thus see from Eqs.
(\[Omegatot\]) and (\[omega0timenow\]) that the overall magnitude of $\Omega_{\rm tot}^{\rm (NR,R)}(t)$ can be viewed as being set by the single number $\Omega_0^{\rm (NR,R)}(\tnow)$. In characterizing the properties of our DDM ensemble and how they evolve with time, we are certainly interested in tracking $\Omegatot(t)$. However, we are also interested in tracking the [*distribution*]{} of this total abundance among the individual ensemble constituents. One quantity of particular interest that provides essential information about this distribution is the so-called “tower fraction” $0\leq \eta(t)\leq 1$ originally introduced in Ref. [@DDM1]. This quantity is typically defined in the DDM literature as the fraction of the abundance carried by all ensemble components [*other*]{} than the dominant component, where the dominant component is the one making the largest individual contribution to $\Omegatot(t)$. As such, the quantity $\eta$ tracks the degree to which a single component carries the bulk of the total abundance. When $\eta$ is close to zero, our ensemble effectively resembles a traditional single-component dark-matter setup. By contrast, when $\eta$ differs significantly from zero, our ensemble is more truly “DDM-like”, with many of the ensemble constituents together playing a non-trivial role in shaping the properties of the dark sector. Such a definition for $\eta$ is appropriate in cases in which each ensemble constituent has a unique mass and lifetime. Indeed, this has often been the case for the types of DDM ensembles previously studied. However, for the dark-hadron DDM ensembles on which we are focusing here, the states at a given Regge level $n$ have been assumed to have essentially equal masses and lifetimes. Thus, in this paper, we shall adopt a modified definition for $\eta(t)$ in which the comparison is made between the aggregate abundance contributions that accrue [*level by level*]{} rather than state by state.
Specifically, we define $$\widehat{\Omega}_n(t) ~\equiv~ g_n\Omega_n(t)$$ as the aggregate cosmological abundance arising from all states at a particular oscillator level $n$. In terms of these aggregate abundances, we then define $$\eta(t) ~\equiv~ 1 - \frac{\max_n\{\widehat{\Omega}_n(t)\}}{\Omegatot(t)}~.$$ Thus we continue to have $0\leq \eta(t)\leq 1$, with $\eta\approx 0$ signifying a dark sector resembling traditional single-component dark matter and $\eta >0$ indicating (and quantifying) a DDM-like departure from this traditional scenario. At first glance, one might assume that the $n=0$ ground state(s) must always yield the largest aggregate abundance $\widehat \Omega_n(t)$ because the primordial abundances $\Omega_n(t)$ for the states at all higher levels $n>0$ are exponentially suppressed by the corresponding Boltzmann factor in Eq. (\[eq:OmeganPrimordial\]). However, for the DDM ensembles of dark hadrons studied here, it often turns out that the Hagedorn-like exponential growth of the degeneracies $g_n$ as a function of $n$ can more than compensate for the Boltzmann suppression for small values of $n$. Indeed, this is true even for combinations of the ensemble parameters $B$, $C$, $r$, and $s$ which satisfy the consistency conditions discussed in Sect. \[sec:DensityOfStates\] and which yield a finite value of $\Omegatot(t_c)$. As a result of this net balancing between these two competing exponential effects, the level carrying the greatest aggregate cosmological abundance $\widehat \Omega_n(t)$ need not always be the $n=0$ ground state. It need not even be fixed as a function of time. This possibility must therefore be taken into account when evaluating $\eta(t)$. Finally, another important quantity which can be taken to characterize our dark sector is the so-called equation-of-state parameter $w$. 
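The competition between Hagedorn degeneracy growth and Boltzmann suppression described above can be made concrete with a short numerical sketch. The snippet below is purely illustrative (not code from our actual analysis): it assumes the non-relativistic abundance scaling of Eq. (\[omeganNR\]) with decays neglected, a schematic level degeneracy $g_n \sim n^{-B} e^{C\sqrt{n}}$ for $n\geq 1$ with $g_0=1$, and illustrative parameter values; it then computes the aggregate level abundances $\widehat\Omega_n$ in units of $\Omega_0$ and the resulting tower fraction $\eta$.

```python
import math

def g_level(n, B=1.25, C=3.63):
    """Schematic Hagedorn-like level degeneracy g_n ~ n^{-B} e^{C sqrt(n)} (n >= 1)."""
    return n ** (-B) * math.exp(C * math.sqrt(n))

def omega_hat(n, r, s, B=1.25, C=3.63):
    """Aggregate abundance of level n in units of Omega_0 (NR case, decays neglected)."""
    if n == 0:
        return 1.0  # ground state (g_0 = 1 assumed) sets the normalization
    boltzmann = math.exp(-(math.sqrt(n + r**2) - r) / s)
    return g_level(n, B, C) * (n + r**2) ** 1.25 / r**2.5 * boltzmann

def eta(r, s, nmax=2000, B=1.25, C=3.63):
    """Tower fraction: 1 - (largest single-level abundance)/(total abundance)."""
    omegas = [omega_hat(n, r, s, B, C) for n in range(nmax + 1)]
    return 1.0 - max(omegas) / sum(omegas)

# For, e.g., r = 3.5 and s = 0.1 the dominant level is an excited one, not n = 0:
levels = [omega_hat(n, 3.5, 0.1) for n in range(50)]
print(levels.index(max(levels)), eta(3.5, 0.1))
```

For these illustrative values the exponential growth of $g_n$ overwhelms the Boltzmann suppression at small $n$, so the level carrying the maximum aggregate abundance is indeed not the ground state, while for much smaller $s$ the ground state dominates and $\eta\to 0$, exactly as described above.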
For a single-component dark sector, this quantity is nothing but the ratio between the pressure $p$ and energy density $\rho$ of the dark component: $p=w\rho$. However, we are dealing here with a multi-component dark sector in which each component has its own individual lifetime and abundance. As a result, the total energy density and pressure associated with our dark sector will generally experience a rather non-trivial time dependence which causes our ensemble as a whole to behave collectively as if it had a non-trivial $w$ — even if each individual component is taken to be pure matter with $w=0$. To describe these collective effects, we therefore define [@DDM1] an [*effective*]{} equation-of-state parameter $\weff(t)$ which describes the behavior of our ensemble as a single collective entity: $$\weff(t) ~\equiv~ -\left(\frac{1}{3H}\frac{d\log\rhotot}{dt}+1 \right)~. \label{weffdef}$$ Here $H$ is the Hubble parameter and $\rhotot = 3\widetilde M_P^2 H^2\Omegatot$ is the total energy density of the ensemble. Note that the definition in Eq. (\[weffdef\]) is nothing but the usual definition of $w$ prior to any assumptions of dark-sector minimality. As discussed above, we are primarily concerned with the evolution of the ensemble during the present matter-dominated epoch, within which $H(t)\approx 2/(3t)$. Thus, the effective equation-of-state parameter for our DDM ensemble within this epoch is given by $$\weff(t) ~=~ -\frac{t}{2\Omegatot}\frac{d\Omegatot(t)}{dt}~. \label{eq:wdef}$$ As discussed in Sect. \[sec:Balancing\], the only explicit dependence of $\Omegatot(t)$ on $t$ within a matter-dominated epoch is due to the exponential decay factor (\[expfactor\]) within each individual abundance $\Omega_n(t)$.
We thus find that $$\weff(t) ~=~ \frac{t}{2\tau_0 \Omegatot(t)}\sum_{n=0}^\infty g_n \left(\frac{\sqrt{n + r^2}}{r}\right)^\xi \Omega_n(t)~.~~ \label{eq:weffExplicit}$$ Note that even though each of the individual components of our ensemble has been taken to be matter-like (with $w=0$), the collective equation-of-state parameter $w_{\rm eff}(t)$ for our ensemble as a whole is [*positive*]{}, reflecting the fact that the ensemble as a whole is continually losing abundance as its individual components decay. Indeed, it is only in the $\tau_0\to\infty$ limit that $w_{\rm eff}(t)\to 0$. As we shall see in Sect. \[sec:Constraints\], $\weff(t)$ plays an important role in constraining the parameter space of these DDM ensembles.

Cosmological constraints \[sec:Constraints\]
--------------------------------------------

Given our time-dependent aggregate quantities $\Omegatot(t)$, $\eta(t)$, and $w_{\rm eff}(t)$, we now turn to the cosmological constraints that bound these functions. In this way, we shall ultimately be placing non-trivial constraints on the parameter space underlying these hadronic DDM ensembles. In this connection, we again stress that our aim in this paper is not to perform a detailed analysis of the astrophysical and/or cosmological constraints on this parameter space. Such a detailed analysis would clearly be an important but extensive task which is beyond the scope of this paper. Moreover, such an analysis would require a host of further assumptions concerning the particular nature of our ensemble, the specific decay modes of its constituents into SM states, and so forth. Rather, in this paper, our goal is simply to obtain a rough initial sense of those regions of parameter space in which a DDM ensemble of dark “hadrons” might have at least the [*potential*]{} of phenomenological viability.
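For orientation, Eq. (\[eq:weffExplicit\]) can be evaluated numerically by truncating the sum over levels. The sketch below is an illustrative implementation under the same schematic assumptions as before (non-relativistic abundances, $g_n \sim n^{-B}e^{C\sqrt{n}}$, and illustrative parameter values); it also exhibits the single-component limit, in which only the ground state contributes and $\weff(t)\to t/(2\tau_0)$.

```python
import math

def w_eff(t, r, s, tau0, xi=3.0, B=1.25, C=3.63, nmax=500):
    """Evaluate Eq. (weffExplicit) with the level sum truncated at nmax (NR case)."""
    num = den = 0.0
    for n in range(nmax + 1):
        g = 1.0 if n == 0 else n ** (-B) * math.exp(C * math.sqrt(n))
        rate = (math.sqrt(n + r**2) / r) ** xi       # this is tau_0 / tau_n
        shape = 1.0 if n == 0 else ((n + r**2) ** 1.25 / r**2.5
                                    * math.exp(-(math.sqrt(n + r**2) - r) / s))
        omega_n = shape * math.exp(-rate * t / tau0)  # abundance up to normalization
        num += g * rate * omega_n
        den += g * omega_n
    return t / (2.0 * tau0) * num / den

# When only n = 0 effectively survives (tiny s), w_eff reduces to t/(2 tau_0):
print(w_eff(1e8, r=3.5, s=0.01, tau0=1e9))   # close to 0.05
```

Since every excited level has $\tau_0/\tau_n > 1$, populating the tower can only increase $\weff$ relative to this single-component limit, in keeping with the observation above that $\weff(t)$ is strictly positive for finite $\tau_0$.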
Accordingly, in what follows, we shall put forth a set of requirements which directly constrain the fundamental quantities $\Omegatot(t)$, $\eta(t)$, and $\weff(t)$ we have defined above, but which do not require any further information concerning these hadronic ensembles beyond those properties already discussed. In some sense, then, these might be viewed as the immediate “zeroth-order” model-independent constraints that any DDM ensemble of this sort must satisfy. Our first constraint is an obvious one: despite the presence of an infinite tower of dark-hadronic resonances, each with its own cosmological abundance and lifetime, we shall demand that $$\Omegatot(t_{\rm now}) ~=~ \OmegaCDM ~\approx~ 0.26~. \label{constraint1}$$ This requirement is clearly predicated on the assumption that our dark-hadronic ensemble represents the totality of the dark sector; for other cases we would simply require that $\Omegatot(t_{\rm now}) \lsim 0.26$. As we shall see, in either situation this is a severe and unavoidable constraint which ultimately “anchors” our entire construction in terms of actual numbers and mass scales. Second, we may also consider the [*time-variation*]{} of $\Omegatot(t)$. The time-variation of this total abundance is constrained by experimental probes which yield information about the dark-matter abundance during different cosmological epochs. For example, CMB data [@Planck] provides information about the dark-matter abundance around the time of last scattering — i.e., at a redshift $z \approx 1100$, or equivalently a time of roughly $2.7 \times 10^{-5} \tnow$. On the other hand, observational data on baryon acoustic oscillations [@BOSS] and the relationship between luminosity and redshift for Type Ia supernovae [@SCP] provide information about $H(t)$ and the dark-energy abundance $\Omega_\Lambda$ at subsequent times, down to redshifts of around $z\approx 0.5$.
Within the context of the $\Lambda$CDM cosmology, the agreement between these different measurements implies that the dark-matter abundance has not changed dramatically since the time of last scattering. In order to be consistent with this result, we shall therefore demand that the total abundance of our DDM ensemble not vary by more than 5% between an early “look-back” time $t_{\rm LB}$ and today: $$\frac{\Omegatot(t) - \Omegatot(\tnow)}{\Omegatot(\tnow)} ~\leq ~ 0.05~~~~~ {\rm for~all}~~ t_{\rm LB}\leq t \leq \tnow~. \label{constraint2}$$ In what follows, we shall choose a look-back time $\tLB = 10^{-6}\tnow$, which lies comfortably before the recombination epoch. In addition to these constraints on the time-variation of the dark-matter abundance, there are further considerations which constrain the decays of the DDM-ensemble constituents more directly. These constraints depend on the decay properties of the dark-sector particles and are thus ultimately model-dependent. However, for those rather general cases in which the ensemble constituents can decay to final states involving visible-sector particles, one must ensure that these decay products not disrupt big-bang nucleosynthesis [@BBNLimits], not produce observable distortions in the CMB [@HuAndSilkLong; @HuAndSilkShort], not reionize the universe [@Slatyer], and not violate current limits on the fluxes of photons or other cosmic-ray particles [@AMSPositron; @AMSAntiproton]. Indeed, even if the ensemble constituents decay exclusively into other, lighter dark-sector particles, such decays can nevertheless leave observable imprints on small-scale structure [@peter; @MeiYu], alter the scale- and redshift-dependence of the cosmological gravitational-lensing power spectrum [@LensingConstraint], and affect the luminosity-redshift relation for Type Ia supernovae [@Gong; @Savvas].
Since these effects all arise from the decays of ensemble constituents, non-observation of these effects also leads to constraints on the time-variation of $\Omega_{\rm tot}$. Some of these latter constraints admittedly depend on model-dependent aspects of the decay kinematics of the dark-ensemble constituents. However, the strongest and most general of these constraints effectively amount to limits on the variation of $\Omegatot(t)$ within the recent past — i.e., for redshifts $0 \lesssim z \lesssim 3$. Therefore, in addition to our look-back-time constraint in Eq. (\[constraint2\]), we shall also impose an additional constraint on our effective equation-of-state parameter: $$\weff(\tnow) ~\lsim~ 0.05~. \label{constraint3}$$ Through Eq. (\[eq:wdef\]), this thus becomes a constraint on the present-day [*time-derivative*]{} of $\Omegatot(t)$. It is important to stress that this constraint is independent of that in Eq. (\[constraint2\]): while Eq. (\[constraint2\]) constrains accumulated changes in $\Omegatot(t)$ over a relatively long interval, Eq. (\[constraint3\]) constrains the time-variation of $\Omegatot(t)$ near the present time. Other considerations will also guide our interest in certain regions of parameter space. For example, from a DDM-inspired standpoint, we are particularly interested in scenarios for which $$\eta(\tnow) ~\sim~ {\cal O}(1)~, \label{constraint4}$$ i.e., scenarios in which the present-day value of $\eta$ is significantly different from zero. This ensures that a sizable number of ensemble constituents continue to survive and contribute meaningfully to $\Omegatot$ at the present time, with dark-matter decays occurring [*throughout*]{} the present epoch and not just in the distant past or future. Although Eq. (\[constraint4\]) is not a strict requirement for phenomenological consistency, this condition guides the degree to which we may regard our ensemble as being fully DDM-like, with a significant portion of the ensemble playing a non-trivial role in the phenomenology of the dark sector.
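The look-back constraint of Eq. (\[constraint2\]) and the $\weff$ constraint of Eq. (\[constraint3\]) act on $\Omegatot(t)$ in complementary ways, and both are straightforward to test numerically for any candidate abundance history. The sketch below is a simple illustration (the helper names are our own, not part of any actual analysis pipeline): it takes $\Omegatot(t)$ as a callable, checks the look-back criterion by sampling, and checks the $\weff$ criterion via Eq. (\[eq:wdef\]) with a finite difference.

```python
import math

def passes_lookback(omega_tot, t_now, t_lb, tol=0.05, samples=200):
    """Eq. (constraint2): (Omega(t) - Omega(t_now))/Omega(t_now) <= tol on [t_lb, t_now]."""
    ref = omega_tot(t_now)
    for i in range(samples + 1):
        t = t_lb + (t_now - t_lb) * i / samples
        if (omega_tot(t) - ref) / ref > tol:
            return False
    return True

def passes_weff(omega_tot, t_now, bound=0.05, eps=1e-4):
    """Eq. (constraint3) via w_eff = -(t/(2 Omega)) dOmega/dt, finite-differenced at t_now."""
    h = eps * t_now
    d_omega = (omega_tot(t_now + h) - omega_tot(t_now - h)) / (2.0 * h)
    return -t_now * d_omega / (2.0 * omega_tot(t_now)) <= bound

# Toy single-exponential histories Omega(t) = 0.26 exp[(t_now - t)/tau_0], with t_now = 1:
longlived = lambda t: 0.26 * math.exp((1.0 - t) / 1e9)   # tau_0 = 1e9 t_now
shortlived = lambda t: 0.26 * math.exp((1.0 - t) / 1.0)  # tau_0 = t_now
```

For the long-lived toy history both tests pass, while for $\tau_0 = \tnow$ both fail, consistent with the expectation (discussed below) that $\tau_0$ must exceed $\tnow$ by several orders of magnitude.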
For example, this condition rules out regions of parameter space in which $\tau_n\ll \tau_0 $ for all $n\geq 1$, with $\tau_1 \ll t_{\rm LB}$. In such regions of parameter space, all excited dark-hadronic states have decayed prior to our look-back time, leaving us with a single dark-hadronic ground state in the present epoch. Such a scenario trivially satisfies all of our phenomenological constraints on the time-variations of the total dark-sector abundance, but is effectively no different from that of a traditional, single-component dark sector. It is thus less interesting from a DDM perspective. There are two further phenomenological constraints which will be useful for us to consider in the following. First, we shall demand that $\tau_0 \gg \tnow$. Although we do not necessarily require $\tau_0\approx 10^9 \tnow$ as in traditional single-component dark sectors, we generally expect that $\tau_0$ must exceed $\tnow$ by at least several orders of magnitude in order to satisfy look-back and $w_{\rm eff}$ constraints. This assumption will be discussed further in Sect. \[sec:Results\]. Likewise, although we have thus far assumed $M_0\geq T_{\rm MRE}$ throughout our analysis, we actually must impose the somewhat stronger bound $M_0\gsim {\cal O}(10^3) T_{\rm MRE} \approx {\cal O}({\rm keV})$ in order to satisfy BBN and structure-formation constraints. This last requirement implicitly assumes that our lightest ensemble component carries the largest cosmological abundance (or at least a sizable fraction of the total cosmological abundance), but we shall see in Sect. \[sec:Results\] that this turns out to be true for the vast majority of phenomenologically interesting cases. Finally, we shall also make certain simplifying assumptions. First, for concreteness, we shall restrict our attention to situations with $\xi=3$. 
In other words, we shall assume that the dominant contributions to the decay lifetimes $\tau_n$ of our DDM constituents $\phi_n$ scale as $\tau_n\sim 1/M_n^3$ across the DDM ensemble. Decay widths of the form $\Gamma_n\sim M_n^3/\Lambda^2$ emerge naturally from operators such as $\phi_n F_{\mu\nu}F^{\mu\nu}/\Lambda$ where $\Lambda$ parametrizes the energy scale associated with such couplings and where $F^{\mu\nu}$ denotes a field-strength tensor associated with either the visible-sector (SM) photon or a dark-radiation photon associated with an additional Abelian gauge group under which the ensemble constituents are not charged. The contributions from such operators will dominate the decays of our DDM constituents in scenarios in which our DDM ensemble is uncharged with respect to all SM symmetries, and in which intra-ensemble decays can be neglected. Likewise, we shall also make the simplifying assumption that $\kappa=1$ in Eq. (\[eq:gnInTermsOffn\]). This restricts us to the bare “minimal” case in which we do not ascribe non-trivial degrees of freedom to our dark-sector quarks, and thereby focus exclusively on the ensemble of states generated by our infinite tower of hadronic resonances. Finally, throughout our analysis, we shall continue to impose the self-consistency constraints listed in Eqs. (\[strongconstraint\]) \[or equivalently (\[weakconstraint\])\], (\[eq:BStringConsistCond\]), (\[eq:CStringConsistCond\]), and (\[Tclimit\]). Thus, going forward, the free parameters governing our dark-hadron DDM ensemble may be tallied as follows. First, there are the two parameters $\lbrace B,C\rbrace$ which govern the individual state degeneracies $\hat g_n$ according to Eq. (\[eq:fnBetterApprox\]). Second, there are the four parameters $\lbrace r,s,M_0, \tau_0\rbrace $ which govern the individual abundances $\Omega_n(t)$ in Eqs. (\[expfactor\]) through (\[omega0R\]). However, imposing Eq. 
(\[constraint1\]) as an overall normalization condition allows us to remove $M_0$ as a free parameter. Thus, for the rest of this paper, we shall consider our DDM ensembles as functions of their locations within the five-dimensional parameter space corresponding to the variables $\lbrace B,C, r,s, \tau_0\rbrace$ where $B\geq 1$, $C^2 \geq 2\pi^2 (4B-3)/3$, and $s\leq 1/C$.

Results\[sec:Results\]
======================

In general, we seek to determine which values of our defining parameters $\lbrace B, C, r, s, \tau_0\rbrace$ lead to self-consistent and potentially viable dark sectors — i.e., sectors which satisfy our abundance, look-back, and $w_{\rm eff}$ constraints in Eqs. (\[constraint1\]), (\[constraint2\]), and (\[constraint3\]) respectively, along with our $M_0> {\cal O}({\rm keV})$ constraint. For each such set, we also seek to determine the corresponding values of relevant mass scales such as the string scale $M_s$. We also seek to determine the extent to which the corresponding ensemble is truly DDM-like, with a relatively large number of component states playing a significant role in the phenomenology of the dark sector and contributing to $\Omega_{\rm tot}$ at the present time. In general, the larger the value of $\eta(t_{\rm now})$, the more DDM-like the corresponding ensemble. At first glance, it might seem rather daunting to orient ourselves within the five-dimensional $\lbrace B, C, r, s, \tau_0\rbrace$ parameter space. However, there are really two separate parts to our analysis — one part which depends only on [*relative*]{} mass scales, and one part which makes explicit reference to [*absolute*]{} mass scales. It is clear from Eqs. (\[Omegatot\]) and (\[omega0timenow\]) that once we know $\lbrace B,C, r,s,\tau_0\rbrace$, we can determine the function $\Omegatot^{\rm (R,NR)}(t)$ up to an overall multiplicative constant $\Omega^{\rm (R,NR)}_0(\tnow)$.
Setting $\Omegatot^{\rm (R,NR)}(\tnow)=\OmegaCDM \approx 0.26$ therefore immediately determines a required numerical value of $\Omega^{\rm (R,NR)}_0(\tnow)$. This also determines the corresponding values of $\eta(\tnow)$ and $\weff(\tnow)$. Up to this point, we have not yet anchored our results in terms of absolute mass scales. However, this can also easily be done: we simply set our required numerical value of $\Omega^{\rm (R,NR)}_0(\tnow)$ equal to the expression in either Eq. (\[omega0NR\]) or Eq. (\[omega0R\]). This then determines an absolute value for the mass scale $M_0$, whereupon we find that $M_s= r M_0$ and $T_c= (s/r) M_0$. Thus, in this way, we can extract the values for $M_s$ and $\eta(\tnow)$ corresponding to every point in the $\lbrace B,C, r,s,\tau_0\rbrace$ parameter space. Certain observations can be made rather rapidly. For example, given Eq. (\[constraint1\]), it immediately follows that $\Omega_0(t_{\rm now}) \lsim 0.26$ — a bound which can be saturated only when $\eta(t_{\rm now})=0$. More generally and more schematically, we might write this constraint in the rough order-of-magnitude form $$\Omega_0(t_{\rm now}) ~\lsim~ {\cal O}(0.1)~. \label{omega0constraint}$$ However, let us now consider the expression in Eq. (\[omega0R\]) for $\Omega_0(t)$ in the relativistic case. Since $\tau_0$ must exceed $\tnow$ by at least several orders of magnitude, as discussed in Sect. \[sec:OmegaEtaWeff\], we see that the exponential factor $e^{-t/\tau_0}$ is essentially 1. Likewise we recall that $M_0/T_{\rm MRE}\geq {\cal O}(10^3)$, as also discussed in Sect. \[sec:OmegaEtaWeff\]. Let us assume that this bound is saturated, so that $M_0/T_{\rm MRE}={\cal O}(10^3)$. We therefore find that Eq. (\[omega0constraint\]) can be satisfied only if $g_c \sim 10^{4}$.
This would in turn require a mass scale $T_c$ which at the very minimum exceeds the TeV scale (thereby introducing a hierarchy between $T_c$ and $M_0$ which is at least a factor of $10^6$) and which actually must be so high that there are at least ten times as many effectively relativistic degrees of freedom below this scale as are known to exist below the TeV scale — a rather unlikely proposition resting entirely on currently unknown physics. Considering greater values of $M_0/T_{\rm MRE}$ only worsens this situation and requires even greater values of $g_c$. Therefore, although there might exist finely tuned slivers of parameter space in which one might manage to achieve a balancing between $g_c$ and $M_0/T_{\rm MRE}$ sufficient to satisfy Eq. (\[omega0constraint\]), we shall abandon any further consideration of the relativistic case in what follows. This situation changes dramatically when we turn to the non-relativistic case in Eq. (\[omega0NR\]). In this case, we continue to find that $e^{-\tnow/\tau_0}\approx 1$. However, the presence of the factor $(r/s)^{3/2} e^{-r/s}$ allows us greater freedom in satisfying the constraint in Eq. (\[omega0constraint\]). Indeed, the first thing we learn is that our system is going to be very sensitive to the ratio $r/s$ — not surprising, given that this was already the ratio that determined the extent to which our lightest mode was relativistic or non-relativistic. However, we now see that $r/s$ is also going to play a large role in governing the allowed values of the overall mass scales in our system, with greater (lesser) values of $r/s$ generally corresponding to higher (lower) absolute mass scales for our ensemble. We shall therefore proceed through our parameter space as outlined above, paying special attention to the values of $r$ and $s$ and in particular to the ratio $r/s$.
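The anchoring procedure just outlined is easy to automate. The sketch below is an illustrative implementation, assuming the non-relativistic expression of Eq. (\[omega0NR\]): given a target value for $\Omega_0^{\rm (NR)}(\tnow)$, it inverts that equation to fix $M_0$ and then sets $M_s = r M_0$ and $T_c = (s/r) M_0$. The value adopted here for $T_{\rm MRE}$ (taken near the eV scale) is an assumption for illustration, and the internal consistency checks are those quoted in the text.

```python
import math

T_MRE = 8e-10  # matter-radiation-equality temperature in GeV (~0.8 eV); assumed value

def consistent(B, C, s):
    """Internal consistency constraints quoted in the text."""
    return (B >= 1.0
            and C**2 >= 2.0 * math.pi**2 * (4.0 * B - 3.0) / 3.0
            and s <= 1.0 / C)

def anchor_scales(omega0_required, r, s, tnow_over_tau0=1e-9):
    """Invert Eq. (omega0NR) at t = t_now to fix M_0, then M_s and T_c (in GeV)."""
    M0 = (omega0_required * T_MRE * (s / r) ** 1.5
          * math.exp(r / s + tnow_over_tau0))
    return M0, r * M0, (s / r) * M0      # (M_0, M_s, T_c)
```

Note the exponential sensitivity to $r/s$: increasing $r/s$ from 30 to 40 at fixed $r$ raises $M_0$ by a factor $\sim e^{10}$, which is precisely why the overall mass scales turn out to be governed dominantly by this ratio.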
Specifically, for each value of $\lbrace B, C, r,s, \tau_0\rbrace$, we shall determine whether our internal consistency constraints $B\geq 1$, $C^2 \geq 2\pi^2 (4B-3)/3$, and $s\leq 1/C$ are satisfied and whether the phenomenological consistency constraints in Eqs. (\[constraint2\]) and (\[constraint3\]) are satisfied. If so, we shall then determine the corresponding values of $M_s$ and $\eta(\tnow)$, with the overall goal of understanding which regions of parameter space potentially lead to viable ensembles and which subregions correspond to ensembles which are particularly DDM-like.

![image](OW0e-rs-Bbf-Cbf-t9){width="45.00000%"}
![image](contour-rrs-BCbf-t9){width="45.00000%"}

Because of the somewhat natural and intuitive role played by the $D_\perp=2$ scalar flux tube, as discussed in Sect. \[sec:DensityOfStates\], we shall adopt the values $$B ~=~ 5/4~, ~~~~~~ C ~=~ 2\pi/\sqrt{3} \,\approx\, 3.63~ \label{BCbenchmarks}$$ as “benchmark” values and begin our exploration within $(r,s)$ space. Taking $\tau_0= 10^9 \tnow$, we find the results shown in Fig. \[fig3\]. Let us first concentrate on the left panel of Fig. \[fig3\]. The red region indicates those values of $(r,s)$ which are excluded by look-back and $w_{\rm eff}$ constraints, while the pale green region is excluded by the requirement that $M_0\gsim {\cal O}({\rm keV})$. The blue curves indicate contours of $\eta(\tnow)$ and the magenta curves indicate contours of $M_s$, labelled by values of $\log_{10} (M_s/{\rm GeV})$. The single green curve indicates the contour with $M_0=1$ keV.  The thin black curve indicates the contour with $r/s=1$, and thus serves as the nominal dividing line between the regions in which the lowest ensemble state is relativistic (above and to the left) or non-relativistic (below and to the right). Several things are immediately apparent from this figure. First, we see that the portion of the parameter space corresponding to the relativistic case is excluded by our constraint on $M_0$.
This is entirely in keeping with our conclusions already reached above. Nevertheless, we also see that beyond this region there exists an entire area of parameter space in which all of our constraints are satisfied. Moreover, within this region we see that $M_s$ varies from the keV/MeV-range all the way to the Planck scale. Likewise, $\eta(\tnow)$ varies through all of its possible values. This is therefore not only an allowed region, but one which is likely to be exceedingly rich in phenomenology. Indeed, given the contours plotted in this figure, we see that the “sweet spot” within the $(r,s)$ parameter space lies roughly within the range $$1 ~\lsim~ r ~\lsim~ 6~, ~~~~~~ 0.05 ~\lsim~ s ~\lsim~ 0.18~. \label{sweetspot}$$ This is the region of $(r,s)$ parameter space where the plotted blue and magenta contours intersect each other and form a “cross-hatched” region, as illustrated in the left panel of Fig. \[fig3\]. This sweet spot is therefore the region that will be of maximum interest to us. Indeed, within this region, we observe from the left panel of Fig. \[fig3\] that $\eta(\tnow)$ increases if either $r$ or $s$ is increased, while $M_s$ increases in the former case but decreases in the latter. The right panel of Fig. \[fig3\] focuses on this sweet-spot region and shows the same $M_s$ and $\eta$ contours, only now plotted with respect to the variables $r/s$ and $s$ using a linear rather than logarithmic axis. The fact that the $M_s$ contours are approximately vertical in this region indicates that $M_s$ is dominantly determined by the ratio $r/s$, exactly as anticipated above, with increasing values of $r/s$ corresponding to increasing values of $M_s$. Indeed, we see from the right panel of Fig. \[fig3\] that $M_s$ increases extremely rapidly as a function of $r/s$, in keeping with the exponential dependence in Eq. (\[omega0NR\]). Likewise, increasing the value of $r/s$ while holding $r$ fixed tends to [*decrease*]{} the value of $\eta(\tnow)$.
Thus, for fixed $r$, we find that $M_s$ and $\eta(\tnow)$ tend to vary inversely with respect to each other as functions of $r/s$, with our ensembles becoming less DDM-like at higher mass scales and more DDM-like at lower mass scales. Likewise, for fixed $r/s$, we find that increasing $r$ tends to increase $\eta(\tnow)$, as already evident from the left panel of Fig. \[fig3\]. It is easy to understand these results physically. For fixed $r$, increasing $r/s$ corresponds to decreasing $s$. This lowers the critical temperature $T_c$ at which our initial cosmological abundances are established, which has the effect of decreasing the abundances of the heavier states relative to the lighter states. This therefore decreases the value of $\eta(\tnow)$. By contrast, holding $r/s$ fixed and increasing $r$ corresponds to increasing $s$ as well. The increase in $r$ renders all of the ensemble states more massive but provides a smaller proportional mass increase for the heavier states than for the lighter states. Thus the mass [*ratios*]{} between heavier and lighter states decrease, which tends to increase the value of $\eta(\tnow)$. Likewise, as discussed above, increasing $s$ also tends to increase the value of $\eta(\tnow)$. These two effects then tend to reinforce each other, as evident in Fig. \[fig3\].

![image](meo0_rc_B1_25){width="45.00000%"}
![image](etaMs-CCs-r3_5){width="45.00000%"}
![image](OW-BC-r3_5_sr_30){width="45.00000%"}

Having identified our sweet-spot region in $(r,s)$ parameter space, we now investigate how these values of $M_s$ and $\eta(\tnow)$ vary as our other parameters $B$, $C$, and $\tau_0$ are varied. To do this, we study variations in these parameters relative to an $(r,s)$ “benchmark” $$r ~=~ 3.5~, ~~~~~~ r/s ~=~ 30~, \label{rsbenchmarks}$$ which we henceforth take as representative of our sweet-spot region in the $(r,s)$ plane. In Fig.
\[fig4\] we illustrate the effects of variations in $B$ and $C$ relative to this benchmark, plotting contours of $M_s$ and $\eta(\tnow)$ in the $(r,C)$ plane (upper left panel), the $(s, C)$ plane (upper right panel), and the $(B,C)$ plane (lower panel). Note that since we must always have $s\leq 1/C$, it is actually the normalized product $s\cdot C$ which captures the dependence on $s$ in situations where $C$ might also be varied. In the upper right panel we therefore plot our contours relative to $s\cdot C$ rather than $s$ alone. Likewise, in the lower panel of Fig. \[fig4\] we have continued to indicate our allowed regions of $B$ and $C$ as in Fig. \[fig:g1BCExclusion\], where the dot continues to represent the $D_\perp=2$ scalar-string benchmark values in Eq. (\[BCbenchmarks\]). Together, the three panels of Fig. \[fig4\] tell a consistent story. First, with $r$ and $s$ held fixed, we see from the upper left and lower panels of Fig. \[fig4\] that increasing $C$ generally tends to increase $\eta(\tnow)$. This result makes sense: increasing $C$ corresponds to increasing the [*degeneracies*]{} of the heavier states relative to the lighter states. However, with $s$ held constant, each of these heavier states continues to accrue the same abundance as before. Thus increasing $C$ increases the total abundance carried by the heavier states relative to that carried by the lighter states, thereby increasing $\eta(\tnow)$. Second, we see from the lower panel of Fig. \[fig4\] that while our values of $M_s$ and $\eta(\tnow)$ are quite sensitive to $C$, they are far less sensitive to $B$.  This too makes sense, since $C$ governs the exponential rate of growth in the state degeneracies while $B$ governs only the subleading polynomial behavior. Third, in each of the above two cases, we also note that increasing $C$ while holding $r$ or $B$ fixed also corresponds to decreasing $M_s$. 
Thus, once again, we see that $M_s$ and $\eta(\tnow)$ tend to vary inversely with each other, giving rise to more DDM-like ensembles at lower energy scales and more traditional ensembles at higher energy scales. Finally, we see from the upper right panel of Fig. \[fig4\] that our values of $\eta(\tnow)$ are largely [*insensitive*]{} to variations in $C$ as long as $s\cdot C$ is held fixed. However, this too is easy to understand. Increasing $C$ while holding $s\cdot C$ fixed corresponds to decreasing $s$ as we increase $C$.  Increasing $C$ induces an exponential increase in the degeneracy of each massive state, while decreasing $s$ decreases the critical temperature $T_c$, thereby inducing a corresponding exponential decrease in the abundance associated with each such state. Thus, to first approximation, these two effects tend to mitigate each other: they produce more states, but also cause each state to carry a correspondingly smaller abundance.

![image](rs-B1_25-Cbf){width="45.00000%"}
![image](BC-r3_5-sr_30){width="45.00000%"}

Thus far we have not discussed the effects of varying our remaining free parameter $\tau_0$. Varying $\tau_0$ does not affect the degeneracies of states or their cosmological abundances. Indeed, variations in $\tau_0$ affect only the [*lifetimes*]{} of these states. In principle, this has the potential to affect the values of quantities such as $\eta(\tnow)$ since the determination of $\eta(\tnow)$ requires totalling the abundances of only those states which have not yet decayed at the present time. However, under the assumption that $\tau_0\gg \tnow$ (or under the equivalent assumption that our scenario already satisfies the look-back and $w_{\rm eff}$ constraints), we know that $\Omegatot(\tnow)$ is not changing rapidly at the present time. In other words, the total abundance of those states which are decaying at the present time is relatively small.
In such cases, the $M_s$ and $\eta(\tnow)$ contours are therefore largely insensitive to $\tau_0$. Indeed, in Fig. \[fig3\], the sole effect of varying $\tau_0$ is merely to “slide” the red exclusion regions in Fig. \[fig3\] horizontally relative to the rest of the plot: these exclusion regions move to the right (and therefore become more threatening to our sweet-spot region) if $\tau_0/\tnow$ is decreased, and move to the left (and therefore become even less of a concern) if $\tau_0/\tnow$ is increased. While this is entirely as expected, the natural question then arises: for any values of $\lbrace B,C,r,s\rbrace$, what is the minimum value of $\tau_0$ that can be tolerated before violating our look-back and $\weff$ constraints? Contours indicating the resulting minimum values $\tau_0^{\rm min}$ are plotted in Fig. \[fig5\] in both the $(r,s)$ and $(B,C)$ planes, taking our “benchmark” values in Eqs. (\[rsbenchmarks\]) and (\[BCbenchmarks\]) respectively. In general, we see from Fig. \[fig5\] that a wide variety of values of $\tau_0^{\rm min}$ are possible, depending on our specific location in parameter space, with larger values of $\tau_0^{\rm min}$ corresponding to very small values of $r$ or relatively large values of $s$ or $C$.  However, for our sweet-spot benchmark values in Eqs. (\[BCbenchmarks\]) and (\[rsbenchmarks\]), we see from Fig. \[fig5\] that $\tau_0^{\rm min}$ can be as small as approximately $10^{2}\tnow$. This, too, is not entirely a surprise. After all, a bound on the lifetime of the longest-lived DDM constituent of order $\tau_0/t_{\mathrm{now}} \sim \mathcal{O}(100)$ is roughly on the same order as the most conservative bounds on the lifetime $\tau_\chi$ of a traditional single-component dark-matter candidate which decays into other purely dark-sector states.
Indeed, model-independent bounds on decaying dark matter in traditional single-component models in which the dark-matter particle carries essentially all of the observed dark-matter abundance and decays into dark radiation have been derived by a number of groups (see, e.g., Refs. [@DeLopeAmigo; @Audren; @Aubourg; @Blackadder]). Depending on the assumptions inherent in the various analyses and on the breadth of cosmological data incorporated, such studies place a bound on the lifetime of such a dark-matter candidate on the order of $\tau_\chi/t_{\mathrm{now}} \gtrsim \mathcal{O}(10 - 100) $. Thus, a bound on $\tau_0$ in this range is [*a priori*]{} reasonable — especially since our analysis in Fig. \[fig5\] determines the value of $\tau_0^{\rm min}$ based only on cosmological look-back and $\weff$ constraints. Of course, if the ensemble constituents decay into [*visible*]{}-sector particles with a non-negligible branching fraction, the constraints on $\tau_0$ are expected to increase significantly. Indeed, the most stringent bounds on a single dark-matter particle $\chi$ which decays primarily into visible-sector radiation require that this particle be hyperstable, with $\tau_\chi \sim 10^9 \tnow$. Despite the possibilities for lowering $\tau_0$ afforded by the results in Fig. \[fig5\], we shall continue to retain our benchmark value $\tau_0 = 10^9 t_{\mathrm{now}}$. We do this in order to be consistent with the most conservative decay scenarios possible. Although this value for $\tau_0$ is quite large, we emphasize that this is only the lifetime of the lightest ensemble constituent, and that a significant fraction of the ensemble constituents will generally have lifetimes much less than $\tau_0$. Moreover, even in cases for which the majority of the ensemble is long-lived, DDM ensembles can nevertheless yield striking astrophysical signatures [@DDMindirect; @DDMboxes1; @DDMboxes2] which differ from those of traditional dark-matter candidates.
Thus, even with such values of $\tau_0$, the phenomenology of the resulting ensemble can differ significantly from that of traditional dark-matter candidates. Having explored the relevant $\lbrace B,C,r,s,\tau_0\rbrace$ parameter space of our ensemble and identified our sweet-spot region, we now examine the characteristics of the corresponding ensembles in more detail. In particular, we seek to understand what these ensembles look like, and how their overall structure evolves with time. As discussed in Sect. \[quantities\], the most relevant aggregate properties of any dark-sector ensemble are its total cosmological abundance $\Omegatot(t)$, its effective equation-of-state parameter $\weff(t)$, and its tower fraction $\eta(t)$, each of which is generally time-dependent. We therefore begin by examining how each of these quantities evolves with time for ensembles in and near our sweet spot. This information is shown in Fig. \[fig6\]. In this figure, we consider a “benchmark” ensemble with $B=5/4$, $C=2\pi/\sqrt{3}$, $r=3.5$, $s=3.5/30$, and $\tau_0=10^9 \tnow$, as well as nearby ensembles in which $\tau_0$ is varied (top row), $r$ is varied (second row), $s$ is varied (third row), $C$ is varied (fourth row), and $B$ is varied (fifth row). In each case, we plot the corresponding total cosmological abundance $\Omega_{\rm tot}$ (left column), equation-of-state parameter $w_{\rm eff}$ (middle column), and tower fraction $\eta$ (right column) as functions of time. Note that in each case the overall abundance is normalized through an appropriate choice of $M_s$ such that $\Omega_{\rm tot}(t_{\rm now}) = \OmegaCDM \approx 0.26$, as required.
![image](ab_t){width="30.00000%"}   ![image](w_t){width="30.00000%"}   ![image](eta_t){width="30.00000%"} ![image](ab_r){width="30.00000%"}   ![image](w_r){width="30.00000%"}   ![image](eta_r){width="30.00000%"} ![image](ab_s){width="30.00000%"}   ![image](w_s){width="30.00000%"}   ![image](eta_s){width="30.00000%"} ![image](ab_C){width="30.00000%"}   ![image](w_C){width="30.00000%"}   ![image](eta_C){width="30.00000%"} ![image](ab_B){width="30.00000%"}   ![image](w_B){width="30.00000%"}   ![image](eta_B){width="30.00000%"} In each panel of Fig. \[fig6\] (except for those along the bottom row), the blue curve corresponds to our “benchmark” point. We therefore begin by focusing on these benchmark curves. The curve for $\Omega_{\rm tot}(t)$ appears nearly constant at $\OmegaCDM\approx 0.26$ for all of the cosmological history plotted (which we assume to have been matter-dominated), including the present time $t_{\rm now}$. Indeed, this behavior continues all the way into the future until $t\approx 10^9 t_{\rm now}$, at which point $\Omega_{\rm tot}(t)$ begins to decline gently to $\Omega_{\rm tot}=0$. This behavior is matched by $w_{\rm eff}(t)$, which remains near zero for most of its cosmological evolution before gently rising to $w_{\rm eff}>0$ at $t\approx 10^9 t_{\rm now}$. This makes sense, since Eq. (\[eq:wdef\]) tells us that $w_{\rm eff}(t)$ is proportional to the time-derivative of $\Omega_{\rm tot}(t)$. Finally, we see that $\eta(t)$ remains more or less fixed at approximately $\eta\approx 0.72$ during most of its cosmological history before smoothly dropping to $\eta=0$. This behavior is easy to understand.
If this had been a traditional ensemble with a single dark-matter component whose decay we could model as essentially instantaneous (just as we are assuming for the individual components of our dark-matter ensembles), our curve for $\Omega_{\rm tot}(t)$ would have been fixed precisely at its present value $\OmegaCDM\approx 0.26$ over the entire range shown until suddenly dropping (essentially discontinuously) to $\Omega_{\rm tot}=0$ when the single dark-matter particle decays at $t\approx 10^9 t_{\rm now}$. Likewise, $w_{\rm eff}(t)$ would have been strictly fixed at $w_{\rm eff}=0$ during the cosmological evolution, while $\eta(t)$ would have been fixed at zero all along. However, this is not a traditional dark-matter setup: this is a DDM ensemble in which the present-day cosmological abundance $\Omega_{\rm tot}(t_{\rm now})\approx 0.26$ is spread across a relatively large number of individual components with different masses and different lifetimes. It is thus the continued, ordered, sequential decays of these different components which produce the softer, gentler drop in $\Omega_{\rm tot}(t)$ as $t$ approaches $t\approx 10^9 t_{\rm now}$. In fact, $\Omega_{\rm tot}(t)$ is actually falling slightly [*throughout*]{} the cosmological evolution shown; this behavior is not visible in Fig. \[fig6\] only because at early times prior to $t\approx 10^9 t_{\rm now}$ the states which are decaying are extremely heavy and thus carry extremely small abundances. By contrast, at late times approaching $t\approx 10^9\tnow$, the states which are decaying are relatively low-lying and carry more significant abundances.
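This "gentle staircase" behavior can be made concrete with a small numerical sketch. Everything below is illustrative and rests on assumptions not spelled out in this excerpt: two-colored partition numbers as a stand-in for the degeneracies $\hat g_n$, Regge-like masses $M_n = M_s\sqrt{n+r^2}$ with $r=M_0/M_s$ and $s=T_c/M_s$, level abundances $\widehat\Omega_n\propto \hat g_n (M_n/M_0)^{5/2} e^{-(M_n-M_0)/T_c}$, and a purely hypothetical power-law lifetime scaling $\tau_n=\tau_0(M_0/M_n)^3$ standing in for the paper's Eq. (\[eq:Lifetimes\]), which is not reproduced here. Each decay is treated as instantaneous, as in the text:

```python
import math

def degeneracies(nmax, colors=2):
    # Coefficients of prod_k (1-x^k)^(-colors): partitions of n into parts of
    # `colors` distinguishable types -- a stand-in for the state degeneracies g_n.
    g = [0]*(nmax + 1); g[0] = 1
    for _ in range(colors):
        for k in range(1, nmax + 1):
            for n in range(k, nmax + 1):
                g[n] += g[n - k]
    return g

r, s, tau0 = 3.5, 3.5/30, 1.0e9          # benchmark values; times in units of t_now
nmax = 200
g = degeneracies(nmax)
M = [math.sqrt(n + r*r) for n in range(nmax + 1)]        # masses in units of M_s
Om = [g[n]*(M[n]/M[0])**2.5*math.exp(-(M[n] - M[0])/s)   # assumed abundance model
      for n in range(nmax + 1)]
tau = [tau0*(M[0]/M[n])**3 for n in range(nmax + 1)]     # hypothetical lifetime law

def omega_tot(t):
    # Sudden-decay approximation: level n drops out entirely at t = tau_n.
    return sum(o for o, tn in zip(Om, tau) if tn > t)

for t in (1.0, 1e6, 1e8, 5e8, 8e8, 2e9):
    print(t, omega_tot(t)/omega_tot(1.0))   # a gentle staircase, not one sharp drop
```

Because rescaling $\tau_0$ multiplies every $\tau_n$ by the same factor, the staircase simply translates along a logarithmic time axis, which is exactly the behavior described for the top row of Fig. \[fig6\].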
This is also evident in our curve for $\eta(t)$: for most of the cosmological history, the value $\eta\approx 0.72$ tells us that only approximately $28\%$ of the total dark-sector cosmological abundance is carried by the dominant (lightest) state in the ensemble, even at early times, while the remaining $72\%$ of the abundance is carried by the more massive states — particularly those which, though more massive, are nevertheless relatively low-lying. As a result of the sequential decays of such states, $\eta(t)$ — like $\Omegatot(t)$ — is also actually falling slightly [*throughout*]{} the cosmological evolution shown. It is only due to the decays of the relatively low-lying states near $t\approx 10^9 t_{\rm now}$ that $\eta(t)$ ultimately falls gently but noticeably to zero. At first glance, it may seem surprising that all three of our primary quantities $\Omega_{\rm tot}$, $w_{\rm eff}$, and $\eta$ are nearly constant at $t\approx t_{\rm now}$. However, this is ultimately the direct consequence of our benchmark choice $\tau_0=10^9 t_{\rm now}$: with this choice, those states within the ensemble which are decaying today are all extremely massive and thus carry very little abundance. The DDM nature of such an ensemble is nevertheless clear from its $\eta$-value, which is as high as $0.72$ even at the present time. In this connection, we again emphasize that taking $\tau_0=10^9 t_{\rm now}$ was merely a conservative choice which is not by itself intrinsic to the DDM framework; indeed we learned from Fig. \[fig5\] that we could easily have chosen $\tau_0$ as small as $\tau_0 \approx 10^2\tnow$ without running afoul of our look-back and $\weff$ constraints. Indeed, without further details concerning the precise nature of these ensembles (including, most critically, the ultimate decay products of their constituents), such small values for $\tau_0$ would have been equally viable. This observation is illustrated along the top row of Fig. 
\[fig6\], where we show the evolution of our blue “benchmark” curves as we vary $\tau_0$ between our conservative value $\tau_0\approx 10^9\tnow$ and the more extreme value $\tau_0\approx 10^2\tnow$. In general, changing $\tau_0$ does not affect the internal structure of the ensemble — it merely affects the lifetimes of the individual ensemble constituents, rescaling them all up or down together. Since it is these lifetimes which produce the non-trivial time-dependence for $\Omegatot$, $\weff$, and $\eta$, we expect that changing $\tau_0$ should preserve the general shapes of these curves and merely translate these curves along the time axis. This behavior is verified in the panels along the top row of Fig. \[fig6\]. Indeed, we can even see from these panels why $\tau_0 \approx 10^2 \tnow$ is the minimum value of $\tau_0$ that may be chosen for our benchmark point: choosing $\tau_0$ any smaller would shift our curves even further towards earlier times, whereupon $\Omegatot(t)$ would begin to experience significant variations within the interval $10^{-6}\tnow \lsim t \lsim \tnow$ and $\weff(\tnow)$ would begin to deviate significantly from zero. Such behavior would then violate our look-back and $\weff$ constraints, respectively. Let us now turn to the behavior of our $\Omegatot$, $\weff$, and $\eta$ curves as we vary $r$, as shown in the panels along the second row of Fig. \[fig6\]. Two observations underlie the behavior shown. First, we note that changing $r$ changes the lifetimes of the states at each mass level according to Eq. (\[eq:Lifetimes\]), with $\tau_n/\tau_0\to 0$ as $r\to 0$. This result is simple to understand: as $r\to 0$, the $n=0$ states become hierarchically lighter than the $n>0$ states, and thus the $n>0$ states have hierarchically shorter lifetimes. Second, we note that changing $r$ also changes the relative abundances which are generated at $t_c$ according to $$\frac{\widehat\Omega_n}{\widehat\Omega_0} \;=\; \frac{\hat g_n}{\hat g_0} \left(\frac{n+r^2}{r^2}\right)^{5/4} \exp\left(-\,\frac{\sqrt{n+r^2}-r}{s}\right)~.$$ \[nonmon\] This quantity is non-monotonic as a function of $r$, first dropping as $r$ is reduced from large values and ultimately hitting a minimum before increasing again and diverging as $r\to 0$. Indeed, for $n=1$ and $s$ set to its benchmark value $s=3.5/30\approx 0.117$, this minimum occurs at $r\approx 0.4$. These two effects are responsible for the behaviors shown in the second row of Fig. \[fig6\]. As $r$ decreases from its benchmark value with $\tau_0$ held fixed, the excited states with $n>0$ start decaying earlier and earlier. Rescaling our overall abundances in order to keep $\Omegatot(\tnow)=\OmegaCDM$ produces the effects shown in the left panel. Indeed, we see from this panel that the case with $r= 0.001$ actually violates our look-back and $\weff$ constraints, as already evident from Fig. \[fig3\]. Even the $\Omegatot(t)$ curve with $r=0.01$ is tightly constrained: shifting $\tau_0$ towards any smaller values below $10^9\tnow$ (i.e., shifting this curve further towards the left) also leads to violations of our look-back and $\weff$ constraints, as already anticipated in the left panel of Fig. \[fig5\]. Likewise, as a result of the observations below Eq. (\[nonmon\]), the relative sizes of the abundances $\Omega_n$ associated with the excited $n>0$ states relative to the abundance $\Omega_0$ associated with the $n=0$ ground state vary non-monotonically with $r$, shrinking as $r$ drops from 3.5 to approximately 0.4, and then growing again as $r$ drops still further. This then explains the non-monotonic behavior for $\eta(t)$ as a function of $r$, as shown in the right panel. By contrast, the effects of varying $s$ and $C$ are shown along the third and fourth rows of Fig. \[fig6\], respectively.
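The location of this minimum is easy to confirm numerically. Assuming the $n=1$ abundance ratio takes the illustrative form $\widehat\Omega_1/\widehat\Omega_0 \propto \left((1+r^2)/r^2\right)^{5/4} e^{-(\sqrt{1+r^2}-r)/s}$ (the constant degeneracy factor $\hat g_1/\hat g_0$ is independent of $r$ and cannot shift the minimum), a simple scan recovers the quoted value:

```python
import math

def omega1_over_omega0(r, s):
    # n = 1 abundance ratio, up to the r-independent degeneracy factor g_1/g_0:
    # a mass-ratio power law times a Boltzmann suppression factor.
    return (1.0 + 1.0/r**2)**1.25 * math.exp(-(math.sqrt(1.0 + r*r) - r)/s)

s = 3.5/30                                     # benchmark value, s ~ 0.117
grid = [0.05 + 0.001*i for i in range(2951)]   # scan r over [0.05, 3.0]
vals = [omega1_over_omega0(r, s) for r in grid]
r_min = grid[vals.index(min(vals))]
print(r_min)                                   # the minimum sits near r ~ 0.4
```

The divergence as $r\to 0$ comes entirely from the prefactor $((1+r^2)/r^2)^{5/4}$, while the Boltzmann factor remains bounded; the competition between the two produces the interior minimum.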
While the quantity $s$ governs the exponential rate at which the Boltzmann-suppressed abundances of the ensemble constituents [*decrease*]{} with $n$, the quantity $C$ governs the exponential rate at which the degeneracy of states for the ensemble [*grows*]{} with $n$. As a result, the effects of decreasing $s$ or increasing $C$ are largely similar to each other as far as $\Omegatot(t)$ is concerned, as evident in Fig. \[fig6\]: both tend to increase the primordial aggregate abundances $\widehat{\Omega}_n$ of the heavier states in the ensemble. This effect causes $\Omegatot(t)$ to begin to decline earlier and earlier as these heavier states are the first to decay. By contrast, it is important to note that increasing $C$ and decreasing $s$ nevertheless have [*opposite*]{} effects on the value of $\eta(\tnow)$: the former increases $\eta(\tnow)$, as anticipated in Fig. \[fig4\], while the latter decreases $\eta(\tnow)$, as anticipated in Fig. \[fig3\]. This difference occurs because increasing $C$ merely increases the state degeneracies $\hat g_n$ of the heavy states, thereby injecting more abundance into the heavy states relative to the light states, while decreasing $s$ has the effect of increasing the abundances of [*all*]{} of our states, including the abundance of the dominant abundance-carrier at $n=0$. This causes the total abundance of the ensemble to grow more rapidly than the abundances of the excited $n>0$ states alone, thereby decreasing $\eta(\tnow)$. One important feature to note from these plots is the appearance of a Hagedorn instability as $s\to 1/C$ (or equivalently as $C\to 1/s$). In these limiting cases, the total energy density $\Omegatot$ injected into the system through our confining phase transition at $t=t_c$ diverges, violating the constraint in Eq. (\[finitetotal\]). Such cases therefore violate our look-back and $\weff$ constraints, as evident in Fig. \[fig6\].
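The onset of this instability can be made concrete. Taking illustrative Hagedorn-like degeneracies $\hat g_n\sim n^{-B}e^{C\sqrt{n}}$ and Boltzmann factors $e^{-(M_n-M_0)/T_c}$ for assumed Regge masses $M_n=M_s\sqrt{n+r^2}$ (forms consistent with the parameters used in the text, though not spelled out in this excerpt), the level-$n$ contribution to the injected density scales as $e^{(C-1/s)\sqrt{n}}$ up to powers of $n$, so the sum converges only for $s\cdot C<1$:

```python
import math

def term(n, B, C, s, r=3.5):
    # Level-n contribution to the injected abundance, up to overall constants:
    # g_n ~ n^(-B) e^(C sqrt(n)) times (M_n/M_0)^(5/2) e^(-(M_n - M_0)/T_c).
    Mn = math.sqrt(n + r*r)
    return n**(-B)*math.exp(C*math.sqrt(n))*(Mn/r)**2.5*math.exp(-(Mn - r)/s)

B, C = 1.25, 2*math.pi/math.sqrt(3)
print(term(10**4, B, C, s=0.9/C))   # s*C = 0.9: deep-tower terms die off -> finite sum
print(term(10**4, B, C, s=1.1/C))   # s*C = 1.1: terms blow up -> Hagedorn divergence
```

This is just the usual statement that for $T_c$ above the Hagedorn temperature the exponential growth of $\hat g_n$ overwhelms the Boltzmann suppression.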
Indeed, the Hagedorn instability is a critical feature of theories with exponentially growing degeneracies of states [@Hagedorn]. Finally, we turn to the fifth and final row of Fig. \[fig6\]. Note that in order to remain within the self-consistency bound in Eq. (\[eq:CStringConsistCond\]), it is not possible to increase $B$ above our benchmark value $5/4$ when $C=2\pi/\sqrt{3}$. For this reason, we have chosen to hold $C$ fixed at a greater value, specifically $C=7$, when exploring the effects of varying $B$. Unfortunately, we see that the effects of variations in $B$ are barely distinguishable in these plots, even when $B$ is varied all the way from $B=1$ (corresponding to $D_\perp=1$) to $B=9/4$ (corresponding to $D_\perp= 6$). This tells us that the sorts of abundance-based or equation-of-state-based analyses we are doing here are relatively insensitive to the number of uncompactified transverse spacetime directions into which our dark-sector flux tube can vibrate, as long as $C$ (related to the total central charge of the degrees of freedom on the flux-tube worldsheet) is held fixed. Of course, in a realistic setting, there are likely to be many other more specific probes of $D_\perp$, including probes that are based on specific properties of the dark-sector dynamics. Our result here merely indicates that studies based on cosmological abundances alone are not likely to be the most useful in this regard. ![image](onr3_5s_r_30){width="60.00000%"} ![image](omega_n_r_s){width="45.00000%"} 0.05 truein ![image](omega_n_r_s_zoom){width="47.50000%"} We have seen in Fig. \[fig6\] how the total abundances $\Omegatot$ of our DDM ensembles vary as a function of time. However, it is also interesting to understand how the [*individual*]{} aggregate abundances $\widehat\Omega_n(t)$ at each mass level $n$ contribute to this behavior. The result is shown in Fig. \[fig7\] for our benchmark DDM model. As we see from Fig.
\[fig7\], there are many mass levels $n$ whose states contribute to $\Omegatot(\tnow)$: states with smaller values of $n$ carry larger abundances and have longer lifetimes, persisting into later times before decaying, while those with larger values of $n$ carry smaller abundances and have shorter lifetimes, decaying earlier. Indeed, this balancing between lifetimes and abundances is a fundamental hallmark of the DDM framework. Although the sum of these abundances at $t=\tnow$ is fixed at $\Omegatot(\tnow)=\OmegaCDM\approx 0.26$, we see that even states with relatively large values of $n$ have lifetimes $\tau_n$ exceeding $t_{\rm now}$ and thus contribute non-trivially to $\Omegatot(\tnow)$. Indeed, for our benchmark model, we find that there are no fewer than seven distinct mass levels contributing more than 0.01 to $\Omegatot(\tnow)$ and no fewer than ten distinct mass levels contributing more than $1\%$ of $\Omegatot(\tnow)$. ![image](pie_r35_sr_25){width="32.00000%"}  ![image](pie_r35_sr_30){width="32.00000%"}  ![image](pie_r35_sr_50){width="32.00000%"}  ![image](pie_r35_sr_65){width="32.00000%"} 0.5 truein ![image](pie_r4_sr_25){width="32.00000%"}  ![image](pie_r4_sr_30){width="32.00000%"}  ![image](pie_r4_sr_50){width="32.00000%"}  ![image](pie_r4_sr_65){width="32.00000%"} ![image](pie_r35_sr_25_new){width="32.00000%"}  ![image](pie_r35_sr_30_new){width="32.00000%"}  ![image](pie_r35_sr_50_new){width="32.00000%"}  ![image](pie_r35_sr_65_new){width="32.00000%"} 0.3 truein It is also interesting to examine how these results vary as a function of the ratio $r/s$ which, as we have seen, governs the overall mass scales associated with these DDM ensembles. The results are shown in Fig. \[fig8\], where we plot the aggregate fractions $\widehat \Omega_n(\tnow)/\Omegatot(\tnow)$ for a variety of different mass levels $n$ as a function of $r/s$. As evident in Fig. 
\[fig8\], the lightest state carries a larger and larger fraction of the total abundance as $r/s$ increases, resulting in scenarios which have smaller values of $\eta$ and which are therefore less DDM-like. By contrast, the lightest state carries a proportionally smaller fraction of the total abundance as $r/s$ decreases, and in fact may not even be the dominant state for sufficiently small $r/s$. Indeed, for $r/s=15$, we find that all states carry relatively small abundances, and it is actually the states at the $n=23$ mass level which collectively carry the largest individual abundance at the present time. Such scenarios are therefore extremely DDM-like. Putting all the pieces together, we can summarize our results as in Figs. \[fig9\] and \[fig10\].  Fig. \[fig9\] consists of a sequence of dark-matter pie charts showing the relative contributions to $\Omega_{\rm tot}(t_{\rm now})=\OmegaCDM \approx 0.26$ from the lowest-lying states for $r=3.5$ (top row) and $r=4$ (bottom row), with $r/s= \lbrace 25, 30, 50, 65\rbrace$ across each row. Within each pie, we illustrate the corresponding collective abundances $\widehat\Omega_n(\tnow)$ as separate slices, one for each value of $n$, while the numbers listed within each slice indicate the number of individual states $\hat g_n$ contributing at that mass level. For each pie chart we have also shown the corresponding values of $M_0$, $T_c$, and $M_s$. For these calculations we have used the input values $T_{\rm MRE}=0.7756$ eV, $g_{\rm MRE}=3.36$, and $ g_c=\lbrace 10.75, 61.75, 106.75, 106.75\rbrace$, respectively, for $r/s=\lbrace 25, 30, 50, 65\rbrace$. We have also assumed our standard benchmark values $B=5/4$, $C=2\pi/\sqrt{3}$, and $\tau_0=10^9\tnow$. Let us begin by focusing on the “benchmark” pie chart within Fig. \[fig9\] corresponding to $r=3.5$ and $r/s=30$.
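Several of the numbers quoted in this discussion can be reproduced with a compact sketch. The assumptions (not spelled out in this excerpt) are: exact two-colored partition numbers for the degeneracies $\hat g_n$ of the $D_\perp=2$ string, with $\hat g_0=1$; Regge masses $M_n=M_s\sqrt{n+r^2}$, where $r=M_0/M_s$ and $s=T_c/M_s$; level abundances $\widehat\Omega_n\propto\hat g_n(M_n/M_0)^{5/2}e^{-(M_n-M_0)/T_c}$; and no levels having yet decayed at $t=\tnow$ (appropriate for $\tau_0=10^9\tnow$):

```python
import math

def degeneracies(nmax):
    # Exact 2-colored partition numbers: coefficients of prod_k (1-x^k)^(-2),
    # taken here as the level degeneracies g_n of the D_perp = 2 string.
    g = [0]*(nmax + 1); g[0] = 1
    for _ in range(2):
        for k in range(1, nmax + 1):
            for n in range(k, nmax + 1):
                g[n] += g[n - k]
    return g

def level_fractions(r, s, nmax=300):
    # Fractions Omega_n(t_now)/Omega_tot(t_now) under the assumed abundance model.
    g = degeneracies(nmax)
    M = [math.sqrt(n + r*r) for n in range(nmax + 1)]
    om = [g[n]*(M[n]/M[0])**2.5*math.exp(-(M[n] - M[0])/s)
          for n in range(nmax + 1)]
    tot = sum(om)
    return [o/tot for o in om]

f = level_fractions(3.5, 3.5/30)                 # benchmark: r = 3.5, r/s = 30
print(round(1.0 - f[0], 2))                      # tower fraction eta, approx 0.72
print(sum(1 for x in f if 0.26*x > 0.01))        # mass levels with Omega_n > 0.01
print(sum(1 for x in f if x > 0.01))             # mass levels above 1% of the total

f15 = level_fractions(3.5, 3.5/15, nmax=1600)    # smaller r/s: far more DDM-like
print(f15.index(max(f15)))                       # dominant level is no longer n = 0
```

Under these assumptions the sketch recovers the benchmark tower fraction $\eta\approx 0.72$ and the level counts quoted earlier, and confirms that for $r/s=15$ the most abundant mass level sits far up the tower.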
For this pie chart, we see that the largest pie slice corresponds to the abundance contribution from the $n=0$ mass level, while the successively smaller pie slices progressing in a clockwise fashion within the pie chart correspond to the abundance contributions from successively higher mass levels. For this pie chart, we find that $M_0\approx 532$ GeV, $T_c\approx 18$ GeV, and $M_s\approx 152$ GeV. Note that this value for $M_s$ is in agreement with the $M_s$ contours shown in Fig. \[fig3\]. We also see geometrically from this pie chart that $\eta\approx 0.72$, in agreement with the results shown in Figs. \[fig3\], \[fig4\], and \[fig6\]. Given this, we can now investigate how this benchmark pie chart deforms as a function of $r/s$ and $r$. Results are illustrated in the other pie charts shown in Fig. \[fig9\]. We see in general that increasing $r$ from 3.5 to 4.0 (i.e., passing from the top row of pie charts in Fig. \[fig9\] to the bottom row) has the net effect of shifting cosmological abundance away from the ground state, thereby increasing $\eta$ and generally making each pie slice smaller while simultaneously lowering the corresponding mass scales. This is in complete accord with the results shown in Fig. \[fig3\]. Likewise, decreasing or increasing $r/s$ (i.e., moving left or right along either row) has the effect of increasing or decreasing $\eta$ while decreasing or increasing our corresponding mass scales. Indeed, we see that the variable $r/s$ allows us to interpolate between two extremes: traditional ensembles with high mass scales at large $r/s$ versus DDM-like ensembles with smaller mass scales at small $r/s$. We further observe that for sufficiently small $r/s$, the largest pie slice is no longer the $n=0$ slice (labelled ‘1’ in each pie chart) — as $r/s$ decreases, this honor gradually shifts towards the pie slices corresponding to higher mass levels. This is in accordance with the results in Fig. \[fig8\]. Fig. \[fig10\] is similar to the top row of Fig.
\[fig9\], except that we have now increased our values of $C$ and $B$ to $C=\sqrt{2}\pi$ and $B=3/2$, respectively. These new values maintain $c_{\rm int}=0$ and correspond to the $D_\perp=3$ scalar string. These changes in $C$ and $B$ increase the degeneracies $\hat g_n$ of states at each mass level, with the new values indicated within the corresponding pie slices. Although the cosmological abundances per state are not affected by the changes in $C$ and $B$, these increased degeneracies result in ensembles which are even more DDM-like and which have correspondingly smaller mass scales than those along the top row of Fig. \[fig9\]. These results are consistent with those shown in Fig. \[fig4\]. We see, then, that a tremendous variety of DDM ensembles exist which have the two fundamental features outlined in the Introduction — Regge trajectories and exponentially rising degeneracies of states. These ensembles are consistent with our look-back and $\weff$ constraints, and thus satisfy the zeroth-order constraints that may be imposed on such ensembles on the basis of their total energy densities and equations of state alone. We also observe an important feature: an inverse correlation between the tower fraction $\eta$ (which governs the extent to which our ensemble is truly DDM-like) and the magnitude of its underlying mass scales. Indeed, we have seen that while traditional ensembles typically have high corresponding mass scales, our ensembles become increasingly DDM-like for lower mass scales — all while remaining consistent with our look-back and $\weff$ constraints. These observations will likely be an important guide and ingredient in any future attempts to build realistic dark-matter models of this type.
Conclusions\[sec:Conclusion\]
=============================

In this paper, we have investigated the properties of a hitherto-unexplored class of DDM ensembles whose constituents are the composite states which emerge in the confining phase of a strongly-coupled dark sector. In ensembles of this sort, the masses of the constituent particles lie along well-defined Regge trajectories and the density of states within the ensemble grows exponentially as a function of the constituent-particle mass. This exponential growth is ultimately compensated by a Boltzmann suppression factor in the primordial abundances of the individual constituents, resulting in a finite total energy density $\Omega_{\rm tot}(t)$. We also showed that such ensembles can naturally exhibit a balancing between lifetimes and cosmological abundances of the sort required by the DDM framework. For each such ensemble, we calculated the corresponding effective equation-of-state parameter $\weff(t)$ as well as the tower fraction $\eta(t)$. We also imposed a number of zeroth-order model-independent phenomenological constraints which follow directly from knowledge of $\Omegatot(t)$, $\weff(t)$, and $\eta(t)$. In general, we found that the imposition of such constraints tends to introduce [*correlations*]{} between the different underlying variables which parametrize our DDM ensembles, so that an increase in one variable (such as, e.g., the exponential rate of growth in the state degeneracies) requires a corresponding shift in another variable (in this case, an increase in the lifetime of the lightest state in the ensemble, as indicated in the right panel of Fig. \[fig5\]).
Perhaps one of our most important results is the existence of an inverse correlation between the tower fraction $\eta(t)$ associated with a given DDM ensemble and its corresponding fundamental mass scales, so that the present-day cosmological abundance of the dark sector must be distributed across an increasing number of different states in the ensemble as these fundamental mass scales are dialed from the Planck scale down to the GeV scale. We are certainly not the first to consider dark-matter scenarios in which the dark matter is composite. Indeed, within the context of traditional dark-matter models, it has been appreciated for some time that the dark-matter particle could be a composite state. For example, the lightest technibaryon in technicolor theories was long ago identified as a promising dark-matter candidate [@NussinovTechnibaryon; @BarrTechnibaryon], and mechanisms [@GudnasonTechnibaryon] were advanced by which this particle could be rendered sufficiently light so as to be phenomenologically viable. Indeed, several explicit models [@RyttovTIMP] have been developed along these lines. Other more exotic baryon-like composites have also been advanced as potential dark-matter candidates [@GUTzilla]. Lattice studies of baryon-like states in the confining phases of both $SU(3)$ and $SU(4)$ gauge theories have also been performed [@AppelquistLattice1; @AppelquistLattice2; @AppelquistLattice3]. Scenarios in which the dark matter is a long-lived meson-like state appearing in the confining phase of a strongly-coupled hidden sector have been developed as well (for a review see, e.g., Ref. [@KribsReview]).
These include scenarios in which the dark-matter particle is a pseudo-Nambu-Goldstone boson (PNGB) stabilized by a dark-sector analogue of flavor symmetry [@KilicDarkPion; @HurKoPion; @HolthausenPion; @HatanakaPion; @AmetaniPNGB] or $G$-parity [@BaiPion], or alternatively by some other symmetry of the theory with no SM analogue [@Ectocolor; @BhattacharyaPion; @CarmonaPNGB; @FrigerioHeavyEta]. Complementary lattice studies of strongly-coupled dark-sector scenarios in which the dark-matter candidate is a PNGB have been performed as well [@LewisLattice; @HietanenLattice]. Scenarios in which the dark-matter candidate is not a PNGB, but rather a bound state of one heavy quark and one light quark, have also received recent attention [@CiDM; @CiDMParity; @CiDMCosmology], primarily due to the non-standard direct-detection phenomenology to which they give rise, as have scenarios in which the dark-matter candidate is a bound state of heavy quarks alone [@KribsQuirks]. More general studies of composite hidden-sector theories which give rise to meson-like or baryon-like dark-matter candidates within different regions of parameter space have also been performed [@AntipinCompositeHS1; @AntipinCompositeHS2]. Composite hidden-sector states consisting of non-Abelian gauge fields alone (so-called “glueball” states) have also long been recognized as promising dark-matter candidates [@OkunThetons1; @OkunThetons2] — a possibility which has received renewed attention [@SoniGlueballs; @ForestellGlueballs] as well. Indeed, hidden sectors involving cosmologically stable dark glueball states arise naturally in a variety of string constructions [@FaraggiGlueballs; @HalversonGlueballs], as well as in certain anomaly-mediated supersymmetry-breaking scenarios [@FengShadmiWIMPless].
In addition, the possibility that composite states in the dark sector could themselves form bound states (so-called “dark nuclei”) has also been studied [@DetmoldDarkNuclei1; @KrnjaicDarkNuclei], as has the possibility that these nuclei themselves could combine to form dark “atoms” or even dark “molecules” [@ClineDarkMolecules; @BoddyDarkHydrogen]. Indeed, lattice studies [@DetmoldDarkNuclei1; @DetmoldDarkNuclei2] corroborate the existence of stable dark nuclei states even within simple, two-flavor models with $SU(2)$ as the confining gauge group. In such models, a dark-sector equivalent of BBN serves as the mechanism for abundance generation. Such models can have interesting phenomenological consequences, especially in the regime in which a significant fraction of the dark-matter abundance is contributed by nuclei with large nucleon numbers [@LargeDarkNuclei1; @LargeDarkNuclei2]. Composite dark-matter models are interesting from a phenomenological perspective as well. For example, the states of a strongly-coupled hidden sector provide a natural context [@SIMPParadigmWacker] for strongly-interacting massive particle (SIMP) dark matter [@SIMPParadigmCarlson; @SIMPParadigmdeLaix] models, in which $3\rightarrow 2$ processes rather than $2\rightarrow 2$ processes play a dominant role in determining the dark-matter abundance. Indeed, a number of explicit models along these lines have been constructed [@SIMPModelHochberg; @SIMPModelHansen; @SIMPModelBernal; @SIMPModelKamada]. One of the most interesting ramifications of SIMP models is that they naturally give rise to dark-matter self-interactions with cross-sections sufficiently large that dark-matter scattering can have an observable impact on structure formation [@BoddySIDM1]. 
Such composite dark-matter models can have other phenomenological consequences as well, both at indirect-detection experiments [@BoddySIDM2; @DarkShower] and at colliders [@SIMPPhenoLee; @SIMPPhenoHochberg; @EnglertSIMPCollder; @BruggisserSIMPCollider]. Finally, the presence of additional non-Abelian gauge sectors, each with its own analogue of the QCD $\Theta$-angle, could have potential implications for the physics of axions and axion-like particles [@MultipleThetaAngles]. While all of these represent theoretically viable possibilities for the dark sector, the dark ensemble we have considered in this paper is unique for several important reasons. In traditional composite dark-matter models, it is usually a single bound state (typically the lightest) which serves as the primary dark-matter candidate and which therefore carries the full dark-matter abundance $\OmegaCDM$. While there may be several other dark states to which this bound state couples — and which may play a role in determining the abundance of the dark-matter candidate — it is nevertheless true that only one (or a few) composite states carry the dark-matter abundance $\OmegaCDM$ and thereby play a significant role in dark-sector phenomenology. By contrast, within the DDM framework, the dark-matter abundance is potentially spread across a relatively large set of composite states with various masses and lifetimes. Thus the stability required of a traditional dark-matter candidate is not a required feature of the DDM ensemble, thereby allowing the associated dark-matter abundance $\OmegaCDM(t)$ and dark-matter equation-of-state parameter $\weff(t)$ to vary with time — even during the current, matter-dominated era. 
Moreover, because the DDM framework requires an enlarged viewpoint in which the entire spectrum of composite states is potentially relevant for determining the properties of the dark sector, features that describe the entire composite spectrum suddenly become relevant for determining dark-sector phenomenology — features which would not have been relevant for previous studies within more traditional frameworks. These features include the fact that the masses of such bound states actually lie along Regge trajectories, and that the densities of such bound states experience a Hagedorn-like exponential growth as a function of mass. Indeed, these features do not play a role within traditional studies of composite dark states, but they have been the cornerstones of the analysis we have presented here. In this context, we note that a similar approach was also adopted in Ref. [@LargeDarkNuclei1] with regard to ensembles of dark nuclei whose abundances are generated via a dark-sector analogue of BBN.  This is indeed another context in which the full ensemble of dark-sector states plays an important role in dark-matter phenomenology. Given the initial steps presented here, there are many avenues for future research. For example, in this paper we have primarily focused on the phenomenology associated with the “sweet-spot” region in Eq. (\[sweetspot\]), as this region gives rise to a rich spectrum of associated mass scales and DDM-like behaviors. However, other regions may also be relevant for different situations, including the case of dark ensembles emerging from the bulk sectors of actual critical Type I string theories. Indeed, such theories typically have significantly larger central charges and values of $D_\perp$ than those corresponding to the $D_\perp=2$ flux tube, and thus correspond to values of $(B,C)$ which are very far from the “benchmark” values in Eq. (\[BCbenchmarks\]). Such strings also likely correspond to values of $(r,s)$ which are far from those in Eq. 
(\[sweetspot\]). Likewise, in our analysis we have taken $\kappa=1$ and $\xi=3$. Although these simple choices were well-motivated and conservative, it would certainly be interesting to explore the consequences of alternative choices. It would also be of interest to explore the ramifications of relaxing some of the approximations we have made in our analysis. These include the “instantaneous freeze-out” approximation that underpins the Boltzmann suppression factor in Eq. (\[eq:OmeganPrimordial\]), as well as our implicit assumption that the Hubble expansion within which our calculations have taken place is unaffected by potential gravitational backreaction from our continually evolving dark sector. While these approximations may certainly be justified to first order, a more refined calculation is still capable of altering our results numerically if not qualitatively. It would also be interesting to subject the DDM ensembles we have studied here to more detailed phenomenological constraints. The constraints we have studied here, such as our look-back and $\weff$ constraints, are those that follow directly (and in a completely model-independent manner) from knowledge of $\Omegatot(t)$ and $\weff(t)$ alone, and as such we have seen that they are sufficient to rule out vast regions of parameter space. It is nevertheless true that a plethora of additional constraints could be formulated once a particular scenario with a particular particle content is specified, and that imposing such additional constraints could potentially narrow our viable parameter space still further. Finally, and perhaps most importantly, in this paper we have assumed that the effects of [*intra-ensemble*]{} decays on the decay widths of the ensemble constituents are negligible. Such an assumption is certainly consistent with our other assumptions about the structure of the theory. 
In general, following our string-based approach to understanding the dynamics of these bound-state flux tubes, we may regard the strength of the interactions among the different dark hadrons in our DDM model as being governed by an additional parameter, a so-called “string coupling” $g_s$, which we have not yet specified but which does not impact any of the results we have presented thus far. In general, $g_s$ can be different from the coupling which governs the decays of our ensemble states to SM states and which is thus embedded within $\tau_0$. In an actual string construction, the value of $g_s$ is determined by the vacuum expectation value (VEV) of the dilaton field, but the dynamics that determines this VEV is not well understood. In general, however, intra-ensemble decays will provide an additional contribution to the total decay widths $\Gamma_n$, especially for the heavier ensemble constituents, and the decays of these heavier constituents can serve as an additional source for the abundances of the lighter constituents. The effects of such intra-ensemble decays will be discussed in more detail in Ref. [@toappear]. We would like to thank Emilian Dudas and Eduardo Rozo for discussions. The research activities of KRD, FH, and SS were supported in part by the Department of Energy under Grant DE-FG02-13ER41976 / DE-SC0009913; the research activities of KRD were also supported in part by the National Science Foundation through its employee IR/D program. The opinions and conclusions expressed herein are those of the authors, and do not represent those of any funding agencies. K. R. Dienes and B. Thomas, Phys. Rev. D [**85**]{}, 083523 (2012) \[arXiv:1106.4546 \[hep-ph\]\]. K. R. Dienes and B. Thomas, Phys. Rev. D [**85**]{}, 083524 (2012) \[arXiv:1107.0721 \[hep-ph\]\]. K. R. Dienes, S. Su and B. Thomas, Phys. Rev. D [**86**]{}, 054008 (2012) \[arXiv:1204.4183 \[hep-ph\]\]. K. R. Dienes, S. Su and B. Thomas, Phys. Rev. D [**91**]{}, no. 
5, 054002 (2015) \[arXiv:1407.2606 \[hep-ph\]\]. K. R. Dienes, J. Kumar and B. Thomas, Phys. Rev. D [**86**]{}, 055016 (2012) \[arXiv:1208.0336 \[hep-ph\]\]. K. R. Dienes, J. Kumar and B. Thomas, Phys. Rev. D [**88**]{}, no. 10, 103509 (2013) \[arXiv:1306.2959 \[hep-ph\]\]. K. K. Boddy, K. R. Dienes, D. Kim, J. Kumar, J. C. Park and B. Thomas, arXiv:1606.07440 \[hep-ph\]. K. K. Boddy, K. R. Dienes, D. Kim, J. Kumar, J. C. Park and B. Thomas, arXiv:1609.09104 \[hep-ph\]. K. R. Dienes and B. Thomas, Phys. Rev. D [**86**]{}, 055013 (2012) \[arXiv:1203.1923 \[hep-ph\]\]. K. R. Dienes, J. Fennick, J. Kumar and B. Thomas, to appear. K. R. Dienes, J. Fennick, J. Kumar and B. Thomas, Phys. Rev. D [**93**]{}, 083506 (2016) \[arXiv:1601.05094 \[hep-ph\]\]. R. Hagedorn, Nuovo Cim. Suppl.  [**3**]{}, 147 (1965). For reviews, see:\ M. B. Green, J. H. Schwarz and E. Witten, [*Superstring Theory, Vols. I and II*]{} (Cambridge University Press, 1987); J. Polchinski, [*String Theory, Vols. I and II*]{} (Cambridge University Press, 1998). K. R. Dienes and J. R. Cudell, Phys. Rev. Lett.  [**72**]{}, 187 (1994) \[hep-th/9309126\]. G. H. Hardy and S. Ramanujan, Proc. London Math. Soc.  [**17**]{}, 75 (1918). I. Kani and C. Vafa, Commun. Math. Phys.  [**130**]{}, 529 (1990). K. R. Dienes, Nucl. Phys. B [**429**]{}, 533 (1994) \[hep-th/9402006\]. Y. Nambu (unpublished, 1970). P. Ramond, Phys. Rev. D [**3**]{}, 2415 (1971). A. Neveu and J. H. Schwarz, Nucl. Phys. B [**31**]{}, 86 (1971). A. M. Polyakov, Nucl. Phys. B [**268**]{}, 406 (1986). M. B. Green, Phys. Lett. B [**266**]{}, 325 (1991). J. Polchinski and A. Strominger, Phys. Rev. Lett.  [**67**]{}, 1681 (1991). P. A. R. Ade [*et al.*]{} \[Planck Collaboration\], arXiv:1502.01589 \[astro-ph.CO\]. L. Anderson [*et al.*]{} \[BOSS Collaboration\], Mon. Not. Roy. Astron. Soc.  [**441**]{}, no. 1, 24 (2014) \[arXiv:1312.4877 \[astro-ph.CO\]\]. N. Suzuki, D. Rubin, C. Lidman, G. Aldering, R. Amanullah, K. Barbary, L. F. Barrientos and J. 
Botyanszki [*et al.*]{}, Astrophys. J.  [**746**]{}, 85 (2012) \[arXiv:1105.3470 \[astro-ph.CO\]\]. R. H. Cyburt, J. Ellis, B. D. Fields, F. Luo, K. A. Olive and V. C. Spanos, JCAP [**0910**]{}, 021 (2009) \[arXiv:0907.5003 \[astro-ph.CO\]\]. W. Hu and J. Silk, Phys. Rev.  D [**48**]{}, 485 (1993). W. Hu and J. Silk, Phys. Rev. Lett.  [**70**]{}, 2661 (1993). T. R. Slatyer, Phys. Rev. D [**87**]{}, no. 12, 123513 (2013) \[arXiv:1211.0283 \[astro-ph.CO\]\]. L. Accardo [*et al.*]{} \[AMS Collaboration\], Phys. Rev. Lett.  [**113**]{}, 121101 (2014). AMS-02 Collaboration, presentations at AMS Days at CERN, April 15-17, 2015. A. H. G. Peter, C. E. Moody and M. Kamionkowski, Phys. Rev. D [**81**]{}, 103501 (2010) \[arXiv:1003.0419 \[astro-ph.CO\]\]. M. Y. Wang, A. H. G. Peter, L. E. Strigari, A. R. Zentner, B. Arant, S. Garrison-Kimmel and M. Rocha, Mon. Not. Roy. Astron. Soc.  [**445**]{}, no. 1, 614 (2014) \[arXiv:1406.0527 \[astro-ph.CO\]\]. M. Y. Wang and A. R. Zentner, Phys. Rev. D [**82**]{}, 123507 (2010) \[arXiv:1011.2774 \[astro-ph.CO\]\]. Y. Gong and X. Chen, Phys. Rev. D [**77**]{}, 103511 (2008) \[arXiv:0802.2296 \[astro-ph\]\]. G. Blackadder and S. M. Koushiappas, Phys. Rev. D [**90**]{}, no. 10, 103527 (2014) \[arXiv:1410.0683 \[astro-ph.CO\]\]. S. De Lope Amigo, W. M. Y. Cheung, Z. Huang and S. P. Ng, JCAP [**0906**]{}, 005 (2009) \[arXiv:0812.4016 \[hep-ph\]\]. B. Audren, J. Lesgourgues, G. Mangano, P. D. Serpico and T. Tram, JCAP [**1412**]{}, no. 12, 028 (2014) \[arXiv:1407.2418 \[astro-ph.CO\]\]. E. Aubourg [*et al.*]{}, Phys. Rev. D [**92**]{}, no. 12, 123516 (2015) \[arXiv:1411.1074 \[astro-ph.CO\]\]. G. Blackadder and S. M. Koushiappas, arXiv:1510.06026 \[astro-ph.CO\]. S. Nussinov, Phys. Lett. B [**165**]{}, 55 (1985). S. M. Barr, R. S. Chivukula and E. Farhi, Phys. Lett. B [**241**]{}, 387 (1990). S. B. Gudnason, C. Kouvaris and F. Sannino, Phys. Rev. D [**73**]{}, 115003 (2006) \[hep-ph/0603014\]. T. A. Ryttov and F. Sannino, Phys. Rev. 
D [**78**]{}, 115010 (2008) \[arXiv:0809.0713 \[hep-ph\]\]. K. Harigaya, T. Lin and H. K. Lou, JHEP [**1609**]{}, 014 (2016) \[arXiv:1606.00923 \[hep-ph\]\]. T. Appelquist [*et al.*]{} \[Lattice Strong Dynamics (LSD) Collaboration\], Phys. Rev. D [**89**]{}, no. 9, 094508 (2014) \[arXiv:1402.6656 \[hep-lat\]\]. T. Appelquist [*et al.*]{}, Phys. Rev. D [**92**]{}, no. 7, 075030 (2015) \[arXiv:1503.04203 \[hep-ph\]\]. T. Appelquist [*et al.*]{}, Phys. Rev. Lett.  [**115**]{}, no. 17, 171803 (2015) \[arXiv:1503.04205 \[hep-ph\]\]. G. D. Kribs and E. T. Neil, Int. J. Mod. Phys. A [**31**]{}, no. 22, 1643004 (2016) \[arXiv:1604.04627 \[hep-ph\]\]. C. Kilic, T. Okui and R. Sundrum, JHEP [**1002**]{}, 018 (2010) \[arXiv:0906.0577 \[hep-ph\]\]. T. Hur and P. Ko, Phys. Rev. Lett.  [**106**]{}, 141802 (2011) \[arXiv:1103.2571 \[hep-ph\]\]. M. Holthausen, J. Kubo, K. S. Lim and M. Lindner, JHEP [**1312**]{}, 076 (2013) \[arXiv:1310.4423 \[hep-ph\]\]. H. Hatanaka, D. W. Jung and P. Ko, JHEP [**1608**]{}, 094 (2016) \[arXiv:1606.02969 \[hep-ph\]\]. Y. Ametani, M. Aoki, H. Goto and J. Kubo, Phys. Rev. D [**91**]{}, no. 11, 115007 (2015) \[arXiv:1505.00128 \[hep-ph\]\]. Y. Bai and R. J. Hill, Phys. Rev. D [**82**]{}, 111701 (2010) \[arXiv:1005.0008 \[hep-ph\]\]. M. R. Buckley and E. T. Neil, Phys. Rev. D [**87**]{}, no. 4, 043510 (2013) \[arXiv:1209.6054 \[hep-ph\]\]. S. Bhattacharya, B. Melić and J. Wudka, JHEP [**1402**]{}, 115 (2014) \[arXiv:1307.2647 \[hep-ph\]\]. A. Carmona and M. Chala, JHEP [**1506**]{}, 105 (2015) \[arXiv:1504.00332 \[hep-ph\]\]. M. Frigerio, A. Pomarol, F. Riva and A. Urbano, JHEP [**1207**]{}, 015 (2012) \[arXiv:1204.2808 \[hep-ph\]\]. R. Lewis, C. Pica and F. Sannino, Phys. Rev. D [**85**]{}, 014504 (2012) \[arXiv:1109.3513 \[hep-ph\]\]. A. Hietanen, C. Pica, F. Sannino and U. I. Sondergaard, Phys. Rev. D [**87**]{}, no. 3, 034508 (2013) \[arXiv:1211.5021 \[hep-lat\]\]. D. S. M. Alves, S. R. Behbahani, P. Schuster and J. G. Wacker, Phys. Lett. 
B [**692**]{}, 323 (2010) \[arXiv:0903.3945 \[hep-ph\]\]. M. Lisanti and J. G. Wacker, Phys. Rev. D [**82**]{}, 055023 (2010) \[arXiv:0911.4483 \[hep-ph\]\]. D. Spier Moreira Alves, S. R. Behbahani, P. Schuster and J. G. Wacker, JHEP [**1006**]{}, 113 (2010) \[arXiv:1003.4729 \[hep-ph\]\]. G. D. Kribs, T. S. Roy, J. Terning and K. M. Zurek, Phys. Rev. D [**81**]{}, 095001 (2010) \[arXiv:0909.2034 \[hep-ph\]\]. O. Antipin, M. Redi and A. Strumia, JHEP [**1501**]{}, 157 (2015) \[arXiv:1410.1817 \[hep-ph\]\]. O. Antipin, M. Redi, A. Strumia and E. Vigiani, JHEP [**1507**]{}, 039 (2015) \[arXiv:1503.08749 \[hep-ph\]\]. L. B. Okun, JETP Lett.  [**31**]{}, 144 (1980) \[Pisma Zh. Eksp. Teor. Fiz.  [**31**]{}, 156 (1979)\]. L. B. Okun, Nucl. Phys. B [**173**]{}, 1 (1980). A. Soni and Y. Zhang, Phys. Rev. D [**93**]{}, no. 11, 115025 (2016) \[arXiv:1602.00714 \[hep-ph\]\]. L. Forestell, D. E. Morrissey and K. Sigurdson, arXiv:1605.08048 \[hep-ph\]. A. E. Faraggi and M. Pospelov, Astropart. Phys.  [**16**]{}, 451 (2002) \[hep-ph/0008223\]. J. Halverson, B. D. Nelson and F. Ruehle, arXiv:1609.02151 \[hep-ph\]. J. L. Feng and Y. Shadmi, Phys. Rev. D [**83**]{}, 095011 (2011) \[arXiv:1102.0282 \[hep-ph\]\]. W. Detmold, M. McCullough and A. Pochinsky, Phys. Rev. D [**90**]{}, no. 11, 115013 (2014) \[arXiv:1406.2276 \[hep-ph\]\]. G. Krnjaic and K. Sigurdson, Phys. Lett. B [**751**]{}, 464 (2015) \[arXiv:1406.1171 \[hep-ph\]\]. J. M. Cline, Z. Liu, G. Moore and W. Xue, Phys. Rev. D [**90**]{}, no. 1, 015023 (2014) \[arXiv:1312.3325 \[hep-ph\]\]. K. K. Boddy, M. Kaplinghat, A. Kwa and A. H. G. Peter, arXiv:1609.03592 \[hep-ph\]. W. Detmold, M. McCullough and A. Pochinsky, Phys. Rev. D [**90**]{}, no. 11, 114506 (2014) \[arXiv:1406.4116 \[hep-lat\]\]. E. Hardy, R. Lasenby, J. March-Russell and S. M. West, JHEP [**1506**]{}, 011 (2015) \[arXiv:1411.3739 \[hep-ph\]\]. E. Hardy, R. Lasenby, J. March-Russell and S. M. West, JHEP [**1507**]{}, 133 (2015) \[arXiv:1504.05419 \[hep-ph\]\]. 
Y. Hochberg, E. Kuflik, T. Volansky and J. G. Wacker, Phys. Rev. Lett.  [**113**]{}, 171301 (2014) \[arXiv:1402.5143 \[hep-ph\]\]. E. D. Carlson, M. E. Machacek and L. J. Hall, Astrophys. J.  [**398**]{}, 43 (1992). A. A. de Laix, R. J. Scherrer and R. K. Schaefer, Astrophys. J.  [**452**]{}, 495 (1995) \[astro-ph/9502087\]. Y. Hochberg, E. Kuflik, H. Murayama, T. Volansky and J. G. Wacker, Phys. Rev. Lett.  [**115**]{}, no. 2, 021301 (2015) \[arXiv:1411.3727 \[hep-ph\]\]. M. Hansen, K. Lang[æ]{}ble and F. Sannino, Phys. Rev. D [**92**]{}, no. 7, 075036 (2015) \[arXiv:1507.01590 \[hep-ph\]\]. N. Bernal and X. Chu, JCAP [**1601**]{}, 006 (2016) \[arXiv:1510.08527 \[hep-ph\]\]. A. Kamada, M. Yamada, T. T. Yanagida and K. Yonekura, arXiv:1606.01628 \[hep-ph\]. K. K. Boddy, J. L. Feng, M. Kaplinghat and T. M. P. Tait, Phys. Rev. D [**89**]{}, no. 11, 115017 (2014) \[arXiv:1402.3629 \[hep-ph\]\]. K. K. Boddy, J. L. Feng, M. Kaplinghat, Y. Shadmi and T. M. P. Tait, Phys. Rev. D [**90**]{}, no. 9, 095016 (2014) \[arXiv:1408.6532 \[hep-ph\]\]. M. Freytsis, D. J. Robinson and Y. Tsai, Phys. Rev. D [**91**]{}, no. 3, 035028 (2015) \[arXiv:1410.3818 \[hep-ph\]\]. H. M. Lee and M. S. Seo, Phys. Lett. B [**748**]{}, 316 (2015) \[arXiv:1504.00745 \[hep-ph\]\]. Y. Hochberg, E. Kuflik and H. Murayama, JHEP [**1605**]{}, 090 (2016) \[arXiv:1512.07917 \[hep-ph\]\]. C. Englert, K. Nordstrom and M. Spannowsky, arXiv:1606.05359 \[hep-ph\]. S. Bruggisser, F. Riva and A. Urbano, arXiv:1607.02474 \[hep-ph\]. P. Di Vecchia and F. Sannino, Eur. Phys. J. Plus [**129**]{}, 262 (2014) \[arXiv:1310.0954 \[hep-ph\]\]. B. Audren, J. Lesgourgues, G. Mangano, P. D. Serpico and T. Tram, JCAP [**1412**]{}, no. 12, 028 (2014) \[arXiv:1407.2418 \[astro-ph.CO\]\]. V. Poulin, P. D. Serpico and J. Lesgourgues, JCAP [**1608**]{}, no. 08, 036 (2016) \[arXiv:1606.02073 \[astro-ph.CO\]\]. K. R. Dienes, F. Huang, J. Kost, S. Su, and B. Thomas, to appear. 
[^1]: E-mail address: [dienes@email.arizona.edu]{} [^2]: E-mail address: [huangfei@email.arizona.edu]{} [^3]: E-mail address: [shufang@email.arizona.edu]{} [^4]: E-mail address: [thomasbd@lafayette.edu]{} [^5]: These degeneracies $\hat g_n$ may be extracted as the coefficients of $q^n$ in a small-$q$ power-series expansion of the infinite product . With only minor modifications and a proper physical definition for $q$, this infinite product turns out to be the partition function of the scalar string theory in Eq. (\[scalarstring\]). [^6]: Note that the factor of $\sqrt{\pi/32}$ in Eq. (\[timetemp\]) is consistent with our adoption of Boltzmann statistics in Eq. (\[eq:OmeganPrimordial\]); for Bose-Einstein statistics this would instead become $\sqrt{45/16\pi^3}$.
--- abstract: 'Oblivious transfer is the cryptographic primitive where Alice sends one of two bits to Bob but is oblivious to the bit received. Using quantum communication, we can build oblivious transfer protocols with security provably better than any protocol built using classical communication. However, with imperfect apparatus one needs to consider other attacks. In this paper we present an oblivious transfer protocol which is impervious to lost messages.' author: - 'Jamie Sikora[^1]' bibliography: - 'paper.bib' date: 'October 3, 2011' title: '**On the existence of loss-tolerant quantum oblivious transfer protocols**' --- Introduction ============ Quantum information allows us to perform certain cryptographic tasks which are impossible using classical information alone. In 1984, Bennett and Brassard gave a quantum key distribution scheme which was later proven unconditionally secure against an eavesdropper [@M01; @LC99; @PS00]. This led to many new problems including finding quantum protocols for other cryptographic primitives such as *coin-flipping* and *oblivious transfer*. Coin-flipping is the cryptographic primitive where Alice and Bob generate a random bit over a communication channel. We discuss two kinds of coin-flipping protocols, *weak coin-flipping* where Alice wants outcome $0$ and Bob wants outcome $1$, and *strong coin-flipping* where there are no assumptions on desired outcomes. We define weak coin-flipping below. 
A *weak coin-flipping* protocol, denoted ${\mathrm{WCF}}$, with cheating probabilities $(A_{{\mathrm{WCF}}}, B_{{\mathrm{WCF}}})$ and bias ${\varepsilon}_{{\mathrm{WCF}}}$ is a protocol with no inputs and output $c \in { \{ 0, 1 \} }$ satisfying: - if Alice and Bob are honest, they output the same randomly generated bit $c$; - $A_{{\mathrm{WCF}}}$ is the maximum probability dishonest Alice can force honest Bob to accept the outcome $c=0$; - $B_{{\mathrm{WCF}}}$ is the maximum probability dishonest Bob can force honest Alice to accept the outcome $c=1$; - ${\varepsilon}_{{\mathrm{WCF}}} := \max \{ A_{{\mathrm{WCF}}}, B_{{\mathrm{WCF}}} \} - 1/2$. The idea is to design protocols which protect honest parties from cheating parties; there are no security guarantees when both parties are dishonest. We can assume neither party aborts in a ${\mathrm{WCF}}$ protocol. If, for instance, Alice detects Bob has cheated then she may declare herself the winner, i.e., the outcome is $c = 0$. This is not the case in strong coin-flipping since there is no sense of “winning.” A *strong coin-flipping* protocol, denoted ${\mathrm{SCF}}$, with cheating probabilities $(A_{{\mathrm{SCF}}}, B_{{\mathrm{SCF}}})$ and bias ${\varepsilon}_{{\mathrm{SCF}}}$ is a protocol with no inputs and output $c \in \set{0, 1, \textup{abort}}$ satisfying: - if Alice and Bob are honest, then they never abort and they output the same randomly generated bit $c \in { \{ 0, 1 \} }$; - $A_{{\mathrm{SCF}}}$ is the maximum probability dishonest Alice can force honest Bob to accept some outcome $c=a$, over both choices of $a \in { \{ 0, 1 \} }$; - $B_{{\mathrm{SCF}}}$ is the maximum probability dishonest Bob can force honest Alice to accept some outcome $c=b$, over both choices of $b \in { \{ 0, 1 \} }$; - ${\varepsilon}_{{\mathrm{SCF}}} := \max \{ A_{{\mathrm{SCF}}}, B_{{\mathrm{SCF}}} \} - 1/2$. We note here that ${\mathrm{SCF}}$ protocols can be used as ${\mathrm{WCF}}$ protocols. 
The only issue is if the outcome is “abort”. In this case, the party who detected the cheating announces themselves the winner. Doing this, the bias in the ${\mathrm{WCF}}$ protocol is the same as in the ${\mathrm{SCF}}$ protocol. Aharonov, Ta-Shma, Vazirani, and Yao [@ATVY00] first showed the existence of an ${\mathrm{SCF}}$ protocol with bias ${\varepsilon}_{{\mathrm{SCF}}} < 1/2$ followed shortly by Ambainis [@Amb01] who showed an ${\mathrm{SCF}}$ protocol with bias ${\varepsilon}_{{\mathrm{SCF}}} = 1/4$. As for lower bounds, Mayers [@May97] and Lo and Chau [@LC97] showed that bias ${\varepsilon}_{{\mathrm{SCF}}} = 0$ is impossible. Kitaev [@Kit03], and later Gutoski and Watrous [@GW07], extended this result to show that the bias of *any* ${\mathrm{SCF}}$ protocol satisfies ${\varepsilon}_{{\mathrm{SCF}}} \geq 1/\sqrt{2} - 1/2$. This bound was proven to be tight by Chailloux and Kerenidis [@CK09] who showed the existence of protocols with bias ${\varepsilon}_{{\mathrm{SCF}}}~<~1/\sqrt{2}~-~1/2 + \delta$ for any $\delta > 0$. As for ${\mathrm{WCF}}$ protocols, it was shown that the bias could be less than Kitaev’s bound. For example, the protocols in [@SR02; @KN04; @Moc05] provide biases of ${\varepsilon}_{{\mathrm{WCF}}} = 1/\sqrt{2} - 1/2$, ${\varepsilon}_{{\mathrm{WCF}}} = 0.239$, and ${\varepsilon}_{{\mathrm{WCF}}} = 1/6$, respectively. The best known lower bound for ${\mathrm{WCF}}$ is by Ambainis [@Amb01] who showed that a protocol with bias ${\varepsilon}_{{\mathrm{WCF}}}$ must use $\Omega (\log \log (1/{\varepsilon}_{{\mathrm{WCF}}}))$ rounds of communication. Then, in a breakthrough result, Mochon [@Moc07] showed the existence of ${\mathrm{WCF}}$ protocols with bias ${\varepsilon}_{{\mathrm{WCF}}} < \delta$ for any $\delta > 0$. Oblivious transfer is the cryptographic primitive where Alice sends to Bob one of two bits but is oblivious to the bit received. We define oblivious transfer and its notions of cheating below. 
An *oblivious transfer* protocol, denoted ${\mathrm{OT}}$, with cheating probabilities $(A_{{\mathrm{OT}}}, B_{{\mathrm{OT}}})$ and bias ${\varepsilon}_{{\mathrm{OT}}}$ is a protocol *with inputs* satisfying: - Alice inputs two bits $(x_0, x_1)$ and Bob inputs an index $b \in { \{ 0, 1 \} }$; - when Alice and Bob are honest they never abort, Bob learns $x_b$ perfectly, Bob gets no information about $x_{\bar{b}}$, and Alice gets no information about $b$; - $A_{{\mathrm{OT}}}$ is the maximum probability dishonest Alice can learn $b$ without Bob aborting the protocol; - $B_{{\mathrm{OT}}}$ is the maximum probability dishonest Bob can learn $x_0 \oplus x_1$ without Alice aborting the protocol; - ${\varepsilon}_{{\mathrm{OT}}} = \max \{ A_{{\mathrm{OT}}}, B_{{\mathrm{OT}}} \} - 1/2$. When a party cheats, we only refer to the probability with which they can learn the desired values without the other party aborting. For example, when Bob cheats, we do not require that he learns either bit with probability $1$. In the ${\mathrm{OT}}$ definition above there can be different ways to interpret the bias. For example, we could consider worst-case choices over inputs, or we could assume the inputs are chosen randomly, etc. The protocol construction given in this paper is independent of how the inputs are chosen so this is not an issue. Like weak coin-flipping, oblivious transfer has a related primitive which is useful for the analysis in this paper. 
A *randomized oblivious transfer* protocol, denoted ${\mathrm{Random \textup{-} OT}}$, with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}})$ and bias ${\varepsilon}_{{\mathrm{ROT}}}$ is a protocol with *no inputs* satisfying: - Alice outputs two randomly generated bits $(x_0, x_1)$ and Bob outputs two bits $(b, x_b)$ where $b \in { \{ 0, 1 \} }$ is independently, randomly generated; - when Alice and Bob are honest they never abort, Bob gets no information about $x_{\bar{b}}$, and Alice gets no information about $b$; - $A_{{\mathrm{ROT}}}$ is the maximum probability dishonest Alice can learn $b$ without Bob aborting the protocol; - $B_{{\mathrm{ROT}}}$ is the maximum probability dishonest Bob can learn $x_0 \oplus x_1$ without Alice aborting the protocol; - ${\varepsilon}_{{\mathrm{ROT}}} = \max \{ A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}} \} - 1/2$. We note here that a protocol is considered *fair* if the cheating probabilities for Alice and Bob are equal and *unfair* otherwise. ${\mathrm{OT}}$ is an interesting primitive since it can be used to construct secure two-party protocols [@EGL82; @C87; @R81]. It was shown by Lo [@Lo97] that ${\varepsilon}_{{\mathrm{OT}}} = 0$ is impossible. This result was improved by Chailloux, Kerenidis, and Sikora [@CKS10] who showed that every ${\mathrm{OT}}$ protocol satisfies ${\varepsilon}_{{\mathrm{OT}}} \geq 0.0586$. Various settings for oblivious transfer have been studied before such as the bounded-storage model [@DFSS08] and the noisy-storage model [@S10]. In this paper, we study only information theoretic security but we allow the possibility of lost messages (more on this below). Oblivious transfer has a rich history, has various definitions, and has many names such as the *set membership problem* [@JRS02] or *private database querying* [@JSGBBWZ10]. A *loss-tolerant protocol* is a quantum cryptographic protocol which is impervious to lost messages. 
That is, neither Alice nor Bob can cheat more by declaring that a message was lost (even if it was received) or by sending blank messages deliberately. We prefix a protocol with “$\mathrm{LT}$-” to indicate that it is loss-tolerant. The idea of loss-tolerance was first applied to strong coin-flipping by Berlin, Brassard, Bussieres, and Godbout in [@BBBG08]. They showed a vulnerability in the best known coin-flipping protocol construction by Ambainis [@Amb01]. They circumvented this problem and presented an ${\mathrm{LT \textup{-} SCF}}$ protocol with bias ${\varepsilon}_{{\mathrm{SCF}}} = 0.4$. Aharon, Massar, and Silman generalized this protocol to a family of ${\mathrm{LT \textup{-} SCF}}$ protocols with a slightly smaller bias at the cost of using more qubits in the communication [@AMS10]. Chailloux added an encryption step to the protocol in [@BBBG08] to improve the bias to ${\varepsilon}_{{\mathrm{SCF}}} = 0.359$ [@C10]. The best known protocol for ${\mathrm{LT \textup{-} SCF}}$ is by Ma, Guo, Yang, Li, and Wen [@MGYLW11] who use an EPR-based protocol which attains a bias of ${\varepsilon}_{{\mathrm{SCF}}} = 0.3536$. It remains an open problem to find the best possible biases for ${\mathrm{LT \textup{-} WCF}}$ and ${\mathrm{LT \textup{-} SCF}}$. In fact, we do not even know if there is an ${\mathrm{LT \textup{-} WCF}}$ protocol with bias less than the best possible bias for ${\mathrm{LT \textup{-} SCF}}$; they may in fact share the same smallest possible bias. The first approach to designing loss-tolerant oblivious transfer protocols was by Jakobi, Simon, Gisin, Bancal, Branciard, Walenta, and Zbinden [@JSGBBWZ10]. They designed a loss-tolerant protocol for private database querying which is also known as “$1$-out-of-$N$ oblivious transfer.” The protocol is not technically an oblivious transfer protocol (using the definition in this paper) since an honest Bob may receive too much information. 
However, it is practical in the sense that it is secure against the most evident attacks. The backbone of their protocol is the use of a quantum key distribution scheme. This differs from the loss-tolerant protocol in this paper which is based on weak coin-flipping. The results of this paper {#the-results-of-this-paper .unnumbered} ------------------------- We first present a protocol in Section \[example\] and prove it is not loss-tolerant. Then, in Section \[construction\], we show how to build ${\mathrm{LT \textup{-} OT}}$ protocols from ${\mathrm{LT \textup{-} WCF}}$ and ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocols. Namely, we prove the following theorem. \[theorem\] Suppose there exists an ${\mathrm{LT \textup{-} WCF}}$ protocol with cheating probabilities $(A_{{\mathrm{WCF}}}, B_{{\mathrm{WCF}}})$ and bias ${\varepsilon}_{{\mathrm{WCF}}}$ and an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}})$ and bias ${\varepsilon}_{{\mathrm{ROT}}}$. Then there exists an ${\mathrm{LT \textup{-} OT}}$ protocol with cheating probabilities $$\begin{aligned} A_{{\mathrm{OT}}} & = & A_{{\mathrm{WCF}}} \, | A_{{\mathrm{ROT}}} - B_{{\mathrm{ROT}}} | + \min \{ A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}} \}, \\ B_{{\mathrm{OT}}} & = & B_{{\mathrm{WCF}}} \, | A_{{\mathrm{ROT}}} - B_{{\mathrm{ROT}}} | + \min \{ A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}} \}.\end{aligned}$$ This protocol has bias $${\varepsilon}_{{\mathrm{OT}}} \leq | A_{{\mathrm{ROT}}} - B_{{\mathrm{ROT}}} | + \min \{ A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}} \} - 1/2 = {\varepsilon}_{{\mathrm{ROT}}}.$$ We have ${\varepsilon}_{{\mathrm{OT}}} < {\varepsilon}_{{\mathrm{ROT}}}$ when ${\varepsilon}_{{\mathrm{WCF}}} < 1/2$ and $A_{{\mathrm{ROT}}} \neq B_{{\mathrm{ROT}}}$. Furthermore, the ${\mathrm{OT}}$ protocol is fair when the ${\mathrm{LT \textup{-} WCF}}$ protocol is fair. 
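As a numerical sanity check of the formulas above, the following sketch (the helper name `ot_cheating_probs` is ours, not the paper's) implements the two cheating-probability expressions and evaluates them for an unfair ${\mathrm{Random \textup{-} OT}}$ primitive with $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (1, 1/2)$ combined with a fair ${\mathrm{WCF}}$ protocol of bias $0.3536$:

```python
def ot_cheating_probs(a_wcf, b_wcf, a_rot, b_rot):
    """Cheating probabilities of the composed LT-OT protocol:
    X_OT = X_WCF * |A_ROT - B_ROT| + min{A_ROT, B_ROT} for X in {A, B}."""
    gap = abs(a_rot - b_rot)
    base = min(a_rot, b_rot)
    return a_wcf * gap + base, b_wcf * gap + base

# Fair LT-WCF with bias 0.3536  =>  A_WCF = B_WCF = 1/2 + 0.3536 = 0.8536
a_ot, b_ot = ot_cheating_probs(0.8536, 0.8536, a_rot=1.0, b_rot=0.5)
print(a_ot, b_ot, max(a_ot, b_ot) - 0.5)  # both ~0.9268, bias ~0.4268
```

Note that since $\max\{A,B\} = |A-B| + \min\{A,B\}$, setting $A_{{\mathrm{WCF}}} = B_{{\mathrm{WCF}}} = 1$ recovers ${\varepsilon}_{{\mathrm{OT}}} = {\varepsilon}_{{\mathrm{ROT}}}$, while any ${\varepsilon}_{{\mathrm{WCF}}} < 1/2$ strictly improves the bias whenever $A_{{\mathrm{ROT}}} \neq B_{{\mathrm{ROT}}}$.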
In Subsection \[unfair\], we show the existence of an unfair ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (1, 1/2)$. Combining this with the fact that there is a fair ${\mathrm{LT \textup{-} WCF}}$ protocol with bias ${\varepsilon}_{{\mathrm{WCF}}} = 0.3536$ [@MGYLW11] we get the following corollary. \[cor\] There exists a fair ${\mathrm{LT \textup{-} OT}}$ protocol with bias ${\varepsilon}_{{\mathrm{OT}}} = 0.4268$. An example of a ${\mathrm{\mathbf{Random \textup{-} OT}}}$ protocol that is not loss-tolerant {#example} ============================================================================================= In this section, we examine a protocol for ${\mathrm{Random \textup{-} OT}}$ and show it is not loss-tolerant. This protocol has the same vulnerability as the best known coin-flipping protocol constructions based on bit-commitment, see [@BBBG08] for details. - Bob randomly chooses $b \in { \{ 0, 1 \} }$ and sends Alice half of the two-qutrit state $$\ket{\phi_b} := \frac{1}{\sqrt{2}} \ket{bb} + \frac{1}{\sqrt{2}} \ket{22}.$$ - Alice randomly chooses $x_0, x_1 \in { \{ 0, 1 \} }$ and applies the following unitary to the qutrit $$\ket{0} \to (-1)^{x_0} \ket{0}, \quad \ket{1} \to (-1)^{x_1} \ket{1}, \quad \ket{2} \to \ket{2}.$$ - Alice returns the qutrit to Bob. Bob now has the two-qutrit state $$\frac{(-1)^{x_b}}{\sqrt{2}} \ket{bb} + \frac{1}{\sqrt{2}} \ket{22}.$$ - Bob performs the measurement $\{ \Pi_0 := | \phi_b \rangle \langle \phi_b |, \; \Pi_1 := \I - \Pi_0 \}$ on the state. - If the outcome is $\Pi_0$ then $x_b=0$. If the outcome is $\Pi_1$ then $x_b=1$. - Any lost messages are declared and the protocol is restarted from the beginning. It has been shown in [@CKS10] that Bob can learn $x_0 \oplus x_1$ with probability $1$ and Alice can learn $b$ with maximum probability $3/4$. 
However, this does not take into account “lost-message strategies.” We now show such a strategy and how Alice can learn $b$ perfectly. Suppose Alice measures the first message in the computational basis. If she sees outcome “$0$” or “$1$” then she knows Bob’s index $b$ with certainty. If the outcome is “$2$” then she replies to Bob, “Sorry, your message was lost.” Then they restart the protocol and Alice can measure again. Eventually, Alice will learn $b$ perfectly proving this protocol is not loss-tolerant. This protocol illustrates another interesting point about the design of ${\mathrm{OT}}$ protocols. One may not be able to simply change the amplitudes in the starting states to balance the cheating probabilities. For example, if we were to change the amplitudes in $\ket{\phi_b}$, then Bob would have a nonzero probability of getting the wrong value for $x_b$. Thus, balancing an unfair ${\mathrm{OT}}$ protocol is not as straightforward as it can be in coin-flipping. Constructing loss-tolerant oblivious transfer protocols {#construction} ======================================================= In this section, we prove Theorem \[theorem\] by constructing an ${\mathrm{LT \textup{-} OT}}$ protocol from an ${\mathrm{LT \textup{-} WCF}}$ protocol and a (possibly unfair) ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol. In doing so, we have to overcome some issues that are not present when designing ${\mathrm{LT \textup{-} SCF}}$ protocols. These issues include: - it is not always possible to simply reset a protocol with inputs; - balancing the cheating probabilities can be difficult; - it is not possible to switch the roles of Alice and Bob since Bob must be the receiver; - an honest party must not learn extra information about the other party’s inputs (or outputs in the case of ${\mathrm{Random \textup{-} OT}}$). We deal with these issues by reducing the problem one step at a time. 
First we reduce the task of finding ${\mathrm{LT \textup{-} OT}}$ protocols to finding ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocols in Subsection \[same\]. Then we build an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol from an ${\mathrm{LT \textup{-} WCF}}$ protocol and two (possibly unfair) ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocols in Subsection \[wcf\]. In Subsection \[symmetry\], we show how to create the two ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocols from a single ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol. Finally, we show an unfair ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol in Subsection \[unfair\] to prove Corollary \[cor\]. Equivalence between ${\mathrm{\mathbf{LT \textup{-} OT}}}$ protocols and ${\mathrm{\mathbf{LT \textup{-} Random \textup{-} OT}}}$ protocols with respect to bias {#same} ---------------------------------------------------------------------------------------------------------------------------------------------------------------- Having a protocol with inputs is an issue when building protocols loss-tolerantly. In recent ${\mathrm{LT \textup{-} SCF}}$ protocols, if messages were lost for any reason, then the protocol is simply restarted at some point, but this is not always an option with ${\mathrm{OT}}$ because the inputs could have context, e.g., Alice’s bits could be database entries. For this reason, we cannot simply “reset” them and repeat the protocol. To remedy this issue, we use ${\mathrm{Random \textup{-} OT}}$. It is well known that ${\mathrm{OT}}$ and ${\mathrm{Random \textup{-} OT}}$ share the same cheating probabilities, i.e., if there exists an ${\mathrm{OT}}$ protocol with cheating probabilities $(A_{{\mathrm{OT}}}, B_{{\mathrm{OT}}}) = (x,y)$ then there exists a ${\mathrm{Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (x,y)$, and vice versa. 
For completeness, we show these reductions and prove they preserve loss-tolerance.

- Alice randomly chooses $x_0, x_1 \in { \{ 0, 1 \} }$ and Bob randomly chooses $b \in { \{ 0, 1 \} }$.

- Alice and Bob input the choices of bits above into the ${\mathrm{LT \textup{-} OT}}$ protocol so that Bob learns $x_b$.

- Alice outputs $(x_0, x_1)$ and Bob outputs $(b, x_b)$.

It is straightforward to see that this reduction preserves the loss-tolerance of the ${\mathrm{LT \textup{-} OT}}$ protocol since we are only restricting how the inputs are chosen. More interesting is the reduction from ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ to ${\mathrm{LT \textup{-} OT}}$.

- Alice and Bob decide on their desired choices of inputs to the ${\mathrm{LT \textup{-} OT}}$ protocol.

- Alice and Bob use an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol to generate the output $(x_0, x_1)$ for Alice and $(b, x_b)$ for Bob.

- Bob tells Alice if his output bit $b$ is equal to his desired index. If it is not equal, Bob changes it and Alice switches her two bits.

- Alice tells Bob which of her two bits $(x_0, x_1)$ are equal to her desired inputs. Alice and Bob flip their outcome bits accordingly.

This reduction is a way to derandomize the outputs of the ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol. We see that this also preserves the loss-tolerance of the ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol since classical information can simply be resent if lost in transmission. Using the reductions above, we have reduced the task of finding ${\mathrm{LT \textup{-} OT}}$ protocols to finding ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocols.

Creating ${\mathrm{\mathbf{LT \textup{-} Random \textup{-} OT}}}$ protocols {#wcf}
---------------------------------------------------------------------------

There is a simple construction of an ${\mathrm{SCF}}$ protocol with cheating probabilities approximately $3/4$ (i.e., bias ${\varepsilon}\approx 1/4$) and it proceeds as follows.
Alice and Bob first use a ${\mathrm{WCF}}$ protocol with bias ${\varepsilon}\approx 0$. The “winner” gets to flip a coin to determine the outcome of the ${\mathrm{SCF}}$ protocol. Of course, a dishonest player would like to “win” the ${\mathrm{WCF}}$ protocol since then they have total control of the ${\mathrm{SCF}}$ outcome. We mimic this idea to create a protocol prototype for ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ and discuss why it does not work. - Alice randomly chooses two bits $(x_0, x_1)$ and Bob randomly chooses an index $b \in { \{ 0, 1 \} }$. - Alice and Bob perform an ${\mathrm{LT \textup{-} WCF}}$ protocol with bias ${\varepsilon}_{{\mathrm{WCF}}}$ to create random $c \in { \{ 0, 1 \} }$. - If $c = 0$, then Bob sends $b$ to Alice. Alice then replies with $x_b$. - If $c = 1$, then Alice sends $(x_0, x_1)$ to Bob. This protocol has bias ${\varepsilon}_{{\mathrm{ROT}}} < 1/2$ if ${\varepsilon}_{{\mathrm{WCF}}} < 1/2$. However, the problem is that honest Alice learns $b$ with probability $3/4$ when Bob is honest. This is simply not allowed in a ${\mathrm{Random \textup{-} OT}}$ protocol because honest Alice should never obtain any information about $b$. Honest Bob learns $x_0 \oplus x_1$ with probability $3/4$, which is also not allowed since he should only learn $x_0$ or $x_1$. This illustrates another issue when designing ${\mathrm{OT}}$ and ${\mathrm{Random \textup{-} OT}}$ protocols. To remedy this problem, instead of Alice and Bob revealing their bits entirely, they can use (possibly unfair) ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocols. We present a modified version of the protocol below. - Alice and Bob perform an ${\mathrm{LT \textup{-} WCF}}$ protocol with cheating probabilities $(A_{{\mathrm{WCF}}}, B_{{\mathrm{WCF}}})$ and bias ${\varepsilon}_{{\mathrm{WCF}}}$ to create random $c \in { \{ 0, 1 \} }$. 
- If $c = 0$, then Alice and Bob generate their outputs using an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (x,y)$, where $x \geq y$. - If $c = 1$, then Alice and Bob generate their outputs using an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (y,x)$. - Alice and Bob abort if and only if either ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol is aborted. We now prove that this ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol has cheating probabilities equal to those in Theorem \[theorem\]. We show it for cheating Alice as the case for cheating Bob is almost identical. Since $x \geq y$, Alice would prefer the outcome of the ${\mathrm{WCF}}$ protocol to be $c=0$. She can force $c=0$ with probability $A_{{\mathrm{WCF}}}$ and in this case she can learn $b$ with probability $x$. If $c=1$, she can learn $b$ with probability $y$. Letting $A'_{{\mathrm{ROT}}}$ be the amount she can learn $b$ in the protocol above, we have $$A'_{{\mathrm{ROT}}} = A_{{\mathrm{WCF}}} \, x + (1-A_{{\mathrm{WCF}}}) \, y = A_{{\mathrm{WCF}}} \, (x-y) + y.$$ All that remains to prove Theorem \[theorem\] is to show that an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (\alpha, \beta)$ implies the existence of an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (\beta, \alpha)$, for any $\alpha, \beta \in [1/2, 1]$. This way, we can just set $x = \max \{ \alpha, \beta \}$ and $y = \min \{ \alpha, \beta \}$. 
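The optimization above is easy to spot-check numerically (our sketch, not part of the proof): with $x = \max\{\alpha,\beta\}$ and $y = \min\{\alpha,\beta\}$, the quantity $A_{{\mathrm{WCF}}}\,(x-y)+y$ agrees with the expression $A_{{\mathrm{WCF}}} \, |A_{{\mathrm{ROT}}} - B_{{\mathrm{ROT}}}| + \min \{A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}\}$ in Theorem \[theorem\].

```python
import random

def composed_cheating(p_wcf, x, y):
    """Optimal cheating probability when a party can force its preferred
    WCF outcome with probability p_wcf, reaching the subprotocol where it
    cheats with probability x (and otherwise with probability y <= x)."""
    return p_wcf * x + (1 - p_wcf) * y

random.seed(0)
for _ in range(1000):
    alpha = random.uniform(0.5, 1.0)
    beta = random.uniform(0.5, 1.0)
    p_wcf = random.uniform(0.5, 1.0)
    x, y = max(alpha, beta), min(alpha, beta)
    lhs = composed_cheating(p_wcf, x, y)
    rhs = p_wcf * abs(alpha - beta) + min(alpha, beta)  # theorem's form
    assert abs(lhs - rhs) < 1e-12
print("composed cheating probability matches the theorem's expression")
```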
Symmetry in ${\mathrm{\mathbf{LT \textup{-} Random \textup{-} OT}}}$ protocols {#symmetry} ------------------------------------------------------------------------------ Suppose we have an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (\alpha, \beta)$, for some $\alpha, \beta \in [1/2, 1]$. We now show how to create an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (\beta, \alpha)$. The trick is to switch the roles of Alice and Bob. 1. Alice and Bob use an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (\alpha, \beta)$ except that Bob is the sender and Alice is the receiver. Let Alice’s output be $(b, x_{b})$ and let Bob’s output be $(x_0, x_1)$. 2. Alice randomly chooses $d \in { \{ 0, 1 \} }$ and sends $d \oplus x_{b}$ to Bob. 3. Alice outputs $(x'_0, x'_1) = (d, d \oplus b)$ and Bob outputs $(b', m) = (x_0 \oplus x_1, d \oplus x_{b} \oplus x_0)$. 4. Alice and Bob abort if and only if the ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol is aborted. Notice this protocol is loss-tolerant since classical messages can be resent if lost in transmission. We can write Bob’s output $m$ as $d \oplus x_{b} \oplus x_0 = d \oplus bb'$. Thus, if $b'=0$ then $m = d = x'_0$ and if $b' = 1$ then $m = d \oplus b = x'_1$. Therefore Bob gets the correct value for $x'_{b'}$. Since $x'_0 \oplus x'_1 = d \oplus (d \oplus b) = b$, honest Bob gets no information about Alice’s other bit and cheating Bob can learn $x'_0 \oplus x'_1$ with maximum probability $\alpha$. Since $b' = x_0 \oplus x_1$, honest Alice gets no information about $b'$ and cheating Alice can learn $b'$ with maximum probability $\beta$. Therefore, $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (\beta, \alpha)$ as desired. 
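Steps 1–3 of this reduction are purely classical, so the correctness claims can be verified exhaustively over the four random bits (a check of ours, mirroring the argument above):

```python
from itertools import product

ok = True
for b, x0, x1, d in product((0, 1), repeat=4):
    xb = (x0, x1)[b]            # Alice's receiver output from step 1
    msg = d ^ xb                # Alice's message in step 2
    alice_out = (d, d ^ b)      # (x'_0, x'_1) from step 3
    bob_index = x0 ^ x1         # b'
    bob_bit = msg ^ x0          # m = d ^ x_b ^ x_0
    ok = ok and bob_bit == alice_out[bob_index]        # Bob holds x'_{b'}
    ok = ok and (alice_out[0] ^ alice_out[1]) == b     # x'_0 ^ x'_1 = b
print(ok)  # True
```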
Since $b, x_0, x_1$, and $d$ are all randomly generated, so are $x'_0, x'_1$, and $b'$ making this a valid ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol. This completes the proof of Theorem \[theorem\]. An unfair ${\mathrm{\mathbf{LT \textup{-} Random \textup{-} OT}}}$ protocol {#unfair} --------------------------------------------------------------------------- We present here an ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{ROT}}}, B_{{\mathrm{ROT}}}) = (1/2, 1)$. Note that even though this protocol has bias ${\varepsilon}_{{\mathrm{ROT}}} = 1/2$, it can be used to create a protocol with smaller bias using recent ${\mathrm{LT \textup{-} WCF}}$ protocols and Theorem \[theorem\]. - Bob randomly chooses an index $b \in { \{ 0, 1 \} }$ and another random bit $d \in { \{ 0, 1 \} }$. - Bob sends Alice the qubit $H^b \ket{d}$. - Alice randomly chooses $x_0, x_1 \in { \{ 0, 1 \} }$ and applies the unitary $X^{x_0} Z^{x_1}$ to the qubit. - Alice returns the qubit to Bob which is in the state $X^{x_0} Z^{x_1} H^b \ket{d} = H^b \ket{x_b \oplus d}$ (up to global phase). - Bob has a two-outcome measurement (depending on $b$ and $d$) to learn $x_b$ perfectly. - If any messages are lost the protocol is restarted from the beginning. We see that this is a valid ${\mathrm{Random \textup{-} OT}}$ protocol. Firstly, because honest Bob learns $x_b$ and gets no information about $x_{\bar b}$ (since $H^b \ket{x_b \oplus d}$ does not involve $x_{\bar{b}}$). Secondly, Alice cannot learn any information about $b$, even if she is dishonest, since the density matrices for $b=0$ and $b=1$ are identical. Therefore $A_{{\mathrm{ROT}}} = 1/2$. This protocol is loss-tolerant concerning cheating Alice since $b$ and $d$ are reset if any messages are lost so Alice cannot accumulate useful information. It is also loss-tolerant concerning cheating Bob since he can already learn both of Alice’s bits perfectly. 
He can do this by first sending Alice half of $$\ket{\Phi^+} = \dfrac{1}{\sqrt 2} \ket{00} + \dfrac{1}{\sqrt 2} \ket{11}.$$ Each choice of $(x_0, x_1)$ corresponds to Bob having a different Bell state at the end of the protocol. From this, $x_0$ and $x_1$ can be perfectly inferred, yielding $B_{{\mathrm{ROT}}} = 1$.

Conclusions and open questions
==============================

We have designed a way to build ${\mathrm{LT \textup{-} OT}}$ protocols by using an ${\mathrm{LT \textup{-} WCF}}$ protocol to help balance the cheating probabilities in a (possibly unfair) ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol. This construction uses well-known reductions between ${\mathrm{OT}}$ and ${\mathrm{Random \textup{-} OT}}$ and the reduction to switch the roles of Alice and Bob. The construction in this paper is robust enough to design ${\mathrm{OT}}$ protocols with other definitions of cheating Bob. Suppose that Bob wishes to learn $f(x_0, x_1)$ where $f \neq \mathrm{XOR}$ is some functionality. In this case, we may not be able to switch the roles of Alice and Bob in a way that switches the cheating probabilities as in Subsection \[symmetry\]. However, instead of using one ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocol and creating another from it, we could just as easily have used two different ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocols (with a consistent notion of cheating Bob). A limitation of this protocol design is that it uses ${\mathrm{LT \textup{-} Random \textup{-} OT}}$ protocols as subroutines. Even if ${\mathrm{LT \textup{-} WCF}}$ protocols with bias ${\varepsilon}_{{\mathrm{WCF}}} \approx 0$ are constructed, using the protocols in Subsection \[unfair\] can reduce the bias to only ${\varepsilon}_{{\mathrm{OT}}} \approx 1/4$.
It would be interesting to see if there exists an ${\mathrm{LT \textup{-} OT}}$ protocol with cheating probabilities $(A_{{\mathrm{OT}}}, B_{{\mathrm{OT}}}) = (\alpha, \beta)$ where $\alpha + \beta < 3/2$. An open question is to show if using more ${\mathrm{LT \textup{-} WCF}}$ subroutines can help improve the bias. In [@CK09], many ${\mathrm{WCF}}$ protocols were used to drive the bias of a ${\mathrm{SCF}}$ protocol down towards the optimal value of $1/\sqrt{2} - 1/2$. Can something similar be done for ${\mathrm{OT}}$ or ${\mathrm{LT \textup{-} OT}}$? Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank Ashwin Nayak and Levent Tunçel for helpful discussions. I acknowledge support from NSERC, MITACS, and ERA (Ontario). [^1]: Department of Combinatorics & Optimization and Institute for Quantum Computing, University of Waterloo. Address: 200 University Ave. W., Waterloo, ON, N2L 3G1, Canada. Email: [ jwjsikor@uwaterloo.ca]{}.
---
abstract: 'The minimal faithful permutation degree of a finite group $G$, denoted by $\mu(G)$, is the least non-negative integer $n$ such that $G$ embeds inside the symmetric group $\operatorname{Sym}(n)$. In this paper, we outline a Magma proof that $10$ is the smallest degree for which there are groups $G$ and $H$ such that $\mu(G \times H) < \mu(G)+ \mu(H)$.'
author:
- 'Scott H. Murray'
- Neil Saunders
title: Magma Proof of Strict Inequalities for Minimal Degrees of Finite Groups
---

[^1] [^2]

Introduction
============

The study of this topic dates back to Johnson [@J71] and Wright [@W75], who among other things investigated the inequality $$\label{eq:directsum} \mu(G\times H) \leq \mu(G) + \mu(H),$$ which clearly holds. Johnson first showed that equality holds when $G$ and $H$ have coprime orders or are abelian. Wright went further, showing that equality holds when $G$ and $H$ are $p$-groups, and hence extended this to nilpotent groups. In that same paper, he constructs a class of groups $\mathscr{C}$ with the defining property that for every $G$ in $\mathscr{C}$, there exists a nilpotent subgroup $G_1$ in $G$ such that $\mu(G_1)=\mu(G)$. It is clear that equality holds for any two groups in $\mathscr{C}$ and that $\mathscr{C}$ is closed under taking direct products. Wright [@W75] asked the question: does $\mu(G \times H)= \mu(G) + \mu(H)$ hold for all finite groups $G$ and $H$? The referee of [@W75] provided an example of strict inequality of degree $15$ and attached it as an addendum to that paper. The second author of this article recognised that the example quoted in that paper involved the complex reflection group $G(5,5,3)$ and its centraliser in $\operatorname{Sym}(15)$. This led to the investigation in [@S07], where the second author proved that a similar result occurs with the complex reflection groups $G(4,4,3)$ and $G(2,2,5)$, which are of degree $12$ and $10$ respectively.
That is, these groups have non-trivial centralisers in their minimal embedding that intersect their embedded image trivially. In [@S08], the second author extended this idea, exhibiting that for $p$ and $q$ distinct odd primes, with $q \geq 5$ or $q=3$ and $p \not\equiv 1$ mod $3$, the groups $G(p,p,q)$ and their centralisers in $\operatorname{Sym}(pq)$ have the same property that $$\mu(G(p,p,q))=\mu(G(p,p,q)\times C_{\operatorname{Sym}(pq)}(G(p,p,q)))=pq,$$ and so give examples of strict inequality. The authors do not know whether there are groups $G$ and $H$ such that $$\max \{\mu(G),\mu(H)\} < \mu(G \times H) < \mu(G) + \mu(H).$$ In the following section, we prove, using the computer algebra system Magma [@CB06], that $10$ is the smallest degree for the scenario that $\mu(G)=\mu(G \times C)$, where $G$ is a minimally embedded group in $\operatorname{Sym}(\mu(G))$ and $C$ is its centraliser, which intersects trivially with it. This is done by a brute-force search of the subgroups of $\operatorname{Sym}(m)$ for $m \leq 9$, examining their centralisers.

The Magma Code
==============

The following code was implemented in Magma for $m \leq 9$:

    S:=Sym(m);
    subs:=[s`subgroup: s in Subgroups(S)];
    smaller:=[[s`subgroup: s in Subgroups(Sym(i))] : i in [1..m-1]];
    minemb:=[ G : G in subs |
        forall{H : H in smaller[i], i in [1..m-1] | not IsIsomorphic(G,H)}];
    Ind:=[Index(sub<S|Centraliser(S,G),G>,G) : G in minemb];
    indices_min:=[i : i in [1..#minemb] | Ind[i] ne 1];

Thus the code constructs the entire subgroup lattice of the symmetric group, isolates the subgroups which are minimally embedded inside the symmetric group, and then computes their centralisers. For $G$ a minimally embedded group in $\operatorname{Sym}(m)$ and $C$ the centraliser of $G$ in this minimal embedding, the `Ind` sequence returns the index of $G$ in the group generated by $G$ and $C$.
Once this index is known, one can either determine that the centraliser is contained inside the group, or detect the possibility of a subgroup of $C$ that intersects $G$ trivially, by searching for an element $x$ in $C$ such that the intersection of $\langle x \rangle$ with $G$ is trivial.

Results
=======

Since the cases $m=2,3,4$ are easily dealt with by hand, we only give the Magma output for the higher cases.

    > Ind;
    [ 1, 1, 1, 1, 1, 1, 1 ]
    > Ind;
    [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ]
    > Ind;
    [ 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ]
    > Ind;
    [ 1, 4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ]
    > Ind;
    [ 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 ]

An inspection of the numbers above shows that they are either $1$ or divisible by $2$. This means that any subgroup of the centraliser of $G$ in $\operatorname{Sym}(m)$ which intersects trivially with $G$ must have order divisible by $2$. Therefore, to search for such a subgroup, we implement the following search:

    Comp:= [ G : G in minemb |
        exists{g : g in Centraliser(S,G) |
            Order(g) eq 2 and Order(G meet sub<S|g>) eq 1} ];

In each case, we find that

    > Comp;
    []

Thus for every minimally embedded group of degree at most $9$, there does not exist a subgroup of its centraliser which intersects it trivially.
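As an illustration only (plain Python at a tiny degree, not the Magma computation), the same centraliser test can be reproduced by brute force: for the dihedral group of order $8$ inside $\operatorname{Sym}(4)$, the centraliser is its centre, which is already contained in the group, so the corresponding index is $1$.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def closure(gens, n):
    """Subgroup of Sym(n) generated by gens (naive closure)."""
    group = {tuple(range(n))}
    frontier = set(gens)
    while frontier:
        g = frontier.pop()
        if g not in group:
            group.add(g)
            for h in group.copy():
                frontier.add(compose(g, h))
                frontier.add(compose(h, g))
    return group

n = 4
sym = set(permutations(range(n)))
# Dihedral group of order 8 acting on the square 0-1-2-3:
D4 = closure({(1, 2, 3, 0), (3, 2, 1, 0)}, n)
centraliser = {s for s in sym
               if all(compose(s, g) == compose(g, s) for g in D4)}
print(len(D4), len(centraliser), centraliser <= D4)  # 8 2 True
```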
Therefore we cannot obtain a strict inequality by this method.

[1]{}

J.J. Cannon and W. Bosma. *Handbook of Magma Functions*. Edition 2.13, 4350 pages, 2006.

D.L. Johnson. Minimal permutation representations of finite groups. *Amer. J. Math.*, 93:857–866, 1971.

N. Saunders. The minimal degree for a class of finite complex reflection groups. [*Preprint*]{}, 2008.

N. Saunders. Strict inequalities for minimal degrees of direct products. *Bull. Aust. Math. Soc.*, 79:23–30, 2009.

D. Wright. Degrees of minimal embeddings of some direct products. *Amer. J. Math.*, 97:897–903, 1975.

[^1]: [AMS subject classification (2000): 20B35]{}

[^2]: [Keywords: Faithful Permutation Representations]{}
--- abstract: 'We present an analysis of [[*Chandra*]{}]{} spectra of five gravitationally lensed active galactic nuclei. We confirm the previous detections of FeK$\alpha$ emission lines in most images of these objects with high significance. The line energies range from 5.8 to 6.8 keV with widths from unresolved to 0.6 keV, consistent with emission close to spinning black holes viewed at different inclination angles. We also confirm the positive offset from the Iwasawa-Taniguchi effect, the inverse correlation between the FeK$\alpha$ equivalent width and the X-ray luminosity in AGN, where our measured equivalent widths are larger in lensed quasars. We attribute this effect to microlensing, and perform a microlensing likelihood analysis to constrain the emission size of the relativistic reflection region and the spin of supermassive black holes, assuming that the X-ray corona and the reflection region, responsible for the iron emission line, both follow power-law emissivity profiles. The microlensing analysis yields strong constraints on the spin and emissivity index of the reflection component for [Q2237$+$0305]{}, with $a > 0.92$ and $n > 5.4$. For the remaining four targets, we jointly constrain the two parameters, yielding $a=0.8\pm0.16$ and an emissivity index of $n=4.0\pm 0.8$, suggesting that the relativistic X-ray reflection region is ultra-compact and very close to the innermost stable circular orbits of black holes, which are spinning at close to the maximal value. We successfully constrain the half light radius of the emission region to $< 2.4$ $r_g$ ($r_g = GM/c^2$) for [Q2237$+$0305]{} and in the range 5.9–7.4 $r_g$ for the joint sample.' author: - Xinyu Dai - Shaun Steele - Eduardo Guerras - 'Christopher W. 
Morgan'
- Bin Chen
title: Constraining Quasar Relativistic Reflection Regions and Spins with Microlensing
---

Introduction
============

The X-ray spectra of active galactic nuclei (AGN) are characterized by continuum emission that is well modeled by a power law [e.g., @guilbert1988; @reynolds2003; @brenneman2013]. The UV emission of the accretion disk provides the seed photons, and these photons are then inverse Compton scattered by relativistic electrons in the corona to produce the continuum. A portion of these photons are scattered back to the accretion disk and can create a reprocessed or reflected emission component including fluorescent emission lines, most notably, the FeK$\alpha$ emission line at 6.4 keV in the source rest frame [@guilbert1988; @fabian1995; @reynolds2003; @brenneman2013]. The exact locations of the reflection are not well constrained, and the process can occur at multiple locations, from the inner accretion disk, to the broad line regions, to the torus. Measuring the spins of super-massive black holes (SMBH) at the centers of AGN is important because spin is related to the growth history of the black holes, their interaction with the environment, the launching of relativistic jets, and the size of the innermost stable circular orbit (ISCO) [e.g., @thorne1974; @blandford1977; @fabian2012; @brenneman2013; @chartas2017]. For example, as an SMBH grows, it can provide matter and energy to its surrounding environment through outflows [@fabian2012]. One important method to estimate spins models the general relativistic (GR) and special relativistic (SR) distortions of the FeK$\alpha$ emission line [e.g., @brenneman2013; @reynolds2014]. This method has been applied to many nearby Seyferts, with most estimates being close to the maximal spin [@reynolds2014]. Another approach is to model the UV-optical SEDs of high redshift quasars [e.g., @capellupo2015; @capellupo2017]. These studies again find high spins for high redshift quasars.
Quasar microlensing has significantly improved our understanding of the accretion disks [e.g., @dai2010; @morgan2010; @mosquera2013; @blackburne2014; @blackburne2015; @macleod2015] and non-thermal emission regions [e.g., @pooley2006; @pooley2007; @morgan2008; @chartas2009; @chartas2016; @chartas2017; @dai2010; @chen2011; @chen2012; @guer2017] of quasars, and the demographics of microlenses in the lens galaxy [e.g., @blackburne2014; @dg2018; @gdm2018] . Since the magnification diverges on the caustics produced by the lensing stars, quasar microlensing can constrain arbitrarily small emission regions if they can be isolated from other emission, in position, velocity, or energy. In particular, microlensing can be used to constrain the spin of black holes by measuring the ISCO size. In this paper, we will utilize the excess equivalent width (EW) difference between lensed and unlensed quasars first summarized by @chen2012 to constrain the size of the reflection region and the spin of quasars. This paper is organized as follows. We present the [[*Chandra*]{}]{} observations and the data reduction in Section \[sec:obs\_red\] and the spectral analysis in Section \[sec:spec\]. In Section \[sec:spec\] we discuss the significance of the iron line detections and we confirm the offset of  equivalent widths of lensed quasars. In Section \[sec:ML\], we carry out a microlensing analysis to estimate the size of the  emission region and the spin of the black hole. We discuss the results in Section \[sec:discussion\]. We assume cosmological parameters of $\Omega_M = 0.27$, $\Omega_{\Lambda}=0.73$, and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ throughout the paper. OBSERVATIONS AND DATA REDUCTION {#sec:obs_red} =============================== The observations used in this paper were made with the Advanced CCD Imaging Spectrometer [@garmire2003] on board [[*Chandra X-ray Observatory*]{}]{} [@weisskopf2002]. 
[[*Chandra*]{}]{} has a point spread function (PSF) of 0[$^{\prime\prime}\!\!.$]{}5 and is therefore able to resolve most of the multiple images in lensed systems since they have a typical image separation of 1–2. We used two sets of data in this analysis. The first, which we call Data Set 1, mainly comes from the [[*Chandra*]{}]{} Cycle 11 program and the second, which we call Data Set 2, comes from [[*Chandra*]{}]{} Cycles 14–15. We analyzed five lenses: [QJ0158$-$4325]{}, [HE0435$-$1223]{}, [SDSSJ1004$+$4112]{}, [HE1104$-$1805]{}, and [Q2237$+$0305]{}. The lens properties are summarized in Table \[tab:lensinfo\]. All the data were reprocessed using the [[*Chandra X-ray Center*]{}]{} CIAO 4.7 software tool `chandra_repro`, which takes data that have already passed through the [[*Chandra X-ray Center*]{}]{} Standard Data Processing and filters the event file on the good time intervals, grades, cosmic ray rejection, transforms to celestial coordinates, and removes any observation-specific bad pixel files. [lccccccccc]{} QJ 0158$-$4325 & 1.29 & 0.317 & 01:58:41.44 & $-$43:25:04.20 & 1.95 & 12 & 29.9 & 111.7 & 141.6\ HE 0435$-$1223 & 1.689 & 0.46 & 04:38:14.9 & $-$12:17:14.4 & 5.11 & 10 & 48.4 & 217.5 & 265.9\ SDSS J1004$+$4112 & 1.734 & 0.68 & 10:04:34.91 & $+$41:12:42.8 & 1.11 & 11 & 103.8 & 145.6 & 249.4\ HE 1104$-$1805 & 2.32 & 0.73 & 11:06:33.45 & $-$18:21:24.2 & 4.62 & 15 & 110.0 & 80.5 & 191.5\ Q 2237$+$0305 & 1.69 & 0.04 & 22:40:30.34 & $+$03:21:28.8 & 5.51 & 30 & 292.4 & 175.4 & 467.8\ SPECTRAL ANALYSIS {#sec:spec} ================= We extracted spectra using the CIAO 4.7 software tool `specextract`. We used circular extraction regions with a radius of 0[$^{\prime\prime}\!\!.$]{}72, which is less than the typical image separation but greater than the PSF of [[*Chandra*]{}]{}. 
We chose this extraction radius to balance maximizing the signal-to-noise ratio (S/N) of the spectrum from a lensed image against minimizing the contamination from the other nearby lensed images. For the cluster lens, SDSS J1004+4112, we instead used circles with a radius of 1[$^{\prime\prime}\!\!.$]{}5, because the lensed images are well separated in this system and are therefore not contaminated by the other images. The extraction regions were centered on the positions found from the PSF fits to the X-ray images performed in @guer2017. For the background regions, a circular region with a radius of 0[$^{\prime\prime}\!\!.$]{}72 was reflected through the position of the other images of the lens to account for both large scale backgrounds and any contamination from the other images. For SDSS J1004+4112, we used partial annuli to account for the non-negligible contamination from the X-ray emission of the cluster that acts as the lens for this object. We also extracted “Total” spectra using circular regions that encompass all images. Again, SDSS J1004+4112 was treated differently, and its “Total” spectrum is the sum of the individual image spectra. These regions are similar to those used in @chen2012, where a spectral analysis was performed for Data Set 1. @chartas2017 also performed an analysis on individual epochs of RXJ1131$-$1231, [QJ0158$-$4325]{}, and [SDSSJ1004$+$4112]{}. We fit the extracted spectra using the [*NASA*]{} *HEASARC* software [*XSPEC*]{} *V12.9*. We used a simple power-law model for the direct X-ray continuum and then added Gaussian components for any emission lines. These were modified by Galactic absorption [@dickey90] and absorption from the lens. Galactic absorption was fixed for each system to the values given in Table \[tab:lensinfo\]. The lens absorption was a free parameter in the fitting process, unless there was no evidence for absorption in the lens, in which case the lens absorption was set to zero.
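The fitted model just described, a power law plus Gaussian line components attenuated by absorption, can be sketched numerically. The functional forms below are simplified stand-ins for the XSPEC components, and every parameter value is an illustrative placeholder, not a fitted value.

```python
import math

def model_flux(E, gamma=1.9, norm=1e-3, tau0=0.04,
               E_line=6.4, width=0.3, line_norm=5e-5):
    """Toy absorbed power law plus a Gaussian emission line
    (photons/s/cm^2/keV, arbitrary normalization); all parameter
    values are illustrative placeholders, not fitted values."""
    continuum = norm * E ** (-gamma)
    line = line_norm / (width * math.sqrt(2.0 * math.pi)) \
        * math.exp(-0.5 * ((E - E_line) / width) ** 2)
    absorption = math.exp(-tau0 * E ** (-3))  # crude photoelectric tau ~ E^-3
    return absorption * (continuum + line)

# The line stands out above the falling continuum near 6.4 keV:
print(model_flux(6.4) > model_flux(6.0) > model_flux(7.5))  # True
```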
The source and lens redshifts are also listed in Table \[tab:lensinfo\]. The spectral fitting results for combined data, Data Set 1, and Data Set 2 are given in Tables \[tab:spec\_fit\_results\], \[tab:spec\_fit\_results\_1\], and \[tab:spec\_fit\_results\_2\] respectively. In these tables, we report the photon index $\Gamma$, the lens absorption $N_H$, the Gaussian line properties (rest frame line energy, width, and EW)[^1], the reduced $\chi^2$ of the fit, the null-hypothesis probability of the fit, the analytical and Monte Carlo calibrated significances of the metal line, the line detection threshold of the spectrum, and the absorption free and macro magnification corrected X-ray luminosity $L_{X}$ over the 10–50keV rest frame range. We fit the spectra in the 0.4–8.0keV observed frame. The [*XSPEC*]{} command *dummyrsp* was used to extend our model to 50keV. The magnifications were calculated using the equation $$\label{eqn:mag_eqn} \mu= |(1-\kappa)^2-\gamma^2|^{-1},$$ where $\kappa$ and $\gamma$ are the surface mass density and shear for each image, and we adopted the values from Table 9 of @guer2017. The reported luminosity values are different for individual images of each lens, because of the combination of source variability, time-delay, and microlensing effects. The spectral fits and $\Delta\chi ^2$ plots are shown in Figures \[fig:0158\_spec\]–\[fig:2237\_spec\]. We detected the FeK$\alpha$ emission line at high significances ($>99\%$) in the combined, “Total” spectra in four out of five lenses and with a weaker detection in [HE1104$-$1805]{} (see Table \[tab:spec\_fit\_results\]). For the combined data sets of the individual images, we detected the FeK$\alpha$ line with high significance ($>99\%$) in ten images of the combined data sets, and we detected low significance FeK$\alpha$ lines in [QJ0158$-$4325]{} image B, [HE0435$-$1223]{} images A, B, and D, and [HE1104$-$1805]{} image A (see Table \[tab:spec\_fit\_results\]). 
We were unable to obtain a stable fit for [HE1104$-$1805]{} image B that included the iron emission line. Analyzing the data sets separately, we detected the FeK$\alpha$ emission line in the “Total” image for [SDSSJ1004$+$4112]{} and [Q2237$+$0305]{} with $>99\%$ significance for Data Set 1. We have weak detections in the “Total” image for [QJ0158$-$4325]{}, [HE0435$-$1223]{}, and [HE1104$-$1805]{} (see Table \[tab:spec\_fit\_results\_1\]). For the individual images, we find $>99\%$ significant detections only in the images of [SDSSJ1004$+$4112]{} and [Q2237$+$0305]{}. The rest have weak detections, except for [QJ0158$-$4325]{} image B, [HE0435$-$1223]{} image D, and [HE1104$-$1805]{} image B, where we were unable to obtain stable fits that included the iron emission line. Compared to the analysis of Data Set 1 by @chen2012, the line detections and significance values are generally consistent, although the significance values reported in this paper are slightly more conservative. For Data Set 2, we detected the FeK$\alpha$ emission line with $>99\%$ significance in the “Total” image in four of the five lenses, the exception being [HE1104$-$1805]{} with a significance of 96% (see Table \[tab:spec\_fit\_results\_2\]). We report weak detections in [QJ0158$-$4325]{} image B, [HE0435$-$1223]{} images B and D, and [Q2237$+$0305]{} image D. For [HE1104$-$1805]{} image B, we were unable to obtain a stable fit that included the iron emission line. The non-detection of the iron line in Data Set 1 for [QJ0158$-$4325]{} image B and [HE0435$-$1223]{} image D agrees with @chen2012, and we have weak line detections in these two images in Data Set 2 and in the combined data set as well. @protassov2002 showed that the F-test should be used with caution when testing the significance of emission line detections, since the data may not necessarily follow an F-distribution.
If the models are well constrained, @protassov2002 provided a method to calibrate the F statistic using Monte Carlo simulations of the spectra. Following this method and the example of @dai2003, we generated ten thousand simulated spectra for each image of each object in all data sets to evaluate the significance of the detected emission lines. We used the [*XSPEC*]{} command *fakeit* to simulate the spectra. We fixed the null model parameters (not including the emission line) to the best fit values listed in Tables \[tab:spec\_fit\_results\]–\[tab:spec\_fit\_results\_2\]. These simulated spectra were then grouped exactly like the real data and fit with both the null model and the model with the line. From the fits, we calculated the F statistic of the simulated spectra for each image of each object and compared it to the F statistic from the real data. Figure \[fig:fdist\] shows the distribution of the F statistics from the simulated spectra, the F statistic from the real data, and the analytical F distribution for image A of [QJ0158$-$4325]{} as an example. We then calculated the significance of the real data’s F statistic as one minus the fraction of simulated spectra having an F statistic greater than or equal to that of the real data. These calibrated F-test significances from the Monte Carlo simulations are given in Tables \[tab:spec\_fit\_results\]–\[tab:spec\_fit\_results\_2\]. In these tables, we see that the differences between the Monte Carlo and analytical significances are small for most of the images, while the differences are much larger for the low significance emission lines. We also simulated the line detection threshold for each spectrum of the combined data by simulating spectra with a range of EW values and finding the EW that yields a $1 \sigma$ detection.
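The calibration step above can be sketched as follows; the toy `f_sims` values stand in for F statistics obtained by fitting the null and line models to the *fakeit* spectra, and the function name is ours, not part of [*XSPEC*]{}:

```python
import random

def calibrated_significance(f_data, f_sims):
    """Calibrated significance of an emission-line detection: the fraction
    of null-model simulations whose F statistic falls below the observed
    one, i.e. 1 minus the Monte Carlo p-value."""
    n_below = sum(1 for f in f_sims if f < f_data)
    return n_below / len(f_sims)

# Toy stand-in for F statistics from fits to 10,000 simulated spectra; in
# the real analysis each value comes from fitting one fakeit spectrum with
# both the null and the line model.
random.seed(0)
f_sims = [random.expovariate(1.0) for _ in range(10000)]
sig = calibrated_significance(4.0, f_sims)
```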
We also generated simulated spectra to test whether the stacked EW measurements are comparable to averaged individual EW measurements, since the line is observed to vary between individual observations of gravitational lenses [@chartas2017]. Figure \[fig:EW\_sim\] compares these two EWs, giving a mean difference of 0.0065 keV with a standard deviation of 0.42 keV, which is only slightly larger than the 1$\sigma$ uncertainties of the stacked EW measurements. These results show that the stacking process does not bias the mean of the EW from a sequence of observations. The line energies are measured to be within the range of 5.8 to 6.8 keV, and the widths of the lines are mostly 1–2$\sigma$ broad compared to the measurement uncertainties. For [Q2237$+$0305]{}, the line widths are measured to be 0.41 to 0.66 keV, i.e., 4–8$\sigma$ broad, confirming the broad line nature first claimed in @dai2003. The energy range and broadness of the lines are consistent with FeK$\alpha$ emission originating from a few $r_g$ around spinning black holes viewed at different inclination angles, with the 1–2$\sigma$ broad line width measurements attributable to the poor signal-to-noise ratios. We plot the rest-frame EW and lensing-corrected X-ray luminosity of our sample for the five lenses using the results in Table \[tab:spec\_fit\_results\] for the Total and individual images (Figures \[fig:LvsEW\]–\[fig:LvsEW\_images\]) and compare with the Iwasawa-Taniguchi or X-ray Baldwin effect, an inverse correlation between the $EW$ of metal emission lines and the X-ray luminosity. The relation was first discovered by @iwasawa93 for neutral FeK$\alpha$ lines and the 2–10 keV X-ray luminosity. @fukazawa2011 later showed that the trend holds for ionized FeK$\alpha$ lines and for the 10–50 keV X-ray luminosity as well, using 88 nearby Seyfert galaxies observed by *Suzaku*.
We adopt the fit from  @chen2012 to the sample of @fukazawa2011 as $$\label{eqn:chenfit} \log{\frac{EW_{model}}{\rm eV}} = (2.96 \pm 0.22)-(0.21 \pm 0.07) \log{\frac{L_{X}}{10^{40}{\hbox{erg~s$^{-1}$}}}} \pm (0.44 \pm 0.11).$$ The lensed sample shows a positive EW offset from the unlensed systems (Figures \[fig:LvsEW\]–\[fig:LvsEW\_images\]). For our targets, the predicted rest-frame EWs are between 0.08 and 0.15 keV with a mean of 0.11 keV, while the measured values range between 0.2 and 0.7 keV with a mean of 0.42 keV. We performed the Kolmogorov-Smirnov test between our lensed sample and the *Suzaku* sample [@fukazawa2011]. For the *Suzaku* sample, we selected objects within the luminosity range of the lensed sample, $43.7 < \log{L_X ({\hbox{erg~s$^{-1}$}})} < 45.0$, with EW values greater than the median detection threshold of the lensed sample, 0.1 keV for individual images and 0.05 keV for total images. For the lensed sample, the EWs of non-detections were set to half of the detection threshold values. The K-S test shows that the probability of the null hypothesis that the EWs of the *Suzaku* sample and the lensed individual image sample are drawn from the same parent distribution is 0.008; between the *Suzaku* sample and the lensed total image sample, the null probability is 0.001. The cumulative EW distributions are shown in Figure \[fig:ks\]. The Student’s t-test yields similar results, with the null probabilities further reduced by a factor of two. Based on these statistical tests, we conclude that the EW distributions of the lensed and unlensed samples are different. Since our sample of AGN is at high redshifts, $z \sim 2$, it is possible that the properties of the FeK$\alpha$ line are different from the local sample.
However, several studies of high redshift non-lensed samples show little evolution of the FeK$\alpha$ EW relative to the local sample, such as the stacking analyses of [[*Chandra*]{}]{} deep field sources [@falocco2012; @falocco2013], with average EWs of 0.07 and 0.14 keV reported. The FeK$\alpha$ lines of the two brightest sources in the field were measured to have EWs of 0.2 keV [@iwasawa2015]. Here, we attribute this offset to a microlensing effect, because microlensing signals are stronger for smaller sources, leading to the conclusion that the reflection region is more compact than the X-ray corona. Although there are low magnification regions in the magnification patterns, the current sample focuses on lenses with on-going microlensing activity, and non-active lenses are usually not well observed. [lccccccccccccc]{} 0158 & A & $1.93^{+0.03}_{-0.03}$ & $0.00^{+0.01}_{-0.00}$ & $6.49^{+0.07}_{-0.06}$ & $0.14^{+0.08}_{-0.11}$ & $0.42^{+0.12}_{-0.12}$ & 1.05(116) & 0.35 & 3.45 & 0.9969 & 0.9988 & 0.1\ 0158 & B & $1.84^{+0.13}_{-0.11}$ & $0.02^{+0.05}_{-0.02}$ & $6.34^{+0.27}_{-0.19}$ & $<0.49$ & $0.29^{+0.25}_{-0.24}$ & 0.69(54) & 0.96 & 2.57 & 0.6416 & 0.7419 & 0.3\ 0158 & Total & $1.92^{+0.03}_{-0.03}$ & $0.000^{+0.003}_{-0.000}$ & $6.50^{+0.07}_{-0.07}$ & $0.17^{+0.09}_{-0.08}$ & $0.33^{+0.10}_{-0.09}$ & 0.92(132) & 0.73 & 3.25 & 0.9994 & 0.9996 & 0.1\ 0435 & A & $1.93^{+0.06}_{-0.06}$ & $0.02^{+0.03}_{-0.02}$ & $5.98^{+0.12}_{-0.11}$ & $0.11^{+0.11}_{-0.11}$ & $0.21^{+0.12}_{-0.12}$ & 0.80(102) & 0.93 & 2.04 & 0.8902 & 0.9023 & 0.1\ 0435 & B & $1.89^{+0.5}_{-0.5}$ & $0.00^{+0.01}_{-0.00}$ & $5.99^{+0.76}_{-0.14}$ & $0.35^{+0.82}_{-0.21}$ & $0.36^{+0.21}_{-0.20}$ & 0.90(73) & 0.71 & 1.14 & 0.7891 & 0.8302 & 0.25\ 0435 & C & $1.84^{+0.05}_{-0.05}$ & $0.00^{+0.01}_{-0.00}$ & $6.41^{+0.43}_{-0.44}$ & $0.91^{+0.50}_{-0.44}$ & $0.72^{+0.33}_{-0.32}$ & 0.61(90) & 0.999 & 1.71 & 0.9946 & 0.9983 & 0.07\ 0435 & D & $1.80^{+0.05}_{-0.05}$ & $0.00^{+0.01}_{-0.00}$ & $6.28^{+0.14}_{-0.21}$ &
$<0.42$ & $0.19^{+0.14}_{-0.13}$ & 1.16(76) & 0.17 & 2.27 & 0.4988 & 0.5822 & 0.25\ 0435 & Total & $1.88^{+0.02}_{-0.02}$ & $0.000^{+0.004}_{-0.000}$ & $6.15^{+0.13}_{-0.12}$ & $0.28^{+0.10}_{-0.11}$ & $0.23^{+0.08}_{-0.07}$ & 1.01(142) & 0.47 & 1.78 & 0.9952 & 0.9974 & 0.05\ J1004 & A & $1.72^{+0.04}_{-0.04}$ & $0.00^{+0.01}_{-0.00}$ & $6.32^{+0.05}_{-0.10}$ & $<0.23$ & $0.44^{+0.12}_{-0.11}$ & 0.97(92) & 0.55 & 0.52 & 0.9993 & 0.9999 & 0.08\ J1004 & B & $1.86^{+0.04}_{-0.04}$ & $0.01^{+0.02}_{-0.01}$ & $6.46^{+0.09}_{-0.08}$ & $0.14^{+0.11}_{-0.08}$ & $0.27^{+0.10}_{-0.10}$ & 0.86(141) & 0.88 & 1.03 & 0.9941 & 0.9977 & 0.08\ J1004 & C & $1.87^{+0.05}_{-0.05}$ & $0.03^{+0.3}_{-0.3}$ & $6.32^{+0.06}_{-0.08}$ & $0.15^{+0.10}_{-0.09}$ & $0.50^{+0.11}_{-0.12}$ & 0.82(122) & 0.92 & 1.43 & 0.999992 & 0.9999 & 0.1\ J1004 & D & $1.78^{+0.07}_{-0.06}$ & $0.02^{+0.04}_{-0.02}$ & $6.30^{+0.07}_{-0.07}$ & $<0.15$ & $0.42^{+0.15}_{-0.14}$ & 1.10(86) & 0.24 & 1.87 & 0.9902 & 0.9965 & 0.1\ J1004 & Total & $1.81^{+0.02}_{-0.02}$ & $0.01^{+0.01}_{-0.01}$ & $6.37^{+0.03}_{-0.03}$ & $0.10^{+0.07}_{-0.08}$ & $0.35^{+0.06}_{-0.06}$ & 0.92(142) & 0.73 & 0.99 & $1-7\times10^{-11}$ & 1.0000 & 0.05\ 1104 & A & $1.76^{+0.04}_{-0.04}$ & $0.00^{+0.01}_{-0.00}$ & $6.78^{+0.30}_{-0.34}$ & $<0.55$ & $0.31^{+0.18}_{-0.16}$ & 0.93(84) & 0.65 & 3.48 & 0.8912 & 0.9176 & 0.17\ 1104 & B & $1.83^{+0.05}_{-0.05}$ & $0.00^{+0.01}_{-0.00}$ & ... & ... & ... & 0.79(76) & 0.91 & 9.17 & ... & ... 
& 0.2\ 1104 & Total & $1.79^{+0.03}_{-0.03}$ & $0.000^{+0.004}_{-0.000}$ & $6.84^{+0.28}_{-0.47}$ & $0.37^{+0.24}_{-0.13}$ & $0.23^{+0.12}_{-0.14}$ & 1.09(108) & 0.24 & 4.87 & 0.8173 & 0.8309 & 0.13\ 2237 & A & $1.84^{+0.03}_{-0.03}$ & $0.08^{+0.01}_{-0.01}$ & $6.17^{+0.12}_{-0.12}$ & $0.60^{+0.13}_{-0.11}$ & $0.50^{+0.09}_{-0.10}$ & 1.29(120) & 0.02 & 8.30 & $1-4\times10^{-6}$ & 1.0000 & 0.04\ 2237 & B & $1.84^{+0.06}_{-0.06}$ & $0.09^{+0.02}_{-0.02}$ & $6.07^{+0.19}_{-0.20}$ & $0.53^{+0.16}_{-0.12}$ & $0.56^{+0.18}_{-0.18}$ & 1.19(86) & 0.11 & 2.69 & 0.9943 & 0.9984 & 0.05\ 2237 & C & $1.86^{+0.06}_{-0.06}$ & $0.09^{+0.02}_{-0.02}$ & $5.76^{+0.12}_{-0.11}$ & $0.41^{+0.16}_{-0.11}$ & $0.59^{+0.17}_{-0.15}$ & 0.95(81) & 0.61 & 4.94 & 0.9999 & 0.9998 & 0.05\ 2237 & D & $1.81^{+0.05}_{-0.05}$ & $0.10^{+0.02}_{-0.02}$ & $6.08^{+0.16}_{-0.16}$ & $0.50^{+0.13}_{-0.11}$ & $0.50^{+0.14}_{-0.13}$ & 0.93(117) & 0.68 & 4.56 & 0.9998 & 1.0000 & 0.05\ 2237 & Total & $1.82^{+0.02}_{-0.02}$ & $0.077^{+0.007}_{-0.007}$ & $5.89^{+0.08}_{-0.09}$ & $0.66^{+0.09}_{-0.07}$ & $0.57^{+0.07}_{-0.06}$ & 1.36(180) & 0.001 & 4.85 & $1-4\times10^{-15}$ & 1.0000 & 0.025\ [lcccccccccccc]{} 0158 & A & $1.91^{+0.08}_{-0.07}$ & $<0.02$ & $6.46^{+0.06}_{-0.09}$ & $<0.22$ & $0.37^{+0.22}_{-0.24}$ & 1.02(105) & 0.43 & 3.00 & 0.5458 & 0.5534\ 0158 & B & $1.72^{+0.18}_{-0.13}$ & $<0.07$ & ... & ... & ... & 0.90(34) & 0.63 & 3.35 & ... 
& ...\ 0158 & Total & $1.87^{+0.06}_{-0.06}$ & $<0.01$ & $6.42^{+0.15}_{-0.15}$ & $<0.32$ & $0.36^{+0.23}_{-0.23}$ & 0.97(63) & 0.54 & 3.30 & 0.7415 & 0.7773\ 0435 & A & $1.97^{+0.17}_{0-0.16}$ & $0.10^{+0.08}_{-0.07}$ & $6.53^{+1.51}_{-0.86}$ & $<0.73$ & $0.46^{+0.77}_{-0.39}$ & 0.90(65) & 0.70 & 1.35 & 0.6564 & 0.7210\ 0435 & B & $1.84^{+0.18}_{-0.14}$ & $<0.12$ & $6.64^{+0.12}_{-0.19}$ & $<0.35$ & $0.90^{+0.63}_{-0.50}$ & 0.90(42) & 0.66 & 1.20 & 0.8315 & 0.8924\ 0435 & C & $1.51^{+0.11}_{-0.11}$ & $<0.03$ & $6.43^{+0.11}_{-0.11}$ & $<0.32$ & $0.65^{+0.42}_{-0.39}$ & 1.04(33) & 0.40 & 2.30 & 0.6916 & 0.7609\ 0435 & D & $2.01^{+0.23}_{-0.21}$ & $0.12^{+0.11}_{-0.10}$ & ... & ... & ... & 0.89(47) & 0.73 & 2.30 & ... & ...\ 0435 & Total & $1.82^{+0.07}_{-0.07}$ & $<0.06$ & $6.52^{+0.15}_{-0.13}$ & $<0.35$ & $0.35^{+0.18}_{-0.16}$ & 0.99(77) & 0.56 & 1.70 & 0.8983 & 0.9297\ J1004 & A & $1.80^{+0.08}_{-0.07}$ & $<0.06$ & $6.28^{+0.08}_{-0.10}$ & $<0.33$ & $0.63^{+0.21}_{-0.20}$ & 1.15(71) & 0.18 & 0.52 & 0.9841 & 0.9966\ J1004 & B & $1.89^{+0.06}_{-0.04}$ & $<0.03$ & $6.57^{+0.09}_{-0.08}$ & $<0.23$ & $0.50^{+0.17}_{-0.19}$ & 0.92(102) & 0.71 & 0.85 & 0.9897 & 0.9944\ J1004 & C & $1.90^{+0.08}_{-0.08}$ & $0.05^{+0.04}_{-0.04}$ & $6.22^{+0.29}_{-0.25}$ & $0.49^{+0.28}_{-0.20}$ & $0.59^{+0.26}_{-0.26}$ & 0.83(80) & 0.86 & 1.32 & 0.9507 & 0.9739\ J1004 & D & $1.74^{+0.11}_{-0.10}$ & $<0.11$ & $6.26^{+0.06}_{-0.09}$ & $<0.20$ & $0.50^{+0.22}_{-0.19}$ & 1.23(65) & 0.10 & 2.19 & 0.8882 & 0.9276\ J1004 & Total & $1.84^{+0.04}_{-0.04}$ & $0.03^{+0.02}_{-0.02}$ & $6.37^{+0.82}_{-0.82}$ & $<0.07$ & $0.37^{+0.14}_{-0.14}$ & 0.92(129) & 0.73 & 0.96 & $1-4\times10^{-6}$ & 1.0000\ 1104 & A & $1.77^{+0.05}_{-0.05}$ & $<0.007$ & $7.06^{+0.35}_{-0.35}$ & $<0.82$ & $0.42^{+0.28}_{-0.26}$ & 0.99(78) & 0.50 & 3.77 & 0.6500 & 0.7031\ 1104 & B & $1.87^{+0.06}_{-0.06}$ & $<0.02$ & ... & ... & ... & 0.70(103) & 0.99 & 8.42 & ... 
& ...\ 1104 & Total & $1.79^{+0.04}_{-0.04}$ & $<0.005$ & $6.99^{+0.27}_{-0.24}$ & $<0.61$ & $0.30^{+0.16}_{-0.16}$ & 1.10(103) & 0.23 & 5.04 & 0.7623 & 0.7835\ 2237 & A & $1.85^{+0.04}_{-0.04}$ & $0.08^{+0.01}_{-0.01}$ & $6.46^{+0.07}_{-0.08}$ & $0.22^{+0.09}_{-0.07}$ & $0.37^{+0.08}_{-0.08}$ & 1.38(110) & 0.005 & 8.41 & 0.9998 & 1.0000\ 2237 & B & $1.89^{+0.08}_{-0.08}$ & $0.09^{+0.02}_{-0.02}$ & $6.29^{+0.22}_{-0.19}$ & $0.35^{+0.13}_{-0.14}$ & $0.45^{+0.21}_{-0.20}$ & 1.29(86) & 0.04 & 2.48 & 0.8984 & 0.9287\ 2237 & C & $1.96^{+0.06}_{-0.06}$ & $0.08^{+0.03}_{-0.02}$ & $6.04^{+0.14}_{-0.13}$ & $0.43^{+0.11}_{-0.09}$ & $0.85^{+0.25}_{-0.24}$ & 1.13(91) & 0.19 & 3.36 & 0.9972 & 0.9988\ 2237 & D & $1.87^{+0.06}_{-0.06}$ & $0.13^{+0.02}_{-0.02}$ & $6.12^{+0.21}_{-0.21}$ & $0.54^{+0.18}_{-0.15}$ & $0.53^{+0.18}_{-0.18}$ & 0.93(92) & 0.67 & 4.83 & 0.9971 & 0.9996\ 2237 & Total & $1.88^{+0.03}_{-0.03}$ & $0.082^{+0.008}_{-0.008}$ & $5.99^{+0.10}_{-0.11}$ & $0.60^{+0.09}_{-0.08}$ & $0.56^{+0.09}_{-0.08}$ & 1.29(148) & 0.009 & 4.64 & $1-10^{-10}$ & 1.0000\ [lcccccccccccc]{} 0158 & A & $1.94^{+0.04}_{-0.04}$ & $<0.01$ & $6.50^{+0.11}_{-0.10}$ & $0.20^{+0.16}_{-0.11}$ & $0.41^{+0.15}_{-0.15}$ & 0.90(121) & 0.77 & 3.46 & 0.9859 & 0.9971\ 0158 & B & $1.90^{+0.15}_{-0.14}$ & $<0.11$ & $6.43^{+0.10}_{-0.14}$ & $<0.9$ & $0.34^{+0.22}_{-0.24}$ & 0.95(44) & 0.57 & 2.22 & 0.4912 & 0.7904\ 0158 & Total & $1.94^{+0.03}_{-0.03}$ & $<0.005$ & $6.52^{+0.09}_{-0.08}$ & $0.32^{+0.09}_{-0.09}$ & $0.39^{+0.13}_{-0.24}$ & 0.76(132) & 0.98 & 3.09 & 0.9997 & 1.0000\ 0435 & A & $1.93^{+0.05}_{0-0.04}$ & $<0.02$ & $6.05^{+0.07}_{-0.09}$ & $<0.42$ & $0.22^{+0.08}_{-0.22}$ & 0.70(104) & 0.99 & 2.06 & 0.9541 & 0.9909\ 0435 & B & $1.91^{+0.06}_{-0.06}$ & $<0.01$ & $6.24^{+0.46}_{-0.11}$ & $<0.15$ & $0.25^{+0.12}_{-0.15}$ & 0.97(69) & 0.55 & 1.09 & 0.6860 & 0.8385\ 0435 & C & $1.88^{+0.05}_{-0.04}$ & $<0.02$ & $6.53^{+0.06}_{-0.38}$ & $<0.13$ & $0.28^{+0.24}_{-0.17}$ & 0.93(85) & 0.66 & 1.70 & 
0.9050 & 0.9779\ 0435 & D & $1.80^{+0.06}_{-0.06}$ & $<0.009$ & $5.97^{+0.35}_{-0.22}$ & $<0.47$ & $0.26^{+0.19}_{-0.20}$ & 1.14(77) & 0.19 & 2.23 & 0.4836 & 0.7069\ 0435 & Total & $1.91^{+0.02}_{-0.02}$ & $<0.003$ & $6.06^{+0.11}_{-0.10}$ & $0.24^{+0.08}_{-0.08}$ & $0.25^{+0.08}_{-0.07}$ & 1.08(116) & 0.27 & 1.71 & 0.9942 & 0.9998\ J1004 & A & $1.61^{+0.06}_{-0.05}$ & $<0.02$ & $6.29^{+0.07}_{-0.07}$ & $<0.18$ & $0.40^{+0.20}_{-0.16}$ & 0.94(74) & 0.63 & 0.53 & 0.9551 & 0.9746\ J1004 & B & $1.86^{+0.05}_{-0.04}$ & $<0.03$ & $6.18^{+0.17}_{-0.16}$ & $0.26^{+0.15}_{-0.15}$ & $0.32^{+0.15}_{-0.15}$ & 0.70(102) & 0.99 & 1.12 & 0.9722 & 0.9957\ J1004 & C & $1.85^{+0.04}_{-0.04}$ & $<0.02$ & $6.34^{+0.07}_{-0.09}$ & $0.16^{+0.11}_{-0.13}$ & $0.52^{+0.16}_{-0.14}$ & 0.84(90) & 0.84 & 1.50 & 0.9991 & 0.9995\ J1004 & D & $1.83^{+0.10}_{-0.06}$ & $<0.08$ & $6.42^{+0.05}_{-0.06}$ & $<0.12$ & $0.43^{+0.17}_{-0.19}$ & 0.81(72) & 0.87 & 1.53 & 0.9533 & 0.9774\ J1004 & Total & $1.81^{+0.02}_{-0.02}$ & $<0.01$ & $6.27^{+0.08}_{-0.09}$ & $0.29^{+0.13}_{-0.11}$ & $0.43^{+0.10}_{-0.10}$ & 0.89(131) & 0.80 & 0.96 & $1-2\times10^{-6}$ & 1.0000\ 1104 & A & $1.75^{+0.08}_{-0.08}$ & $<0.05$ & $6.53^{+0.08}_{-0.08}$ & $<0.19$ & $0.46^{+0.19}_{-0.17}$ & 0.66(56) & 0.98 & 2.76 & 0.9871 & 0.9941\ 1104 & B & $1.80^{+0.08}_{-0.08}$ & $<0.06$ & ... & ... & ... & 0.67(60) & 0.98 & 8.94 & ... 
& ...\ 1104 & Total & $1.80^{+0.05}_{-0.05}$ & $<0.03$ & $6.44^{+0.61}_{-0.56}$ & $<0.16$ & $0.25^{+0.13}_{-0.11}$ & 0.85(64) & 0.80 & 4.19 & 0.9491 & 0.9622\ 2237 & A & $1.86^{+0.05}_{-0.05}$ & $0.11^{+0.02}_{-0.02}$ & $5.97^{+0.19}_{-0.189}$ & $0.39^{+0.20}_{-0.20}$ & $0.27^{+0.13}_{-0.12}$ & 0.87(94) & 0.81 & 8.34 & 0.9521 & 0.9674\ 2237 & B & $1.81^{+0.10}_{-0.10}$ & $0.06^{+0.03}_{-0.03}$ & $6.02^{+0.27}_{-0.26}$ & $0.71^{+0.23}_{-0.19}$ & $0.82^{+0.37}_{-0.31}$ & 0.97(55) & 0.53 & 3.30 & 0.9788 & 0.9921\ 2237 & C & $1.82^{+0.09}_{-0.09}$ & $0.10^{+0.04}_{-0.03}$ & $5.54^{+0.13}_{-0.14}$ & $0.38^{+0.17}_{-0.13}$ & $0.57^{+0.23}_{-0.18}$ & 0.86(76) & 0.81 & 7.25 & 0.9984 & 0.9867\ 2237 & D & $1.63^{+0.09}_{-0.09}$ & $0.02^{+0.03}_{-0.02}$ & $6.16^{+0.23}_{-0.23}$ & $0.33^{+0.44}_{-0.15}$ & $0.38^{+0.25}_{-0.21}$ & 1.09(77) & 0.28 & 4.59 & 0.7631 & 0.7845\ 2237 & Total & $1.78^{+0.13}_{-0.17}$ & $0.07^{+0.01}_{-0.01}$ & $5.74^{+0.13}_{-0.17}$ & $0.69^{+0.25}_{-0.16}$ & $0.55^{+0.12}_{-0.11}$ & 0.83(136) & 0.92 & 5.49 & $1-4\times10^{-9}$ & 1.0000\ Microlensing Analysis {#sec:ML} ===================== We performed a microlensing analysis to interpret the positive EW offset measured in lensed quasars. In magnitude units, we have $$\label{eqn:EW_ml} f = -2.5\log_{10}{\frac{EW_{data}}{EW_{fit}}},$$ where $f$ is the differential magnification magnitude between the magnification of the reflection region and the corona. Here, we have included a modest evolution effect, assuming that the rest-frame EW of high redshift quasars are higher than the local ones to be $EW_{fit} = 0.2$ keV [@iwasawa2015]. We generated magnification maps of these five lenses using the inverse polygon mapping algorithm [@mediavilla2006; @mediavilla2011b]. These magnification maps are 16,000$^2$ pixels and each pixel has a length scale of 0.685 $r_g$ for the five lenses. 
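As a concrete sketch of Equation \[eqn:EW\_ml\], with the Iwasawa-Taniguchi prediction of Equation \[eqn:chenfit\] included for comparison (the numbers below are illustrative, not measured values):

```python
import math

def predicted_ew_kev(log_lx_cgs):
    """Central Iwasawa-Taniguchi fit (Equation eqn:chenfit):
    log10(EW/eV) = 2.96 - 0.21 * log10(L_X / 1e40 erg s^-1)."""
    return 10.0 ** (2.96 - 0.21 * (log_lx_cgs - 40.0)) / 1000.0  # eV -> keV

def differential_mag(ew_data_kev, ew_fit_kev=0.2):
    """f = -2.5 log10(EW_data / EW_fit) in magnitudes (Equation eqn:EW_ml);
    EW_fit = 0.2 keV is the evolution-adjusted intrinsic EW adopted here."""
    return -2.5 * math.log10(ew_data_kev / ew_fit_kev)

# For log L_X = 44.5, the unlensed relation predicts EW ~ 0.1 keV, while a
# measured EW of 0.4 keV gives f ~ -0.75 mag, i.e. the reflection region is
# magnified more strongly than the continuum.
ew_pred = predicted_ew_kev(44.5)
f = differential_mag(0.4)
```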
The rest of the pertinent magnification map properties are listed in Table \[tab:mag\_map\_props\], and further details on the magnification maps can be found in @guer2017. We then generated images of “average” AGN corona and reflection regions, modified by the relativistic effects of the black hole, using the software `KERTAP` [@chen2013a; @chen2013b; @chen2015]. We assumed that the X-ray corona and reflection region are located very close to the disk, in Keplerian motion, and follow power-law emissivity profiles, $I \propto r^{-n}$, but with different emissivity indices. The models have three parameters: the Kerr spin parameter $a$ of the black hole, the power-law index $n$ of the emissivity profile of the reflection region, and the inclination angle of the accretion disk. To simplify the analysis, we set the inclination angle to 40 degrees, a typical inclination angle for Type I AGN. The emissivity index $n$ for the reflection region was varied from 3.0 to 6.2 in steps of $0.4$, and the spin $a$ was varied between 0 and 0.998 in steps of 0.1. Some example Kerr images of the emissivity profiles are shown in Figure \[fig:kerr\]. The resulting images were then convolved with the magnification maps of each lens image to estimate the amount of microlensing that the corona and reflection region would experience at different locations on the magnification maps. We performed these convolutions in flux units and then converted to magnitude scales. We also tested whether the orientation of the corona and reflection region with respect to the magnification map matters by rotating the images by 90 degrees; in general this does not induce a significant change in the parameter estimations discussed below. The half-light radius of the X-ray continuum emission is expected to be $\sim$10 $r_g$ (gravitational radii) [@dai2010; @mosquera2013], corresponding to emissivity indices of $n=2.2$ – 3.4 for different spins.
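The convolution step can be sketched as follows; the map, source sizes, and random seed are illustrative toys, not the actual `KERTAP` images or the 16,000$^2$-pixel magnification maps:

```python
import numpy as np

def microlensing_map(mag_map, source_img):
    """Convolve a magnification map (linear flux units) with a unit-flux
    source image: each output pixel is the microlensing magnification of
    that source centered at that map position ('valid' overlap only).
    Toy-sized direct convolution; the real maps would use FFT convolution."""
    kern = source_img[::-1, ::-1] / source_img.sum()  # flip + normalize
    m, n = kern.shape
    out = np.empty((mag_map.shape[0] - m + 1, mag_map.shape[1] - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(mag_map[i:i + m, j:j + n] * kern)
    return out

# Differential microlensing (in magnitudes) between a compact reflection
# region and a larger continuum source at each map position.
rng = np.random.default_rng(1)
mag_map = rng.lognormal(0.0, 0.5, size=(32, 32))                 # toy map
mu_con = microlensing_map(mag_map, np.ones((5, 5)))              # 28x28
mu_ref = microlensing_map(mag_map, np.ones((3, 3)))[1:-1, 1:-1]  # align to 28x28
f_map = -2.5 * (np.log10(mu_con) - np.log10(mu_ref))
```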
We then subtracted the two convolved magnification maps from the continuum and reflection models to estimate the differential microlensing between the two emission regions, $f_j=\mu_{con} - \mu_{ref}$ in magnitude units, as a function of source position on the magnification map for a lensed image and common values of $a$ and $n$. We obtained distributions of these subtracted convolutions by making histograms of the values for randomly selected points in the subtracted convolved images. [lcccccccccccc]{} 0158 & A & 13.2 & 0.348 & 0.428 & 4.13 & 0.16 (MgII)\ 0158 & B & 13.2 & 0.693 & 0.774 & 1.98 & 0.16 (MgII)\ 0435 & A & 11.4 & 0.445 & 0.383 & 6.20 & 0.50 (CIV)\ 0435 & B & 11.4 & 0.539 & 0.602 & 6.67 & 0.50 (CIV)\ 0435 & C & 11.4 & 0.444 & 0.396 & 6.57 & 0.50 (CIV)\ 0435 & D & 11.4 & 0.587 & 0.648 & 4.01 & 0.50 (CIV)\ J1004 & A & 9.1 & 0.763 & 0.300 & 29.6 & 0.39 (MgII)\ J1004 & B & 9.1 & 0.696 & 0.204 & 19.7 & 0.39 (MgII)\ J1004 & C & 9.1 & 0.635 & 0.218 & 11.7 & 0.39 (MgII)\ J1004 & D & 9.1 & 0.943 & 0.421 & 5.75 & 0.39 (MgII)\ 1104 & A & 9.1 & 0.610 & 0.512 & 9.09 & 0.59 ($H_\beta$)\ 1104 & B & 9.1 & 0.321 & 0.217 & 2.42 & 0.59 ($H_\beta$)\ 2237 & A & 38.5 & 0.390 & 0.400 & 4.71 & 1.20 ($H_\beta$)\ 2237 & B & 38.5 & 0.380 & 0.390 & 4.30 & 1.20 ($H_\beta$)\ 2237 & C & 38.5 & 0.740 & 0.730 & 2.15 & 1.20 ($H_\beta$)\ 2237 & D & 38.5 & 0.640 & 0.620 & 3.92 & 1.20 ($H_\beta$)\ We then performed a likelihood analysis using the microlensing magnification distributions and the $EW$s we measured in the data. For a given image and fixed $n$ and $a$, the likelihood of the data given the model is $$\label{eqn:L(n)_image} L_{image}(n,a)=A\sum_j b_j(n,a) e^{-\chi_j^2[f_j(n,a)]/2},$$ where $A$ is a normalization constant and $b_j$ is the bin height of the $j$th value from the convolved histograms discussed previously. 
The chi-square is calculated from $$\label{eqn:chi(n)} \chi_j^2[f_j(n,a)]=\left(\frac{f_j(n,a)+2.5 \log_{10}{EW_{data}}-2.5 \log_{10}{EW_{fit}}}{2.5 EW_{err}/(EW_{data}\ln{10})}\right)^2,$$ where $EW_{data}$ and $EW_{err}$ are respectively the $EW$ and the $EW$ uncertainty from the spectral analysis of Section \[sec:spec\], and $f_j$ is the amount of differential microlensing between the continuum and reflection regions in magnitudes. Using Equations \[eqn:chenfit\]–\[eqn:chi(n)\], we then have a likelihood as a function of $n$ and $a$, the emissivity index and spin parameter, for each image of each object. Examples for [Q2237$+$0305]{} A and [QJ0158$-$4325]{} A are shown in Figure \[fig:mlMAGdist\], where we plot the model likelihood ratios compared to the $a=0.9$, $n=5.8$ model. We then combined the likelihoods of all the images from a target, $$\label{eqn:L(n)_total} L_{total}(n, a)=\prod_{image}B L_{image}(n, a),$$ where $B$ is a normalization constant and the product runs over all the images of the target. After calculating $L_{total}(n, a)$ for a grid of $(n, a)$ combinations, we can marginalize over either $a$ or $n$ to obtain the posterior probability for the emissivity index or the spin, separately. We discarded [HE1104$-$1805]{} B, the non-detection case, from the microlensing analysis; since it contributes less than a tenth of the sample, we do not expect our results to change significantly. Figures \[fig:L(a)\_total\] and \[fig:L(n)\_total\] show the marginalized probabilities for the spin and emissivity index parameters. For [Q2237$+$0305]{}, we obtain tight constraints on both the spin and emissivity index with well-established probability peaks; the 68% and 90% confidence limits for the spin parameter are $a > 0.92$ and $a>0.83$, respectively, where we linearly interpolate the probabilities to match the designated limits.
The corresponding 68% and 90% confidence limits for the emissivity index are $n > 5.4$ and $n > 4.9$ for [Q2237$+$0305]{}. Compared to the other targets, [Q2237$+$0305]{} has the longest exposure in the sample, its line EWs have small relative uncertainties, and its EW deviations from the Iwasawa-Taniguchi relation are large; because of these factors, the constraints for [Q2237$+$0305]{} are strong. For the remaining four targets, [QJ0158$-$4325]{}, [HE0435$-$1223]{}, [SDSSJ1004$+$4112]{}, and [HE1104$-$1805]{}, the individual constraints are weak. However, since the shapes of the probability distributions are similar (Figure \[fig:L(a)\_total\] right), we jointly constrain the remaining targets by multiplying the probability functions, yielding 68% and 90% limits of $a=0.8\pm0.16$ and $a > 0.41$ for the spin parameter and $n=4.0\pm0.8$ and $n=4.2\pm1.2$ for the emissivity index. We removed [Q2237$+$0305]{} from the joint sample; otherwise the probabilities for the joint sample would be dominated by a single object. We plot the two-dimensional confidence contours of the two parameters for [Q2237$+$0305]{} and the remaining sample in Figure \[fig:con\]. We can also bin the two-dimensional parameter space by the half-light radius after the Kerr lensing effect and calculate the corresponding probabilities in each bin. Figure \[fig:hl\] shows the normalized probabilities as a function of FeK$\alpha$ emission radius for [Q2237$+$0305]{} and the remaining joint sample on a logarithmic scale. The half-light radius is constrained to be $< 2.4$ $r_g$ and $<2.9$ $r_g$ (68% and 90% confidence) for [Q2237$+$0305]{} and to lie in the range 5.9–7.4 $r_g$ (68% confidence) and 4.4–7.4 $r_g$ (90% confidence) for the joint sample.
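The likelihood machinery of Equations \[eqn:L(n)\_image\]–\[eqn:L(n)\_total\] and the subsequent marginalization can be sketched as follows, with a toy histogram and toy likelihood grid standing in for the histograms drawn from the convolved magnification maps:

```python
import numpy as np

def image_likelihood(f_vals, b, ew_data, ew_err, ew_fit=0.2):
    """L_image(n,a) = sum_j b_j exp(-chi_j^2/2) for one image at fixed
    (n, a), with chi_j from Equation eqn:chi(n): the EW measurement mapped
    into differential-magnification magnitude space."""
    num = np.asarray(f_vals) + 2.5 * np.log10(ew_data / ew_fit)
    den = 2.5 * ew_err / (ew_data * np.log(10.0))
    return float(np.sum(np.asarray(b) * np.exp(-0.5 * (num / den) ** 2)))

def marginalize(L_grid):
    """Collapse a likelihood grid L[n_index, a_index] into normalized 1-D
    posteriors for the emissivity index n and the spin a (flat priors)."""
    p_n, p_a = L_grid.sum(axis=1), L_grid.sum(axis=0)
    return p_n / p_n.sum(), p_a / p_a.sum()

# Toy example over the grids quoted in the text (n: 3.0-6.2, a: 0-0.998);
# the Gaussian "histogram" below is a stand-in for the convolved f_j values.
n_vals = np.arange(3.0, 6.3, 0.4)
a_vals = np.linspace(0.0, 0.998, 11)
f_vals = np.linspace(-2.0, 1.0, 31)  # histogram bin centers (magnitudes)
L_grid = np.array([[image_likelihood(f_vals,
                                     np.exp(-0.5 * ((f_vals + 0.2 * n * a) / 0.3) ** 2),
                                     ew_data=0.4, ew_err=0.1)
                    for a in a_vals] for n in n_vals])
p_n, p_a = marginalize(L_grid)
```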
DISCUSSION {#sec:discussion} ========== Under the hypothesis that the higher average EW of lensed quasars over a monitoring sequence of observations is a microlensing effect, we explain the offset using a set of general relativistic corona, reflection, and microlensing models. We perform a microlensing analysis to obtain the likelihood as a function of the emissivity index of the reflection component and the spin of the black hole, in which we have included a modest redshift evolution effect on the rest-frame EW of FeK$\alpha$ lines, such that the spin values obtained are more conservative. For the joint constraint from a sample of four targets, our analysis shows that the relativistic reflection region is more likely to have an emissivity index of $n=4.0\pm 0.8$ and a half-light radius of 5.9–7.4 $r_g$ ($1 \sigma$), and therefore originates from a region more compact than the continuum emission region. This result confirms previous qualitative microlensing arguments that point towards the reflection region being more compact [e.g., @chen2012]. The result also shows that the X-ray continuum cannot be a simple point-source “lamppost” model, confirming the earlier analysis of @popovic2006. The spin of the joint sample is constrained to be $a=0.8\pm0.16$. This is in agreement with previous studies reporting high spin measurements [e.g., @reis2014; @reynolds2014; @mreynolds2014; @capellupo2015; @capellupo2017] in both local and high redshift samples. For [Q2237$+$0305]{}, both the spin and emissivity index are well constrained individually, with $a > 0.92$ and $n>5.4$, corresponding to 2.25–3 $r_g$ for spins between 0.9 and the maximal value. Overall, our spin measurements favor the “spin-up” black hole growth model, where most of the accretion occurs in a coherent phase with modest anisotropies, especially for $z > 1$ quasars [e.g., @dotti2013; @volonteri2013].
Since this paper uses the relative microlensing magnification as the signal to constrain the emissivity profile of the reflection region, the technique only probes FeK$\alpha$ emission regions comparable to or smaller than the X-ray continuum emission region. Emission lines originating from this compact region are theoretically predicted to have broad profiles, with the peak energy varying with the inclination angle. The broad emission line widths, especially the $\sim$0.5 keV, 4–8 $\sigma$ broad lines of [Q2237$+$0305]{}, together with the range of line energies between 5.8 and 6.8 keV, confirm this theoretical expectation. Reflection that occurs at much larger distances, at the outer portion of the accretion disk, the disk wind, the broad line region, or the torus, will instead produce a narrow FeK$\alpha$ line to which this technique is not sensitive. It is also quite possible that our sample is biased, because we selected our targets based on their strong microlensing signals at optical wavelengths. However, since being microlensing active and having a large spin value are independent, we do not expect this bias to significantly affect our results. Furthermore, [HE1104$-$1805]{} B was the only image with no detectable FeK$\alpha$ features and was discarded from the microlensing analysis, suggesting that our somewhat limited exposure times were sufficiently long not to introduce any non-detection bias. This will not affect the microlensing constraints for [Q2237$+$0305]{}, and will have only a limited effect on the joint sample results, because the image contributes less than a tenth of the sample. @reis2014 and @mreynolds2014 fit a broad relativistic FeK$\alpha$ line to the stacked spectra of gravitationally lensed quasars. This technique assumes that the stacked FeK$\alpha$ line profile resembles the un-lensed line profile; however, the FeK$\alpha$ line peak is observed to vary between observations [@chartas2017].
Although this technique has a different set of systematic uncertainties, the resulting constraints are quite similar to the results of this paper. For [Q2237$+$0305]{}, the line fitting method yielded $a = 0.74^{+0.06}_{-0.03}$ [90% confidence, @mreynolds2014], while the constraint in this paper is $a > 0.83$ (90% confidence). Both studies show that [Q2237$+$0305]{} has a large spin value, while the analysis here points more towards a maximal value. The steep emissivity profiles measured in this paper are also broadly consistent with measurements of local AGN, such as MCG-6-30-15 [@wilms2001; @vf2004; @miniutti2007], 1H0707$-$495 [@zoghbi2010; @dauser2010], and IRAS 13224$-$3809 [@ponti2010]. Such steep emissivity profiles can result from combinations of light bending, vertical Doppler boosts, or ionization effects, which can produce slopes as steep as $n\sim7$ [@wilms2001; @vf2004; @fk2007; @svoboda2012]. Unfortunately, the spin measurement technique presented in this paper can only be applied to the small sample of targets whose X-ray spectra can be measured with sufficient signal-to-noise ratios by the current generation of X-ray telescopes. The next generation of X-ray telescopes, with an order of magnitude increase in effective area, will allow these measurements in a much larger sample. Ideally, we need sub-arcsecond angular resolution to resolve the lensed images and increase the constraining power of the size and spin measurements. However, a similar analysis can be applied to the total image of a lensed quasar, where the angular resolution requirement is less crucial, because the analysis relies on the time-averaged relative microlensing signals between the X-ray continuum and FeK$\alpha$ emission regions.
In addition, quasar microlensing can induce variability in the polarization signals, especially the polarization angle [@chen2015b], which can be detected by future X-ray polarization missions to independently constrain quasar black hole spins. We acknowledge financial support from NASA ADAP programs NNX15AF04G and NNX17AF26G, NSF grant AST-1413056, and SAO grants AR7-18007X and GO7-18102B. CWM is supported by NSF award AST-1614018. We thank D. Kazanas and L. C. Popovic for helpful discussions and the anonymous referee for valuable comments. Assef, R. J., Denney, K. D., Kochanek, C. S., Peterson, B. M., et al. 2011, , 742, 93 Blackburne, J. A., Kochanek, C. S., Chen, B., Dai, X., & Chartas, G. 2015, , 798, 95 Blackburne, J. A., Kochanek, C. S., Chen, B., Dai, X., & Chartas, G. 2014, , 789, 125 Blandford, R. D., & Znajek, R. L. 1977, , 179, 433 Brenneman, L. (ed.) 2013, Measuring the Angular Momentum of Supermassive Black Holes (New York: Springer) Capellupo, D. M., Netzer, H., Lira, P., Trakhtenbrot, B., & Mejia-Restrepo, J. 2015, , 446, 3427 Capellupo, D. M., Wafflard-Fernandez, G., & Haggard, D. 2017, , 836, L8 Chartas, G., Kochanek, C. S., Dai, X., Poindexter, S., & Garmire, G. 2009, , 693, 174 Chartas, G., Rhea, C., Kochanek, C., Dai, X., Morgan, C., Blackburne, J., Chen, B., Mosquera, A., & Macleod, C. 2016, AN, 337, 356 Chartas, G., Krawczynski, H., Zalesky, L., Kochanek, C. S., Dai, X., Morgan, C. W., & Mosquera, A. 2017, , 837, 26 Chen, B., Dai, X., Kochanek, C. S., et al. 2011, , 740, L34 Chen, B., Dai, X., Kochanek, C. S., Chartas, G., Blackburne, J., & Morgan, C. 2012, , 755, 24 Chen, B., Dai, X., Baron, E., & Kantowski, R. 2013, , 769, 131 Chen, B., Dai, X., & Baron, E. 2013, , 762, 122 Chen, B., Kantowski, R., Dai, X., Baron, E., & Maddumage, P. 2015, , 218, 4 Chen, B. 2015, Scientific Reports, 5, 16860 Dai, X., Chartas, G., Agol, E., Bautz, M. W., & Garmire, G. P. 2003, , 589, 100 Dai, X., Kochanek, C. S., Chartas, G., Kozlowski, S., Morgan, C. 
W., Garmire, G., & Agol, E. 2010, , 709, 278 Dai, X., & Guerras, E. 2018, , 853, L27 Dauser, T., Svoboda, J., Schartel, N., et al. 2012, , 422, 1914 Dickey, J. M., & Lockman, F. J. 1990, , 28, 215 Dotti, M., Colpi, M., Pallini, S., Perego, A., & Volonteri, M. 2013, , 762, 68 Fabian, A. C., Nandra, K., Reynolds, C. S., Brandt, W. N., Otani, C., Tanaka, Y., Inoue, H., & Iwasawa, K. 1995, , 277, L11 Fabian, A. C. 2012, , 50, 455 Falocco, S., Carrera, F. J., Corral, A., et al. 2012, , 538, A83 Falocco, S., Carrera, F. J., Corral, A., et al. 2013, , 555, A79 Fukumura, K., & Kazanas, D. 2007, , 664, 14 Fukazawa, Y., Hiragi, K., Mizuno, M., et al. 2011, , 727, 19 Garmire, G. P., Bautz, M. W., Nousek, J. A., & Ricker, G. R. 2003, Proc. SPIE, 4851, 28 Guilbert, P. W., & Rees, M. J. 1988, , 233, 475 Guerras, E., Dai, X., Steele, S., Liu, A., Kochanek, C. S., Chartas, G., Morgan, C., & Chen, B. 2016, , 111, 11 Guerras, E., Dai, X., & Mediavilla, E. 2018, arXiv:1805.11498 Iwasawa, K., & Taniguchi, Y. 1993, , 413, 15 Iwasawa, K., Vignali, C., Comastri, A., et al. 2015, , 574, A144 MacLeod, C. L., Morgan, C. W., Mosquera, A., et al. 2015, , 806, 258 Mediavilla, E., Mediavilla, T., Muñoz, J. A., Ariza, O., et al. 2011, , 730, 16 Mediavilla, E., Muñoz, J. A., Lopez, P., Mediavilla, T., et al. 2006, , 653, 942 Miniutti, G., Fabian, A. C., Anabuki, N., et al. 2007, , 59, 315 Morgan, C. W., Kochanek, C. S., Dai, X., Morgan, N. D., & Falco, E. E. 2008, , 689, 755 Morgan, C. W., Kochanek, C. S., Morgan, N. D., & Falco, E. 2010, , 712, 1129 Mosquera, A. M., Kochanek, C. S., Chen, B., Dai, X., Blackburne, J. A., & Chartas, G. 2013, , 769, 53 Ponti, G., Gallo, L. C., Fabian, A. C., et al. 2010, , 406, 2591 Pooley, D., Blackburne, J. A., Rappaport, S., Schechter, P. L., & Fong, W. 2006, , 648, 67 Pooley, D., Blackburne, J. A., Rappaport, S., & Schechter, P. L. 2007, , 661, 19 Popovi[ć]{}, L. [Č]{}., Jovanovi[ć]{}, P., Mediavilla, E., et al. 
2006, , 637, 620 Protassov, R., van Dyk, D., Connors, A., Kashyap, V. L., & Siemiginowska, A. 2002, , 571, 545 Reis, R. C., Reynolds, M. T., Miller, J. M., & Walton, D. J. 2014, , 507, 207 Reynolds, C. S., & Nowak, M. A. 2003, PhR, 377, 389 Reynolds, C. S. 2014, , 183, 277 Reynolds, M. T., Walton, D. J., Miller, J. M., & Reis, R. C. 2014, , 792, L19 Svoboda, J., Dov[č]{}iak, M., Goosmann, R. W., et al. 2012, , 545, A106 Thorne, K. S. 1974, , 191, 507 Vaughan, S., & Fabian, A. C. 2004, , 348, 1415 Volonteri, M., Sikora, M., Lasota, J.-P., & Merloni, A. 2013, , 775, 94 Weisskopf, M. C., Brinkman, B., Canizares, C., Garmire, G., Murray, S., & Van Speybroeck, L. P. 2002, PASP, 114, 1 Wilms, J., Reynolds, C. S., Begelman, M. C., et al. 2001, , 328, L27 Zoghbi, A., Fabian, A. C., Uttley, P., et al. 2010, , 401, 2419 [^1]: The rest-frame EW is calculated using the XSPEC command eqw and then multiplied by $(1+z)$ to correct for the cosmological redshift effect.
--- abstract: 'We discuss the asymmetry of cosmic background evolution in time with respect to the quantum bounce in Loop Quantum Cosmology (LQC), employing the value of the scalar field at the bounce, $\phi_{\rm B}$. We use the Chaotic and the $R^2$ potentials to demonstrate that a possible deflation before the bounce may counteract the inflation that is needed for resolving the cosmological conundrums, so a certain level of time asymmetry is required for the models in LQC. This $\phi_{\rm B}$ is model dependent and closely related to the amounts of deflation and inflation, so we may use observations to constrain $\phi_{\rm B}$ and thus the model parameters. With further studies this formalism should be useful in providing an observational testbed for the LQC models.' author: - 'Wen-Hsuan Lucky Chang' - 'Jiun-Huei Proty Wu' bibliography: - 'reference\_04.bib' title: Time Asymmetry of Cosmic Background Evolution in Loop Quantum Cosmology --- Introduction {#sec1} ============ The singularity at the beginning of spacetime is a long-standing problem in cosmology [@Hawking1970]. One solution is to consider Loop Quantum Cosmology (LQC), which is a theory of Loop Quantum Gravity (LQG) simplified with the cosmological principle [@Bojowald2002a]. It employs the Friedmann-Robertson-Walker (FRW) model with quantum corrections. The extra terms involve a scalar field to resolve the singularity problem with a quantum bounce [@Bojowald2001a]. In turn this allows for the existence of a ‘parent universe’ [@Ashtekar2006b; @Ashtekar2006c; @Ashtekar2006d]. To evolve the scale factor in this context, the quantum corrected Friedmann equation [@Bojowald2005; @Vandersloot2005] was derived with the Hamiltonian formulation in a semi-classical approach [@Bojowald2001b]. Two major types of quantum corrections are the holonomy [@Singh2005; @Chiou2009; @Chiou2009a] and the inverse volume [@Bojowald2001c]. 
The Hamiltonian involves the connection variables (known as the Ashtekar variables in LQC), whose equations of motion can be obtained by calculating their Poisson brackets. Because these connection variables are actually functions of the scale factor and the Hubble parameter, we can eventually obtain the evolution equation of the scale factor (see Ref. [@Banerjee2012] for an introductory review). Within the LQC framework, inflation occurs naturally after the quantum bounce due to the existence of a scalar field [@Bojowald2002], so that the cosmological conundrums can be resolved in the conventional way [@Guth1981]. Before the quantum bounce this scalar field may also generate a period of damped contraction called ‘deflation’. The amount of deflation and that of inflation may differ, and one key quantity is the potential-to-kinetic energy ratio (PKR) of the scalar field at the quantum bounce. Here we shall directly employ a more intuitive quantity $\phi_{\rm B}$, the $\phi$ value at the quantum bounce, to study the asymmetry between deflation and inflation. This paper investigates in detail the dependence of cosmic time asymmetry on $\phi_{\rm B}$. The two inflationary models considered here are the Chaotic potential (a commonly chosen simple model) and the $R^2$ potential (a model that remains realistic to date [@Martin2014]). The structure of this paper is as follows. First we lay out our conventions for LQC in Section \[sec2\], where Section \[ch2.3.1\] defines the Hamiltonian formalism, with its quantum corrections presented in Section \[ch2.3.2\]. Section \[sec3\] investigates the time asymmetry in cosmic evolution, in particular employing the $\phi$ value at the quantum bounce. Section \[sec4\] discusses the possible deflation and its impact. We conclude our work in Section \[sec6\]. The units of all physical quantities in this paper are normalized to Planck units ($c=G=\hbar=k_{\rm B}\equiv1$) unless otherwise labeled. 
The curvature constant is also presumed to be zero, which is consistent with the current observational results. Cosmic Dynamics {#sec2} =============== Hamiltonian formalism {#ch2.3.1} --------------------- To have a good handle on the quantum mechanical properties of the early universe, we employ the Arnowitt-Deser-Misner (ADM) approach. The Hamiltonian of spacetime is $$\begin{aligned} \label{eq2.14} H_{\rm grav} = -\frac{3}{8\pi\gamma^2} c^2 \sqrt{p},\end{aligned}$$ where $p$ and $c$ (not the speed of light) are the connection variables, which are related to the scale factor $a$ and the Hubble parameter $H$ as [@Bojowald2005] $$\begin{aligned} |p| & = \frac{1}{4}a^2, \label{eq.p}\\ c & = \frac{1}{2}\gamma aH, \label{eq.c}\end{aligned}$$ and satisfy the canonical relation [@Bojowald2005] $$\begin{aligned} \label{eq2.16} \left[c,p\right]_{\rm PB} = \frac{8\pi\gamma}{3}.\end{aligned}$$ The subscript ‘PB’ denotes that the calculation follows the Poisson bracket rather than the commutator. The Barbero-Immirzi parameter [@BarberoG.1995; @Immirzi1997] $\gamma=\log(3)/\sqrt{2}\pi$ can be obtained from the computation of black hole entropy [@Meissner2004a]. In Eqs.  and we have dropped the curvature parameter of the FRW model and chosen the coordinate length of the finite-sized cubic cell in LQG to be unity. It is obvious that the energy density of the spacetime $$\begin{aligned} \label{eq.rho} \rho_{\rm grav}=p^{-3/2}H_{\rm grav}\end{aligned}$$ is unbounded when the size of the universe goes to zero ($a\rightarrow 0$). 
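As a quick consistency check of this gravitational sector, the Poisson bracket of $p$ with $H_{\rm grav}$ can be evaluated symbolically. The sketch below (using `sympy`; a verification aid, not part of the original derivation) confirms that the canonical relation and the definitions above give $[p, H_{\rm grav}]_{\rm PB} = 2c\sqrt{p}/\gamma$, which reduces to $\dot{p} = a^2 H/2$ upon substituting $c=\gamma aH/2$ and $p=a^2/4$, exactly as expected from differentiating $|p|=a^2/4$.

```python
import sympy as sp

c, p, gamma, a, H = sp.symbols('c p gamma a H', positive=True)

def poisson_bracket(f, g):
    # Gravitational sector only, with [c, p]_PB = 8*pi*gamma/3
    return sp.Rational(8, 3) * sp.pi * gamma * (
        sp.diff(f, c) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, c))

# H_grav = -3/(8*pi*gamma^2) * c^2 * sqrt(p)
H_grav = -sp.Rational(3, 8) / (sp.pi * gamma**2) * c**2 * sp.sqrt(p)

pdot = poisson_bracket(p, H_grav)
print(sp.simplify(pdot))          # equals 2*c*sqrt(p)/gamma

# Substituting c = gamma*a*H/2 and p = a^2/4 recovers pdot = a^2*H/2
classical = pdot.subs({c: gamma * a * H / 2, p: a**2 / 4})
print(sp.simplify(classical))
```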
On the other hand, the Hamiltonian of the inflaton, which is the only content that matters during a single-field inflation, is $$\begin{aligned} \label{eq2.17} H_\phi = \frac{\pi_\phi^2}{2p^{3/2}} + p^{3/2}V(\phi),\end{aligned}$$ where the scalar field $\phi$ and its conjugate momentum $\pi_\phi$ satisfy the canonical relation [@Chiou2009] $$\begin{aligned} \label{eq2.18} \left[\phi,\pi_\phi\right]_{\rm PB} = 1.\end{aligned}$$ General Relativity (GR) then requires that the total Hamiltonian must be zero at all times: $$\begin{aligned} \label{eq2.19} H_{\rm tot} = H_{\rm grav} + H_{\phi} = 0.\end{aligned}$$ This is the Hamiltonian constraint, which is commonly used in solving the Einstein equations numerically [@53684]. Consequently, the equations of motion that describe the dynamics of the universe are $$\begin{aligned} \label{eq2.20} \frac{dq}{dt} &= [q,H_{\rm tot}]_{\rm PB},\end{aligned}$$ where $q$ represents $p$, $c$, $\phi$, or $\pi_\phi$ [@Grain2010]. This set of equations is equivalent to the Friedmann equation and the fluid equation. Holonomy corrections {#ch2.3.2} -------------------- For the quantum corrections in the above formalism, we adopt a semi-classical approach in LQC [@Bojowald2001b]. The $n$th-order holonomized connection variable $c_h^{(n)}$ is defined as [@Chiou2009] $$\begin{aligned} \label{eq2.21} c_h^{(n)} \equiv \frac{1}{\bar{\mu}}\sum_{k=0}^n \frac{(2k)!}{2^{2k}(k!)^2(2k+1)}(\sin\bar{\mu}c)^{2k+1},\end{aligned}$$ where $\bar{\mu}=\sqrt{\Delta/p}$ is the discreteness variable, with $\Delta=2\sqrt{3}\pi\gamma$ being the standard choice of the area gap in the full theory of LQG [@Ashtekar2006b]. One key feature of LQC is that the connection variable in the standard cosmology has to be replaced by holonomies. 
Thus the Hamiltonian of spacetime with the holonomy correction up to the $n$th-order is [@Chiou] $$\begin{aligned} \label{eq2.22} H^{(n)}_{\rm grav,\bar{\mu}} = -\frac{3}{8\pi\gamma^2} (c_h^{(n)})^2 \sqrt{p}.\end{aligned}$$ Finally the new Hamiltonian constraint is [@Chiou; @Mielczarek2010] $$\begin{aligned} \label{eq2.23} H^{(n)}_{\bar{\mu}} = H^{(n)}_{\rm grav,\bar{\mu}} + H_\phi = 0.\end{aligned}$$ We can apply this to the semi-classical approach as was done in GR [@Thiemann1998]. With such quantum corrections, it is obvious that the energy density of the spacetime $\rho_{\rm grav}$ is always finite. The extreme values appear when $\bar{\mu}c$ equals $0$, $\pi/2$, or its multiples. When $\bar{\mu}c=\pi/2$, the Hamiltonian $H^{(n)}_{\rm grav,\mu}$ reaches its minimum and thus $H_\phi$ reaches its maximum. The maximal energy density of the inflaton $\rho_\phi=p^{-3/2}H_\phi$ is called the ‘critical energy density’ and is related to the holonomies as [@Chiou2009] $$\begin{aligned} \label{eq2.24} \rho_{\rm c}^{(n)} &= \frac{\sqrt{3}m_{\rm pl}^4}{16\pi^2\gamma^3} \left[\sum\limits_{k=0}^n \frac{(2k)!}{2^{2k}(k!)^2(2k+1)}\right]^2,\end{aligned}$$ which is confined between $\rho_{\rm c}^{(0)}\simeq 0.82m_{\rm pl}^4$ and $\rho_{\rm c}^{(\infty)}\simeq 2.02m_{\rm pl}^4$. We note that the standard cosmology is recovered ($c_h^{(n)} \rightarrow c$) when $\bar{\mu}c\rightarrow 0$ (that is, when $p\gg 1$). This indicates that the quantum effects are important only when the universe is tiny ($p\sim\Delta$). Consequently the equations of motion can be obtained as $$\begin{aligned} \label{eq2.25} \frac{dq}{dt} &= [q,H^{(n)}_{\bar{\mu}}]_{\rm PB},\end{aligned}$$ which are equivalent to the Friedmann equation and the fluid equation with quantum corrections [@Bojowald2005; @Vandersloot2005]. According to the literature [@Chiou2009a; @Chiou2009], the higher-order effects on the cosmic background are distinguishable but secondary. 
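The quantum bounce can be illustrated with a minimal numerical sketch. We assume here the standard effective form of the $n=0$ holonomy-corrected dynamics as commonly written in the literature, $H^2 = \frac{8\pi}{3}\rho\,(1-\rho/\rho_{\rm c})$ and $\dot{H} = -4\pi(\rho+P)(1-2\rho/\rho_{\rm c})$, with $V(\phi)=0$ (so $\rho=P=\dot\phi^2/2$) and the quoted $\rho_{\rm c}^{(0)}\simeq 0.82$; the exact value of $\rho_{\rm c}$ only sets the scale, not the qualitative behavior.

```python
import math

RHO_C = 0.82  # critical density rho_c^(0) quoted in the text (Planck units)

def derivs(state):
    # state = (a, H, phidot); V(phi) = 0, so rho = P = phidot^2 / 2
    a, H, phidot = state
    rho = 0.5 * phidot**2
    return (a * H,
            -4.0 * math.pi * (2.0 * rho) * (1.0 - 2.0 * rho / RHO_C),
            -3.0 * H * phidot)

def rk4_step(state, dt):
    shift = lambda s, k, f: tuple(x + f * y for x, y in zip(s, k))
    k1 = derivs(state)
    k2 = derivs(shift(state, k1, dt / 2))
    k3 = derivs(shift(state, k2, dt / 2))
    k4 = derivs(shift(state, k3, dt))
    return tuple(x + dt / 6 * (u + 2 * v + 2 * w + z)
                 for x, u, v, w, z in zip(state, k1, k2, k3, k4))

def evolve(t_end, dt):
    # Bounce initial data: a=1, H=0, rho=rho_c (Friedmann constraint saturated)
    state = (1.0, 0.0, math.sqrt(2.0 * RHO_C))
    for _ in range(int(round(t_end / abs(dt)))):
        state = rk4_step(state, dt)
    return state

a_fwd, H_fwd, _ = evolve(0.5, 1e-3)    # after the bounce
a_bwd, H_bwd, _ = evolve(0.5, -1e-3)   # before the bounce
print(a_fwd > 1.0 and H_fwd > 0.0)     # expansion after the bounce
print(a_bwd > 1.0 and H_bwd < 0.0)     # contraction before the bounce
```

With $V=0$ the evolution is exactly time-symmetric about the bounce, matching the solid curves of Fig. \[fig2.1\]; a non-zero $\phi_{\rm B}$ with a potential is what breaks this symmetry in the later sections.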
At the end of this section, we note that we choose the lapse function to be unity in this paper. The time parameter ‘$t$’ therefore corresponds to the coordinate time of an FRW metric in the classical regime. Cosmic time asymmetry {#sec3} ===================== The Bouncing Scenario --------------------- As we have seen in the previous section, the energy density of the scalar field now has a maximum $\rho_{\rm c}^{(n)}$ (when $\bar{\mu}c=\pi/2$) and thus avoids the singularity. To see how this is manifested in the behavior of cosmic expansion, we can use the equations of motion to first obtain the Hubble parameter as [@Chiou] $$\begin{aligned} \label{eq2.26} H &= \frac{\dot{a}}{a} = \frac{2}{p}[p,H^{(n)}_{\bar{\mu}}]_{\rm PB} \nonumber \\ &= \frac{4}{\gamma\sqrt{p}} \cos (\bar{\mu}c) \mathcal{O} _n(\bar{\mu}c) c_h^{(n)},\end{aligned}$$ where $$\begin{aligned} \label{eq2.27} \mathcal{O} _n(\bar{\mu}c) \equiv \sum _{k=0}^{n} \frac{(2k)!}{2^{2k}(k!)^2} (\sin\bar{\mu}c)^{2k}.\end{aligned}$$ The solid curves in Fig. \[fig2.1\] are the numerical solutions of the scale factor $a$, the Hubble parameter $H$, and the comoving Hubble radius $|H^{-1}/a|$, as functions of time $t$. The scale factor $a(t)$ is normalized to unity at $t=0$. It shows that the universe contracts before the quantum bounce and expands after the bounce, with a turning point of $a(0)=1$ corresponding to $\bar{\mu}c=\pi/2$. We refer to the epoch before the bounce as the ‘parent universe’. ![\[fig2.1\]The time evolution of the scale factor (top), the Hubble parameter (middle), and the comoving Hubble radius (bottom). We consider $V(\phi)=0$ in this figure for simply demonstrating the quantum bounce.](Figure01.pdf){width="48.00000%"} The Hubble parameter changes its sign at the bounce. The fact that $H(0)=0$ implies that the comoving Hubble radius $|H^{-1}/a|$ diverges to infinity at the bounce. 
This means that the quantum effects are extremely strong, such that the whole universe is in causal contact at the bounce. The dashed curves in Fig. \[fig2.1\] are the results in the standard cosmology, without the quantum corrections. In this case the universe starts from a singularity at $t=0$, without any causal connections because the comoving Hubble radius is zero at this time. While the solid curves show symmetry in time with respect to the quantum bounce at $t=0$, such symmetry may be broken in general cases. According to Eqs. , , , , and , we have $$\begin{aligned} \label{eq2.28} \frac{1}{2}\dot{\phi}^2 + V(\phi) = \rho_{\rm c}^{(n)},\end{aligned}$$ which is a constant for given $n$. Thus the PKR of the scalar field at the bounce is a free parameter, so we may define a ‘bouncing phase’ as $$\begin{aligned} \label{eq2.29} \theta_{\rm B} = \tan^{-1} \frac{\sqrt{2V(\phi)}}{\dot{\phi}}.\end{aligned}$$ For the cases where $V(\phi)$ is an even or odd function in $\phi$, $\theta_{\rm B}$ determines the level of time asymmetry in the cosmic background dynamics. The case $\theta_{\rm B}=0$ (and thus $\phi=0$ at bounce for the scalar-field potentials considered in this paper) corresponds to a time symmetry with respect to $t=0$; other cases lead to time asymmetry. For the cases where $V(\phi)$ is not symmetric in $\phi$, the cosmic background dynamics is always asymmetric with respect to the bounce. Ref. [@Mielczarek2010] studied a special kind of asymmetric case called the ‘shark-fin type’, which provides a relatively large number of $e$-foldings in the inflation after the quantum bounce. Realistic Scalar Models ----------------------- Eq. , however, is limited to potentials with non-negative $V(\phi)$, and thus cannot be applied to a general potential. Also, the PKR does not have a one-to-one correspondence with the time symmetry. 
For these reasons we directly consider the $\phi$ value at the quantum bounce, labeled $\phi_{\rm B}$, as a free parameter that quantifies the symmetry. Because the number of $e$-foldings in inflation depends on the value of $\phi$, the value of $\phi_{\rm B}$ is more directly related to the intrinsic properties of an inflationary model than $\theta_{\rm B}$ is. For scalar potentials symmetric in $\phi$, the case $\phi_{\rm B}=0$ corresponds to a time-symmetric case; in a time-asymmetric case, the $\phi$ value at the end of deflation would differ from that at the beginning of inflation, leading to a non-zero $\phi_{\rm B}$. Given this new parameter $\phi_{\rm B}$, we first consider the Chaotic inflation $$\begin{aligned} \label{eq2.33} V(\phi) = \frac{1}{2}m_\phi^2\phi^2.\end{aligned}$$ Fig. \[fig2.4\] shows the scale factor and the scalar field as functions of time, at different values of $\phi_{\rm B}$. ![\[fig2.4\]The scale factor (upper panel) and the scalar field (lower panel) as functions of time at different values of $\phi_{\rm B}$, for Chaotic potential.](Figure02.pdf){width="48.00000%"} We have considered the zeroth-order holonomy correction ($n=0$) and chosen the inflaton mass $m_\phi=10^{-6}$ in deriving the results in this figure. It is clear that $\phi_{\rm B}=0$ corresponds to a time-symmetric case, while a larger $\phi_{\rm B}$ corresponds to a larger initial $\phi$ at the beginning of inflation, leading to a larger number of $e$-foldings. In addition, the amount of deflation is smaller when $\phi_{\rm B}$ is larger. The shark-fin type in Ref. [@Mielczarek2010] corresponds to our case with $\phi_{\rm B} \approx 2.7$, where the time asymmetry is approximately maximal. Table \[tab2.1\] shows the number of $e$-foldings for Chaotic inflation with various $\phi_{\rm B}$ and $m_\phi$. 
Table \[tab2.1\]: number of $e$-foldings of inflation for the Chaotic potential, for various $m_\phi$ (rows) and $\phi_{\rm B}$ (columns).

| $m_\phi$ $\backslash$ $\phi_{\rm B}$ | $0$ | $0.9$ | $1.8$ | $2.7$ | $3.6$ |
|---|---|---|---|---|---|
| $10^{-4}$ | $18.5$ | $33.0$ | $52.2$ | $76.0$ | $105$ |
| $10^{-6}$ | $36.3$ | $66.6$ | $107$ | $157$ | $217$ |
| $10^{-8}$ | $60.6$ | $113$ | $183$ | $270$ | $374$ |
| $10^{-10}$ | $91.7$ | $173$ | $280$ | $414$ | $575$ |

Table \[tab2.2\]: number of $e$-foldings of deflation, $N_e^{\rm D}$, for the Chaotic potential.

| $m_\phi$ $\backslash$ $\phi_{\rm B}$ | $0$ | $0.9$ | $1.8$ | $2.7$ | $3.6$ |
|---|---|---|---|---|---|
| $10^{-4}$ | $18.5$ | $8.45$ | $2.56$ | $0.33$ | $3.66$ |
| $10^{-6}$ | $36.3$ | $15.6$ | $3.97$ | $0.25$ | $8.84$ |
| $10^{-8}$ | $60.6$ | $25.1$ | $5.64$ | $0.19$ | $16.5$ |
| $10^{-10}$ | $91.7$ | $36.9$ | $7.55$ | $0.17$ | $26.2$ |

Next we consider the $R^2$ inflation $$\begin{aligned} \label{eq2.34} V(\phi) = m_{\rm H}^4 \left( 1 - e^{-\sqrt{\frac{2}{3}}\phi} \right),\end{aligned}$$ where $m_{\rm H}$ is the inflaton mass, which is normally denoted as $\Lambda$ in the literature. Here the subscript ‘H’ stands for the Higgs-like particle. To clarify, this $R^2$ potential is not a quantum field in Starobinsky gravity but a classical field in GR. The resulting time evolutions of the scale factor and the scalar field at different values of $\phi_{\rm B}$ are presented in Fig. \[fig2.5\], where we have used $n=0$ and $m_{\rm H}=10^{-2}$. ![\[fig2.5\]The same as Fig. \[fig2.4\] for the $R^2$ potential.](Figure03.pdf){width="48.00000%"} Unlike the Chaotic inflation, here we see no case with time symmetry, simply because the $R^2$ potential is not symmetric in $\phi$. These results also indicate that the cosmological inflation occurs naturally after the quantum bounce, with its initial condition unambiguously and naturally determined rather than manipulatively designed. This fact was previously studied for both time-symmetric background [@Chiou] and time-asymmetric background [@Mielczarek2010]. 
The four conditions required for solving the four coupled equations of motion are the Hamiltonian constraint $H^{(n)}_{\bar{\mu}}=0$, the turning point condition $\bar{\mu}c=\pi/2$, the value of $\phi_{\rm B}$, and the normalization of the scale factor $a$. Cosmological Deflation {#sec4} ====================== Quantifying Deflation {#sec4.1} --------------------- When we look into the epoch right before the quantum bounce, the scalar field may induce a damped contraction of the space, which we call the ‘cosmological deflation’. During the deflation, we have $$\begin{aligned} \label{eq2.35} \dot{a}<0, \quad \ddot{a}>0.\end{aligned}$$ In contrast to inflation, the comoving Hubble radius grows with time during deflation. In other words, the size of the causally connected region is increasing. In addition, the energy densities and thus the perturbations are increasing. All these may counteract the inflationary effects that we need for resolving the cosmological conundrums, so a scenario with comparatively little deflation is in general needed. This in turn requires asymmetry in time with respect to the quantum bounce. ![\[fig2.6\]Evolution of the normalized Chaotic potential before quantum bounce. The time goes leftwards in the figure, with the bounce as its origin.](Figure04.pdf){width="48.00000%"} Most of the formalisms used for the study of inflation are equally useful for the study of deflation, for example, the slow-roll approximation. Fig. \[fig2.6\] shows how the Chaotic potential evolves with time before the quantum bounce. Deflation takes place when the slope is small, and thus near the peaks of the curves in the figure. For deflation we define the number of $e$-foldings, similarly to that of inflation, as $$\begin{aligned} \label{eq2.36} N_e^{\rm D} \equiv \ln \left( \frac{a_{\rm b}^{\rm D}}{a_{\rm e}^{\rm D}} \right),\end{aligned}$$ where $a_{\rm b}^{\rm D}$ and $a_{\rm e}^{\rm D}$ are the scale factors at the beginning and the end of deflation respectively. 
For a Chaotic potential under the slow-roll approximations, with $\phi_{\rm e}^{\rm D}$ the $\phi$ value at the end of deflation, this reduces to [@Jiun-HueiProtyWu1996] $$\begin{aligned} \label{eq2.37} N_e^{\rm D} \simeq 2\pi \left(\phi_{\rm e}^{\rm D}\right)^2 - \frac{1}{2} = 4\pi \frac{V(\phi_{\rm e}^{\rm D})}{m_{\phi}^2} - \frac{1}{2}.\end{aligned}$$ Combining this with Fig. \[fig2.6\], we see the dependence of $N_e^{\rm D}$ on $\phi_{\rm B}$. The dependence of $N_e^{\rm D}$ on $m_\phi$ is implicit, as $a_{\rm b}^{\rm D}$ and $a_{\rm e}^{\rm D}$ depend on $m_\phi$. Table \[tab2.2\] shows the dependence of $N_e^{\rm D}$ on some discrete values of $m_\phi$ and $\phi_{\rm B}$. We see that for a fixed value of $m_\phi$ the case $\phi_{\rm B}=2.7$ always gives the least amount of deflation, as we can also see in Fig. \[fig2.6\] when combined with Eq. . A comparison between Tables \[tab2.1\] and \[tab2.2\] also shows that the case $\phi_{\rm B}=0$ has the same amount of inflation and deflation, so their effects are expected to be mutually canceled out. This is the time-symmetric case. Such scenarios are of less interest to us because the cosmological conundrums reappear here. In the following we shall discuss the circumstances where such cancellation can be minimized. Minimizing Deflation {#ch2.3.4} -------------------- We first numerically determine how $N_e^{\rm D}$ depends on $\phi_{\rm B}$. For the Chaotic potential, Fig. \[fig2.7\] shows the $N_e^{\rm D}$ as a function of $\phi_{\rm B}$ at different but fixed values of $m_\phi$. We use $\phi_{\rm crit}$ to denote the value of $\phi_{\rm B}$ at which the minimum $N_e^{\rm D}$ occurs in a curve. It is interesting to note that the minimum values of $N_e^{\rm D}$ in all cases are about the same, $0.17$. We also find that $\phi_{\rm crit}$ increases with $m_\phi$. 
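In such a numerical determination, the deflation interval entering $N_e^{\rm D}$ can be located by testing the conditions $\dot{a}<0$ and $\ddot{a}>0$ with finite differences on a sampled $a(t)$. A minimal sketch (the sampled curve below is a toy quadratic, not a solution of the LQC equations):

```python
import numpy as np

def deflation_mask(t, a):
    # Deflation condition: adot < 0 and addot > 0, via central differences
    adot = np.gradient(a, t)
    addot = np.gradient(adot, t)
    return (adot < 0) & (addot > 0)

# Toy check: a(t) = 1 + (t - 1)^2 on t in [0, 0.9] has
# adot = 2(t - 1) < 0 and addot = 2 > 0 everywhere
t = np.linspace(0.0, 0.9, 200)
a = 1.0 + (t - 1.0) ** 2
print(bool(deflation_mask(t, a)[1:-1].all()))   # -> True
```

Applied to an actual numerical solution, $N_e^{\rm D}$ is then $\ln(a_{\rm b}^{\rm D}/a_{\rm e}^{\rm D})$ evaluated at the endpoints of the masked interval.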
![\[fig2.7\]The number of $e$-foldings $N_e^{\rm D}$ for Chaotic deflation as functions of $\phi_{\rm B}$ for different $m_\phi$.](Figure05.pdf){width="48.00000%"}

Table \[tab2.3\]: the critical value $\phi_{\rm crit}$ for different $m_\phi$ (Chaotic potential).

| $m_\phi$ | $10^{-3}$ | $10^{-4}$ | $10^{-5}$ | $10^{-6}$ | $10^{-7}$ | $10^{-8}$ | $10^{-9}$ | $10^{-10}$ |
|---|---|---|---|---|---|---|---|---|
| $\phi_{\rm crit}$ | $1.83$ | $2.20$ | $2.58$ | $2.95$ | $3.33$ | $3.70$ | $4.08$ | $4.45$ |

Table \[tab2.3\] summarizes the $\phi_{\rm crit}$ for different $m_\phi$. Here we find, surprisingly, that $\phi_{\rm crit}$ has a linear relationship with the order of magnitude of $m_\phi$: $$\begin{aligned} \phi_{\rm crit} = 0.70 - 0.37 \log_{10}(m_\phi).\end{aligned}$$ On the other hand, for each curve in Fig. \[fig2.7\], we note that the value of $N_e^{\rm D}$ increases more dramatically when $\phi_{\rm B}$ departs from $\phi_{\rm crit}$ to a larger value than to a smaller value. This can be explained in Fig. \[fig2.9\], where we plot the comoving Hubble radius (upper panel) and the scalar field (lower panel) both as functions of time, for the case $m_\phi=10^{-6}$. ![The time evolution of the comoving Hubble radius (upper panel) and the scalar field (lower panel) with $\phi_{\rm B}=2.75$ ($< \phi_{\rm crit}$; brown dashed), $2.95$ ($\approx \phi_{\rm crit}$; orange solid), and $2.98$ ($> \phi_{\rm crit}$; red dashed) for Chaotic potential. The vertical lines denote the beginning (right) and the end (left) of deflation, as the time goes leftwards in the plots.[]{data-label="fig2.9"}](Figure07.pdf){width="42.00000%"} We consider three cases: $\phi_{\rm B} < \phi_{\rm crit}$ (brown dashed), $\phi_{\rm B} \approx \phi_{\rm crit}$ (orange solid), and $\phi_{\rm B} > \phi_{\rm crit}$ (red dashed). In the upper panel, the parts of the curves with negative slopes (increasing Hubble radius) indicate the periods when deflation takes place. 
These periods are shaded down to the lower panel, and we see that the change in $\phi$ during deflation is obviously larger in the case when $\phi_{\rm B} > \phi_{\rm crit}$ (red dashed), resulting in the larger amount of deflation seen in Fig. \[fig2.7\]. We also note that in the lower panel of Fig. \[fig2.9\] the parts of the curves that cross $\phi=0$ can be thought of as the ‘inverse reheating’, at which inflatons are produced by other particles. This is a period when all existing particles are converted into inflatons. This epoch always takes place before the deflation, so the scenario is like a mirror process of the inflation. For the $R^2$ potential, the counterpart results are shown in Fig. \[fig2.8\] and Table \[tab2.4\]. There is a linear relationship between $\phi_{\rm crit}$ and the order of magnitude of $m_{\rm H}$ as well: $$\begin{aligned} \phi_{\rm crit} = 0.68 - 0.75 \log_{10}(m_{\rm H}).\end{aligned}$$ Again the minimum values of $N_e^{\rm D}$ in all cases are about the same, $0.15$, and for a fixed $m_{\rm H}$ the amount of deflation $N_e^{\rm D}$ increases more quickly when $\phi_{\rm B}$ departs from $\phi_{\rm crit}$ to a larger value than to a smaller value. We verified that the reason for this is the same as discussed in Fig. \[fig2.9\]. In summary, for the selected inflationary models in this paper, the amount of deflation is minimized when $\phi_{\rm B}$ reaches $\phi_{\rm crit}$. For the Chaotic potential this corresponds to the ‘most’ shark-fin type (see Fig. \[fig2.4\]). Since this $\phi_{\rm B}$ is model dependent and closely related to the amounts of deflation and inflation, we may use observations to constrain $\phi_{\rm B}$ and thus the model parameters. 
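The linear relations quoted above can be recovered directly from the tabulated $\phi_{\rm crit}$ values by a least-squares fit; a sketch for the Chaotic case, with the data copied from Table \[tab2.3\]:

```python
import numpy as np

# phi_crit vs log10(m_phi), copied from Table [tab2.3] (Chaotic potential)
log_m = np.array([-3., -4., -5., -6., -7., -8., -9., -10.])
phi_crit = np.array([1.83, 2.20, 2.58, 2.95, 3.33, 3.70, 4.08, 4.45])

# Degree-1 least-squares fit; np.polyfit returns (slope, intercept)
slope, intercept = np.polyfit(log_m, phi_crit, 1)
print(f"phi_crit = {intercept:.2f} + ({slope:.2f}) * log10(m_phi)")
# Reproduces the quoted fit phi_crit = 0.70 - 0.37 log10(m_phi)
```

The same procedure on Table \[tab2.4\] would yield the $R^2$ relation, noting that the last tabulated entry there departs from the linear trend.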
![The number of $e$-foldings $N_e^{\rm D}$ for $R^2$ deflation as functions of $\phi_{\rm B}$ for different $m_{\rm H}$.[]{data-label="fig2.8"}](Figure06.pdf){width="48.00000%"}

Table \[tab2.4\]: the critical value $\phi_{\rm crit}$ for different $m_{\rm H}$ ($R^2$ potential).

| $m_{\rm H}$ | $10^{-2}$ | $10^{-2.5}$ | $10^{-3}$ | $10^{-3.5}$ | $10^{-4}$ | $10^{-4.5}$ | $10^{-5}$ |
|---|---|---|---|---|---|---|---|
| $\phi_{\rm crit}$ | $2.18$ | $2.55$ | $2.93$ | $3.30$ | $3.68$ | $4.05$ | $1.30$ |

Conclusion {#sec6} ========== We employed the parameter $\phi_{\rm B}$ to discuss the time asymmetry in the cosmic background evolution with respect to the quantum bounce. It is particularly noted that the time-symmetric scenarios should be avoided because in such cases deflation and inflation may counteract each other, likely leaving the cosmological conundrums unresolved. In terms of the number of $e$-foldings, there is a critical value of $\phi_{\rm B}$ at which the amount of deflation is minimized. This critical value $\phi_{\rm crit}$ depends on the model parameters, namely $m_\phi$ and $m_{\rm H}$ for the Chaotic and $R^2$ potentials, respectively, in our demonstrations. Thus when we study any model in LQC, we should be cautious about the level of time asymmetry in order to have sufficient inflation that is not pre-canceled by the deflation before the quantum bounce. Within this context, other issues such as the cosmological perturbations also require proper treatment. In this regard we proposed a new formalism for evolving the tensor perturbations (gravitational waves) [@Chang2018a]. All these will need to pass the observational tests such as the cosmic microwave background in the near future. We acknowledge the support from the Ministry of Science and Technology, Taiwan (MOST 103-2628-M-002-006-MY4).
--- author: - 'B. Külebi' - 'S. Jordan' - 'E. Nelan' - 'U. Bastian' - 'M. Altmann [^1]' bibliography: - '15237.bib' date: 'Received 18 June 2010/ Accepted 26 July 2010' title: 'Constraints on the origin of the massive, hot, and rapidly rotating magnetic white dwarf  from an HST parallax measurement' --- [We use the parallax measurements of  to determine its mass, radius, and cooling age and thereby constrain its evolutionary origins.]{} [We observed  with the Hubble Space Telescope’s Fine Guidance Sensor to measure the parallax of  and its binary companion, the non-magnetic white dwarf . In addition, we acquired spectra of comparison stars with the Boller & Chivens spectrograph of the SMARTS telescope to correct the parallax zero point. From the corrected parallax, we determine the radius, mass, and cooling age with the help of evolutionary models from the literature.]{} [The properties of  are constrained using the parallax information. We discuss the different cases of the core composition and the uncertain effective temperature. We confirm that  is close to the Chandrasekhar mass limit in all cases and almost as old as its companion .]{} [The precise evolutionary history of  depends on our knowledge of its effective temperature. It is possible that it had a single-star progenitor if we assume that the effective temperature is at the cooler end of the possible range from $30\,000$ to $50\,000$K; if ${\hbox{$T_{\rm eff}$}}$ is instead at the hotter end, a binary-merger scenario for  becomes more plausible.]{} Introduction ============ is a unique hydrogen-rich white dwarf, which was discovered as an EUV source by the ROSAT Wide Field Camera [@Barstowetal95]. An analysis of follow-up spectroscopy established that the stellar surface is covered by a very strong magnetic field with a range of about 170-660MG, implying that  has one of the strongest magnetic fields detected so far in a white dwarf. 
The optical spectrum together with UV observations taken with the IUE satellite and the Hubble Space Telescope indicated that  possesses a very high effective temperature in the range from 30000 to 55000K; @Barstowetal95 achieved their best fit for about 49000K. A careful analysis of the EUVE spectrum using the interstellar medium Lyman lines to account for the interstellar extreme ultraviolet absorption implied an effective temperature of $33\,800\,$K [@Vennesetal03]. Within these constraints,  is one of the hottest known magnetic white dwarfs (MWDs); in any case, it has the highest known temperature of all MWDs with a field strength above $20$MG [@Kawkaetal07; @Kulebietal09]. @Barstowetal95 performed high-speed photometry demonstrating that the optical brightness of  varies almost sinusoidally with a period of $725.4\pm0.9$sec and an amplitude of more than $0\fm 1$; these results were confirmed by @Vennesetal03, who inferred a period of $725.727\pm0.001$sec from the variation in the circular polarisation. The only reasonable explanation of these results is rotation, implying that   rotates more rapidly than any other known white dwarf that is not a member of a close binary. The photometric variation must be caused by differences in the brightness on various parts of the stellar surface. Since no strong absorption lines are detected in the optical, a possible explanation may be a variation in the effective temperature over the stellar surface; the reason for this temperature inhomogeneity is currently not well understood but is probably connected to stronger or weaker contributions to the magnetic pressures in the stellar atmosphere at different locations on the stellar surface with different magnetic field strengths. To achieve a clearer insight into the evolution of , @Burleighetal99 obtained phase-resolved far-UV Hubble Space Telescope () Faint Object Spectrograph spectra.
They found that the previous optical results could generally be confirmed, but that the splitting of the  component into subcomponents implied that the field is probably more complicated than indicated by the mean optical spectrum. By compiling a time series of spectra, a model for the magnetic field morphology across the stellar surface was produced using the radiation-transfer models through a magnetised stellar atmosphere from @Jordan92 [see paper for a basic description of the methods] and an automatic least-squares procedure. The magnetic geometry could be equally well described by an offset magnetic dipole ($x_{\rm off}=0.057$, $y_{\rm off}=0.004$, and $z_{\rm off}=-0.220$ stellar radii), which produces a surface field strength distribution in the range 140-730MG, or by an expansion into spherical harmonics up to $l=3$ in which the surface field strengths are constrained to be within the range 180-800MG. The mass of the white dwarf was constrained by estimating the absolute magnitudes (or absolute fluxes) calculated from the spectroscopic fit parameters $T_{\rm eff}$ and $\log g$ and white dwarf evolutionary models [e.g. @Wood95; @BenvenutoAlthaus99]. The determination of the mass of  is, in general, not straightforward because of the effects of strong magnetic fields; the usual method of using the Stark broadening of the spectral lines to determine $\log g$ and subsequently a mass-radius relation fails in the presence of a magnetic field of several hundred MG; the reason is that the standard theory for Stark broadening assumes degenerate energy levels, but the magnetic field helps remove this degeneracy. Nevertheless, the mass determination of  can be improved by the knowledge of its distance.   is inferred to be in a wide-binary double-degenerate system from its visual companion, a non-magnetic DA white dwarf () $7^{\prime\prime}$ away.
This object was analysed initially by @Barstowetal95 and later by @Kawkaetal07 (for fit parameters see Table\[table:LB\]). @Barstowetal95 derived a distance in the range 33$-$37pc with these parameters using the evolutionary models of @Wood92. The physical companionship of  and  has recently been confirmed by *Spitzer* IRAC observations [@Farihietal08] that demonstrated the common proper motion nature of the system. With an effective temperature of $50\,000$K and assuming a distance of 36pc, @Barstowetal95 concluded that the radius of  is about $0.0035$ with a corresponding extreme mass of $1.35$ ($\log g = 9.5$). Later, @VennesKawka08 derived a mass of $1.32\,\pm\,0.03$ using $T_{\rm eff} = 33\,800$K, $\log g = 9.4$, and 27pc for the distance. If these conclusions are true,  would not only be one of the hottest known MWDs but also the most massive ($\approx 1.35$) isolated white dwarf discovered so far (due to the large separation of  and , we can assume that both stars did not interact during stellar evolution); only two other white dwarfs are known with masses in excess of $1.3$: LHS4033 with a mass in the range 1.31-1.34 [@Dahnetal04] and the magnetic white dwarf PG1658+441 with $1.31\pm 0.02$ [@Schmidtetal92]. From the theory of stellar evolution, there are two different ways to produce such massive white dwarfs: either by single-star evolution of a star with an initial mass higher than 7 or 8 [@Dobbieetal06; @Casewelletal09; @Salarisetal09] or from the merger of two white dwarfs with C/O cores [see e.g. @Segretainetal97]. The latter scenario is supported by the rapid rotation of . @JordanBurleigh99 measured the circular polarisation to have a degree of 20% at a wavelength of 5760Å, the strongest ever found in a MWD.
Together with the assumed small radius and strong gravity in the stellar photosphere, this also made  a test object for setting limits on gravitational birefringence predicted by theories of gravitation that violate the Einstein equivalence principle [@Preussetal05]. Since the mass determination of  was based entirely on the uncertain spectroscopic distance of the system, we applied for observing time with the  to measure the trigonometric parallaxes of the white dwarf binary system to either confirm or refute the conclusions of @Barstowetal95. In this paper, we present the analysis of the parallax measurement with ’s Fine Guidance Sensor (FGS).

--------- ------- ----------------- --------------- --------
Source    $V$     $T_{\rm eff}$/K   $\log g$        $d$/pc
1         14.11   16030$\pm$230     8.19$\pm$0.05   33-37
2         -       16360$\pm$80      8.41$\pm$0.02   30
3         13.90   15580$\pm$200     8.36$\pm$0.05   27
--------- ------- ----------------- --------------- --------

: Spectroscopically derived parameters of LB 9802.[]{data-label="table:LB"}

${}^1$@Barstowetal95; ${}^2$@Ferrarioetal97; ${}^3$ @Kawkaetal07

Observation
===========

Observations with the FGS of the
---------------------------------

The observations of the magnetic white dwarf  ($\alpha_{\rm ICRS}=03^{\rm h}17^{\rm m} 16\fs1750$, $\delta_{\rm ICRS}=-85\degr 32^{\prime} 25\farcs 45$) and its non-magnetic white dwarf companion   ($\alpha_{\rm ICRS}=03^{\rm h}17^{\rm m} 19\fs 3050$, $\delta_{\rm ICRS}=-85\degr 32^{\prime} 31\farcs 15$) with the Hubble Space Telescope were performed with the Fine Guidance Sensor 1r (FGS1r) at three epochs (March 2007, September 2007, and March 2008; see Table\[table:visits\]). The Fine Guidance Sensor is a two-axis, white-light shearing interferometer that measures the angle between a star and ’s optical axis by transferring the star’s collimated and compressed light through a polarising beam splitter and a pair of orthogonal Koesters prisms [see @Nelanetal98; @Nelan10 for a description of the instrument design].
When FGS1r is operated as a science instrument,  pointing is held fixed and stabilized by FGS2 and FGS3, which operate as guiders. To derive an astrometric solution for position, proper motion, and parallax, , , and the reference field stars had to be observed at a minimum of three epochs, preferably at the seasons of maximum parallax factor, to allow us to cleanly separate their parallaxes from their proper motions. These seasons are separated by about six months. Fortunately, the epochs of maximum parallax factor also resulted in  roll angles (which are constrained by date) such that the two white dwarf stars and the optimal set of astrometric reference stars could be observed at all epochs. Figure\[fig:parallax\] shows the parallactic ellipse and the orientations of the FGS aperture at the times of the observations (there were two March epochs, 2007 and 2008). Experience shows that a minimum of two orbits per epoch is required to achieve the highest possible accuracy in the final parallaxes. Table\[table:visits\] provides the dates of the six orbits for our  programme. Since the two white dwarfs are only $\approx7''$ apart, we were able to use the same reference stars for both white dwarfs, using no more  orbits than would be necessary for a single parallax measurement. Using identical reference stars also ensured that the parallax difference between the two putative companion stars was measured more precisely than their absolute parallaxes, since the measurements share the same correction from relative to absolute parallax. In addition, their relative proper motions can be measured to provide an additional check on whether or not the two white dwarfs constitute a bound pair.
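The geometry behind observing at the seasons of maximum parallax factor can be sketched numerically: a star's parallactic ellipse has semi-axes $\pi$ and $\pi\sin|\beta|$, where $\beta$ is its ecliptic latitude. For this far-southern field, $|\beta|$ is large and the ellipse is nearly circular, so epochs six months apart sample nearly maximal displacement. The latitude below is an illustrative assumption, not a value from the paper.

```python
import math

# Parallactic-ellipse sketch: semi-axes are pi and pi*sin(|beta|),
# where beta is the ecliptic latitude. For a field this close to the
# south celestial pole, |beta| is large and the ellipse is nearly
# circular, so epochs ~6 months apart give maximum displacement.
pi_mas = 34.38                  # measured parallax from the text, mas
beta = math.radians(70.0)       # hypothetical ecliptic latitude
a = pi_mas                      # semi-major axis, mas
b = pi_mas * math.sin(beta)     # semi-minor axis, mas
print(round(a, 2), round(b, 2))
```

With the assumed latitude, the minor axis is within about 6% of the major axis, illustrating why the two maximum-parallax-factor seasons suffice.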
------- ---------------------- ---------------------- ---- 10930 Mar 24 2007 17:54:01 Mar 24 2007 18:53:25 01 10930 Mar 24 2007 19:29:48 Mar 24 2007 20:29:12 02 10930 Sep 27 2007 03:28:55 Sep 27 2007 04:28:19 03 10930 Sep 29 2007 01:47:19 Sep 29 2007 02:46:43 04 11300 Mar 29 2008 02:00:30 Mar 29 2008 02:59:53 01 11300 Mar 29 2008 03:36:19 Mar 29 2008 04:35:42 02 ------- ---------------------- ---------------------- ---- :  orbits for  proposal 10930 and 11300.[]{data-label="table:visits"} Spectroscopy of the astrometric reference stars ----------------------------------------------- [lrrrrrrrr]{} name & $\alpha_{\rm ICRS}$ & $\delta_{\rm ICRS}$ & & & & & &\ & & & & & & & &\ & $03^{\rm h}17^{\rm m} 16\fs1750$ & $-85\degr 32^{\prime} 25\farcs 45$ & $14.90\pm 0.02$ & $-0.16$& $-1.13\footnotemark[1]$ & $0.01$& $-0.11$& $15.09$\ & $03^{\rm h}17^{\rm m} 19\fs 3050$ & $-85\degr 32^{\prime} 31\farcs 15$ & $14.11\pm 0.02$ & $+0.07$& $-0.68$& $-0.06$& $-0.18$& $14.22$\ Ref1&$03^{\rm h} 20^{\rm m} 12\fs918 $&$-85\degr 34^{\prime} 56\farcs 175 $& 9.42 & 0.38\ Ref2&$03^{\rm h} 18^{\rm m} 52\fs01 $&$-85\degr 35^{\prime} 20\farcs 8 $ & 12.27 & 0.50& & & &12.55\ Ref3 &$03^{\rm h} 18^{\rm m} 03\fs1 $&$-85\degr 36^{\prime} 02^{\prime\prime} $& 14.60& 1.14\ Ref6 &$03^{\rm h} 13^{\rm m} 59\fs7 $&$-85\degr 30^{\prime} 16^{\prime\prime} $& 14.00& 1.04\ Ref7 &$03^{\rm h} 15^{\rm m} 55\fs9 $&$-85\degr 30^{\prime} 20^{\prime\prime} $& 15.00& 0.84\ Ref8 &$03^{\rm h} 16^{\rm m} 46\fs3 $&$-85\degr 29^{\prime} 48^{\prime\prime} $& 14.37& 1.10\ Ref9 &$03^{\rm h} 18^{\rm m} 55\fs1 $&$-85\degr 36^{\prime} 42^{\prime\prime} $& 14.36& 0.63\ \ ${}^1$ From @Barstowetal95; ${}^2$ http://tdc-www.harvard.edu/catalogs/gsc2.html; ${}^3$ ==TYC9495-788-1[@Hogetal98];\ ${}^4$ =; ${}^5$ theoretical $B-V$ values interpolated for spectral type and MK class, see Table\[table:comparison\]\ ![The parallactic ellipse of the  field and the orientation of the FGS1r field of view at the dates of the observations. 
The X-axis of the FGS1r is nearly parallel to the line connecting the circles that mark the epochs at which the observations were made.[]{data-label="fig:parallax"}](parfact.png){width="60.00000%"} [rrrrrrrrr]{} star & & & & & & & &\ & & & & && & &\ Ref1 & 9.95 & F3-4V &3.48 & 6.47& 0.41 & 197 & 24 & $5.08^{+0.69}_{-0.55}$\ Ref2 & 12.27 & F7V &3.95 & 8.32& 0.50 & 461 & 55 & $2.17^{+0.29}_{-0.23}$\ Ref3 & 14.60 & K1-2III &0.48 & 14.12& 1.14 & 6668 & 800 & $0.15^{+0.02}_{-0.02}$\ Ref6 & 14.00 & K4V &6.96 & 7.04 & 1.04 & 256 & 31 & $3.91^{+0.53}_{-0.43}$\ Ref7 & 15.00 & K0V &5.98 & 9.02 & 0.84 & 637 & 76 & $1.56^{+0.22}_{-0.16}$\ Ref8 & 14.37 & K1III &0.55 & 13.82 & 1.10 & 5808 & 720 & $0.17^{+0.02}_{-0.02}$\ Ref9 & 14.36 & G3-4V &4.81 & 9.55 & 0.63 & 813 & 98 & $1.23^{+0.23}_{-0.17}$\ \ Since only relative parallaxes can be measured with , we had to estimate the parallaxes of a sample of reference stars in the vicinity of our target objects, which comprise our local reference frame (see Figure\[fig:fc\]). Ref4 and Ref5 were not observed by the FGS1r since they were not needed. Spectra of these surrounding stars of similar (or somewhat larger) brightness than  were taken in service mode with the Boller & Chivens spectrograph of the 1.5m SMARTS telescope, located on Cerro Tololo at the Inter-American Observatory in Chile, on two nights between February 16 and 18, 2008. To ensure that the whole optical range is covered, we performed exposures with two gratings (9/Ic and 32/Ib). Both observing nights were affected by passing clouds and the relatively high airmass ($>1.8$) due to the large declination difference between the observatory’s zenith and the target field. Since this could not be fully corrected using flux standards, the energy distribution in the blue channel may be compromised. The classifications for the reference stars were performed by comparing the flux-calibrated spectra to the templates of [@Pickles98].
Since the Pickles library does not cover all spectral subtypes, interpolation by eye was performed where appropriate. For late G- and especially K-type stars, the MK class III templates were also examined because in some cases the star actually turned out to be a giant; giants can be clearly distinguished from dwarfs, which show an indentation at 5200 Å that the giants do not, or only slightly, exhibit. The few metal-poor and metal-rich templates were also used, although the difference in the Pickles spectra is too small to really make a discrimination in this respect. The absolute magnitude determination was based on an interpolation of the data taken from [@Lang92] and Allen’s astrophysical quantities [@Cox00]. It was achieved by parametrising the spectral type so that spectral type F corresponds to 0, G to 1, K to 2, and M to 3, with the spectral subdivisions corresponding to the first decimal, i.e. a G2 star is represented by 1.2. A 5th-degree polynomial was then fitted to determine the $M_V$ - spectral type relation shown in Figure\[fig:specphot\]. This was performed for both luminosity classes III and V, assuming that all our stars come from these two luminosity classes. The absolute magnitudes of the reference stars were then calculated using these two functions, with their spectral class parametrised in the same way as the argument. The determination of the errors is not straightforward, since not all error sources can be easily quantified. The error in the determination of the spectral type was roughly quantified by calculating, with the same fit function, the absolute magnitudes of the spectral subtypes closest to the determined ones (for those stars where the derived spectral type was in-between two subdivisions, i.e. in the cases of reference stars 1, 3, and 9, the second-next subtype was chosen).
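The spectral-type parametrisation and polynomial calibration described above can be sketched as follows; the $(M_V,$ spectral type$)$ calibration points are hypothetical placeholders, not the actual values interpolated from Lang (1992) and Cox (2000).

```python
import numpy as np

# Encode spectral types as in the text: F=0, G=1, K=2, M=3,
# with the subtype as the first decimal (e.g. G2 -> 1.2).
def encode_sptype(sptype):
    classes = {"F": 0.0, "G": 1.0, "K": 2.0, "M": 3.0}
    return classes[sptype[0]] + int(sptype[1:]) / 10.0

# Illustrative main-sequence calibration points (hypothetical numbers;
# the paper interpolates tabulated values from the literature).
sp = np.array([encode_sptype(s) for s in
               ["F0", "F5", "G0", "G5", "K0", "K5", "M0"]])
mv = np.array([2.7, 3.5, 4.4, 5.1, 5.9, 7.4, 8.8])

# Fit a 5th-degree polynomial M_V(spectral type), as in the text,
# then evaluate it for an arbitrary subtype.
mv_of = np.poly1d(np.polyfit(sp, mv, 5))
print(encode_sptype("G2"))            # 1.2
print(round(float(mv_of(1.2)), 2))    # interpolated M_V for a G2 dwarf
```

A second fit over class III calibration points would give the giant relation; the star's luminosity class then selects which polynomial to evaluate.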
The difference between this absolute magnitude and the absolute magnitude obtained for the star is then our estimate of the error in the absolute magnitude caused by the uncertainty in the spectral classification. This assumes that the error in the spectral type is not larger than one subdivision, which might not be true in all cases but should generally be the case. It was generally found that the difference in absolute magnitude between the measured spectral type and its neighbours is about 0.2 mag, so this value was assumed in all subsequent calculations. This error of 0.2 mag corresponds to an error of 12% in distance (see Table\[table:comparison\], 7th column). The corresponding error in the parallax was used for the correction of the relative parallaxes. Given the relation between parallax and distance, the error in the former is not symmetric even if that of the latter is. The asymmetric nature of the parallax error is represented in column 8 of Table\[table:comparison\]. The errors given in Table\[table:comparison\] do not represent the overall error. A main source of error will most likely be the photometry, which is not of the highest precision. Moreover, our spectra do not allow us to determine the exact evolutionary status of the objects, which influences the accuracy of the absolute magnitude. For the same reason, the influence of metallicity cannot be taken into account, and all stars are assumed to be of solar abundance. Adding these uncertainties with some margin leads to an overall error in distance of 20-30%, with the stars Ref7-9 having the larger errors, since we only have one spectrum (red for Ref7, blue for the other two) for these objects. Since the parallax is the reciprocal of the distance, the stars at large distances are the more reliable ones, especially the two giants (Ref3 and 8). ![The field of the binary WDs  and  and the reference stars Ref1, …, Ref9.
Ref4 and Ref5 were later omitted since they were too faint.[]{data-label="fig:fc"}](fc.jpg){width="50.00000%"} star name $\pi$/mas $\sigma_\pi$/mas $\mu_\alpha$/mas yr$^{-1}$ $\mu_\delta$/mas yr$^{-1}$ $\sigma_{\mu_\alpha}$/mas yr$^{-1}$ $\sigma_{\mu_\delta}$/mas yr$^{-1}$ $\sigma_\xi$/mas $\sigma_\eta$ /mas ----------- ----------- ------------------ ---------------------------- ---------------------------- ------------------------------------- ------------------------------------- ------------------ -------------------- -- 34.380 0.260 -91.165 -15.344 0.435 0.451 0.3427 0.2085 33.279 0.238 -78.894 -27.041 0.424 0.412 0.3042 0.2030 Ref1 4.62 0.39 10.76 19.50 0.782 0.731 0.4544 0.1541 Ref6 3.51 0.40 0.00 0.00 0.000 0.000 0.4862 0.4936 Ref7 1.57 0.00 -21.01 -8.01 0.698 0.702 0.4552 0.3535 Ref8 0.17 0.00 0.00 0.00 0.000 0.000 0.4713 0.3859 Ref9 1.23 0.00 0.00 0.00 0.000 0.000 0.4471 0.2317 ![The spectrophotometric determinations of the absolute magnitude of the reference stars. The abscissa denotes the spectral type encoded in a way that F0 corresponds to 0.0, G0 to 1.0, K0 to 2.0 etc. and the spectral type subdivisions being given by the first decimal. The (red) open squares are the loci of main-sequence stars in this HR-diagram, and the (blue) open hexagons represent the giants (luminosity class III); the two curves show the resulting fits for both luminosity classes. The asterisks show the reference stars of this program. The spectral (sub)type was determined using low resolution spectra and the absolute magnitude was calculated using the fitted polynomial.[]{data-label="fig:specphot"}](re0317_specphot.png){width="50.00000%"} Analysis of the FGS data ======================== Our astrometric measurements used FGS1r in position mode to observe , , and the associated reference field stars. At each of the three epochs, two  orbits were used. Within each orbit, FGS1r sequentially observed each star several times in a round-robin fashion for approximately 30 seconds. 
The standard FGS data reduction algorithms [@NelanMakidon02] were employed to remove instrumental and spacecraft artifacts (such as photon shot noise, spacecraft jitter and drift, optical distortion of the FGS, differential velocity aberration, etc.). The calibrated relative positions of the stars in each of the six visits were combined using a six-parameter overlapping plate technique that solves for the parallax and proper motion of each star. This process employed the least-squares program GaussFit [@Jefferysetal88] to find the best (minimum-$\chi^2$) solution. The results of the FGS measurements for , , and the reference stars are given in Table\[table:results\]. The $\sigma_\xi$ and $\sigma_\eta$ are the 1$\sigma$ errors of the fit of the stars onto the “master plate”. Likewise, the parallax and proper motion errors are the 1$\sigma$ dispersion in those values measured for the individual observations (e.g.  and  were observed approximately four to five times each in every  orbit, for a total of 24 to 30 individual measurements). The errors quoted in Table\[table:results\] are typical of the performance of the FGS1r [for comparison see e.g. @Benedictetal07], indicating that our observations are nominal. The best solution was obtained by directly solving for the trigonometric parallaxes of Ref1 and Ref6, for which we obtain values consistent with their predicted spectroscopic parallaxes. Likewise, we derive the optimal solution when we use the FGS1r data to solve for the proper motions of Ref1 and Ref7. The bulk proper motion of the field is constrained by setting Ref6, Ref8, and Ref9 to have no proper motion. The astrometric reference star Ref2 was not used because FGS1r resolved it to be a wide binary system, which caused an acquisition failure in the second epoch. The parallaxes for  and   differ by 1.101 mas, which is about four times the 1$\sigma$ of their individual errors.
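The per-star astrometric model inside such a plate solution can be illustrated with a toy one-dimensional least-squares fit: position $x(t)=x_0+\mu t+\pi P(t)$, where $P(t)$ is the parallax factor at each epoch. The epochs, parallax factors, and noiseless synthetic data below are invented for illustration; the actual reduction uses the six-parameter GaussFit plate model.

```python
import numpy as np

# Toy 1-D astrometric fit: along-scan position modelled as
#   x(t) = x0 + mu * t + pi * P(t),
# with P(t) the parallax factor at each epoch. Synthetic, noiseless
# data; mu and pi are set to the values quoted for the magnetic WD.
t = np.array([0.0, 0.5, 1.0])      # epochs in years (Mar, Sep, Mar)
P = np.array([0.9, -0.9, 0.9])     # illustrative parallax factors
x0_true, mu_true, pi_true = 100.0, -91.165, 34.380   # mas, mas/yr, mas

x = x0_true + mu_true * t + pi_true * P   # simulated positions

# Design matrix with columns for x0, mu, and pi; solve by least squares.
A = np.column_stack([np.ones_like(t), t, P])
sol, *_ = np.linalg.lstsq(A, x, rcond=None)
print(np.round(sol, 3))   # recovers [x0, mu, pi]
```

Three epochs with alternating parallax factors are exactly what make the parallax separable from the proper motion, which is why the observations were scheduled at the seasons of maximum parallax factor.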
This result includes an application of the standard “lateral colour” correction that removes the apparent shift of an object’s position in the FGS field of view due to the refractive elements in the instrument’s optical train. The correction is given as $\delta x=(B-V)\cdot lcx$ and $\delta y=(B-V)\cdot lcy$. The coefficients $lcx=-1.09$ mas and $lcy = -0.68$ mas are derived from the observed relative positions of two calibration stars, LATCOL\_A ($B-V=1.9$) and LATCOL\_B ($B-V=0.2$), at several  roll angles. However,  is significantly hotter and bluer than the blue calibration star LATCOL\_B. It is clear from Figure\[fig:parallax\] that an error in the lateral colour correction (especially, in this case, along the FGS X-axis, which is nearly aligned with the line connecting the two circles marking the dates of the observations) will produce an error in the measured parallax. To evaluate the validity of applying the standard lateral colour correction (which is based solely on a star’s value of $B-V$) to , we revisited the interpretation of the astrometric results of the lateral colour calibration observations. Details of this “plausibility” investigation will be published as an STScI FGS Instrument Scientist Report (Nelan, in preparation) but are summarized here. The spectral energy distributions (SEDs) of the two lateral colour calibration stars, as well as those of  and , were convolved with the wavelength-dependent sensitivity of the FGS over its bandpass (the sensitivity decreases from $\approx20\%$ at $4000\AA$ to $\approx2\%$ at $7000 \AA$ in a near linear fashion, where sensitivity refers to the probability that a photon will be detected). The number of photons observed (i.e., actually detected) by the FGS for each star at a given wavelength ($N_{photons}(\lambda)$) was normalized to unity at (for the moment) an arbitrary $\lambda_{o}$.
The effective FGS colour of each star was then defined to be the ratio of the wavelength-weighted “blue” sum $\sum (\lambda_{o}-\lambda)\,N_{photons}(\lambda)$ over all $\lambda<\lambda_{o}$ to the corresponding “red” sum $\sum (\lambda-\lambda_{o})\,N_{photons}(\lambda)$ over all $\lambda\geq\lambda_{o}$ across the FGS bandpass. The value of $\lambda_{o}$ is the boundary between the blue and the red such that, for a source emitting the same number of photons at every wavelength, the blue and red wavelength-weighted sums are equal and the colour ratio is unity. For the FGS, we find that $\lambda_{o}=5092\AA$. The SEDs of both the red calibration star LATCOL\_A and  were represented as black-body curves with $T=2\,900$K and $T=50\,000$K, respectively, while LATCOL\_B and  were represented by stellar model atmospheres using a code based upon the Kurucz models. For LATCOL\_B, a solar abundance, ${\hbox{$T_{\rm eff}$}}=8\,000$K, and $\log g=4.1$ were assumed. For , we assumed a hydrogen-atmosphere white dwarf with ${\hbox{$T_{\rm eff}$}}=16\,030$K and $\log g=8.2$. Using these SEDs, we computed for each star the wavelength-weighted blue/red ratio described above, for which we found blue/red = 0.13, 1.42, 1.79, and 2.54 for LATCOL\_A, LATCOL\_B, , and , respectively. (A more precise estimate of the blue/red ratios for these four stars would use observed SEDs, which are currently unavailable. Here we simply evaluate the plausibility of this concept.) If we assume that the lateral colour shift in the relative position of two stars is proportional to the difference in their blue/red ratios, we can use the astrometric results of the lateral colour calibration, which found that the blue star LATCOL\_B was shifted by $-1.87$ mas relative to LATCOL\_A, together with their blue/red ratios, to determine the proportionality constant $\alpha=-1.85/(1.42-0.13)=-1.44$ mas. Applying this to  and , we find the lateral colour-induced shift in the position of  relative to   to be $-1.08$ mas.
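The arithmetic of this plausibility check is a one-liner per step and can be reproduced directly from the quoted blue/red ratios (dictionary keys are illustrative labels for the four stars).

```python
# Lateral-colour plausibility check (numbers from the text): the
# positional shift between two stars is assumed proportional to the
# difference of their blue/red flux ratios over the FGS bandpass.
blue_red = {"LATCOL_A": 0.13, "LATCOL_B": 1.42,
            "companion": 1.79, "MWD": 2.54}

# Calibration pair fixes the proportionality constant (mas per unit
# ratio difference); the text quotes alpha of about -1.44 mas.
alpha = -1.85 / (blue_red["LATCOL_B"] - blue_red["LATCOL_A"])

# Predicted colour-induced shift of the magnetic white dwarf
# relative to its companion.
shift = alpha * (blue_red["MWD"] - blue_red["companion"])
print(round(shift, 2))   # -1.08 mas, as quoted
```

This reproduces the $-1.08$ mas shift used in the discussion that follows.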
The parallax result cited in Table\[table:results\] already includes a lateral colour correction of $-0.25$ mas in the position of  relative to  (based solely upon the $B-V$ of each star). This differs by $-0.83$ mas when using the difference in their blue/red ratios. If we apply this correction, the parallax difference of the two stars is reduced to 0.27 mas, which is $\approx$1$\sigma$ of the individual measurements. We conclude that the two white dwarfs have the same parallax, and that this 0.27 mas difference is caused by the imprecise model SEDs used to construct the blue/red ratios. The measured parallax of   is also affected by errors in the lateral colour correction, but to a lesser extent since, at $B-V=0.07$, it is closer to the colour of the blue calibration star LATCOL\_B ($B-V=0.2$). Nonetheless, the parallax quoted in Table\[table:results\] may be too large by up to 0.4 mas, based on the difference in the predicted relative shift between two stars with $B-V$ = 0.2 and 0.07 using the standard lateral correction and the blue/red ratio correction. Given the imprecision of the blue/red-based correction, we take the parallax of   to be $\pi=33.279\pm0.238$ mas using the standard lateral colour correction.  is $7^{\prime\prime}$ distant from  at a position angle P.A. = $145.856\degr$ as measured by FGS1r. From the measured proper motions (Table\[table:results\]),  is moving away from  at $16.26\pm0.86$ mas yr$^{-1}$ along a position angle of $133.62\degr$, which is nearly aligned with the line connecting the two stars. The computation of the proper motions is dominated by the observations from the first and third epochs, which were performed at the same   orientation. Therefore the uncertainty in the lateral colour correction has no effect. At a distance of 30.05 pc (calculated from the parallax of ), this corresponds to $0.489\pm0.026$AU yr$^{-1}$, i.e. $2.33\pm0.12$km s$^{-1}$. We compare this tangential space velocity with an estimated orbital speed.
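The proper-motion and orbital-speed figures can be cross-checked with a short script; all inputs are taken from the text, and the orbital speed assumes a circular orbit at the projected separation.

```python
import math

# Tangential velocity from the relative proper motion (text values).
mu = 16.26                         # relative proper motion, mas/yr
parallax = 33.279                  # adopted parallax, mas
d_pc = 1000.0 / parallax           # distance, ~30.05 pc
v_tan = mu / 1000.0 * d_pc         # tangential velocity, AU/yr
v_tan_kms = v_tan * 4.74           # 1 AU/yr is ~4.74 km/s

# Circular-orbit speed for a 210 AU projected separation and the
# quoted total-mass range (Kepler's third law in solar units).
a_au = 210.0
for m_tot in (2.31, 2.02):
    P_yr = math.sqrt(a_au**3 / m_tot)            # ~2000-2150 yr
    v_orb = 2.0 * math.pi * a_au / P_yr * 4.74   # ~2.9-3.1 km/s
    print(round(P_yr), round(v_orb, 2))

print(round(v_tan, 3), round(v_tan_kms, 2))   # ~0.489 AU/yr, ~2.3 km/s
```

Within rounding, this reproduces the quoted periods, orbital speeds, and tangential velocity, supporting the bound-pair interpretation.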
If we assume that this is a bound binary system with a separation of $7^{\prime\prime}$ (210 AU at 30.05 pc) and with a total mass ranging from 2.02 to 2.31 ${\mbox{\,$\rm M_{\sun}$}}$ (see Sect.\[section:mass\]), a circular orbit yields a period of 2004 yr (for the higher mass estimate) to 2143 yr (for the lower mass); this corresponds to orbital speeds of 3.12-2.92 km s$^{-1}$ for  with respect to . These estimates are comparable to the tangential space velocity measured by FGS. This result and the close spatial proximity of the two stars support the conclusion that  and  constitute a bound system. Although the FGS photometry shows a peak-to-peak amplitude variation between $V=14.60$ and $V=14.84$ (with $0.01$ accuracy estimated using  as a reference), consistent with the result from @Barstowetal95, the sampling was not good enough to confirm the 725-second photometric variability quantitatively by means of a Fourier analysis of this sparse data set. Determination of the stellar parameters ======================================= Mass and radius determinations of  and   {#section:mass} ---------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![Contour plots for $|M_V^{\rm obs}-M_V^{\rm theo}|/$mag as a function of mass in ${\mbox{\,$\rm M_{\sun}$}}$ and ${\hbox{$T_{\rm eff}$}}$ for CO (top),
ONe (bottom) core compositions constructed according to Eq.\[eq:MV\] and theoretical models from @Wood95, @HolbergBergeron06 for the CO models, and @Althausetal05 [@Althausetal07] for the ONe models. The bar to the right indicates the colour coding for the magnitude differences, the line in the darkest region $|M_V^{\rm obs}-M_V^{\rm theo}|<0.5$mag delineating $M_V^{\rm obs}-M_V^{\rm theo}=0$, and the vertical lines the possible range of effective temperatures (30000-50000K). []{data-label="fig:fit"}](mvsT_CO02.png "fig:"){width="50.00000%"} ![Contour plots for $|M_V^{\rm obs}-M_V^{\rm theo}|/$mag as a function of mass in ${\mbox{\,$\rm M_{\sun}$}}$ and ${\hbox{$T_{\rm eff}$}}$ for CO (top), ONe (bottom) core compositions constructed according to Eq.\[eq:MV\] and theoretical models from @Wood95, @HolbergBergeron06 for the CO models, and @Althausetal05 [@Althausetal07] for the ONe models. The bar to the right indicates the colour coding for the magnitude differences, the line in the darkest region $|M_V^{\rm obs}-M_V^{\rm theo}|<0.5$mag delineating $M_V^{\rm obs}-M_V^{\rm theo}=0$, and the vertical lines the possible range of effective temperatures (30000-50000K).
[]{data-label="fig:fit"}](mvsT_ONe02.png "fig:"){width="50.00000%"} -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- To determine the mass of , we used synthetic bolometric colours and absolute magnitudes for carbon-oxygen (CO) core white-dwarf cooling models with thick hydrogen layers ($M_{\rm H}/M_\ast=10^{-4}$) [@Wood95; @HolbergBergeron06][^2]; when required, we used oxygen-neon (ONe) core white-dwarf cooling models with hydrogen layers of $M_{\rm H}/M_\ast=10^{-6}$ [@Althausetal05; @Althausetal07][^3]. We determined the “observed” absolute visual magnitude $M_V^{\rm obs}=V+5\log \pi +5=12.51$ mag from $V=14.90$ and $\pi=0.033279^{\prime\prime}$. For a given effective temperature and surface gravity, the theoretical bolometric magnitude $M_{\rm bol}$, the bolometric correction B.C.=$M_{\rm bol}-M_V$, and the mass $m$ of  were calculated. The theoretical absolute visual magnitude was defined by $$\label{eq:MV} M_V^{\rm theo}({\hbox{$T_{\rm eff}$}},m)=M_{\rm bol}({\hbox{$T_{\rm eff}$}},m)-{\rm B.C.}({\hbox{$T_{\rm eff}$}},m).$$ The contour plots for $|M_V^{\rm obs}-M_V^{\rm theo}|$ are shown in Figure\[fig:fit\] for the two possible core compositions.
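The observed absolute magnitude follows from the standard distance-modulus relation $M_V = V + 5 + 5\log_{10}\pi$ (with $\pi$ in arcsec) and can be verified directly:

```python
import math

# "Observed" absolute visual magnitude from the apparent magnitude
# and the measured parallax (values from the text, pi in arcsec).
V = 14.90
pi_arcsec = 0.033279
M_V = V + 5.0 + 5.0 * math.log10(pi_arcsec)
print(round(M_V, 2))   # 12.51
```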
For both compositions, a satisfactory minimum could be reached only for parts of the range of effective temperatures between 30000 and 50000K, because the tables were limited to an upper value of $\log g=9.5$ for the case of the CO cores ($\log g=9.5$ corresponds to a mass of 1.37 for 30000K and a mass of 1.46 for 50000K) and to an upper limit of 1.28 for the ONe models. We calculated the minimum of $|M_V^{\rm obs}-M_V^{\rm theo}|$ for a given mass over our range of effective temperatures; when a mass solution could not be reached inside the calculated grids, we extrapolated the theoretical magnitudes. For an effective temperature of $30\,000$K, we estimated masses of $1.32\pm0.02$ (CO core) and $1.28\pm0.02$ (ONe core). Our CO-core calculations are consistent with the estimates of 1.31-1.37  [@Ferrarioetal97], who assumed a distance of 30pc. The highest temperature for which we could obtain a solution in the $|M_V^{\rm obs}-M_V^{\rm theo}|$ diagram is about $48\,000$K, from which we inferred a mass of 1.46. Any additional extrapolation may introduce substantial uncertainty because we are then approaching the Chandrasekhar limit. In the grid of theoretical values for ONe cores, we performed significant extrapolation to obtain solutions above 30000K (see Figure\[fig:extrapolation\]). For ${\hbox{$T_{\rm eff}$}}=30\,000$K, we obtained a mass of $1.28$ and inferred an error of $\pm 0.015$ from the uncertainty in the observed visual magnitude and the parallax. For an effective temperature of $50\,000$K, we derived $1.38$  with a slightly higher error estimate of $0.020$ due to the uncertainty of the extrapolation. The results are summarised in Table\[table:re\_mass\]. We applied the same procedure to  by using our new parallax measurements and the information from the literature outlined in Table\[table:LB\].
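The grid-minimisation step can be sketched as follows; the model-magnitude grid below is a hypothetical stand-in for the published cooling-model tables, chosen only so that $M_V$ brightens with $T_{\rm eff}$ and dims with mass.

```python
import numpy as np

# Sketch of the fitting procedure: on a grid of model magnitudes
# M_V_theo(T_eff, mass), find for each T_eff the mass minimising
# |M_V_obs - M_V_theo|. The grid is a hypothetical placeholder, not
# the actual Wood (1995) / Althaus et al. cooling-model values.
M_V_obs = 12.51                      # from the text
masses = np.linspace(1.20, 1.50, 31)
teffs = np.array([30000.0, 40000.0, 50000.0])

M_V_theo = (11.5 + 5.0 * (masses[None, :] - 1.20)
            - 1.5 * np.log10(teffs[:, None] / 30000.0))

best = masses[np.argmin(np.abs(M_V_obs - M_V_theo), axis=1)]
for T, m in zip(teffs, best):
    print(int(T), round(float(m), 2))
# The hotter the assumed T_eff, the larger the mass needed to match
# the observed M_V, mirroring the trend in the text (1.32 -> 1.46).
```

Outside the tabulated grid the same relation has to be extrapolated, which is where the quoted extra uncertainty enters.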
Our mass estimate for the visual magnitude given by @Barstowetal95 is consistent with the former results [@Ferrarioetal97; @Kawkaetal07 see Table\[table:lb\_mass\]], although we find that our calculations with the visual magnitude provided by @Kawkaetal07 are incompatible with our mass determination if we assume that the spectroscopically determined masses for  are correct. ![The mass of  versus absolute $V$ magnitudes for an ONe white dwarf. The different curves correspond to the effective temperatures 30000-50000K. Above 1.28, we have to perform an extrapolation for ${\hbox{$T_{\rm eff}$}}>30\,000$K. Since we cannot strictly estimate the extrapolation error, we visually added some uncertainty to the extrapolated values, which was subsequently used to estimate the errors in Table\[table:re\_mass\]. The red line denotes the “observed” $M_V$. []{data-label="fig:extrapolation"}](ONe_highmass_ext01.png "fig:"){width="50.00000%"}\ With the knowledge of $M_{\rm bol}$ for a given mass, the radius can be directly estimated at a given ${\hbox{$T_{\rm eff}$}}$. The radius estimates yield slightly different values when the two core models are considered (see Table\[table:re\_mass\]). This is caused by the different assumptions for the hydrogen layer mass: $M_{\rm H}/M_\ast=10^{-4}$ in the CO cooling models [@Wood95] versus $M_{\rm H}/M_\ast=10^{-6}$ in the ONe cooling models [@Althausetal05]. This produces different luminosities for a given effective temperature.

Age determination of  and {#section:age}
--------------------------

The assessment of the cooling ages of  and  is important to the understanding of the evolutionary history of the system. It was possible to evaluate the cooling ages of both objects with the mass estimates that we determined. For our estimations, we used the grids of white dwarf cooling sequences for CO and ONe cores [e.g.
@Wood95; @BenvenutoAlthaus99] for their respective range of grid parameters; for masses above the available values, we extrapolated the age values in a way similar to that for the visual magnitudes (see Figs.\[fig:extrapolation\] and \[fig:age\_extrapolation\]). Surprisingly, the difference in the cooling age of the two binary components is smaller than formerly estimated. For both assumed chemical compositions, the cooling age of the non-magnetic white dwarf  is within the error bars of the cooling age of the magnetic and very massive  (see Tables\[table:re\_mass\] and \[table:lb\_mass\]). For the case of an ONe core with an effective temperature as high as 50000K, our conclusion is poorly constrained due to the extremely large uncertainties introduced by the extrapolation. Previous age estimates were unreliable because they inferred a cooling age of  shorter than that of , simply based on its higher effective temperature. In the elementary cooling theory of @Mestel65, at a fixed effective temperature of the white dwarf the cooling age depends on the mass and radius as $t_{\rm cool}\propto M/R^2$. Since $R\propto M^{-1/3}$ for low-mass white dwarfs ($<0.5\,{\mbox{\,$\rm M_{\sun}$}}$), their cooling age simply scales as $M^{5/3}$. As the mass of the white dwarf approaches the Chandrasekhar limit, the radius asymptotically approaches zero, which means that ages for a given effective temperature depend even more strongly on the mass. The masses estimated here are quite close to the Chandrasekhar limit ($\geq 1.30\,$), where post-Newtonian corrections should be considered for the stellar equilibrium [@Chandrasekhar64; @ChandrasekharTooper64]. However, these corrections mostly affect the dynamical stability of the star, leading to collapse before reaching the Chandrasekhar limit, but induce only small corrections to the mass-radius relationship.
This is because the estimated radii are three orders of magnitude larger than the Schwarzschild radius: $GM/c^2R_{WD}\sim10^{-3}$. Hence, we do not expect any effect on our mass determinations, as also noted by @KoesterChanmugam90.

The evolutionary history of the and  system {#section:evol}
===========================================

  Core   ${\hbox{$T_{\rm eff}$}}$/K   mass/${\mbox{\,$\rm M_{\sun}$}}$   radius/$0.01{\mbox{\,$\rm R_{\sun}$}}$   $t_{\rm cooling}$/Myr
  ------ ---------------------------- ---------------------------------- ---------------------------------------- -----------------------
  CO     30000                        $1.32\pm0.020$                     $0.405\pm0.011$                          $281_{-31}^{+36}$
         50000                        $>1.46$                            $0.299\pm0.008$                          $>318$
  ONe    30000                        $1.28\pm0.015$                     $0.416\pm0.011$                          $303_{-38}^{+40}$
         50000                        $1.38\pm0.020$                     $0.293\pm0.008$                          $192_{-54}^{+110}$

  : Mass and age estimations for  using different core compositions and temperatures. The differences in radius estimates are caused by the different hydrogen content for different core models (see Sec.\[section:mass\]).[]{data-label="table:re_mass"}

  $V$/mag   mass/${\mbox{\,$\rm M_{\sun}$}}$   $t_{\rm cooling}$/Myr
  --------- ---------------------------------- -----------------------
  14.11     $0.84\pm0.05$                      $279_{-39}^{+68}$
  13.90     $0.76\pm0.05$                      $223_{-30}^{+36}$

  : Mass and age estimations for  using different $V$ magnitudes in the literature and an average effective temperature of 16000 K.[]{data-label="table:lb_mass"}

${}^1$ using the visual magnitude from @Barstowetal95; ${}^2$ using the visual magnitude from @Kawkaetal07

The projected distance of 210 AU between the two white dwarfs and their small relative proper motion suggest that they are companions and therefore share a common origin. The ages of both objects should therefore be equal or comparable within the error bars; this condition must be fulfilled for the correct evolutionary schemes of both white dwarfs.
The case of  is straightforward because its evolutionary history is not complicated by either a strong magnetic field or an extreme mass. Therefore, the simple single-star evolution of  places constraints on the total age of . As mentioned above, previous analyses suggested a younger age for  than , and for this reason the system was assumed to have an “age dilemma” [@Ferrarioetal97]. Therefore, an alternative scenario was proposed in which  is the result of the merging of two white dwarfs that have lower-mass progenitors.

Single-star origin of {#sec:single}
----------------------

With our new results, we undertook a more precise investigation. We first considered the single-star scenario for . To determine the total age of  and  from the zero-age main-sequence (ZAMS) to their current stage, we used the latest semi-empirical initial-to-final-mass relations (IFMR) [@Casewelletal09; @Salarisetal09] to estimate their initial masses. By considering a diverse range of theoretical schemes to calculate the IFMR (metallicity, overshoot parameter, etc.), we deduced the progenitor mass of  to be in the range $4.0-4.5$. For the extremely high (final) mass of , the corresponding IFMR is quite uncertain. Theoretically, it was shown that 9-10  mass stars would evolve into massive oxygen-neon (ONe) white dwarfs because of the off-centred carbon ignition in the partially degenerate conditions of their cores [@Ritossaetal96; @Garcia-Berroetal97]. With these constraints in mind, we consider more carefully a possible range of initial masses between 8 and 10 solar masses. The total age (time on the main-sequence plus the white dwarf cooling time) of  depends strongly on its initial mass. For the $0.84$ mass , initial masses in the range $4.0-4.5$  yield main-sequence lifetimes of 170-130Myr (the progenitor ages were calculated using the evolutionary tracks from @Bertellietal09 for solar metallicity). This means that the total evolutionary age of  is in the range $410-450$Myr.
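The total-age bookkeeping here is simple addition of the main-sequence lifetime and the white-dwarf cooling age; a minimal sketch with the numbers quoted above (progenitor lifetimes from the Bertelli tracks, cooling age from Table\[table:lb\_mass\]):

```python
# Main-sequence lifetimes (Myr) for the assumed progenitor masses (Msun)
t_ms = {4.0: 170.0, 4.5: 130.0}
t_cool = 279.0  # cooling age of the 0.84 Msun white dwarf, Myr

totals = sorted(t_ms[m] + t_cool for m in t_ms)
print(totals)  # → [409.0, 449.0], i.e. the ~410-450 Myr range quoted
```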
The pre-white-dwarf lifetime is extremely short, 40-30 Myr, for progenitor masses of $8$ and $10\,$, respectively. For an effective temperature of $30\,000\,$K and our resulting mass of $1.28-1.32$ for   [which would be the progeny of an 8 star, see @Casewelletal09; @Salarisetal09], we derive total ages in the range $320-340\,$Myr. If we instead assume an effective temperature of $50\,000\,$K for  and a CO core, we end up with a total lifetime of $\approx350$ Myr; for the ONe core case, the corresponding value would be $\approx220$Myr. We reiterate that our estimate of the errors is rather large in the ONe case at $50\,000\,$K (see Table\[table:re\_mass\]) because of the uncertainties in the extrapolation. Hence, omitting the case with an ONe core at $50\,000\,$K, we can say that the total age of  is in the range $320-350$Myr. There are additional theoretical uncertainties in the IFMR due to magnetism and rapid rotation that should be important for an extreme case such as . The effect of both of these factors on the IFMR has been the subject of some discussion. @Dominguezetal96 argued that rapid rotation has a positive effect on the core growth, such that a rapidly rotating star of mass 6.5  may produce a white dwarf of mass 1.1-1.4. Observational evidence of this was found by @Catalanetal08.  is the fastest rotating isolated white dwarf, and this rotation may be a relic of a rapidly rotating progenitor. The assumption of a 6.5 mass star as the progenitor does not considerably relieve the “age dilemma” since the progenitor age in this case is $\sim70$Myr, which does not differ much from the 40-30Myr estimated for 8-10 mass stars. @Catalanetal08 also argued that MWDs are relatively more massive than expected on the basis of their inferred progenitors via the IFMR of non-magnetic white dwarfs.
However, @WickramasingheFerrario05 and @FerrarioWickramasinghe05 both concluded, on the basis of their population synthesis studies, that this effect is of only minor importance. Since the effect of rotation and magnetism on the evolutionary age is unclear or rather small, we did not consider them in our age estimations. Based on these considerations, we conclude that the total age of  is $410-450$Myr, at least $\sim100\,$Myr older than the respective value for  ($320-350$Myr). This discrepancy implies that the single-star evolution scenario might not be applicable to . However, the mass estimates leading to the cooling ages determined above neglected the influence of magnetism. The magnetic nature of  is likely to affect the determination of its mass because of the mass-radius relation. @OstrikerHartwick68 discussed the effect of magnetism and rapid rotation on white dwarfs. Both magnetism and rotation act against gravity, causing an extended radius; hence, white dwarfs with strong internal magnetization have larger radii for a given mass. To calculate the cooling tracks from synthetic colours and magnitudes of white dwarfs, mass-radius determinations are used implicitly. Hence our estimates of the masses and ages are impaired by the lack of mass-radius relations taking into account the effect of the magnetic field. For a white dwarf with 1.05, the radius is increased by a factor ${\rm e}^{\frac{3}{3-n}\delta}={\rm e}^{3.5\delta}$, where $\delta$ is the ratio of the magnetic energy to the gravitational energy of the star, and $n$ is the polytropic index [@ShapiroTeukolsky83]. In the case of , an internal magnetization of $\langle B\rangle=10^{12}-10^{13}$G seems plausible; this would imply that $\delta \approx 0.1$ and therefore an increase in the radius by $\sim40\%$. Since  has an even higher mass, $n$ is in this case close to 3, and thereby the increase in radius for a given mass is even higher.
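The quoted radius inflation can be evaluated directly. A minimal sketch of the ${\rm e}^{3\delta/(3-n)}$ scaling above, with the polytropic index fixed so that $3/(3-n)=3.5$ and $\delta=0.1$ as assumed in the text:

```python
import math

def radius_inflation(delta, n):
    # Radius increase factor e^{3*delta/(3-n)} for a magnetized polytrope,
    # where delta is the ratio of magnetic to gravitational energy.
    return math.exp(3.0 * delta / (3.0 - n))

n = 3.0 - 3.0 / 3.5       # index implied by the e^{3.5 delta} scaling quoted
factor = radius_inflation(0.1, n)
print(round(factor, 2))   # ≈ 1.42, i.e. the ~40% increase quoted above
```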
For an effective temperature of 30000K, our measured radius is $0.410\times 10^{-2}\,{\mbox{\,$\rm R_{\sun}$}}$, whereas for 50000K it is $0.295\times 10^{-2}\,{\mbox{\,$\rm R_{\sun}$}}$. When we correct for the influence of the magnetic field on the radius, we end up with a higher mass than determined in Sect.\[section:mass\]. If  were of higher mass, the cooling time would increase, so that the age dilemma would no longer exist under the assumption of single-star evolution. As an initial consideration, cooling ages of $\sim400\,$Myr, which would diminish the age inconsistency, are possible for  if it has a mass of 1.32 rather than 1.28 (ONe case; 0.04 discrepancy), or 1.38 rather than 1.32 (CO case; 0.06 discrepancy), for an effective temperature of $30\,000$K. The corrected radius of $R_0=0.32\times 10^{-2}{\mbox{\,$\rm R_{\sun}$}}$ implies a mass of 1.38 from the mass-radius relationship. This value implies that the corrections are plausibly high enough to account for the missing evolutionary age as discussed above. If we consider ${\hbox{$T_{\rm eff}$}}=50\,000$K for , the mass estimates based purely on the total evolutionary age of the system would imply values well above the Chandrasekhar limit. Although it is known that strong internal magnetic field strengths also modify the Chandrasekhar limit [@OstrikerHartwick68], it is still difficult to quantitatively assess the masses and their effect on cooling ages in this regime.

Binary origin of {#section:binary}
-----------------

The merger scenario for ultramassive white dwarfs was initially proposed by @Bergeronetal91 for GD50; @Marshetal97 proposed that this scenario could explain the properties of the hot and massive white dwarf population. For , it was similarly proposed to explain both the high angular momentum and high mass of this star [@Ferrarioetal97]. @Vennesetal03 also suggested that the scenario could produce a strong and non-dipolar magnetic field.
They argued qualitatively that the high angular momentum is a result of the total orbital momentum of a coalescing binary and that the strong non-dipolar magnetic field can be generated by dynamo processes due to the differential rotation caused in turn by the merging. The type of binary evolution that can lead to a double-degenerate system has been investigated in detail, since it represents a channel for producing SN Ia explosions [@Webbink84; @IbenTutukov84]. In this scenario, a binary system consisting of two intermediate-mass stars (5-9) goes through one or two phases of a common envelope (CE) and evolves to a double white dwarf system. If the final double-degenerate system has an orbital period in the range between 10s and 10h, it will lose angular momentum through gravitational radiation and merge in less than a Hubble time. The merging process leads to a massive central product with a surrounding Keplerian disk. Depending on the total mass of the system, the temperature in the envelope, and the accretion onto the merger product, the system can evolve either to a SN Ia or by an accretion-induced collapse (AIC) to a neutron star. When the total mass of the system is insufficient to reach the density and temperature needed to burn carbon under degenerate conditions, the system will end up as an ultra-massive white dwarf. To test whether this scenario is indeed applicable to the case of , we have to trace back to the point in the stellar evolution where the merging could have happened, using the cooling age of  and subtracting it from the total evolutionary age of . Using this progenitor age estimate and the theoretical constraints from the theory of binary star evolution, we can estimate the masses of the possible merging counterparts. We begin by estimating the mass of the (secondary) binary component that needs longer to become a white dwarf.
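The gravitational-radiation merging times invoked above can be sketched with the standard Peters (1964) formula for a circular orbit; this is an illustrative estimate under assumed masses and periods, not a calculation taken from the text:

```python
import math

G, C, M_SUN, YR = 6.674e-11, 2.998e8, 1.989e30, 3.156e7  # SI units

def merger_time_yr(m1_msun, m2_msun, period_s):
    # Peters (1964): t = (5/256) c^5 a^4 / (G^3 m1 m2 (m1+m2)) for a
    # circular binary; the semi-major axis a follows from Kepler's law.
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    mt = m1 + m2
    a = (G * mt * period_s ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    return (5.0 / 256.0) * C ** 5 * a ** 4 / (G ** 3 * m1 * m2 * mt) / YR

# Two 0.7 Msun white dwarfs: a 1 h orbit merges in a few tens of Myr,
# and since t scales as P^{8/3}, a 10 h orbit takes ~500x longer --
# of order a Hubble time, consistent with the 10 s - 10 h window above.
print(f"{merger_time_yr(0.7, 0.7, 3600.0):.1e} yr")
```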
To obtain a lower limit to its mass, we assume the longest time from the main sequence to the merging process, considering the mass-transfer episodes predicted by the binary scenario. After both white dwarfs are formed, the time needed for the binary to merge due to gravitational radiation depends strongly on the orbital parameters and mass of the double-degenerate system. Depending on the properties of the system, coalescence can be as fast as 0.1 Myr or as slow as 200 Myr [@IbenTutukov84]. To obtain a lower limit to the total evolutionary time for the system, we neglect the time needed for the double-degenerate system to coalesce. @IbenTutukov85 discussed the evolution of 3 to 12 stars that experience two phases of mass transfer. The phase of mass transfer can take as long as or even longer than the time the star spends on the main sequence. For a 5  star, the main-sequence phase lasts $\sim$90Myr [@Bertellietal09], while in the binary-evolution scenario it takes 140Myr from the main sequence until the formation of the white dwarf. This means that 230Myr are needed for a 5 star to evolve into a white dwarf, rather than the 100 Myr that we assumed for single-star evolution. The possible cooling ages considered for  (280Myr) and  (280 - 320 Myr, when we assume an effective temperature of about 30000K) imply that the maximum time available for binary evolution is at most the main-sequence age of , which is 130-170Myr (for 4.0-4.5). The upper limit of 170Myr is comparably short relative to the 230Myr of binary evolution time. This provides a lower mass limit for the system. The resulting mass of a white dwarf that is the product of a 5 star in this binary evolution scheme is 0.752 [@IbenTutukov85], which is lighter than inferred from the IFMRs determined for single-star evolution. Since the pre-white-dwarf evolution is too long for an initial 5 star, we need a more massive progenitor and hence should end up with a secondary white dwarf more massive than 0.752.
For the primary star, we assume that it has only a slightly higher mass than the secondary to deduce a lower limit to the total coalescing mass. However, this assumption leads to serious inconsistencies, because the total mass of the two components would exceed $1.5\,{\mbox{\,$\rm M_{\sun}$}}$, well above the masses estimated for . This lower limit is also robust when we consider mass loss. Firstly, smoothed particle hydrodynamic (SPH) simulations show that only a very small mass loss is expected during merging [$\sim10^{-3}\,{\mbox{\,$\rm M_{\sun}$}}$, see e.g. @Loren-Aguilaretal09], and secondly, we expect almost all of the Keplerian disk to be accreted onto the merger product. Wind mass-loss from the Keplerian disk is assumed to be lower than 10% of the accretion rate [@MochkovitchLivio90]; this means that at least 90% of the disk is expected to be accreted. @Loren-Aguilaretal09 also estimate $0.1-0.3\,{\mbox{\,$\rm M_{\sun}$}}\ $ for the disk masses, which would imply a total mass loss of $\leq 0.01-0.03\,$. We note that infrared studies have detected possible disks surrounding massive white dwarfs [@Hansenetal06]. These included , for which no convincing evidence of a disk was found in the *Spitzer* IRAC bands [see also @Farihietal08]. If  were the product of a merger of two white dwarfs, all of the matter from the Keplerian disk should have been accreted. In this scenario, total mass limits well above the estimated   mass cannot be avoided. This estimation eliminates the possibility of a binary origin for  with a current effective temperature as low as 30000K. However, if the total mass of the binary system does not exceed the estimated value for , the time needed for the accretion of all the material from the disk is much longer than the evolutionary timescale. The accretion rate is expected to be $\leq 10^{-12}$ /yr for flows with laminar viscosity [@Loren-Aguilaretal09].
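The accretion timescale implied by this rate is simple division of the disk mass by the accretion rate; a sketch with the quoted numbers:

```python
M_DOT = 1e-12  # assumed laminar-viscosity accretion rate, Msun/yr

for m_disk in (0.1, 0.3):          # disk masses from the SPH estimates, Msun
    t_myr = m_disk / M_DOT / 1e6   # accretion time, converted from yr to Myr
    print(f"{m_disk} Msun disk: {t_myr:.0e} Myr")
# → 1e+05 and 3e+05 Myr, far longer than the ~100 Myr evolutionary timescale
```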
For disk material of $0.1-0.3\,{\mbox{\,$\rm M_{\sun}$}}$, the complete accretion time of $1-3 \times 10^{5}\,$Myr is three orders of magnitude longer than the evolutionary timescale. If the binary scenario were correct, the Keplerian disk should have been observed, unless the accretion rate of the disk was much higher than theoretically predicted. Only accretion rates higher than $10^{-10}\,$/yr would lead to a total disappearance of the disk. When two equal-mass white dwarfs merge, the symmetry of the process leads to a rotating ellipsoid composed of CO around the white dwarf rather than a Keplerian disk. If  were still in the process of accretion, we would have observed CO in the spectra, but this is not the case. On the other hand, if all the material of the surrounding ellipsoid had already been accreted (mass loss can be neglected as discussed above), the mass of  would be higher than observed (above the Chandrasekhar limit). We also considered the possible effect on the cooling ages of additional heating of the white dwarf core due to the merging process. Recent SPH simulations indicate the possibility of heating to $\sim10^9\,$K in the core [@YoonLanger05; @Loren-Aguilaretal09]. However, because of the $T^{-5/2}$ dependence of the cooling age according to the elementary theory [@Mestel65], the effect of this extra heating on the cooling ages is expected to be small ($\sim2\,$Myr) and can be neglected. When we consider the ONe core case for an effective temperature of $50\,000\,$K, leading to an average cooling age of $\sim190\,$Myr, we end up with an upper limit to the evolution time of the secondary of 220-260Myr; the 40Myr spread in evolutionary time is only due to the uncertainty in the  progenitor mass (between 4.0 and 4.5). Our estimated upper limit to the total age is comparable to the evolutionary timescale of a 5 star in a binary system as considered above.
However, the cooling time estimate for  in this case is considerably uncertain (see Table\[table:re\_mass\]) due to the extrapolation. Within these large error margins, we would in principle be able to obtain a sub-Chandrasekhar mass for the merger product, but this process is very unlikely when we consider the time needed for the white dwarfs to merge [$10-100\,$Myr, @IbenTutukov84]. Nevertheless, the possibility of a binary origin for an ONe core  at ${\hbox{$T_{\rm eff}$}}=50\,000\,$K cannot be entirely excluded. We note that the effect of magnetic field strength on the structure of the white dwarf, considered in Sect.\[sec:single\], is also important to binary evolution. Including this effect leads to an inference of slower cooling for , as in the single-star scenario. This would yield shorter progenitor timescales for a constant evolutionary time, leading to the lower limits on the total mass of the coalescing white dwarfs becoming even more massive. This again diminishes the probability of binary evolution for ${\hbox{$T_{\rm eff}$}}=30\,000\,$K. However, for ${\hbox{$T_{\rm eff}$}}=50\,000\,$K the uncertainties still permit the possibility of merging. Furthermore, the effect of magnetism on the stellar structure ensures that this scenario remains favourable due to the higher Chandrasekhar mass limit [@OstrikerHartwick68; @ShapiroTeukolsky83]. ![The mass of  versus logarithmic age in years for an ONe core white dwarf. The different curves correspond to the effective temperatures 30000-50000K. Since we cannot strictly estimate the extrapolation error, we visually added some uncertainty to the extrapolated values, which was subsequently used to estimate the errors in Table\[table:re\_mass\]. []{data-label="fig:age_extrapolation"}](ONe_massvsage01.png "fig:"){width="50.00000%"}\

Discussion and conclusions
==========================

 belongs to the very rare population of ultra-massive white dwarfs with masses exceeding 1.1.
The competing theoretical explanations of the origin of these white dwarfs are single-star evolution versus the merging of two degenerate stars. Without considering mass loss during stellar evolution, an upper limit of 1.1 for the final white dwarf mass would exist because of the ignition of carbon in the core of the progenitor star. However, taking into account the effect of mass loss, high-mass ONe-core white dwarfs can be produced [see @Weidemann00 for a review]. Furthermore, it was proposed that even 9 to 10 mass stars evolve into ONe core white dwarfs of mass 1.26 and 1.15, respectively, because of the off-centred carbon ignition in the partially degenerate conditions of their cores [@Ritossaetal96; @Garcia-Berroetal97]. In the light of our current results, we have undertaken a more precise investigation of the possible evolutionary scenarios for . We have shown that the cooling ages are almost the same for the two components. The detailed analysis depends very much on a precise determination of the effective temperature; for ${\hbox{$T_{\rm eff}$}}=30\,000\,$K, we can use the calculations by @Wood95 and @BenvenutoAlthaus99 and conclude that, within the limits of the uncertainties,  is at least as old as . For a consistent interpretation of the system, we also have to take into account the time scales of the pre-white-dwarf evolution. The more massive progenitor of  should evolve more rapidly than the progenitor of . Taking this into account, the total age difference between  and  amounts to $\sim100\,$Myr if single-star evolution is considered. On the other hand, the alternative binary merger scenario proposed by @Ferrarioetal97 and @Vennesetal03 as a solution to this age dilemma has severe drawbacks.
When the evolutionary timescales are considered, the progenitor age of  at ${\hbox{$T_{\rm eff}$}}=30\,000\,$K yields lower limits on the mass of the merger product that are considerably higher than its estimated mass in all cases. For , we have large uncertainties in the cooling age estimate only for an effective temperature of $50\,000\,$K, so that we cannot fully exclude the binary scenario. We have also considered the effects of magnetic fields on both scenarios. Magnetic fields cause an increase in radius and hence an underestimate of the mass, which would imply longer cooling ages than estimated. For the case of ${\hbox{$T_{\rm eff}$}}=30\,000\,$K, the effect of magnetism makes the single-star scenario possible while further eliminating the binary merger origin; for the high ${\hbox{$T_{\rm eff}$}}$ of $50\,000\,$K, even the inclusion of magnetic effects ensures that the single-star scenario is possible; the binary scenario remains possible within our large uncertainties. With our measurement of the parallaxes and relative proper motion of  and  with ’s FGS, we have established that the wide binary system of these two stars is indeed a bound system. We have estimated the masses and ages of   and  based on the current white dwarf cooling tracks for different core compositions and hydrogen layer masses. Owing to the magnetic nature of this object, the temperature determination of  is difficult and should be repeated in the future, taking into account all available observations and including a more detailed determination of the magnetic field geometry. For the mass and radius determination, we have considered the highest and lowest possible temperatures, and with these estimates we have discussed the evolutionary history and the possible origin of . Our results show that for a cooler, less massive  the binary scenario can be excluded within our uncertainties.
We also proposed that the “age dilemma” might be solved when the effects of magnetism on the structure of the white dwarf are considered. If  were hotter and more massive, then a binary origin scenario would be more plausible. This work was supported by the Deutsches Zentrum für Luft- und Raumfahrt (DLR) under grant 50 OR 0201. B. Külebi is a student of the International Max Planck School of Astronomy (IMPRS) and a part of the Heidelberg Graduate School of Fundamental Physics (HGSFP). We would like to thank Enrique Garc[í]{}a-Berro for providing us with the data of their SPH simulations and the anonymous referee for his valuable suggestions. [^1]: Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The Guide Star Catalogue-II is a joint project of the Space Telescope Science Institute and the Osservatorio Astronomico di Torino. [^2]: `http://www.astro.umontreal.ca/~bergeron/CoolingModels` [^3]: `http://www.fcaglp.unlp.edu.ar/evolgroup/tracks.html`
---
abstract: 'Let $Z:\zu\ra\R$ be a continuous function. We show that if $Z$ is “homogeneously multifractal” (in a sense we precisely define), then $Z$ is the composition of a monofractal function $g$ with a time subordinator $f$ (i.e. $f$ is the integral of a positive Borel measure supported by $\zu$). When the initial function $Z$ is given, the monofractality exponent of the associated function $g$ is uniquely determined. We study in detail a classical example of multifractal functions $Z$, for which we exhibit the associated functions $g$ and $f$. This provides new insights into the understanding of multifractal behaviors of functions.'
address: 'Laboratoire d’Analyse et de Mathématiques Appliquées - Université Paris-Est - UFR Sciences et Technologie - 61, avenue du Général de Gaulle, 94010 Créteil Cedex, France'
author:
- Stéphane Seuret
title: |
  On multifractality and time subordination\
  for continuous functions
---

Introduction and motivations {#intro}
============================

Local regularity and multifractal analysis have become unavoidable issues in recent years. Indeed, physical phenomena exhibiting wild local regularity properties have been discovered in many contexts (turbulent flows, intensity of seismic waves, traffic analysis, ...). From a mathematical viewpoint, the multifractal approach is also a fruitful source of interesting problems. Consequently, there is a strong need for a better theoretical understanding of the so-called multifractal behaviors. In this article, we investigate the relations between multifractal properties and time subordination for continuous functions. The most common functions or processes used to model irregular phenomena are monofractal, in the sense that they exhibit the same local regularity at each point. Let us recall how the local regularity of a function is measured. Let $Z \in L^\infty_{loc}(\zu)$.
For $\alpha\geq 0$ and $t_0\in \zu$, $Z$ is said to belong to $C^\alpha_{t_0}$ if there are a polynomial $P$ of degree less than $[\alpha]$ and a constant $C$ such that, locally around $t_0$, $$\label{defpoint} |Z(t) - P(t-t_0)| \leq C |t -t_0|^\alpha.$$ The pointwise [Hölder ]{}exponent of $Z$ at $t_0$ is $h_Z(t_0) = \sup\{\alpha\geq 0: \ Z\in C^\alpha_{t_0} \}.$ The singularity spectrum of $Z$ is then defined by $d_Z(h)=\dim \{t: h_Z(t)=h\}$ ($\dim$ stands for the Hausdorff dimension, and $\dim \emptyset = -\infty$ by convention). Hence, a function $Z:\zu\ra\R$ is said to be monofractal with exponent $H>0$ when $h_Z(t)=H$ for every $t\in \zu$. For monofractal functions $Z$, $d_Z(H) =1$, while $d_Z(h)=-\infty$ for $h\neq H$. Sample paths of Brownian motions or fractional Brownian motions are known to be almost surely monofractal with exponents less than 1. For reasons that appear below, [**we focus on monofractal functions associated with an exponent $H \in (0,1]$.**]{} More complex models had to be used and/or developed, for at least three reasons: the occurrence of intermittence phenomena (mainly in fluid mechanics), the presence of oscillating patterns (for instance in image processing), or the presence of discontinuities (in finance or telecommunications). Such models may have multifractal properties, in the sense that the support of their singularity spectrum is not reduced to a single point. Among these processes, whose local regularity varies wildly from one point to another, let us mention Mandelbrot multiplicative cascades and their extensions [@BMP; @MANDEL2; @kahane; @BM1], (generalized) multifractional Brownian motions [@JLV; @BJR] and Lévy processes [@BERTOIN; @JAFFLEVY] (for discontinuous phenomena). Starting from a monofractal process as above in dimension 1, a simple and efficient way to get a more elaborate process is to compose it with a time subordinator, i.e. an increasing function or process.
Mandelbrot, for instance, showed the relevance of time subordination in the study of financial data [@MANDEL]. From a theoretical viewpoint, it is also challenging to understand how the multifractal properties of a function are modified after a time change [@RMP; @MBLEVY]. A natural question is to understand the differences between the multifractal processes above and compositions of monofractal functions with multifractal subordinators. A function $Z:\zu\ra\R$ is said to be the composition of a monofractal function with a time subordinator (CMT) when $Z$ can be written as $$\label{decomp} Z=g\circ f,$$ where $g:\zu\ra\R$ is monofractal with exponent $0<H<1$ and $f:\zu\ra\zu$ is an increasing homeomorphism of $\zu$. In this article, we prove that if a continuous function $Z:\zu\ra\R$ has a “homogeneous multifractal” behavior (in a sense we define just below), then $Z$ is CMT. Hence, $Z$ is the composition of a monofractal function with a time subordinator, and shall simply be viewed as a complication of a monofractal model. This yields a deeper insight into the understanding of multifractal behaviors of continuous functions, and gives a more important role to the multifractal analysis of positive Borel measures (which are derivatives of time subordinators). We explain in Sections \[self\] and \[multi\] how this decomposition can be used to compute the singularity spectrum of the function $Z$. Let us begin with two cases where a function $Z$ is obviously CMT: 1\. If $Z$ is the integral of any positive Borel measure $\mu$, then $Z = Id_{\zu}\circ Z$, where the identity $Id_{\zu}$ is monofractal and $Z$ is increasing. Remark that in this case, $Z$ may even have exponents greater than 1. 2\. Any monofractal function $Z_H$ can be written $Z_H=Z_H \circ Id_{\zu}$, where $Z_H$ is monofractal and $Id_{\zu}$ is undoubtedly a homeomorphism of $\zu$. These two simple cases will be met again below.
To bring general answers to our problem, and thus to exhibit another class of CMT functions, we develop an approach based on the oscillations of a function $Z:\zu \ra\R$. For every subinterval $I\subset \zu$, consider the oscillation of order 1 of $Z$ on $I$ defined by $$\omega_I(Z) = \sup_{t,t'\in I } |Z(t)-Z(t')|= \sup_{t\in I } Z(t) -\inf_{t\in I } Z(t).$$ [**In the sequel, we assume that $Z$ is continuous and that for every non-trivial subinterval $I$ of $\zu$, $\omega_I(Z) >0$.**]{} This entails that $Z$ is nowhere locally constant, which is a natural assumption for the results we are looking for. It is classical that the oscillations of order 1 characterize precisely the pointwise Hölder exponents strictly less than 1 (see Section \[prel\]). Let us introduce the quantity that will be the basis of our construction. For every $j\geq 1$, $k\in\{0,..., 2^j-1\}$, we consider the dyadic intervals $I_{j,k} = [k2^{-j}, (k+1) 2^{-j})$, so that $\bigcup_{k=0,..., 2^j-1} I_{j,k} = [0,1)$, the union being disjoint. For every $j\geq 1$ and $k\in\{0,..., 2^j-1\}$, for simplicity we set $\omega_{j,k}(Z) =\omega_{I_{j,k}}(Z)$ ($=\omega_{\overline{I_{j,k}}}(Z)$ since $Z$ is $C^0$). For every $j\geq 1$, let $H_j(Z)$ be the unique real number such that $$\label{defhj} \sum_{k=0}^{2^j-1}( \omega_{j,k}(Z) ) ^{1/H_j(Z)} =1.$$ We then define the intrinsic monofractal exponent $H(Z)$ of $Z$ as $$\label{defh} H(Z)= \liminf_{j \ra +\infty} H_j(Z).$$ This quantity $H(Z)$ characterizes the asymptotic maximal values of the oscillations of $Z$ on the whole interval $\zu$. This exponent is the core of our theorem, because it gives an upper limit to the maximal time distortions we are allowed to apply. It is satisfactory that $H(Z)$ has a functional interpretation. Indeed, if $Z$ can be decomposed as (\[decomp\]), then the exponent of the monofractal function $g$ should depend neither on the oscillation approach nor on the dyadic basis.
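Equation (\[defhj\]) defines $H_j(Z)$ only implicitly, but it is easy to solve numerically: after normalizing the global oscillation of $Z$ to 1 (as in the proof of Lemma \[lem0\]), every $\omega_{j,k}(Z)\leq 1$ and $H\mapsto \sum_k \omega_{j,k}(Z)^{1/H}$ is non-decreasing, so a plain bisection applies. A minimal Python sketch, where the sampling grids and tolerances are our own choices:

```python
import numpy as np

def dyadic_oscillations(Z, j):
    # omega_{j,k}(Z) for k = 0, ..., 2^j - 1, computed from samples of Z on a
    # uniform grid with 2^J + 1 points, J >= j (closed intervals, Z continuous).
    step = (len(Z) - 1) // 2 ** j
    return np.array([Z[k * step:(k + 1) * step + 1].max()
                     - Z[k * step:(k + 1) * step + 1].min()
                     for k in range(2 ** j)])

def H_j(Z, j, tol=1e-12):
    # Unique solution H of sum_k omega_{j,k}(Z)^(1/H) = 1, i.e. eq. (defhj).
    om = dyadic_oscillations(Z, j) / (Z.max() - Z.min())  # global oscillation 1
    # Bracket is valid: the sum vanishes as H -> 0+ (each om < 1) and is
    # >= sum_k om_k >= 1 for large H (Lemma lem0), so the root is inside.
    lo, hi = 1e-6, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sum(om ** (1.0 / mid)) < 1.0:
            lo = mid      # sum too small: H_j(Z) is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# sanity check: on the identity function, H_j = 1 for every j, matching the
# example of the integral of a probability measure discussed in the text
Z_id = np.linspace(0.0, 1.0, 2 ** 14 + 1)
assert abs(H_j(Z_id, 6) - 1.0) < 1e-6
```

For the tent map $Z(t)=|2t-1|$ the same computation gives $H_j(Z)=(j-1)/j$, illustrating that $H_j(Z)$ only stabilizes in the limit (\[defh\]).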
In Section \[secgen\] we explain that $$\label{form} H(Z)^{-1}= \inf\left\{p>0: Z\in B^{1/p,\infty}_{p,{loc}}((0,1))\right\}= \inf\left\{p>0: Z \in \mathcal{O}_{p}^{1/p}((0,1))\right\} ,$$ where $ {B}^{1/p,\infty}_{p,loc}((0,1))$ and $\mathcal{O}_{p}^{1/p}((0,1))$ are respectively the Besov space and [*oscillation space*]{} on the open interval $(0,1)$ (see Jaffard in [@JAFFBEY] for instance). For multifractal functions $Z$ satisfying some multifractal formalism, the exponent $H(Z)$ can also be read on the singularity spectrum of $Z$. Indeed (see Section \[secgen\]), $H(Z)$ corresponds to the inverse of the largest possible slope of a straight line going through 0 and tangent to the singularity spectrum $d_Z$ of $Z$. These remarks are important to get an [*a priori*]{} idea of the monofractal exponent of $g$ in the decomposition $Z=g\circ f$. They also give an intrinsic formula for $H(Z)$. Let us come back to the two simple examples above: 1\. For the integral $Z$ of any positive measure $\mu$ with total mass 1, $\sum_{k=0}^{2^j-1} \omega_{j,k}(Z) = \sum_{k=0}^{2^j-1} \mu(I_{j,k} ) =1$, hence $H_j(Z)=H(Z)=1$, which corresponds to the monofractal exponent of the identity $Id_{\zu}$ from the oscillations viewpoint. 2\. The first difficulties arise for the monofractal functions $Z_H$. When $Z_H$ is monofractal of exponent $H$, we do not necessarily have $H(Z_H)=H$. We always have $H(Z_H)\leq 1$ (see Lemma \[lem0\] in Section \[prel\]), but it is always possible to construct wild counter-examples. Nevertheless, we treat in detail the examples of the Weierstrass functions and the sample paths of (fractional) Brownian motions in Section \[mono\], for which the exponent $H(Z_H)$ meets our requirements. Unfortunately, the knowledge of $H(Z)$ is not sufficient to get relevant results. For instance, consider a function $Z$ that has two different monofractal behaviors on $[0,1/2)$ and $[1/2,1]$.
Such a $Z$ can be obtained as the continuous juxtaposition of two Weierstrass functions with distinct exponents $H_1<H_2$: We have $H(Z)=H_1$, and $Z$ cannot be written as the composition of a monofractal function with a time subordinator. This is a consequence of Lemma \[lemmonof\], which asserts that two monofractal functions $g_1$ and $g_2$ of distinct exponents $H_1$ and $H_2$ never satisfy $g_1 = g_2 \circ f$ for any continuous increasing function $f:\zu\ra\zu$ (indeed, such an $f$ would “dilate” time everywhere, which is impossible). We need to introduce a homogeneity condition [**C1**]{} to get rid of these annoying and artificial cases. This condition heuristically imposes that the oscillations of any restriction of $Z$ to a subinterval of $\zu$ have the same asymptotic properties as the oscillations of $Z$ on $\zu$. [**Condition [**C1**]{}:**]{}\ Let $J\geq 0$, and $K\in\{0,..., 2^J-1\}$. Let $Z_{J,K}$ be the function $$\begin{aligned} \label{eq00}Z_{J,K} : \nonumber t \in\zu\longmapsto \frac{Z \circ \varphi_{J,K}(t)}{\omega_{J,K}(Z)} \in \R,\end{aligned}$$ where $\varphi_{J,K}$ is the canonical affine contraction which maps $\zu$ to $I_{J,K}$. Condition [**C1**]{} is satisfied for $Z$ when there is a real number $H>0$ such that for every $J\geq 0$ and $K\in\{0,..., 2^J-1\}$, $H(Z_{J,K}) =H (= H(Z))$. The function $Z_{J,K}$ is thus a renormalized version of the restriction of $Z$ to the interval $I_{J,K}$. Remark that $H(Z_{J,K})$ does not depend on the normalization factor $ {1}/{\omega_{J,K}(Z)}$. Although self-similar functions are good candidates to satisfy [**C1**]{}, a function $Z$ fulfilling this condition need not possess such a property. In order to guarantee that $Z$ is CMT, we impose a rate on the convergence toward $H(Z_{J,K})$. [**Condition [**C2**]{}:**]{}\ Assume that Condition [**C1**]{} is fulfilled.
There are two positive sequences $(\ep_J)_{J\geq 0}$ and $(\eta_J)_{J\geq 0}$ and two real numbers $0<\alpha<\beta$ with the following property: 1. $(\ep_J)_{J\geq 0}$ and $(\eta_J)_{J\geq 0}$ are positive non-increasing sequences that converge to zero, and $ \ep_J =o\left( \frac{1}{ (\log J)^{2+\kappa}}\right)$ for some $\kappa>0$. 2. For every $J\geq 0$ and $K\in\{0,..., 2^J-1\}$, the sequence $(H_j({Z}_{J,K}))_{j\geq 1}$ converges to $H=H({Z }_{J,K})$ (it is not only a liminf, it is a limit) with the following convergence rate: For every $j\geq [J\eta_J]$, $$\begin{aligned} \label{eq1} &&|H - H_j({Z }_{J,K}) | \leq \ep_J,\\ \label{eq1'} \mbox{and } \mbox{for every $k\in\{0,...,2^j-1\}$,}&& 2^ {-j\beta}\leq \omega_{j,k}(Z_{J,K}) \leq 2^{-j \alpha} .\end{aligned}$$ Assuming that $H({Z}_{J,K})$ is a limit is of course a constraint, but not limiting in practice, since this condition holds for most of the interesting functions or (almost surely) for most of the sample paths of processes. Similarly, the decreasing behavior (\[eq1’\]) is not very restrictive: such a behavior is somehow expected for a $C^\gamma$ function. The convergence speed (\[eq1\]) is a more important constraint, but the convergence rate we impose on $(\ep_J)_{J\geq 0}$ toward 0 is extremely slow, and is realized in the most common cases, as shown below. \[maintheo\] Let $Z:\zu \rightarrow \R$ be a continuous function. Assume that $Z$ satisfies [**C1**]{} and [**C2**]{}. Then $Z$ is CMT and the function $g$ in (\[decomp\]) is monofractal of exponent $H(Z)$. Such a decomposition is of course not unique: If $Z$ is CMT and $w:\zu\ra\zu$ is $C^\infty$ and strictly increasing, then $Z= (g\circ w) \circ (w^{-1}\circ f)$, where $g\circ w$ is still a monofractal function of exponent $H(Z)<1$ and $w^{-1}\circ f$ is an increasing function. 
Nevertheless, if two decompositions (\[decomp\]) exist respectively with functions $g_1$, $g_2$, $f_1$ and $f_2$, then $g_1$ and $g_2$ are necessarily monofractal with the same exponent $H(Z)$. This is again a consequence of Lemma \[lemmonof\]. An important consequence of Theorem \[maintheo\] is that the (possibly) multifractal behavior of $Z$ is contained in the multifractal behavior of $f$. More precisely, since $f$ is an increasing continuous function from $\zu$ to $\zu$, $f$ is the integral of a positive measure, say $\mu$, on $\zu$. The local regularity of $\mu$ is classically quantified through a local dimension exponent defined for every $t\in\zu$ by $$\alpha_\mu(t) = \liminf_{r\ra 0^+} \frac{|\log \mu(B(t,r))|}{|\log r|} =\liminf_{j\ra+\infty} \frac{|\log_2 \mu(B(t, 2^{-j}))|}{j},$$ where $B(t,r)$ stands for the ball (here an interval) with center $t$ and radius $r$, and $|A|$ is the diameter of the set $A$ ($|B(t,r)|=2r$). The singularity spectrum of $\mu$ is then $$\label{defdmu} \tilde d_\mu(\alpha) = \dim \{t : \alpha_\mu(t)=\alpha\}.$$ It is very easy to see that if $\alpha_\mu(t_0)=\alpha$, then $h_Z(t_0) = \alpha H$. Hence for every $h\geq 0$, $d_Z(h) = \tilde d_\mu(h/H)$, i.e. there is a direct relationship between the singularity spectrum of $Z$ and the one of $\mu$. As a conclusion, Theorem \[maintheo\] increases the role of the multifractal analysis of measures, since for the functions satisfying [**C1**]{} and [**C2**]{}, their multifractal behavior is ruled exclusively by the behavior of $\mu$. As an application of Theorem \[maintheo\], we will prove the following Theorem \[thself\], which relates the so-called self-similar functions $Z$ introduced in [@JAFFFORM] to the self-similar measures naturally associated with the similitudes defining $Z$. Let us recall the definition of self-similar functions.
Let $\phi$ be a Lipschitz function on $[0,1]$ (we suppose that the Lipschitz constant $C_\phi$ equals 1, without loss of generality), and let $S_0, S_1, ...., S_{d-1}$ be $d$ contractive similitudes satisfying: 1. for every $i\neq j$, $S_i((0,1))\cap S_j((0,1)) =\emptyset $ (open set condition), 2. $\displaystyle \bigcup _{i=0}^{d-1} S_i(\zu) = \zu$ (the intervals $S_i(\zu)$ form a covering of $\zu$). We denote by $0<r_0,r_1,...,r_{d-1}<1$ the ratios of the non-trivial similitudes $S_0,...,S_{d-1}$. By construction $\displaystyle \sum_{k=0 }^{d-1} r_k =1 $. Let ${\lambda}_0, {\lambda}_1,...,{\lambda}_{d-1}$ be $d$ non-zero real numbers, which satisfy $$\label{cond1} 0<\chi_{\min} = \min_{k=0,...,d-1} \left|\frac{r_k}{{\lambda}_k}\right| \leq \chi_{\max} = \max_{k=0,...,d-1} \left|\frac{r_k}{{\lambda}_k}\right| <1.$$ \[defiself\] A function $Z: \zu\to\R$ is called self-similar when $Z$ satisfies the following functional equation $$\label{defself} \forall \, t\in \zu, \ \ Z(t) = \sum_{k=0}^{d-1} {\lambda}_k \cdot (Z\circ (S_k)^{-1}) (t) + \phi(t).$$ Relation (\[cond1\]) ensures that $Z$ exists and is unique [@JAFFFORM]. Let us consider the unique exponent $\beta>1$ such that $$\label{defbeta} \sum_{k=0} ^{d-1} |{\lambda}_k|^\beta =1.$$ This $\beta$ is indeed greater than 1, since $\sum_{k=0} ^{d-1} r_k =1$ and $|{\lambda}_k|>r_k$ for all $k$ by (\[cond1\]). With the probability vector $(p_0,p_1,..., p_{d-1})=(|{\lambda}_0|^\beta , |{\lambda}_1|^\beta ,..., |{\lambda}_{d-1}|^\beta )$ and the similitudes $(S_k)_{k=0,...,d-1}$, one can associate the unique self-similar probability measure $\mu$ satisfying $$\label{defmu} \mu = \sum_{k=0} ^{d-1} p_k \cdot (\mu\circ S_k ^{-1}).$$ \[thself\] Let $Z$ be defined by (\[defself\]).
Then, either $Z$ is a $\kappa$-Lipschitz function for some constant $\kappa>0$ (explicitly given in Section \[self\]), or $Z$ is CMT and there is a monofractal function $g$ of exponent $1/\beta$ such that $$\label{resthself} \mbox{for every $t\in\zu$, } Z(t) =g (\mu[0,t]),$$ where $\mu$ is the self-similar measure (\[defmu\]) naturally associated with the parameters used to define $Z$. The multifractal analysis of $Z$ follows from the multifractal analysis of $\mu$, which is a very classical problem (see [@BMP]). The paper is organized as follows. In Section \[proof\], Theorem \[maintheo\] is proved, by explicitly constructing the monofractal function $g$ and the time subordinator $f$. Section \[secgen\] contains the possible extensions of Theorem \[maintheo\], the explanation of the heuristics (\[form\]), and the discussion for exponents greater than 1. In Sections \[mono\], \[self\] and \[multi\], we detail several classes of examples to which Theorem \[maintheo\] applies. First we prove that the usual monofractal functions $Z$ with exponents $H$ satisfy [**C1**]{} and [**C2**]{}. We prove Theorem \[thself\] in Section \[self\]. Finally we explicitly compute and plot the time subordinator and the monofractal function for a classical family of multifractal functions $(Z_a)_{a\in \zu}$ which include Bourbaki’s and Perkin’s functions. Let us finish with the direct by-products and the possible extensions of this work: The reader can check that the proof below can be adapted to more general contexts: - the dyadic basis can be replaced by any $b$-adic basis. - if $(\ep_J)$ converges to zero (without any given convergence rate), then (under slight modifications of $(\eta_J)$) the same result holds true. We focused on a simpler case, but in practice, a convergence rate $ \ep_J =o\left( \frac{1}{ (\log J)^{2+\kappa}}\right)$ can always be obtained. - The fact that the quantities $H({Z }_{J,K})$ are limits is only used at the beginning of the proof.
In fact, only the existence of the scale $[J\eta_J]$ such that (\[eq1\]) and (\[eq1’\]) hold true at scale $[J\eta_J]$ is decisive. In particular, the conditions may be relaxed: We could treat the case where the $H({Z }_{J,K})$ are only liminfs (and not limits). Again, in practice they are often limits, which is why we adopted this viewpoint. Preliminary results {#prel} =================== Oscillations and pointwise regularity ------------------------------------- For every $t\in \zu$, let $I_j(t) $ be the unique dyadic interval of generation $j$ that contains $t$, and $I_j^+(t) = I_j(t)+2^{-j}$, $I_j^-(t) = I_j(t)-2^{-j}$. Let us recall the characterization of the pointwise Hölder exponents smaller than 1 in terms of oscillations of order 1 (see for instance Jaffard in [@JAFFBEY]). \[lem1\] Let $Z:\zu\ra \R$ be a $C^\gamma$ function, for some $\gamma>0$. Assume that $h_Z(t)<1$. Then $$h_Z(t) = \liminf_{r\ra 0^+} \frac{|\log \om_{B(t,r)} (Z)|}{ |\log r|} = \liminf_{j\ra +\infty} \frac{|\log_2 \om_{B(t,2^{-j})}(Z)|}{j}.$$ In Lemma \[lem2\], we impose some uniform behavior of the oscillations of $Z$ on a nested sequence of coverings of $\zu$. This is used later to prove the monofractality of the function $g$ in the decomposition $Z=g\circ f$ (Section \[proof\]), and also to decompose self-similar functions (in Section \[self\]). \[lem2\] Let $Z:\zu \ra \R$ be a continuous function, $t\in (0,1)$ and $H\in (0,1)$. Suppose that there exists an infinite sequence $(T_n)$ of coverings of $\zu$ such that - each $T_n$ is a finite collection of disjoint non-trivial intervals of $\zu$, such that $\bigcup_{T\in T_n} T =\zu$, - $ \lim _{n\ra +\infty} \max_{T\in T_n} |T| =0$, - each interval $T$ in $T_n$ is contained in a unique interval $T'$ of $T_{n-1}$, - for every $T\in T_n$ and $T\subset T' \in T_{n-1}$, we have $ |T'|^{1+ Z_n } \leq { |T |} \leq |T'| $, for some positive sequence $(Z_n)$ that converges to zero when $n\ra +\infty$. Then: 1.
If there exists a positive sequence $(\kappa_n)_{n\geq 1}$ such that for every $T \in T_n$, $\omega_{T}(Z) \leq |T| ^{H-\kappa_n}$, then for every $t\in \zu$, $h_Z(t) \geq H$. 2. If there exists a positive sequence $(\kappa_n)_{n\geq 1}$ such that for every $T \in T_n$, $\omega_{T}(Z) \geq |T| ^{H+\kappa_n}$, then for every $t\in \zu$, $h_Z(t) \leq H$. Remark that in part (2) of this Lemma, the property needs to be satisfied only for a subsequence $({n_k})_{k\geq 1}$ of integers. Let $t\in (0,1)$, and $r>0$ small enough. For every $n\geq 1$, $t$ belongs to one interval $T\in T_n$, which we denote by $T_n(t)$. Denote by $n_r$ the smallest integer $n$ such that $T_n(t) \subset B(t,r)$. By construction, $t\in T_{n_r-1}(t)$ and $|T_{n_r-1}(t)| \geq r$ (since $ T_{n_r-1}(t)\not\subset B(t,r)$). By the fourth property of the sequence $(T_n)$, we have $ 2r \geq | T_{n_r}(t)| \geq |T_{n_r-1}(t)| ^{1+Z_{n_r}} \geq r^{1+Z_{n_r}}$. Let us start with part (2), which is straightforward. We have $\om_{B(t,r)}(Z) \geq \om_{T_{n_r}(t)}(Z) \geq |T_{n_r}(t)|^{H+\kappa_{n_r}} \geq r^{(1+Z_{n_r})(H+\kappa_{n_r})} $. Applying Lemma \[lem1\], and using that $Z_{n_r}$ and $\kappa_{n_r}$ go to zero when $r$ goes to zero, we obtain $h_Z(t) \leq H$. We now focus on part (1), which is slightly more delicate. If $B(t,r) \subset T_{n_r-1}(t)$, then we have $\om_{B(t,r)} (Z)\leq \om _{T_{n_r-1}(t)}(Z) \leq |T_{n_r-1}(t)|^{ H-\kappa_{n_r-1}} \leq (2r)^{(H-\kappa_{n_r-1})/(1+Z_{n_r-1})}$. If $B(t,r) \not\subset T_{n_r-1}(t)$, then there is an integer $p$ (which depends on $r$) such that $ B(t,r)\bs T_{n_r-1}(t) $ is covered by one interval $T\in T_{p}$ and not covered by any interval of $T_{p+1}$. Using the same arguments as above, we get $|T| \leq | B(t,r)\bs T_{n_r-1}(t) | ^{1/(1+Z_{p+1})} \leq r^{1/(1+Z_{p+1})}$ (remark that $|B(t,r)\bs T_{n_r-1}(t)|\leq r$).
Now we have $$\begin{aligned} \om_{B(t,r)} (Z)& \leq &\om _{T_{n_r-1}(t)}(Z)+\om _{B(t,r)\bs T_{n_r-1}(t)}(Z) \\ & \leq &(2r)^{(H-\kappa_{n_r-1})/(1+Z_{n_r-1})} + |T|^{H-\kappa_{p}}\\ & \leq &(2r)^{(H-\kappa_{n_r-1})/(1+Z_{n_r-1})} + r^{(H-\kappa_p)/(1+Z_{p+1})} \end{aligned}$$ Since $\kappa_{n_r}$, $Z_{n_r}$, $\kappa_p$ and $Z_{p}$ converge to 0 as $r \ra 0$, Lemma \[lem1\] yields $h_Z(t) \geq H$. Two easy properties for the study of $H(Z)$ ------------------------------------------- Let us begin with an easy upper bound for $H(Z)$. \[lem0\] Let $Z:\zu \ra \R$ be a non-constant continuous function. Then $H(Z)\leq 1$. We can assume without loss of generality that $\om_{\zu}(Z) =1$. Let $j\geq 1$. By construction, $\sum_{k=0}^{2^j-1} \om_{j,k}(Z) \geq 1$. In order to have (\[defhj\]), we necessarily have $H_j(Z)\leq 1$. Hence the result. \[lemmonof\] Let $g_1$ and $g_2$ be two real monofractal functions on $\zu$ of distinct exponents $0<H_1 < H_2<1$. There is no continuous strictly increasing function $f:\zu\ra\zu$ such that $g_1 = g_2 \circ f$. Suppose that such a function $f$ exists. Let $\ep>0$. Being increasing, $f$ is Lebesgue-almost everywhere differentiable. There is a set $E$ of positive Lebesgue measure such that for every $t\in E$, $f'(t) >0$. Around such a $t$, we have $f(t+h) - f(t) = f'(t)h +o(h)$. Consequently, since $h_{g_2}(f(t)) = H_2$, for every $|h|$ small enough we have $$|(g_2\circ f)(t+h) - (g_2\circ f) (t)| \leq |f(t+h) - f(t) | ^{H_2 -\ep} \leq C |h|^{H_2 -\ep} .$$ This shows that $h_{g_2\circ f} (t)\geq H_2$. Using again that $h_{g_2}(f(t)) = H_2$, there is a sequence $(h'_n)_{n\geq 1}$ converging to zero such that for every $n\geq 1$, $|g_2(f(t)+ h'_n) - g_2(f(t))| \geq |h'_n|^{H_2+\ep}$. Choosing $h_n$ such that $f(t+h_n) = f(t)+h'_n$, we see that $$| (g_2\circ f)(t+h_n) - (g_2\circ f) (t)| \geq |f(t+h_n) - f(t) | ^{H_2 +\ep} \geq C |h_n|^{H_2 +\ep} .$$ This holds for an infinite number of real numbers $(h_n)$ converging to zero.
Hence $h_{g_2\circ f} (t)= H_2$, which contradicts $h_{g_1}(t) = H_1$. A functional interpretation of $H(Z)$ ------------------------------------- Note first that the previous results hold in the case where a $b$-adic basis, $b\geq 2$, is used instead of the dyadic basis. In fact, there is a functional interpretation of the exponent $H(Z)$, independent of any basis, provided by the [*Oscillation spaces*]{} of Jaffard [@JAFFBEY] and the Besov spaces. Let us recall their definition, which we adapt to our context of nowhere differentiable functions. Let $Z$ be a $C^\gamma$ function on $(0,1)$, where $C^\gamma$ is the global homogeneous Hölder space and $\gamma>0$. Since [@JAFFFORM], where the theoretical foundations of the multifractal analysis of functions were laid, a quantity classically considered when performing the multifractal analysis of $Z$ is the scaling function $\eta_Z(p)= \sup\left \{s>0: Z\in B^{s/p,\infty}_{p,{loc}}((0,1))\right\}$. Later, in [@JAFFBEY], Jaffard also proved the pertinence in multifractal analysis of his oscillation spaces $\mathcal{O}^{s/p}_{p}((0,1))$, whose definitions are based on wavelet leaders (we do not need more details here). He also considered the associated scaling function $\zeta_Z(p) = \sup\left\{s>0: Z \in \mathcal{O}^{s/p}_{p}((0,1))\right\}$.
Finally, still in [@JAFFBEY], Jaffard studied the spaces $\mathcal{V}^{s/p}_p((0,1))$, which are closely related to our exponent $H(Z)$, defined as follows: Denote, for $j\geq 1$ and $k\in \{0,...,2^j-1\}$, $\Omega_{j,k} (Z)= \omega_{[ k2^{-j} - 3 2^{-j}, k2^{-j} + 3 2^{-j} ]}(Z)$, and consider the associated scaling function (we assume hereafter that $Z$ is nowhere differentiable, as in Theorem \[maintheo\]) $$\nu _Z (p) = 1+ \liminf_{j\ra +\infty} \frac{\log_2 \sum_{k=0}^{2^j-1} (\Omega_{j,k} (Z))^p } { -j}.$$ For $p>0$ fixed, it is obvious that there is a constant $C_p>1$ such that $$\frac{1}{C_p}\sum_{k=0}^{2^j-1} (\omega_{j,k} (Z))^p \leq \sum_{k=0}^{2^j-1} (\Omega_{j,k} (Z))^p \leq C_p \sum_{k=0}^{2^j-1} (\omega_{j,k} (Z))^p,$$ since $ \omega_{j,k}(Z) \leq \Omega_{j,k}(Z) \leq \sum_{l\in\{-3,-2,...,2,3\}} \omega_{j,k+l}(Z) $. As a consequence, $\nu _Z(p) = 1+ \liminf_{j\ra +\infty} \frac{\log_2 \sum_{k=0}^{2^j-1} (\omega_{j,k} (Z))^p } { -j}.$ Comparing the definition of $H(Z)$ with this formula, we easily see that $H(Z)$ is the unique positive real number such that $\nu_Z(1/H(Z)) =1$. The main point is that the three scaling functions $\eta_Z$, $\zeta_Z$ and $\nu _Z$ coincide as soon as $p\geq 1$ [@JAFFBEY], and $\eta_Z (1/H(Z)) = \zeta_Z (1/H(Z))=1$. Using the properties of the Besov domains, we have $$H(Z) ^{-1} = \inf\left\{p>0: Z \in B^{1/p,\infty}_{p,\mbox{loc}} ((0,1))\right\}= \inf\left\{p>0: Z \in \mathcal{O}^{1/p}_{p} ((0,1))\right\} .$$ Precisions for functions satisfying a multifractal formalism ------------------------------------------------------------ Consider the scaling function $\zeta_Z(p)$ above. For any function $Z$ having some global Hölder regularity [@JAFFBEY], $Z$ is said to obey the multifractal formalism for functions if its singularity spectrum is obtained as the Legendre transform of its scaling function, i.e.
$$\mbox{for every $h\geq 0$, } \ \ d_Z(h) = \inf _{p \in \R} (ph -\zeta_Z(p)+1) \ \ (\in \R^+ \cup\{-\infty\}) .$$ In particular, since $\zeta_Z(1/H(Z))=1$, we always have $d_Z(h) \leq h/ H(Z)$ (by using $p=1/H(Z)$ in the inequality above). Moreover, assume that $h_c=\zeta'_Z(1/H(Z))$ exists and that $Z$ satisfies the multifractal formalism associated with $\zeta_Z$ at the exponent $h_c$. This means that the inequality above becomes an equality at $h=h_c$, i.e. $d_Z(h_c) = h_c/H(Z)$. From the last two properties we get that $1/H(Z)$ is the slope of the tangent to the (concave hull of the) singularity spectrum of $Z$, as claimed in the introduction. Proof of the decomposition of Theorem \[maintheo\] {#proof} ================================================== The functions $g$ and $f$ are constructed iteratively. First remark that since $(\eta_j)$ converges to zero, one can also assume, by first replacing $\eta_j$ by $\max(\eta_j, 1/\log j)$ and then by imposing that $(\eta_j)$ is non-increasing, that the sequence $(\eta_j)$ satisfies: - for every $j\geq 1$, $\eta_j \geq 1/\log j$, - $(j\eta_j)$ is now a non-decreasing sequence and $j\eta_j \ra +\infty$ when $j\ra +\infty$, - $(\eta_j)$ still satisfies (\[eq1\]) and (\[eq1’\]). Assume that conditions [**C1**]{} and [**C2**]{} are fulfilled.
First step of the construction of $g$ and $f$ --------------------------------------------- The exponent $H(Z_{0,0})=H(Z)=H$ is the limit of the sequence $H_j(Z)$, so there exists a generation $J_0\geq 1$ such that for every $j\geq J_0$, $ |H - H_{ j}(Z) | \leq \ep_{0}.$ We set $H_0=H_{J_0}(Z)$, and by construction we have $\sum_{k=0}^{2^{J_0}-1}( \omega_{J_0,k}(Z) ) ^{1/H_0} =1.$ We then define the first step of the construction of the function $f$: we set $$\begin{aligned} f_0(t) = \sum_{ k'=0}^{k-1} ( \omega_{J_0,k'}(Z) ) ^{1/H_0} + ( \omega_{J_0,k }(Z) ) ^{1/H_0} (2^{J_0} t-k) \ \mbox{ if } t\in I_{J_0,k}.\end{aligned}$$ This function $f_0$ is strictly increasing, continuous and affine on each dyadic interval. Moreover, $f_0(\zu) = \zu$. Let us denote by $U_{J_0,k}$ the image of the interval $I_{J_0,k}$ by $f_0$, for every $k\in\{0,...,2^{J_0}-1\}$. The set of intervals $\{U_{J_0,k}: k \in\{0,...,2^{J_0}-1\}\}$ clearly forms a partition of $[0,1)$. Remark that $$\label{eq3} \forall k\in\{0,...,2^{J_0}-1\}, \ \ |f_0 (I_{J_0,k} ) | = |U_{J_0,k}| = ( \omega_{J_0,k }(Z) ) ^{1/H_0} .$$ The first step of the construction of $g$ is then naturally achieved as follows: we set $$\begin{aligned} g_0(y) & = & Z((f_0)^{-1}(y)) \ \mbox {for $y\in \zu$},\\ \mbox {or equivalently } \ g_0(f_0(t)) & = & Z(t) \ \mbox {for $t\in \zu$}.\end{aligned}$$ This function $g_0$ maps any interval $U_{J_0,k}$ to the interval $Z(I_{J_0,k})$, and thus satisfies: $$\omega _{U_{J_0,k} }({g_0}) = \omega_{J_0,k } (Z)= |U_{J_0,k}| ^{H_0}.$$ As a last remark, there are two real numbers $0<\alpha'<\beta'$ such that for every $k$, $2^{-J_0\beta'/H_0} \leq |U_{J_0,k}| \leq 2^{-J_0\alpha'/H_0}$.
Without loss of generality, we can assume that $\alpha' =\alpha$ and $\beta'=\beta$ ($\alpha$ and $\beta$ appear in condition [**C2**]{}) by changing $\alpha$ into $\min(\alpha,\alpha')$ and $\beta$ into $\max(\beta,\beta')$, so that $$\label{eq22} \mbox{for every $k$, } \ 2^{-J_0\beta/H_0} \leq |U_{J_0,k}| \leq 2^{-J_0\alpha/H_0}.$$ First iteration to get the second step of the construction of $g$ and $f$ ------------------------------------------------------------------------- We perform the second step of the construction. Let us focus on one interval $I_{J_0,K}$, on which we refine the behavior of $f_0$. By condition [**C2**]{} and especially (\[eq1\]), we have $$\label{eq5} \sum_{k'=0,...,2^{ [J_0\eta_{J_0}]}-1} (\omega_{ [J_0\eta_{J_0}],k'} (Z_{J_0,K}))^{1/H_1} =1,$$ where $H_1 = H_{[J_0\eta_{J_0}]}(Z_{J_0,K})$ satisfies $\ |H - H_1| \leq \ep_{J_0}$. Let $J_1= J_0+ [J_0\eta_{J_0}]$, so that $J_1-J_0= [J_0\eta_{J_0}]$. Remark that, by (\[eq1’\]), we have for every $k'\in\{0,...,2^{ [J_0\eta_{J_0}]}-1\}$ $$\label{eq20} 2^{- [J_0\eta_{J_0}] \beta} \leq | \omega_{ [J_0\eta_{J_0}],k'} (Z_{J_0,K}) | \leq 2^{- [J_0\eta_{J_0}] \alpha}.$$ Now, remembering the definition of $Z_{J_0,K}$, we obtain that for every $k'\in\{ 0,...,2^{J_1-J_0}-1\}$, $$\label{eq21} \om_{ J_1-J_0,k'} (Z_{J_0,K}) = \frac{ \om_{J_1,K 2^{J_1-J_0}+k'}(Z)}{\om_{J_0,K}(Z)}.$$ Consequently, (\[eq5\]) is equivalent to $$\sum_{k=0,...,2^{J_1}-1: I_{J_1,k}\subset I_{J_0,K}} (\omega_{ J_1,k} (Z))^{1/H_1} =(\omega_{ J_0,K} (Z))^{1/H_1} ,$$ and thus $$\sum_{k=0,...,2^{J_1}-1: I_{J_1,k}\subset I_{J_0,K}} (\omega_{ J_1,k} (Z))^{1/H_1} (\omega_{ J_0,K} (Z))^{1/H_0-1/H_1} =(\omega_{ J_0,K} (Z))^{1/H_0} .$$ We now define the function $f_1$ as a refinement of $f_0$ on the dyadic interval $I_{J_0,K}$.
We set for every $ k\in\{K2^{J_1-J_0},..., (K+1)2^{J_1-J_0}-1\}$ and for $t\in I_{J_1,k}$ $$\begin{aligned} f_1(t) & = & f_0(K2^{-J_0}) \\ &+&\sum_{ k'=K2^{J_1-J_0}}^{k-1} ( \omega_{J_1,k'}(Z) ) ^{1/H_1} (\omega_{ J_0,K} (Z))^{1/H_0-1/H_1} \\ &+& ( \omega_{J_1,k}(Z) ) ^{1/H_1}(\omega_{ J_0,K} (Z))^{1/H_0-1/H_1} (2^{J_1} t-k) .\end{aligned}$$ This can be achieved simultaneously on every dyadic interval $I_{J_0,K}$, $K\in\{0,...,2^{J_0}-1\}$, by using the same generation $J_1$ for the subdivision (indeed, condition [**C2**]{} ensures that the convergence rate of $H_j(Z_{J_0,k})$ does not depend on $k$). The obtained function is again an increasing continuous function, affine on every dyadic interval of generation $J_1$. Let us denote by $U_{J_1,k}$ the image of the interval $I_{J_1,k}$ by $f_1$, for every $k\in\{0,...,2^{J_1}-1\}$. The set of intervals $\{U_{J_1,k}: k \in\{0,...,2^{J_1}-1\}\}$ again forms a partition of $[0,1)$. We get $$\label{eq4} \forall k\in\{0,...,2^{J_1}-1\}, \ \ |U_{J_1,k}| = ( \omega_{J_1,k }(Z) ) ^{1/H_1}(\omega_{ J_0,K} (Z))^{1/H_0-1/H_1} ,$$ where $K$ is such that $I_{J_1,k}\subset I_{J_0,K}$. But the main point is that we did not change the size of the oscillations of $f_0$ on the dyadic intervals of generation $J_0$, i.e. $f_1(I_{J_0,K}) = f_0(I_{J_0,K}) $. The second step of the construction of $g$ is realized by refining the behavior of $g_0$: Set $$\begin{aligned} g_1(y) & = & Z((f_1)^{-1}(y)) \ \mbox {for $y\in \zu$}.\end{aligned}$$ This function $g_1$ maps any interval $U_{J_1,k}$ to the interval $Z(I_{J_1,k})$, and thus satisfies: $$\omega _{U_{J_1,k} }({g_1}) = \omega_{J_1,k } (Z) \mbox { with } |U_{J_1,k}| = ( \omega_{J_1,k }(Z) ) ^{1/H_1}(\omega_{ J_0,K} (Z))^{1/H_0-1/H_1} .$$ Finally, we want to compare the size of the interval $U_{J_1,k}$ with the size of its father interval (in the preceding generation) $U_{J_0,K}$.
For this, let us choose $k\in\{ 0,...,2^{J_1 }-1\}$ and $K\in\{ 0,...,2^{ J_0}-1\}$ such that $I_{J_1,k}\subset I_{J_0,K}$ (hence $k$ can be written $k=K2^{J_1-J_0} +k'$ with $k'\in\{0,...,2^{J_1-J_0}-1\}$). Then, by (\[eq21\]), $$\begin{aligned} |U_{J_1,k}| & = & ( \omega_{J_0,K }(Z))^{1/H_0} (\omega_{J_1-J_0,k' }(Z_{J_0,K}) ) ^{1/H_1}\\ & = & |U_{J_0,K}|(\omega_{J_1-J_0,k' }(Z_{J_0,K}) ) ^{1/H_1}. \end{aligned}$$ Using (\[eq20\]) we get $$\begin{aligned} |U_{J_1,k}| \geq |U_{J_0,K}| 2^{-(J_1-J_0)\beta/H_1}= |U_{J_0,K}| 2^{-[J_0\eta_{J_0}]\beta/H_1}. \end{aligned}$$ On the other hand, we know by (\[eq22\]) that $ |U_{J_0,K}| \leq 2^{-J_0 \alpha/H_0}$, hence $$\begin{aligned} |U_{J_0,K}| \geq |U_{J_1,k}| \geq |U_{J_0,K}|^{1+ \eta_{J_0}\frac{\beta H_0}{\alpha H_1}} , \end{aligned}$$ where the left inequality simply comes from the fact that $I_{J_1,k}\subset I_{J_0,K}$. General iterative construction of $g$ and $f$ --------------------------------------------- This procedure can be iterated. Assume that the sequences $(J_p)_{p\geq 1}$, $(f_p)_{p\geq 1}$ and $(g_p)_{p\geq 1}$ are constructed for every $p\leq n$, and that they satisfy: 1. for every $1\leq p\leq n$, $J_p=J_{p-1} +[J_{p-1}\eta_{J_{p-1}}]$ and $|H-H_p| \leq \ep_{J_{p-1}}$, 2. for every $1\leq p\leq n$, $f_p$ is a continuous strictly increasing function, affine on each dyadic interval $I_{J_p,k}$ and if we set $f_p(I_{J_p,k}) = U_{J_p,k}$, then $$\label{eq7} |f_p(I_{J_p,k})| = |U_{J_p,k}| = ( \omega_{J_p,k }(Z) ) ^{1/H_p} \prod_{m=0}^{p-1}(\omega_{ J_m,K_m(k)} (Z))^{1/H_m-1/H_{m+1}},$$ where $K_m(k)$ is the unique integer such that $I_{J_p,k } \subset I_{ J_m,K_m(k)}$, for $m<p$, 3. For every $1 \leq p\leq n$, the set of intervals $\{U_{J_p,k}: k \in\{0,...,2^{J_p}-1\}\}$ forms a partition of $[0,1)$. 4.
For every $1\leq p\leq n$, if $U_{J_p,k} \subset U_{J_{p-1},K_{p-1}(k)}$, then $${|U_{J_{p-1},K_{p-1}(k)}|} ^{1+ \eta_{J_{p-1}}\frac{\beta H_{p-1}}{\alpha H_{p}}} \leq {|U_{J_p,k}|}\leq {|U_{J_{p-1},K_{p-1}(k)}|} ,$$ 5. for every $1\leq p\leq n$, for $y\in \zu$, $g_p(y) = Z((f_p)^{-1}(y))$, 6. for every $1\leq p\leq n$, for every $k\in\{0,...,2^{J_p}-1\}$, we have $f_m(k2^{-J_p}) = f_p(k2^{-J_p}) $ for every $p\leq m\leq n$. The last item ensures that once the value of $ f_p$ at $k2^{-J_p}$ has been chosen, every $f_m$, $m\geq p$, will take the same value at $k2^{-J_p}$. To build $f_{n+1}$ and $g_{n+1}$, the procedure is the same as above. We use $J_{n+1} = J_n + [J_n\eta_{J_n}]$, and we focus on one interval $I_{J_n,K}$. We have by (\[eq1\]) $$\begin{aligned} \sum_{k=0,...,2^{J_{n+1}-J_n}-1 } (\omega_{ J_{n+1}-J_n,k} (Z_{J_n,K}))^{1/H_{n+1} } =1 ,\end{aligned}$$ where $H_{n+1} = H_{[J_n\eta_{J_n}]}(Z_{J_n,K})$ satisfies $\ |H - H_{n+1}| \leq \ep_{J_n}$. We have for every $k'\in\{0,...,2^{ [J_n\eta_{J_n}]}-1\}$ $$\label{eq30} 2^{- [J_n\eta_{J_n}] \beta} \leq | \omega_{ [J_n\eta_{J_n}],k'} (Z_{J_n,K}) | \leq 2^{- [J_n\eta_{J_n}] \alpha}.$$ and $$\label{eq31} \om_{ J_{n+1}-J_n,k'} (Z_{J_n,K}) = \frac{ \om_{J_{n+1},K 2^{J_{n+1}-J_n}+k'}(Z)}{\om_{J_n,K}(Z)}.$$ The same manipulations as above yield $$\begin{aligned} \label{eq8} &&\sum_{k=0,...,2^{J_{n+1}}-1: I_{J_{n+1},k}\subset I_{J_n,K} } (\omega_{ J_{n+1},k} (Z))^{1/H_{n+1} }\prod_{m=0}^{n}(\omega_{ J_m,K_m(k)} (Z))^{1/H_m-1/H_{m+1}} \\ \nonumber&& =( \omega_{J_n,K }(Z) ) ^{1/H_n} \prod_{m=0}^{n-1}(\omega_{ J_m,K_m(k)} (Z))^{1/H_m-1/H_{m+1}} ,\end{aligned}$$ Then $f_{n+1}$ is a refinement of $f_n$: For every $ k\in\{K2^{J_{n+1}-J_n},..., (K+1)2^{J_{n+1}-J_n}-1\}$ and for $t\in I_{J_{n+1},k}$ $$\begin{aligned} f_{n+1}(t) &= & f_n(K2^{-J_n}) \\ & +&\sum_{ k'=K2^{J_{n+1}-J_n}}^{k-1} ( \omega_{J_{n+1},k'}(Z) ) ^{1/H_{n+1}} \prod_{m=0}^{n}(\omega_{ J_m,K_m(k')} (Z))^{1/H_m-1/H_{m+1}} \\ & + &\ (
\omega_{J_{n+1},k}(Z) ) ^{1/H_{n+1}}\prod_{m=0}^{n}(\omega_{ J_m,K_m(k)} (Z))^{1/H_m-1/H_{m+1}} (2^{J_{n+1}} t-k) .\end{aligned}$$ Remark that for every $(k,k') \in\{K2^{J_{n+1}-J_n},..., (K+1)2^{J_{n+1}-J_n}-1\}^2$, for every $m\in \{0,...,n\}$, $K_m(k)=K_m(k')$. This can be achieved simultaneously on every dyadic interval $I_{J_n,K}$, $K\in\{0,...,2^{J_n}-1\}$, by using the same generation $J_{n+1}$ for the subdivision. The obtained function is again an increasing continuous function which is affine on every dyadic interval of generation $J_{n+1}$. We then define $g_{n+1}$ by $ g_{n+1}(y) = Z((f_{n+1})^{-1}(y))$ for $y\in \zu$. Let $U_{J_{n+1},k}$ be the image of the interval $I_{J_{n+1},k}$ by $f_{n+1}$, for every $k\in\{0,...,2^{J_{n+1}}-1\}$. This function $g_{n+1}$ maps any interval $U_{J_{n+1},k}$ to the interval $Z(I_{J_{n+1},k})$, and thus satisfies: $$\omega _{U_{J_{n+1},k} }({g_{n+1}}) = \omega_{J_{n+1},k } (Z)$$ with $$|U _{J_{n+1},k}| =( \omega_{J_{n+1},k}(Z) ) ^{1/H_{n+1}}\prod_{m=0}^{n}(\omega_{ J_m,K_m(k)} (Z))^{1/H_m-1/H_{m+1}} .$$ At this point, all the items of the iteration are ensured, except item (4), which we now prove. As above, let us choose $k\in\{ 0,...,2^{J_ {n+1} }-1\}$ and $K\in\{ 0,...,2^{ J_n}-1\}$ be such that $I_{J_{n+1},k}\subset I_{J_n,K}$, and let $k'\in\{0,...,2^{J_{n+1}-J_n}-1\}$ be such that $k=K\cdot 2^{J_{n+1}-J_n} +k'$. We have by (\[eq31\]) $$\begin{aligned} \nonumber |U_{J_{n+1},k}| & = & ( \omega_{J_n,K }(Z))^{1/H_n} \prod_{m=0}^{n-1}(\omega_{ J_m,K_m(k)} (Z))^{1/H_m-1/H_{m+1}} \, (\omega_{J_{n+1}-J_n,k' }(Z_{J_n,K}) ) ^{1/H_{n+1}}\\ \label{eq40}& = & |U_{J_{n},K}| (\omega_{J_{n+1}-J_n,k' }(Z_{J_n,K}) ) ^{1/H_{n+1}}. \end{aligned}$$ Then, by (\[eq30\]), $$\begin{aligned} |U_{J_{n+1},k}| \geq |U_{J_n,K}| 2^{-(J_{n+1}-J_n)\beta/H_{n+1}}= |U_{J_n,K}| 2^{-[J_n\eta_{J_n}]\beta/H_{n+1}}. 
\end{aligned}$$ As above, since we know by (\[eq22\]) that $ |U_{J_n,K}| \leq 2^{-J_n \alpha/H_n}$, we have $$\begin{aligned} |U_{J_n,K}| \geq |U_{J_{n+1},k}| \geq |U_{J_n,K}|^{1+ \eta_{J_n}\frac{\beta H_n}{\alpha H_{n+1}}}. \end{aligned}$$ Convergence of $(g_n)_{n\geq 0}$ and $(f_n)_{n\geq 0}$ {#secconv} ------------------------------------------------------ The convergence of the sequence $(f_n)$ to a function $f$ is almost immediate. Indeed, each $f_n$ is an increasing function from $\zu$ to $\zu$, and by item (6) of the iteration procedure, for every $j\geq 1$, for every $k\in\{0,...,2^j-1\}$, $f_m(k 2^{-j})$ is constant as soon as $J_m \geq j$. Recall that for every $m$ and $k$, $|f_m(I_{J_m,k})| = |U_{J_m,k}|$. By (\[eq40\]), and using (\[eq20\]), we obtain that $|U_{J_{m+1},k}|\leq |U_{J_m,K_m(k)}| 2^{-(J_{m+1}-J_m)\alpha},$ and iteratively $$\label{tn} \mbox{for every $m\geq 1$, }\ |U_{J_{m},k}|\leq C2^{-J_{m}\alpha},$$ for some constant $C$. Hence the sequence $(|U_{J_{m},k}|)_{m\geq 1}$ converges exponentially fast to zero, with an upper bound independent of $k$. As a consequence, if $m\geq n $, then $$\begin{aligned} \|f_n-f_m\|_\infty & \leq & \max_{k\in\{0,..., 2^{J_n}-1\} } |f_n(I_{{J_n,k}})| \leq \max_{k\in\{0,..., 2^{J_n}-1\} } |U_{J_n,k}|\\ & \leq & C2^{-J_{n }\alpha}.\end{aligned}$$ This Cauchy criterion immediately gives the uniform convergence of the sequence $(f_n)$ to a continuous function $f$, whose value at each dyadic number is known as explained just above. The limit function $f$ is also strictly increasing, since it is strictly increasing on the dyadic numbers. The convergence of the sequence of functions $(g_n)_{n\geq 0}$ is then straightforward. Indeed, each $f_n$ is a homeomorphism of $\zu$, and admits a continuous inverse $f_n^{-1}$. We thus have, for every $n\geq 1$, $g_n = Z \circ f_n^{-1}$. The sequence $(f_n^{-1})$ also converges uniformly on $\zu$. 
Since $Z$ is uniformly continuous on $\zu$, $(g_n)$ converges uniformly to a continuous function $g:\zu\ra\zu$. Remark that $f$ also admits an inverse function $f^{-1}$, and that $g=Z\circ f^{-1}$. Properties of $g$ and $f$ ------------------------- Obviously, $f$ is a strictly increasing function from $\zu$ to $\zu$, which is what we were looking for. It remains to prove the monofractality property of $g$. This will follow from Lemma \[lem2\]. It has been noticed before that if we set, for every $n\geq 0$, $T_n = \{U_{J_n,k}: k\in \{0,...,2^{J_n}-1\} \}$, then every $T_n$ forms a covering of $\zu$ constituted by pairwise distinct intervals. We obviously have: - $\lim_{n\ra +\infty} \max_{T\in T_n} |T| =0$ (using the remarks of Section \[secconv\] above), - $(T_n)$ is a nested sequence of intervals, - by item (4) of the iteration procedure, if $T \in T_n$ and $T'\in T_{n-1}$ satisfy $T\subset T'$, then we have ${|T'|} ^{1+ Z_n } \leq {|T|} \leq {|T'|}$, with $Z_{n} = \eta_{J_{n-1}}\frac{\beta H_{n-1}}{ \alpha H_{n}}$. This sequence $(Z_n)$ converges to zero, since $(\eta_n)$ converges to zero and $(H_n)$ converges to $H$. In order to apply Lemma \[lem2\] and to get the monofractality property of $g$, it is thus enough to prove the last required property, i.e. that there is a positive sequence $(\kappa_n)$ converging to zero such that for every $T\in T_n$, $ |T|^{H+\kappa_n} \leq \om_{T} (g) \leq |T|^{H- \kappa_n}$. For this, let $n\geq 1$ and $T\in T_n$. This interval $T$ can be written $U_{J_n,k}$ for some $k\in \{0,...,2^{J_n}-1\}$. We have $|U_{J_n,k}| = ( \omega_{J_n,k }(Z) ) ^{1/H_n} \prod_{m=0}^{n-1}(\omega_{ J_m,K_m(k)} (Z))^{1/H_m-1/H_{m+1}}$ by construction, and $\om_{U_{J_n,k}}(g)=\om_{U_{J_n,k}}(g_n)=\om_{J_n,k}(Z)$. We just have to verify that $ |U_{J_n,k}|^{H+\kappa_n}\leq \om_{J_n,k}(Z) \leq |U_{J_n,k}|^{H-\kappa_n}$, for some $\kappa_n>0$ independent of $k$. 
We have $$\begin{aligned} \log |U_{J_n,k}| = \frac{1}{H_n} \log \omega_{J_n,k }(Z) + \sum_{m=0}^{n-1}( \frac{1}{H_m} -\frac{1}{H_{m+1}}) \log \omega_{ J_m,K_m(k)} (Z). \end{aligned}$$ Writing that $ \left|\frac{1}{H_m} -\frac{1}{H_{m+1}} \right|\leq \frac{1}{H^2} (\ep_{J_m} +\ep_{J_{m+1}} +o(\ep_{J_m}))\leq \frac{2}{H^2} (\ep_{J_m} +o(\ep_{J_m}))$ and $\frac{1}{H_n} \leq \frac{1}{H}(1 + \frac{\ep_{J_n}}{H}+ o(\ep_{J_n}))$, we obtain $$\begin{aligned} \label{eq9} \ \ \ \ \ \ \ \ \ \ \left|\frac{\log |U_{J_n,k}|} {\log \omega_{J_n,k }(Z)} -\frac{1}{H} \right|\leq \frac{\ep_{J_n}}{H^2} +\frac{2}{H^2} \frac{ \sum_{m=0}^{n-1}( \ep_{J_m}+ o(\ep_{J_m})) \log \omega_{ J_m,K_m(k)} (Z) }{ \log \omega_{J_n,k }(Z) }.\end{aligned}$$ Let us denote $d_{m,k} = - \log \omega_{ J_m,K_m(k)} (Z)$ and $\psi_m=\ep_{J_m}$, for every $m$ and $k$. Comparing the last inequality with the desired result, all we have to show is that $$\label{conv} \frac{ \sum_{m=0}^{n-1} \psi_{m}d_{m,k} }{ d_{n,k}} \ra 0 \ \mbox { when $n \ra +\infty$,}$$ independently of $k$. This is obtained as follows: Start from $J_1$, which we suppose (without loss of generality) to be greater than 100. Recall that, by the remarks made at the beginning of Section \[proof\], we assumed that for every $n\geq 1$, $\eta_{J_{n }} \geq (\log J_{n }) ^{-1}$. Consequently, every term $J_n$ is greater than $l_n$, where $(l_n)$ is the sequence defined recursively by $l_{n+1} = l_n (1+1/\log l_n)$ and $l_1=100$. Let us study the growth rate of such a sequence. It is obvious that $\lim_{n\ra +\infty} l_n =+\infty$. We set $v_{n} = \log l_n$. We have $v_{n+1}=v_n + \log (1+1/v_n) \geq v_n + (1-\ep)/v_n$ for every $n$, where $\ep$ can be taken less than $1/4$ since $v_1$ is large enough. In particular, since $v_1 \geq \sqrt{2}$, $v_{2} \geq \sqrt{2} +(1-\ep)/\sqrt{2} \geq \sqrt{3}$. Recursively, if we assume that $v_n \geq \sqrt{n+1}$, then $v_{n+1} \geq \sqrt{n+1} +(1-\ep)/\sqrt{n+1} \geq \sqrt{n+2}$. 
Hence the sequence $(l_n)$ tends to $+\infty$ faster than $\exp{\sqrt{n}}$, and thus faster than any polynomial $n^\delta$. (We could be more precise, and prove using the same arguments that the growth rate of $(l_n)$ is exactly $\exp{\sqrt{n}}$.) Let us now find an upper bound for $\psi_m$. Using item (1) of condition [**C2**]{}, we have that $ \ep_j =o\left( \frac{1}{ (\log j)^{2+\kappa} }\right)$. Using the lower bound we found for $J_m$ with $\ep$ chosen small enough, we get that $\psi_m =\ep_{J_m}= o\big(m^{-(2+\kappa)/2}\big) = o\big(m^{-1-\kappa/2} \big)$. The crucial point is that $\sum_{m\geq 1} \psi_m <+\infty$. Now, since $(d_{m,k})_{m}$ is a sequence increasing toward $+\infty$, rewrite the left-hand side of (\[conv\]) as $ \sum_{m=0}^{n-1} \psi_{m}\frac{d_{m,k} }{ d_{n,k}}$, where $0\leq \frac{d_{m,k} }{ d_{n,k}}\leq 1$. By a classical Cesàro argument, we get that (\[conv\]) is true, independently of $k$. This directly implies, by (\[eq9\]), that independently of $k$, $\left|\frac{\log |U_{J_n,k}|} {\log \omega_{J_n,k }(Z)} -\frac{1}{H} \right| \leq \kappa_n$, for some sequence $\kappa_n$ that converges to zero. We now apply Lemma \[lem2\], which implies that $g$ is monofractal with exponent $H$. Around Theorem \[maintheo\] {#secgen} =========================== Possible extensions for exponents greater than 1 ------------------------------------------------ Let us finally say a few words about functions having regularity exponents greater than 1. The presence of a polynomial in the definition (\[defpoint\]) of the pointwise [Hölder ]{}exponent is a source of problems when analyzing the local regularity after time subordination. Indeed, suppose that a continuous function $g_1$ behaves like $ |t-t_0|^{\alpha}$ ($0<\alpha<1$) around a point $t_0$, and that another continuous function $g_2$ behaves like $a(t-t_0) + |t-t_0|^{3/2}$ ($a\neq 0$) around $t_0=g_1(t_0)$. 
Then $h_{g_1}(t_0) =\alpha$, $h_{g_2}(t_0)=3/2$, but $h_{g_2\circ g_1} (t_0) = \alpha$, which is different from the expected regularity $3\alpha/2$. Because of such problems, when applying the construction above to get a decomposition of a function $Z$ as $Z=g\circ f$, we did not find any way to guarantee the monofractality of $g$. This is related to the fact that, still for the toy example just above, $\om_{B(t_0,r)}(g_2) \sim 2ar$ when $r$ is small enough, while one would expect $\om_{B(t_0,r)}(g_2) \sim r^{3/2}$. The use of oscillations of order greater than 2 (so that $\om^2_{B(t_0,r)}(g_2) \sim r^{3/2}$) was not sufficient for us to prove Theorem \[maintheo\] for exponents greater than 1. An unsatisfactory result is the following: If $Z$ has all its pointwise [Hölder ]{}exponents less than $M>1$, then $W_{1/2M}\circ Z$ has all its exponents smaller than 1 ($W_{1/2M}$ is the Weierstrass function (\[defwei\]), monofractal with exponent $1/2M$), and one may then try to apply Theorem \[maintheo\] to this function. As a consequence, this problem is still open and of interest. The case of classical monofractal functions and processes {#mono} ========================================================== It is satisfactory to check that classical monofractal functions verify the conditions of Theorem \[maintheo\], and that the exponent $H(Z)$ is actually equal to their monofractal exponent. The proofs below are also representative examples of the method used to get convergence rates for $H_j(Z)$ toward $H(Z)$. Weierstrass-type functions -------------------------- Let $0<\alpha<1$, $\beta> \alpha$ and $b>1$ be three real numbers. Let $w$ be a bounded function that belongs to the global [Hölder ]{}class $C^\beta((0,1))$. Consider the Weierstrass-type function $$\label{defwei} Z(t) = \sum_{k=0}^{\infty} b^{-\alpha k} w(b^k t).$$ By [@HB], either the function $Z$ is $C^\beta$, or it is monofractal with exponent $\alpha$. 
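As a purely illustrative complement (not part of the proof), the oscillations $\om_{j,k}(Z)$ and the implicit exponent $H_j(Z)$, defined by $\sum_{k}(\om_{j,k}(Z))^{1/H_j(Z)}=1$, can be approximated numerically on a truncated series; the choices below ($w=\sin$, $b=2$, $\alpha=1/2$, truncation at 30 terms, a finite sampling grid) are arbitrary.

```python
import numpy as np

# Illustrative sketch only: a truncated series (defwei) with w = sin, b = 2 and
# alpha = 1/2 (arbitrary choices), its dyadic oscillations, and the generation-j
# exponent H_j(Z) defined implicitly by  sum_k (omega_{j,k}(Z))^(1/H_j) = 1.

def Z(t, alpha=0.5, b=2.0, K=30):
    # truncation at K terms: the neglected tail is O(b^(-alpha*K))
    ks = np.arange(K)
    return ((b ** (-alpha * ks))[:, None] * np.sin((b ** ks)[:, None] * t)).sum(axis=0)

def dyadic_oscillations(f, j, m=16):
    # omega_{j,k}: max - min of f over I_{j,k}, approximated on m points per interval
    t = np.linspace(0.0, 1.0, (2 ** j) * m + 1)
    v = f(t)
    return np.array([v[k * m:(k + 1) * m + 1].max() - v[k * m:(k + 1) * m + 1].min()
                     for k in range(2 ** j)])

def H_j(om):
    # h -> sum_k om_k^(1/h) is increasing when 0 < om_k < 1, so bisection applies
    lo, hi = 1e-3, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (om ** (1.0 / mid)).sum() < 1.0:
            lo = mid
        else:
            hi = mid
    return lo

h = H_j(dyadic_oscillations(Z, j=12))
print(h)  # lands near alpha = 1/2 for these parameters
```

For these parameters the computed value lands near $\alpha=1/2$, with a deviation of the order quantified in the discussion that follows.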
For $w(t)=\sin(t)$, we obtain the classical Weierstrass function, which is monofractal with exponent $\alpha$. In fact, it is proved in [@HB] that, if $Z\notin C^\beta$ (which is our assumption from now on), then there is a constant $C>1$ such that $$\label{minor1} C^{-1} 2^{-j\alpha} \leq \om_{{j,k}} (Z) \leq C 2^{-j \alpha}.$$ As a direct consequence, $C^{-1/\alpha} \leq \sum_{k=0}^{2^j-1}(\om_{j,k}(Z)) ^{1/\alpha} \leq C^{1/\alpha}$, and obviously $H(Z) = \alpha$. Let us find the convergence rate of $H_j(Z)$ toward $H(Z)$. We are looking for a value of $\ep>0$ and for a scale $J_0$ for which $ \sum_{k=0}^{2^j-1}(\om_{j,k}(Z)) ^{1/(\alpha+\ep)}>1$, for every $j\geq J_0$. Let $j\geq 1$. We have, by (\[minor1\]), $$\sum_{k=0}^{2^j-1}(\om_{j,k}(Z)) ^{1/(\alpha+\ep)} \geq 2^j C^{-1/(\alpha+\ep)} 2^{-j \alpha/(\alpha+\ep)} .$$ For $\ep$ small, $1/(\alpha+\ep) =1/\alpha- \ep/\alpha^2 +o(\ep)$, and thus our constraint is reached as soon as $1< C^{-1/\alpha+\ep/\alpha^2+o(\ep)} 2 ^{j \ep/\alpha +o(j\ep)}$. This leads to $$j \ep/\alpha +o(j\ep) + (\log_2 C)(-1/\alpha+\ep/\alpha^2+o(\ep)) >0.$$ There is a generation $J_0$ such that the last inequality is realized by $\ep = \frac{2 \log_2 C}{J_0}$, for every $j\geq J_0$. Consequently, one necessarily has $1/H_j(Z) \geq 1/(\alpha +\ep)$ for every $j\geq J_0$, since $$\sum_{k=0}^{2^j-1}(\om_{j,k}(Z)) ^{1/(\alpha+\ep)} > \sum_{k=0}^{2^j-1}(\om_{j,k}(Z)) ^{1/H_j(Z)} =1$$ and the mapping $h \ra \sum_{k=0}^{2^j-1}(\om_{j,k}(Z)) ^{1/h}$ is increasing with $h$. Hence $H_j(Z) \leq \alpha+\ep$. Using the same method, we obtain $H_j(Z) \geq \alpha -2\ep$ for $j\geq J_0$. Finally, we have found $J_0$ large enough so that for every $j\geq J_0$, $|H_j(Z)- \alpha | \leq \ep_0 $, where we have set $\ep_{0}=2\ep= 4 \log_2 C/J_0$. For every $J\geq 1$ and $K\in\{0,...,2^J-1\}$, we easily get the same convergence rates of $H_j(Z_{J,K})$ toward $\alpha$ from the self-affinity property of the Weierstrass functions. More precisely, fix $J$ and $K$, and let $j\geq J+1$. 
Remark that by construction of $Z_{J,K}$, we have $\om_{j-J,k}(Z_{J,K}) = \frac{\om_{j,K2 ^{j-J}+k }(Z)}{\om_{J,K}(Z)}$. We are looking for a value of $\ep$ for which $$\sum_{k=0,...,2^{j}-1: I_{j,k}\subset I_{J,K}} \left(\frac{\om_{j,k}(Z)}{\om_{J,K}(Z)}\right) ^{1/(\alpha+\ep)}>1,$$ for every $j$ large enough. By (\[minor1\]) (used two times), and remarking that there are $2^{j-J}$ dyadic intervals of generation $j$ included in $I_{J,K}$, we get $$\sum_{k=0,...,2^{j}-1: I_{j,k}\subset I_{J,K}}\left(\frac{\om_{j,k}(Z)}{\om_{J,K}(Z)}\right) ^{1/(\alpha+\ep)} \geq 2^{j-J} C^{-2/(\alpha+\ep)} 2^{-(j-J) \alpha/(\alpha+\ep)} .$$ The same computations as above yield that, if we impose $\eta_J= 1/\log_2 J$ and $\ep_J= 4 (\log_2 C )(\log_2 J)/J$, then for every $j\geq J+J\eta_J$, $H_{j-J}(Z_{J,K}) \leq \alpha+\ep_J $. Similarly, we obtain $H_{j-J}(Z_{J,K}) \geq \alpha - 2\ep_J $ for $j\geq J+J\eta_J$. Finally, for every $j\geq J+J\eta_J$, $|H_{j-J}(Z_{J,K})-\alpha | \leq 2\ep_J$, and $\ep_j=o(1/(\log j)^{2+\kappa})$. Consequently, the Weierstrass functions satisfy [**C1**]{} and [**C2**]{} with $H=\alpha$, and they are also monofractal from our viewpoint. Sample paths of Brownian motions and fractional Brownian motions ---------------------------------------------------------------- Classical estimations on the oscillations of sample paths of Brownian motions $(B_t)_{t\geq 0}$ yield ([@JAFFBEY]) $$\begin{aligned} \mathbb{P} \left( \ \om_{{j,k}} (B_t) \leq \frac{1}{j} 2^{-j /2}\ \right) & \leq & \frac{1}{2\pi} \exp {(- j^2\pi^2)}\\ \mathbb{P} \left( \ \om_{{j,k}} (B_t) \geq j 2^{-j/2} \ \right) & \leq & \frac{4j}{2\pi} \exp {(- j^2/8)}\end{aligned}$$ Hence, by a classical Borel-Cantelli argument, with probability one, there is a generation $J_c$ such that for every $j\geq J_c$ and every $k\in\{0,...,2^j-1\}$, we have the bounds $\frac{1}{j} 2^{-j/2} \leq \om_{{j,k}} (B_t) \leq j2^{-j /2}$ for the oscillations. 
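These almost-sure bounds are easy to observe on a simulated path; the sketch below uses a Gaussian random walk with $2^{16}$ steps as an illustrative stand-in for an exact Brownian sample (the seed and all sizes are arbitrary choices).

```python
import numpy as np

# Illustrative sketch only: dyadic oscillations of a discretized Brownian path
# compared with the almost-sure bounds (1/j) 2^{-j/2} and j 2^{-j/2}.
rng = np.random.default_rng(0)
n, j = 2 ** 16, 8
B = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n)) * (1.0 / n) ** 0.5))

m = n // 2 ** j                      # grid points per dyadic interval I_{j,k}
om = np.array([B[k * m:(k + 1) * m + 1].max() - B[k * m:(k + 1) * m + 1].min()
               for k in range(2 ** j)])

# fraction of the 2^j intervals on which the two bounds hold
inside = np.mean((om >= 2.0 ** (-j / 2) / j) & (om <= j * 2.0 ** (-j / 2)))

def H_j(om):
    # solve sum_k om_k^(1/h) = 1 by bisection (the sum is increasing in h)
    lo, hi = 1e-3, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if (om ** (1.0 / mid)).sum() < 1.0 else (lo, mid)
    return lo

print(inside, H_j(om))              # typically 1.0 and a value near 1/2
```

The resulting exponent estimate lands near $1/2$, in line with $H(B_t)=1/2$ below; the discretization of course only mimics the true oscillations up to the grid resolution.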
The same computations as for the Weierstrass functions show that there is a generation $J_0$ such that if $j\geq J_0 \geq J_c$ , then $$\begin{aligned} \sum_{k=0}^{2^j-1} (\om_{j,k}(B_t))^{1/(1/2+\ep_{J_0})} >1 \mbox{ and } \sum_{k=0}^{2^j-1} (\om_{j,k}(B_t))^{1/(1/2-\ep_{J_0})} <1,\end{aligned}$$ where $\ep_{J_0} \geq C \frac{\log J_0}{J_0}$ (for some suitable constant $C$). As a consequence, $|H_j(B_t) -1/2| \leq \ep_{J_0}$ for every $j\geq J_0$. The self-similarity property of Brownian motions yields that for every $J\geq 1$ and $K\in\{0,...,2^J-1\}$, for every $j\geq J/\log J$, $|H_j((B_t)_{J,K})-1/2| \leq \ep_J $, where $\ep_J =C \frac{\log^2 J}{J}$, for some constant $C $ independent of $J$ and $K$. We omit the details here, which can easily be checked by the reader. Consequently, a sample path of Brownian motion satisfies with probability one [**C1**]{} and [**C2**]{}, with $H(B_t)=1/2$. Similar estimations on the oscillations of fractional Brownian motions $B_h$ of Hurst exponent $h$ lead to the same almost sure result for the sample paths, which also satisfy almost surely [**C1**]{} and [**C2**]{} with $H(B_h) =h$. Applications to self-similar functions: Theorem \[thself\] {#self} ========================================================== We consider the class of self-similar functions defined in Definition \[defiself\], with the parameters of $Z$ and the contractions $S_k$ satisfying (\[cond1\]). The multifractal analysis of such a function $Z$ is performed in [@JAFFFORM]. Here we are going to prove that, under the conditions (\[cond1\]) on the ${\lambda}_k$ and the $S_k$, $Z$ is CMT, and that the multifractal behavior of $Z$ can be directly deduced from this analysis. It is a case where our analysis provides a natural way to compute the singularity spectrum of $Z$. 
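Since (\[defself\]) and (\[cond1\]) are only referenced here, the sketch below assumes the standard fixed-point form $Z=\phi+\sum_k {\lambda}_k\,(Z\circ S_k^{-1})$ on $S_k(\zu)$, which is consistent with the iterated formula (\[decompf\]) of the next subsection; all numerical parameters ($r_k$, $\lambda_k$, $\phi$) are illustrative, hypothetical choices.

```python
import numpy as np

# Hedged sketch: compute a self-similar function as the fixed point of the
# contraction  Z -> phi + sum_k lam_k (Z o S_k^{-1})  on S_k([0,1]).
r = np.array([0.5, 0.5])            # contraction ratios of S_k(t) = b_k + r_k t
b = np.array([0.0, 0.5])            # the images S_k([0,1]) partition [0,1]
lam = np.array([0.6, -0.5])         # |lam_k| < 1: contraction in sup norm
phi = lambda t: np.sin(np.pi * t)   # Lipschitz seed with phi(0) = phi(1) = 0

t = np.linspace(0.0, 1.0, 2 ** 12 + 1)
Z = np.zeros_like(t)
for _ in range(200):
    Znew = phi(t)
    for k in range(2):
        mask = (t >= b[k]) & (t <= b[k] + r[k])
        Znew[mask] += lam[k] * np.interp((t[mask] - b[k]) / r[k], t, Z)
    resid = np.abs(Znew - Z).max()  # sup-norm gap between successive iterates
    Z = Znew
# (the shared point t = 1/2 receives lam_0*Z(1) + lam_1*Z(0), both zero here,
#  because phi vanishes at the endpoints)
# at the fixed point, Z(t) = phi(t) + lam_0 * Z(2t) on [0, 1/2]
```

The iteration contracts with ratio $\max_k|\lambda_k|=0.6$, so after 200 steps the residual is at machine precision and the grid values satisfy the self-similar equation on each half of $[0,1]$.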
Preliminary results on the oscillations of $Z$ ---------------------------------------------- Let us introduce some notations: for every $n\geq 1$, for every $(\ep_1,\ep_2,...,\ep_n) \in \{0,1,...,d-1\} ^n$, we denote $I_{\ep_1,\ep_2,...,\ep_n}$ the interval $S_{\ep_1} \circ S_{\ep_2} \circ... \circ S_{\ep_n} (\zu)$. The integer $n$ being given, the open intervals $\stackrel{\circ}{(I_{\ep_1,\ep_2,...,\ep_n})}$ are pairwise disjoint, and the union of the closed intervals $ I_{\ep_1,\ep_2,...,\ep_n}$ equals $\zu$. Now fix an integer $n\geq 1$ and a sequence $(\ep_1,\ep_2,...,\ep_n) \in \{0,1,...,d-1\} ^n$. The interval $ I_{\ep_1,\ep_2,...,\ep_n}$ has a length equal to $r_{\ep_1}r_{\ep_2}\cdot \cdot\cdot r_{\ep_n}$. Finally, by iterating $n$ times formula (\[defself\]), we get that for every $t\in I_{\ep_1,\ep_2,...,\ep_n}$, $$\begin{aligned} \label{decompf} Z(t) & = &{\lambda}_{\ep_1}\cdot{\lambda}_{\ep_2}\cdot\cdot\cdot{\lambda}_{\ep_n} \cdot (Z\circ S_{\ep_n}^{-1} \circ S_{\ep_{n-1}}^{-1} \circ... \circ S_{\ep_1}^{-1} ) (t) \\ \nonumber&+ &{\lambda}_{\ep_1}\cdot{\lambda}_{\ep_2}\cdot\cdot\cdot{\lambda}_{\ep_{n-1}} \cdot (\phi\circ S_{\ep_{n-1}}^{-1} \circ S_{\ep_{n-2}}^{-1} \circ... \circ S_{\ep_{1}}^{-1} ) (t) \\ \nonumber &+& ... \\ \nonumber &+& {\lambda}_{\ep_1} \cdot{\lambda}_{\ep_2}\cdot (\phi\circ S_{\ep_2}^{-1} \circ S_{\ep_1}^{-1} ) (t) \\ \nonumber &+& {\lambda}_{\ep_1} \cdot (\phi\circ S_{\ep_1}^{-1} ) (t) \\ \nonumber &+& \phi(t). \end{aligned}$$ Recall that $\chi_{\max}$ is defined in (\[cond1\]). \[lem3\] Let $\kappa = \frac{\chi_{\max}}{1-\chi_{\max}}$. 
Then either $Z$ is a $\kappa$-Lipschitz function, or there is a constant $C>1$ such that for every $n\geq 1$, for every $(\ep_1,\ep_2,...,\ep_n) \in \{0,1,...,d-1\} ^n$, $$\label{majmin1} C^{-1} \cdot |{\lambda}_{\ep_1}\cdot{\lambda}_{\ep_2}\cdot\cdot\cdot{\lambda}_{\ep_n} |\leq \om_{I_{\ep_1,\ep_2,...,\ep_n}}(Z) \leq C \cdot |{\lambda}_{\ep_1}\cdot{\lambda}_{\ep_2}\cdot\cdot\cdot{\lambda}_{\ep_n} |.$$ We first find an upper-bound for $ \om_{I_{\ep_1,\ep_2,...,\ep_n}}(Z) $. We use the iterated formula (\[decompf\]). Let $n$ and $(\ep_1,\ep_2,...,\ep_n) \in \{0,1,...,d-1\} ^n$. Remark that when $t$ ranges in ${I_{\ep_1,\ep_2,...,\ep_n}}$, $(S_{\ep_n}^{-1} \circ S_{\ep_{n-1}}^{-1} \circ... \circ S_{\ep_1}^{-1} ) (t) $ ranges in $\zu$. Hence the oscillation of the first term of (\[decompf\]) is upper-bounded by $ | {\lambda}_{\ep_1} {\lambda}_{\ep_2 } \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} | \cdot \om_{\zu}(Z)$. Now, for every $k\in\{1,...,n-1\}$, when $t$ ranges in ${I_{\ep_1,\ep_2,...,\ep_n}}$, $(S_{\ep_k}^{-1} \circ S_{\ep_{k-1}}^{-1} \circ... \circ S_{\ep_1}^{-1} ) (t) $ ranges in $I_{\ep_{k+1},...,\ep_n}$. Using that $\phi$ is a Lipschitz function, we get that the oscillation of each term of the form $ {\lambda}_{\ep_1}{\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_{k}} \cdot (\phi \circ S_{\ep_{k}}^{-1} \circ S_{\ep_{k-1}}^{-1} \circ... \circ S_{\ep_{1}}^{-1} ) (t) $ is upper bounded by $ |{\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_{k}} | ( r_{\ep_{k+1}} \! \cdot \! \cdot \! \cdot \! r_{\ep_n})$. Finally, we obtain using (\[cond1\]) $$\begin{aligned} \om_{I_{\ep_1,\ep_2,...,\ep_n}}(Z) & \leq & | {\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} |+ \sum_{k=1} ^{n-1} |{\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_{k}} |\cdot ( r_{\ep_{k+1}} \! \cdot \! \cdot \! \cdot \! r_{\ep_n})\\ &\leq & | {\lambda}_{\ep_1} {\lambda}_{\ep_2} \! 
\cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} |\Big[ 1+ \sum_{k=1}^{n-1}\Big( \prod_{j=k+1}^{n} \frac{r_{\ep_j}}{|{\lambda}_{\ep_j}|} \Big) \Big] \\ &\leq & | {\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} | \Big[ 1+ \sum_{k=1}^{n-1} \chi_{\max}^k \Big] \leq C_1 | {\lambda}_{\ep_1}{\lambda}_{\ep_2}\ \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} |,\end{aligned}$$ where $C_1= 1+ \sum_{k=1}^{+\infty} \chi_{\max}^k < +\infty$. We now move to the lower bound. Assume that $Z$ is not $\kappa$-Lipschitz. There are two real numbers $0\leq t_0 ,t'_0 \leq 1$ such that $|Z(t'_0)-Z(t_0)| \geq (\kappa+\eta) |t'_0-t_0|$, for some $\eta>0$. Let $n$ and $(\ep_1,\ep_2,...,\ep_n) \in \{0,1,...,d-1\} ^n$. Let us call $t_n = S_{\ep_1} \circ S_{\ep_{2}} \circ ... \circ S_{\ep_{n}}(t_0)$ and $t'_n = S_{\ep_1} \circ S_{\ep_{2}} \circ ... \circ S_{\ep_{n}} (t'_0)$. We obviously have $t_n, t'_n \in I_{\ep_1,\ep_2,...,\ep_n}$, and thus $\om_{I_{\ep_1,\ep_2,...,\ep_n}}(Z) \geq |Z(t'_n) -Z(t_n)|$. Using again (\[decompf\]), we get by the same lines of computations as above $$\begin{aligned} &&\hspace{-10mm} |Z(t'_n) -Z(t_n)| \\ \geq && \hspace{-5mm} |{\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} |\cdot |Z(t'_0) -Z(t_0)| - \sum_{k=1} ^{n-1} |{\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_{k}} | ( r_{\ep_{k+1}} \! \cdot \! \cdot \! \cdot \! r_{\ep_n}) |t'_0-t_0|\\ \geq &&\hspace{-5mm} | {\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} |\cdot |Z(t'_0) -Z(t_0)| \left[ 1 - \sum_{k=1}^{n-1}\Big( \prod_{j=k+1}^{n} \frac{r_{\ep_j}}{ |{\lambda}_{\ep_j} |} \Big) \frac{|t'_0 -t_0|}{|Z(t'_0) -Z(t_0)|}\right] \\ \geq &&\hspace{-5mm} | {\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} |\cdot |Z(t'_0) -Z(t_0)| \Big[ 1- \frac{ 1} {\kappa +\eta}\sum_{k=1}^{+\infty} \chi_{\max}^k \Big] \geq C_2 \cdot | {\lambda}_{\ep_1}{\lambda}_{\ep_2}\ \! \cdot \! \cdot \! \cdot \! 
{\lambda}_{\ep_n} |,\end{aligned}$$ where $C_2 = |Z(t'_0) -Z(t_0)| ( 1- \frac{ 1} {\kappa +\eta}\frac{\chi_{\max}}{1-\chi_{\max}}) >0$ by assumption. Finally, (\[majmin1\]) is proved with $C= \max(C_1, C_2^{-1})$. Comparison of $Z$ with a self-similar measure ---------------------------------------------- In order to prove that the function $Z$ (\[defself\]) satisfies our conditions [**C1**]{} and [**C2**]{}, we introduce a self-similar measure $\mu$, whose multifractal behavior will be compared with that of $Z$, and the notion of multifractal formalism. Let us consider the exponent $\beta>1$ such that (\[defbeta\]) holds and the associated self-similar measure $\mu$ defined by (\[defmu\]) $ \mu = \sum_{k=0} ^{d-1} p_k \cdot (\mu\circ S_k ^{-1}).$ In our case where the similitudes do not overlap, it is easily checked that by construction, for every $n$ and $(\ep_1,\ep_2,...,\ep_n) \in \{0,1,...,d-1\} ^n$, we have $\mu( I_{\ep_1,\ep_2,...,\ep_n}) = p_{\ep_1} p_{\ep_2} \! \cdot \! \cdot \! \cdot p_{\ep_n} = | {\lambda}_{\ep_1} {\lambda}_{\ep_2} \! \cdot \! \cdot \! \cdot \! {\lambda}_{\ep_n} |^{\beta}$. This class of measures has been extensively studied [@BMP; @CM; @OLSEN; @PERES]. For instance, the multifractal analysis of $\mu$ is very well known. For this, let us introduce the so-called $L^q$-spectrum of $\mu$ defined by $$\label{deftau} \tau_\mu: q\in \R \mapsto \tau_\mu(q)= \liminf_{j\ra +\infty} \tau_\mu(j,q), \ \mbox{ where } \tau_\mu(j,q) = \frac {\log_2 \sum_{k=0} ^{2^j-1} \mu(I_{j,k}) ^q}{- j}.$$ We only recall the properties we need [@CM; @OLSEN; @PERES] \[prop4\] 1. For every $q\in \R$, $ \tau_\mu(q)$ is the unique real number satisfying the equation $\sum_{k=0}^{d-1} (p_k )^q (r_k)^{-\tau_\mu(q)} =1$. The mapping $q\mapsto \tau_\mu(q)$ is analytic on the set where it is finite. Moreover, the liminf used to define $\tau_\mu(q)$ is in fact a limit for every $q$ such that $\tau_\mu(q)$ is finite. 2. 
There is an interval of exponents $I_\mu= [\alpha_{\min},\alpha_{\max}]$ such that for every $\alpha\in I_\mu$, $\tilde d_{\mu}(\alpha) = (\tau_\mu)^*(\alpha) $, where $(\tau_\mu)^*(\alpha) := \inf_{q\in \R} (q\alpha- \tau_\mu(q))$ is by definition the Legendre transform of $\tau_\mu$. 3. If $\alpha\notin I_\mu$, then $ \{t : \alpha_\mu(t)=\alpha\} = \emptyset $. 4. There is $M \geq 1$ such that for every $j,k$ large enough, $2^{-j M} \leq \mu(I_{j,k}) \leq 2 ^{-j /M}$. Part (2) above is known as the multifractal formalism for measures, when it holds. Let us come back to the function $Z$. The reader can check that such a function $Z$ satisfies [**C1**]{} and [**C2**]{}, and is thus CMT. Here we propose a quick proof of Theorem \[thself\], especially adapted to this case. The aim is to prove that $Z$ can be written $Z= g\circ f$. Each dyadic interval $I_{j,k}$ is included in one dyadic interval $I_{\ep_1,...,\ep_n}$, and contains a dyadic interval $I_{\ep_1,...,\ep_n,\ep_{n+1}}$, such that $I_{\ep_1,...,\ep_n}$ and $I_{\ep_1,...,\ep_n,\ep_{n+1}}$ can be written respectively $I_{j',k'}$ and $I_{j'',k''}$ with $0\leq j-j' , j''-j' \leq C$, for some constant $C$ independent of $j$ and $k$. Consequently, $\om_{ I_{\ep_1,...,\ep_n,\ep_{n+1}}} (Z) \leq \om_{I_{j,k} } (Z) \leq \om_{ I_{\ep_1,...,\ep_n}} (Z) $, and thus $$C^{-1}\mu({ I_{\ep_1,...,\ep_n,\ep_{n+1}}})^{1/\beta} \leq \om_{{j,k} } (Z) \leq C\mu({ I_{\ep_1,...,\ep_n}} )^{1/\beta}.$$ Using now the self-similarity properties of the measure and the open set condition, we see that $ {\mu (I_{\ep_1,...,\ep_n,\ep_{n+1}} ) } \geq \min_k ({p_k}) \cdot {\mu(I_{j,k}) }$ and $\mu(I_{j,k}) \geq \min_k (p_k) \cdot \mu({ I_{\ep_1,...,\ep_{n}}})$. Hence, combining this with the last double inequality, we obtain that for every $j$ and $k$ $$\label{majmin3} C^{-1}\mu(I_{j,k})^{1/\beta} \leq \om_{{j,k} } (Z) \leq C\mu( I_{j,k})^{1/\beta}$$ for another constant $C$, i.e. Proposition \[lem3\] extends to all dyadic intervals. 
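Item (1) above gives a direct numerical recipe for $\tau_\mu$ and its Legendre transform. The sketch below treats the dyadic binomial case ($d=2$, $r_0=r_1=1/2$, illustrative weights), for which the normalization of (\[deftau\]) gives the closed form $\tau_\mu(q)=-\log_2(p_0^q+p_1^q)$, so the root-finding can be cross-checked exactly.

```python
import numpy as np

# Illustrative computation of the L^q-spectrum of a self-similar measure and of
# its Legendre transform.  With the normalization of (deftau), tau_mu(q) solves
# sum_k p_k^q r_k^(-tau) = 1; the weights p below are arbitrary example values.
p = np.array([0.3, 0.7])     # weights p_k
r = np.array([0.5, 0.5])     # contraction ratios r_k (dyadic case)

def tau(q):
    # tau -> sum_k p_k^q r_k^(-tau) is increasing (0 < r_k < 1): bisection
    lo, hi = -60.0, 60.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (p ** q * r ** (-mid)).sum() < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def legendre(alpha, qs=np.linspace(-20.0, 20.0, 4001)):
    # (tau_mu)^*(alpha) = inf_q (q*alpha - tau_mu(q)), taken over a finite q-grid
    return min(q * alpha - tau(q) for q in qs)

# binomial checks: tau(1) = 0, tau(0) = -1, tau(q) = -log2(p^q + (1-p)^q)
```

In this binomial case the Legendre transform attains its maximal value $-\tau_\mu(0)=1$ at $\alpha=\tau_\mu'(0)$, in agreement with part (2) of the proposition.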
Let us check that $Z$ satisfies conditions [**C1**]{} and [**C2**]{}. Remark that, because of (\[majmin3\]), $$C^{-\beta} = C^{-\beta} \sum_{k=0}^{2^j-1} \mu(I_{j,k}) \leq \sum_{k=0}^{2^j-1} (\om_{j,k}(Z))^{\beta} \leq C^{\beta} \sum_{k=0}^{2^j-1} \mu(I_{j,k}) = C^{\beta} .$$ Let $\ep_1>0$. Using part (4) of Proposition \[prop4\] to find upper- and lower-bounds for $\om_{j,k}(Z)$ uniformly in $k$, we obtain $$\begin{aligned} && \sum_{k=0}^{2^j-1} (\om_{j,k}(Z))^{\beta -\ep_1} \geq \sum_{k=0}^{2^j-1} (\om_{j,k}(Z))^{\beta} (C\mu(I_{j,k})^{1/\beta})^{ -\ep_1} \geq C^{-\beta-\ep_1} 2^{j\ep_1/(M\beta)} \\ \mbox{and}&& \sum_{k=0}^{2^j-1} (\om_{j,k}(Z))^{\beta +\ep_1} \leq \sum_{k=0}^{2^j-1} (\om_{j,k}(Z))^{\beta} (C\mu(I_{j,k})^{1/\beta})^{ \ep_1} \leq C^{\beta+\ep_1} 2^{-j\ep_1 /(M\beta)} .\end{aligned}$$ Hence, the same computations as in the case of Weierstrass functions lead to the following choice: for some $J_0$ large enough, we set $\ep_{1} = \frac{2\beta^2 M \log_2 C}{J_0}$, and thus for every $j \geq J_0$, $|H_j(Z) - H(Z)| \leq \ep_1$. Let now $J,K$ be two integers, $\ep >0$, and focus on $H_{j-J}(Z_{J,K})$. The same computations as above and as in the Weierstrass case yield for $j\geq J$ $$\begin{aligned} \sum_{k'=0}^{2^{j-J}-1} ({\om_{j-J,k'}(Z_{J,K})})^{\beta -\ep} &= & \sum_{k=0: I_{j,k}\subset I_{J,K}}^{2^j-1} \left(\frac{\om_{j,k}(Z)}{\om_{J,K}(Z)}\right)^{\beta -\ep} .\end{aligned}$$ First notice that $\left(\frac{1}{\om_{J,K}(Z)}\right)^{\beta -\ep} \geq \left(\frac{1}{C\mu(I_{J,K})^{1/\beta}}\right)^{ \beta-\ep} \geq C^{-\beta-\ep} \mu(I_{J,K}) ^{-1+\ep/\beta} $. Then we remark that $$\begin{aligned} \sum_{k=0: I_{j,k}\subset I_{J,K}}^{2^j-1} (\om_{j,k}(Z))^{\beta -\ep} & \geq &\sum_{k=0: I_{j,k}\subset I_{J,K}}^{2^j-1} (\om_{j,k}(Z))^{\beta } (C\mu(I_{j,k})^{1/\beta})^{ -\ep} . 
\end{aligned}$$ Combining these inequalities we get $$\begin{aligned} \sum_{k'=0}^{2^{j-J}-1} ({\om_{j-J,k'}(Z_{J,K})})^{\beta -\ep}&\geq & C^{-\beta-2\ep} \sum_{k=0: I_{j,k}\subset I_{J,K}}^{2^j-1} \frac{(\om_{j,k}(Z))^{\beta }}{\mu(I_{J,K})} \frac {\mu(I_{J,K}) ^{\ep/\beta} } { \mu(I_{j,k})^{ \ep/\beta} }.\end{aligned}$$ Let us focus on $\frac {\mu(I_{J,K}) ^{\ep/\beta} } { \mu(I_{j,k})^{ \ep/\beta} } = \left(\frac {\mu(I_{J,K}) } { \mu(I_{j,k})}\right)^{ \ep/\beta} $. This quantity is lower bounded by $L^{(j-J) \ep/\beta}$ for some constant $L>1$ (uniformly in $k$ and $K$), since the ratio of the $\mu$-measures of a dyadic interval and its father (in the dyadic tree) is uniformly upper- and lower-bounded for our dyadic self-similar measure $\mu$. Finally, we obtain $$\begin{aligned} \sum_{k'=0}^{2^{j-J}-1} ({\om_{j-J,k'}(Z_{J,K})})^{\beta -\ep}&\geq & C^{-\beta-2\ep} \sum_{k=0: I_{j,k}\subset I_{J,K}}^{2^j-1} \frac{(\om_{j,k}(Z))^{\beta }}{\mu(I_{J,K})} L^{(j-J) \ep/\beta}\\ &\geq & C^{-2\beta-2\ep} \sum_{k=0: I_{j,k}\subset I_{J,K}}^{2^j-1} \frac{\mu(I_{j,k})}{\mu(I_{J,K})} L^{(j-J) \ep/\beta} \\ & \geq & C^{-2\beta-2\ep} L^{(j-J) \ep/\beta}.\end{aligned}$$ Hence if we fix $\eta_J = 1/\log_2 J$, then the sum above is greater than 1 as soon as $j\geq J+[J\eta_J]$ and $\ep \geq \frac{4 \beta^2 \log C \log_2 J}{J \log L}$. Thus for $j\geq J+[J\eta_J]$, $H_{j-J}(Z_{J,K}) \leq \frac{1}{\beta-\ep} \leq \frac{1}{\beta} +\ep_J$ with $\ep_J= \frac{8 \log C \log_2 J}{J \log L}$. Similarly one shows that for $j\geq J+[J\eta_J]$, $H_{j-J}(Z_{J,K}) \geq \frac{1}{\beta} -\ep_J$, and [**C2**]{} holds true for $Z$. Applying Theorem \[maintheo\] yields that $Z$ is CMT and can be written as $Z=g\circ f$, where $g$ is monofractal of exponent $1/\beta$ and $f$ is strictly increasing. 
Computation of the singularity spectrum of $F$ ---------------------------------------------- Applying directly the construction of Section \[proof\], we find a function $g$ monofractal with exponent $1/\beta$ and a strictly increasing function $f$ such that $Z= g \circ f$. One can even enhance this result as follows. Following the proof of Section \[proof\], we see that for every $p\geq 1$, for every $k\in \{0,1,...,2^{J_p}-1\}$, $$\mu(I_{J_p,k}) ^{1 + \kappa_p} \leq |f(I_{J_p,k}) | \leq \mu(I_{J_p,k}) ^{1 -\kappa_p},$$ where $(\kappa_p)_{p\geq 1}$ is a positive sequence decreasing to zero and $\mu$ is defined by (\[defmu\]). Let us denote by $F$ the integral of the self-similar measure $\mu$, i.e. for $t\in\zu$, $F(t)= \mu([0,t])$. We claim that $f= g_1 \circ F$ for some function $g_1$ which belongs to $C^{1-\eta}(\zu)$, for every $\eta>0$. Indeed, define for every $t\in\zu$ $g_1(t) = f\circ F^{-1} (t)$. This is possible since $F$ is a homeomorphism of $\zu$. By construction, for every $p\geq 1$, for every $k\in \{0,1,...,2^{J_p}-1\}$, $g_1\big(F ( I_{J_p,k}) \big) = f \circ F^{-1} \circ F ( I_{J_p,k}) = f ( I_{J_p,k}) $, thus by the inequality above, $$|F(I_{J_p,k} )|^{1 + \kappa_p} = \mu(I_{J_p,k}) ^{1 + \kappa_p} \leq |g_1\big(F ( I_{J_p,k}) \big) | \leq \mu(I_{J_p,k}) ^{1 -\kappa_p} = |F(I_{J_p,k} )|^{1 -\kappa_p} ,$$ where we used that $|F(I_{J_p,k} )| = \mu(I_{J_p,k})$. Now the sets of intervals $\{ F ( I_{J_p,k}) : k\in \{0,1,...,2^{J_p}-1\}\}$ obviously form a covering of $\zu$ to which Lemma \[lem2\] can be applied with $H=1$. Finally, we find that $Z=g\circ g_1 \circ F = g_2 \circ F$, where $g_2$ is clearly monofractal with exponent $1/\beta$ since $g$ and $g_1$ are monofractal respectively with exponents $1/\beta$ and $1$ (the resulting function $g_2$ is monofractal since the oscillations of $g$ and $g_1$ are upper and, most importantly, lower bounded on every interval). 
An example of function satisfying [**C1-C2**]{} in a triadic basis {#multi} ================================================================== We recall the construction of the multifractal functions of [@OKA], which somehow generalizes Bourbaki's and Perkin's functions. Let us consider the function $Z_a$ defined for $0\leq a\leq 1$ as the limit of an iterated construction: start from $Z_a^0(t)=t $ on $\zu$, and define $Z_a^j(t)$ recursively on $\zu$ by the following scheme. Suppose that $Z_a^j$ is continuous and piecewise affine on each triadic interval $[k3^{-j}, (k+1)3^{-j}]$, $k\in\{0,...,3^j-1\}$. Then $Z_a^{j+1}$ is constructed as follows: on each triadic interval $[k3^{-j}, (k+1)3^{-j}]$, $Z_a^{j+1}$ is again a continuous function which is affine on each triadic subinterval $[k'3^{-(j+1)}, (k'+1)3^{-(j+1)}]$ included in $[k3^{-j}, (k+1)3^{-j}]$, and $$\begin{aligned} Z_a^{j+1} (k3^{-j}) & = & Z_a^{j} (k3^{-j})\\ Z_a^{j+1} (k3^{-j}+ 3^{-(j+1)}) & = & Z_a^{j} (k3^{-j}) + a \Big( Z_a^{j} ((k+1)3^{-j})- Z_a^{j} (k3^{-j}) \Big) \\ Z_a^{j+1} (k3^{-j}+ 2\cdot3^{-(j+1)}) & = & Z_a^{j} (k3^{-j})+ (1-a) \Big( Z_a^{j} ((k+1)3^{-j})- Z_a^{j} (k3^{-j}) \Big) \\ Z_a^{j+1} ((k+1)3^{-j}) & = & Z_a^{j} ((k+1)3^{-j}).\end{aligned}$$ This simple construction is illustrated in Figure \[fig1\]. It is straightforward to see that the sequence $(Z_a^j)_{j\geq 1}$ converges uniformly to a continuous function $Z_a$ as soon as $0<a<1$. Bourbaki’s function is obtained when $a=2/3$, while Perkin’s function corresponds to $a=5/6$. \[fig1\] ![Iterated construction of $Z_a$, from step $j$ to step $j+1$](figure1 "fig:"){width="10cm" height="5cm"} For $a\leq 1/2$, the function is simply the integral of a trinomial measure of parameters $(a, 1-2a,a)$, hence its singularity spectrum is completely known. 
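The recursive scheme above is easy to implement; here is a minimal Python sketch (the helper names `refine` and `okamoto` are ours, not from the text) that builds the values of $Z_a^j$ on the triadic grid:

```python
def refine(vals, a):
    """One refinement step: from the values of Z_a^j at the points k*3^-j
    to the values of Z_a^{j+1} at the points k*3^-(j+1)."""
    out = []
    for v0, v1 in zip(vals[:-1], vals[1:]):
        d = v1 - v0
        # the three new grid values on this triadic interval
        out.extend([v0, v0 + a * d, v0 + (1 - a) * d])
    out.append(vals[-1])
    return out

def okamoto(a, j):
    """Values of Z_a^j on the triadic grid of step 3**(-j)."""
    vals = [0.0, 1.0]          # Z_a^0(t) = t, sampled at t = 0 and t = 1
    for _ in range(j):
        vals = refine(vals, a)
    return vals
```

For $a=2/3$ the first step already gives $Z_a^1(1/3)=2/3$ and $Z_a^1(2/3)=1/3$, and the value at a triadic point is left unchanged by all later refinements, consistently with the uniform convergence to $Z_a$.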
We are now going to explain why the functions $Z_a$, when $a\geq 1/2$, satisfy our assumptions, and thus can be written as the composition of a monofractal function $g$ (with an exponent $H$ we are going to determine) with an increasing function. We will also deduce from this study the singularity spectrum of $Z_a$. For $a>1/2$, the limit function $Z_a$ is nowhere monotone. Let us compute the oscillations of $Z_a$ on each triadic interval. Remark first that the slope of $Z_a^1$ is $3a$ on $[0,1/3]$, $-3(2a-1)$ on $[1/3,2/3]$, and $3a$ on $[2/3,1]$. Iteratively, if $j \geq 1$ and $k\in\{0,...,3^j-1\}$, we write $k3^{-j} = \sum_{p=1}^j \xi_p 3^{-p}$, with $\xi_p \in\{0,1,2\}$. Then the slope of $Z_a^j$ on $[k3^{-j}, (k+1)3^{-j}]$ is simply $$(3a) ^{n_{k,j,0}} (-3(2a-1))^{n_{k,j,1}}(3a) ^{n_{k,j,2}} = 3^j (a) ^{n_{k,j,0}} (-(2a-1))^{n_{k,j,1}}(a) ^{n_{k,j,2}},$$ where $n_{k,j,i}$ is the number of integers $p\in\{1,...,j\}$ such that $\xi_p=i$ (for $i=0,1,2$) in the triadic decomposition of $k3^{-j}$. Let us consider the trinomial measure $\mu_a$ of parameters $(\frac{a}{4a-1},\frac{2a-1}{4a-1},\frac{a}{4a-1})$. Then it is obvious that the absolute value of the slope of $Z_a^j$ on each triadic interval $[k3^{-j}, (k+1)3^{-j}]$ can be written as $ \mu_a([k3^{-j}, (k+1)3^{-j}])3^j (4a-1)^j.$ As a final remark, we also notice that the oscillation of $Z_a$ on each triadic interval $[k3^{-j}, (k+1)3^{-j}]$ is the same as the oscillation of $Z^j_a$ on this interval, which is equal to $3^{-j}$ times the absolute value of the slope, i.e. $$\label{eq11} \mu_a([k3^{-j}, (k+1)3^{-j}]) (4a-1)^j.$$ Let $q\in\R$. Let us compute the sum of the $q$-th powers of the oscillations of $Z_a$ at generation $j$. We have $$\label{eq10} \sum_{k=0}^{3^j-1} (\om_{[k3^{-j}, (k+1)3^{-j}]}(Z_a))^{q} =\sum_{k=0}^{3^j-1} (\mu_a([k3^{-j}, (k+1)3^{-j}]))^{q} 3^{qj\log_3 (4a-1)}.$$ Let us now explain how we easily compute the exponent $H_a$ such that (\[defh\]) holds true. 
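Before turning to $H_a$, the oscillation formula and the closed form behind (\[eq10\]) can be checked numerically: the increment of $Z_a$ on a triadic interval is a product of one factor $a$, $1-2a$ or $a$ per triadic digit, so the sum of the $q$-th powers of the oscillations at generation $j$ equals $(2a^q+(2a-1)^q)^j$. A short Python sketch (helper names are ours):

```python
import math
from itertools import product

def oscillation(a, digits):
    """|increment| of Z_a on the triadic interval whose left endpoint has
    triadic digits `digits`: each refinement step multiplies the increment
    of the parent interval by a, 1-2a or a."""
    inc = 1.0
    for d in digits:
        inc *= (a, 1.0 - 2.0 * a, a)[d]
    return abs(inc)

a, j, q = 0.7, 6, 1.3

# sum of the q-th powers of the oscillations at generation j,
# to be compared with the closed form (2 a^q + (2a-1)^q)^j
total = sum(oscillation(a, d) ** q for d in product(range(3), repeat=j))
closed = (2 * a ** q + (2 * a - 1) ** q) ** j

# a single oscillation equals mu_a(I) * (4a-1)^j, as in Eq. (eq11)
w = (a / (4 * a - 1), (2 * a - 1) / (4 * a - 1), a / (4 * a - 1))
digs = (0, 2, 1, 1, 0, 2)
mu = math.prod(w[d] for d in digs)
```

Both identities hold exactly here, since the $(4a-1)^{-j}$ normalization of the trinomial weights cancels against the $(4a-1)^j$ factor.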
For a multinomial measure $\mu_a$ (in fact, for any positive Borel measure), it is very classical in multifractal analysis to introduce the functions $\tau_{\mu_a,j}(q)$ and the scaling function $\tau_{\mu_a}(q)$, defined for $q\in\R$ as in (\[deftau\]) but in the triadic basis: $$\begin{aligned} \tau_{\mu_a}(q)= \liminf_{j\ra +\infty} \tau_{\mu_a,j}(q), \ \mbox{ where } \ \tau_{\mu_a,j}(q)= \frac{\log_3 \sum_{k=0}^{3^j-1} (\mu_a([k3^{-j}, (k+1)3^{-j}]))^{q}}{ -j}.\end{aligned}$$ In our simple case, it is easy to see that for every $j\geq 1$ and $q\in\R$ $$\begin{aligned} \tau_{\mu_a,j}(q) \ = \ \tau_{\mu_a}(q) & = & -\log_3\left(\left(\frac{a}{4a-1}\right)^q+\left( \frac{2a-1}{4a-1}\right)^q+\left(\frac{a}{4a-1}\right)^q\right)\\ & = & -\log_3(2(a)^q+(2a-1)^q) + q\log_3 (4a-1). \end{aligned}$$ What matters to us is the value of $q$ for which the sum in (\[eq10\]) equals 1. Let us write this specific value of $q$ as $1/H$, for some $H>0$. When this sum is 1, we have $$3^{-j\tau_{\mu_a}(1/H)} 3^{ j(\log_3 (4a-1))/H} = \sum_{k=0}^{3^j-1} (\mu_a([k3^{-j}, (k+1)3^{-j}]))^{1/H} 3^{j(\log_3 (4a-1))/H}=1.$$ Let $H_a$ be the solution of the equation $-\tau_{\mu_a}(1/H_a) + \log_3 (4a-1)/H_a=0$, which is equivalent to $$\label{defha} 2(a)^{1/H_a}+(2a-1)^{1/H_a} =1.$$ This solution is positive, unique, and strictly smaller than 1. Hence, in this case, the monofractal exponent $H_a$ is defined through an implicit formula. In order to get the whole condition [**C2**]{}, it suffices to notice that any rescaled function $(Z_a)_{J,K}$ (as defined in (\[eq00\]), but here with triadic intervals) is actually equal to $Z_a$ (if $Z_a$ is increasing on $[k3^{-j},(k+1)3^{-j}]$) or to $Z_a(1-\cdot)$ (if $Z_a$ is decreasing on $[k3^{-j},(k+1)3^{-j}]$). Hence $H((Z_a)_{J,K})$ is a limit for every $J,K$, and is even constant, equal to $H_a$. Thus [**C2**]{} is satisfied. 
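Equation (\[defha\]) defines $H_a$ only implicitly, but since its left-hand side is decreasing in $s=1/H_a$ for $1/2<a<1$, it is easily solved by bisection; a Python sketch (the function name `H_a` is ours):

```python
def H_a(a, tol=1e-12):
    """Solve 2*a**(1/H) + (2a-1)**(1/H) = 1 for H by bisection on s = 1/H.
    For 1/2 < a < 1 the left-hand side equals 4a-1 > 1 at s = 1, decreases
    in s, and tends to 0, so the root satisfies s > 1, i.e. H < 1."""
    f = lambda s: 2 * a ** s + (2 * a - 1) ** s - 1
    lo, hi = 1.0, 2.0
    while f(hi) > 0:          # enlarge the bracket until f changes sign
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi))
```

For $a=2/3$ this returns $H_a=1/2$, in agreement with the exact computation $2(2/3)^2+(1/3)^2=1$.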
We can then apply Theorem \[maintheo\]: $Z_a$ is the composition of a monofractal function $g_a$ of exponent $H_a$ with an increasing function $F_a$. For $a=2/3$, we see that $H_a=1/2$ is the solution to (\[defha\]), since $2(2/3)^{2} + (1/3)^{2} =1$. We have plotted in Figure \[fig2\] Bourbaki’s function $f_{2/3}$, its corresponding time change $F_{2/3}$, and the corresponding monofractal function $g_{2/3}$ of exponent $1/2$ such that $f_{2/3}=g_{2/3}\circ F_{2/3}$. In this case, we can even go further and compute the singularity spectrum of $Z_a$. The trinomial measure satisfies the multifractal formalism for measures, i.e. the singularity spectrum of $\mu_a$ is given by the Legendre transform of $\tau_{\mu_a}$: $$d_{\mu_a}(\alpha) = (\tau_{\mu_a})^*(\alpha):= \inf_{q\in\R} (q\alpha-\tau_{\mu_a}(q)),$$ for every $\alpha \in [-\log_3 (a/(4a-1)), -\log_3((2a-1)/(4a-1))]$. \[fig2\] ![[**Top:**]{} Bourbaki’s function $f_{2/3}$ on the left, the multifractal time change $F_{2/3}$ in the middle, and on the right the monofractal function $g_{2/3}$ of exponent $1/2$ such that $f_{2/3} = g_{2/3}\circ F_{2/3}$. [**Bottom:**]{} Singularity spectra of $\mu_{2/3}$ on the left, of $f_{2/3}$ on the right.](bourb2 "fig:"){width="14cm" height="5cm"}\ ![[**Top:**]{} Bourbaki’s function $f_{2/3}$ on the left, the multifractal time change $F_{2/3}$ in the middle, and on the right the monofractal function $g_{2/3}$ of exponent $1/2$ such that $f_{2/3} = g_{2/3}\circ F_{2/3}$. [**Bottom:**]{} Singularity spectra of $\mu_{2/3}$ on the left, of $f_{2/3}$ on the right.](specbourb "fig:"){width="12cm" height="4cm"} It is easy to see, using (\[eq11\]), that if $\mu_a$ has a local Hölder exponent equal to $\alpha$ at a point $t_0$, then $Z_a$ has at $t_0$ a pointwise Hölder exponent equal to $\alpha-\log_3(4a-1)$. 
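Numerically, the Legendre transform of $\tau_{\mu_a}$ and the critical parameter $a_0$ discussed below are easy to evaluate; a Python sketch (the helpers `tau`, `spectrum_point` and `a0` are ours), using a centred difference for $(\tau_{\mu_a})'(q)$:

```python
import math

def tau(q, a):
    """Scaling function of the trinomial measure mu_a with weights
    (a, 2a-1, a)/(4a-1): tau(q) = -log_3 sum_i p_i^q."""
    Z = 4 * a - 1
    p = (a / Z, (2 * a - 1) / Z, a / Z)
    return -math.log(sum(x ** q for x in p), 3)

def spectrum_point(q, a, h=1e-6):
    """One point (alpha, d_mu(alpha)) of the Legendre spectrum,
    with alpha = tau'(q) approximated by a centred difference."""
    alpha = (tau(q + h, a) - tau(q - h, a)) / (2 * h)
    return alpha, q * alpha - tau(q, a)

def a0(tol=1e-12):
    """Root of 54 a^3 - 27 a^2 = 1 on (1/2, 1), found by bisection;
    the left-hand side is increasing there."""
    lo, hi = 0.5, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 54 * mid ** 3 - 27 * mid ** 2 < 1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At $q=0$ the spectrum reaches its maximum $1$ (since $\tau_{\mu_a}(0)=-1$), and at $q=1$ one has $\tau_{\mu_a}(1)=0$, so the spectrum touches the diagonal $d=\alpha$ there, as for any probability measure.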
Hence the multifractal spectrum of $Z_a$ is deduced from the one of $\mu_a$ by the formula $$d_{Z_a}(h) = d_{\mu_a} \left( h+\log_3(4a-1)\right)$$ for every $h \in [-\log_3 (a), -\log_3(2a-1)]$. A more explicit formula is obtained as follows: for every $q\in\R$, if $\alpha = (\tau_{\mu_a})'(q)$, then $d_{\mu_a} (\alpha) = q (\tau_{\mu_a})'(q) - \tau_{\mu_a}(q)$. The singularity spectra of $f_{2/3}$ and $\mu_{2/3}$ are given in Figure \[fig2\]. Finally, remark that the maximum of the spectrum is obtained for $\alpha_a=(\tau_{\mu_a})'(0)-\log_3(4a-1)$, and that $d_{Z_a}(\alpha_a) =1$. After computations, we find $\alpha_a= -\frac{1}{3} \log_3 ( a^2(2a-1))$. Let us consider the value $a_0$ such that $\alpha_{a_0} =1$. Then $a_0^2(2a_0-1)=1/27$, i.e. $54a_0^3-27a_0^2 =1$. When $a<a_0$, the set of points $t$ for which $h_{Z_a}(t)>1$ is of Lebesgue measure 1, hence we recover the main result of [@OKA]: $Z_a$ is then differentiable on a set of Lebesgue measure 1. Here we obtain in addition the whole multifractal spectrum of $Z_a$. Acknowledgment {#acknowledgment .unnumbered} -------------- The author thanks Yanick Heurteaux for a discussion on the oscillations of self-similar functions. [00]{} Barral, J., Mandelbrot, B., Multifractal products of cylindrical pulses, Probab. Theory Relat. Fields [**124**]{}(3), 409–430 (2002) J. Barral and S. Seuret, The singularity spectrum of Lévy processes in multifractal time, Adv. Math. 14(1), 437-468, 2007. A. Benassi, S. Jaffard, D. Roux, Elliptic Gaussian random processes, Rev. Mat. Iberoamericana, 13(1):19–90 (1997). J. Bertoin, Lévy processes, Cambridge Univ. Press (1998). T. Bousch, Y. Heurteaux, On oscillations of Weierstrass-type functions, Preprint. G. Brown, G. Michon, J. Peyrière, On the multifractal analysis of measures, J. Stat. Phys., 66:3–4, 775–790, 1992. R. Cawley, R. D. Mauldin, Multifractal decomposition of Moran fractals, Adv. Math. 
92:196-236, 1992. U. Frisch, G. Parisi, Fully developed turbulence and intermittency, Proc. International Summer School Phys. Enrico Fermi, 84–88, North Holland, 1985. S. Jaffard, Multifractal formalism for self-similar functions: Parts I and II, SIAM J. Math. Anal. [**28(4)**]{}, 944–997 (1997). S. Jaffard, The multifractal nature of Lévy processes, Probab. Theory Relat. Fields [**114(2)**]{}, 207–227 (1999). S. Jaffard, Beyond Besov spaces: Part 1, J. Fourier Anal. Appl., 10(3), 221–246, 2004. Kahane, J.-P., Sur le chaos multiplicatif, Ann. Sci. Math. Québec [**9**]{}, 105–150 (1985) B. Mandelbrot, A. Fischer, L. Calvet, A multifractal model of asset returns, Cowles Foundation Discussion Paper \#1164 (1997). B. Mandelbrot, Intermittent turbulence in self-similar cascades: divergence of high moments and dimension of the carrier, J. Fluid Mech. [**62**]{}, 331–358 (1974) H. Okamoto, A remark on continuous, nowhere differentiable functions, Proc. Japan Acad. [**81**]{}, Ser. A, 2005. L. Olsen, A multifractal formalism, Adv. Math. 116(1): 82-196, 1995. R. Peltier, J. Lévy Véhel, Multifractional Brownian motion, Technical Report 2645, INRIA (1995). Y. Peres, B. Solomyak, Existence of $L^q$ dimensions and entropy dimension for self-conformal measures. R. Riedi, Multifractal processes, in [Long Range Dependence: Theory and Applications]{}, eds. Doukhan, Oppenheim, Taqqu (Birkhäuser, 2002), pp. 625–715.
--- abstract: 'We experimentally found an interesting and unexpected result: a rubidium cell with a long ground-state decay time cannot be used to generate a non-classical correlated photon pair via the D2 transition of $^{87}$Rb in a four-wave-mixing configuration \[Opt. Express [**16**]{}, 21708 (2008)\]. In this work, we give a detailed theoretical analysis of the EIT of hot $^{87}$Rb for different ground-state decay times, which shows a probable reason why a rubidium cell with a long decay time is not a useful candidate for the preparation of a non-classical photon pair via the D2 transition. The simulations agree well with the experimental results. We believe our finding is very instructive for this kind of research.' author: - 'Qun-Feng Chen' - 'Xiao-Song Lu' - 'Bao-Sen Shi' - 'Guang-Can Guo' title: 'Is a rubidium cell with long decay time always useful for generating a non-classical photon pair?' --- It is well known that filling a rubidium cell with buffer gas or coating its walls with paraffin can greatly decrease the decay between the ground states, and in most cases this decrease greatly improves the performance of the system. Cells coated with paraffin or filled with buffer gas are extensively used in atomic-physics experiments, for example, recently, in experiments on the generation of non-classical correlated photon pairs[@Wal:2003:196; @Eisaman:PRL:2004:233602; @Eisaman:N:2005:837; @manz:040101]. In these works, a non-classical correlated photon pair is successfully generated using Raman scattering [@Duan:N:2001:413] via the D1 transition of Rb in a cell filled with buffer gas. Very recently, we prepared non-classical correlated photon pairs using non-degenerate four-wave mixing in a rubidium cell[@chen:oe:2008; @chen:053810]. 
During these experiments, we found an interesting and unexpected result: a normal rubidium cell is a good candidate for the generation of non-classical non-degenerate photon pairs using both the D1 and D2 transitions of $^{87}$Rb; on the contrary, we could not obtain photon pairs via the D2 transition when a cell coated with paraffin or filled with buffer gas was used. We tried cells coated with paraffin and filled with 30 Torr and 8 Torr of neon, respectively; in these cells we could observe stimulated four-wave mixing, but we could not obtain correlated photon pairs. We think this counter-intuitive result is very probably caused by the small splitting of the D2 transition of rubidium combined with the large Doppler broadening. Electromagnetically induced transparency (EIT) is the key ingredient in this kind of experiment[@balic:183601; @Kolchin:PRA:2007:033814]. However, this combination makes the EIT disappear if the decay between the ground states is negligible. The disappearance of EIT makes it impossible to generate a coherent photon. Therefore no correlated photons can be obtained when a cell with negligible decay between the ground states is used. Our theoretical analysis shows that decay between the ground states can make the EIT reappear, which makes the generation of a correlated photon pair possible. Experiments on the EIT effect of the D2 line of $^{87}$Rb with different kinds of cells support our calculation. We believe our finding is very instructive for this kind of research. We present our theoretical analysis as follows. The energy-level diagram of $^{87}$Rb is shown in Fig. \[fig:level\]. The figure shows that the excited levels of $^{87}$Rb are not singlets: the $5P_{3/2}$ level has 4 sublevels, and the $5P_{1/2}$ level has 2 sublevels. Two of the sublevels, $F=1$ and $F=2$, can form a $\Lambda$ structure for EIT with the ground states $5S_{1/2}$. This structure can be simplified to a four-level structure as shown in Fig. 
\[fig:setup\], in which there are two $\Lambda$-type structures for EIT: $|1\rangle-|3\rangle-|2\rangle$ and $|1\rangle-|4\rangle-|2\rangle$. If the energy difference between $|3\rangle$ and $|4\rangle$ is not large enough, these two paths will interfere with each other, and the properties of the EIT will be changed, especially when Doppler broadening is considered. We make a detailed analysis by using the master equation. Considering a four-level system with two fields $\omega_{p}$ and $\omega_{c}$ as shown in Fig. \[fig:setup\], we treat $\omega_{p}$ as the probe field, which is much weaker than the coupling field $\omega_{c}$. The effective Hamiltonian of the system can be written as $$\setlength\arraycolsep{5pt} H_{\rm int}=-\frac{\hbar}{2}\begin{pmatrix} 0 & 0 & \Omega_{p3} & \Omega_{p4}\\ 0 & 2(\Delta_p-\Delta_c) & \Omega_{c3} & \Omega_{c4} \\ \Omega_{p3} & \Omega_{c3} & 2\Delta_p & 0 \\ \Omega_{p4} & \Omega_{c4} & 0 & 2(\Delta_p-\omega_{43}) \end{pmatrix}, \label{eq:lig:H41}$$ where $\Delta_p=\omega_p-\omega_{31}$, $\Delta_c=\omega_{c}-\omega_{32}$, and $\omega_{ij}$ is the frequency difference between levels $\left|i\right>$ and $\left|j\right>$. $\Omega_{pi}=\mu_{i1}E_p/\hbar$ and $\Omega_{ci}=\mu_{i2}E_c/\hbar$ are the Rabi frequencies of the fields on the corresponding transitions, where $\mu_{ij}$ is the electric-dipole moment of the $\left|i\right>\to\left|j\right>$ transition. Here we suppose all $\Omega_{pi}$ and $\Omega_{ci}$ are real. When a cell filled with buffer gas or coated with paraffin is used, the exchange of atoms can be neglected; therefore the decay between the ground states is very small and can be ignored. 
The master equation for the atomic density operator can be written as [@Fleischhauer:2005; @Chen:PRA:2008:013804] ![(Color online) Im\[$\chi(\omega_{p})$\] versus $\delta$ when no Doppler broadening is considered.[]{data-label="fig:chi:nodop"}](chi1_26_nodop.eps){width="8cm"} $$\begin{aligned} \frac{d\rho}{dt}&=&\frac{1}{i\hbar}[H,\rho]+\frac{\Gamma_{31}}{ 2}(2\hat\sigma_{13}\rho\hat\sigma_{31}-\hat\sigma_{33}\rho-\rho\hat\sigma_{33})\nonumber\\ &&+\frac{\Gamma_{32}}{2}(2\hat\sigma_{23}\rho\hat\sigma_{32}-\hat\sigma_{33}\rho-\rho\hat\sigma_{33}) \nonumber \\ &&+\frac{\Gamma_{41}}{ 2}(2\hat\sigma_{14}\rho\hat\sigma_{41}-\hat\sigma_{44}\rho-\rho\hat\sigma_{44})\nonumber\\ &&+\frac{\Gamma_{42}}{ 2}(2\hat\sigma_{24}\rho\hat\sigma_{42}-\hat\sigma_{44}\rho-\rho\hat\sigma_{44})\nonumber\\ &&+\frac{\gamma_{3\rm deph}}{2}(2\hat\sigma_{33}\rho\hat\sigma_{33}-\hat\sigma_{33}\rho-\rho\hat\sigma_{33}) \nonumber \\ &&+\frac{\gamma_{4\rm deph}}{ 2}(2\hat\sigma_{44}\rho\hat\sigma_{44}-\hat\sigma_{44}\rho-\rho\hat\sigma_{44}) \nonumber \\ &&+\frac{\gamma_{\rm2deph}}{ 2}(2\hat\sigma_{22}\rho\hat\sigma_{22}-\hat\sigma_{22}\rho-\rho\hat\sigma_{22}). \label{master}\end{aligned}$$ We numerically solve Eq. (\[master\]) to obtain the linear susceptibility at the probe frequency $\omega_{p}$, $\chi(\omega_{p})\propto (\rho_{31}/\Omega_{p3}+\rho_{41}/\Omega_{p4})$. In the calculation, we suppose $\mu_{31}=\mu_{41}=\mu_{32}=-\mu_{42}$ [@Chen:PRA:2008:013804]. The energy difference between $5P_{3/2},F=1$ and $5P_{3/2},F=2$ is 157 MHz, which is about 26 times $\Gamma_{3}=\Gamma_{31}+\Gamma_{32}$ (about 6 MHz). Substituting $\Omega_{p3}=\Omega_{p4}=0.001\Gamma_{3}$, $\Omega_{c3}=-\Omega_{c4}=\Gamma_{3}$, and $\Delta_{c}=0$ into Eq. (\[master\]), we obtain Im\[$\chi(\omega_{p})$\] versus $\delta=\Delta_{p}-\Delta_{c}$ as shown in Fig. \[fig:chi:nodop\]. This figure shows that the existence of level $|4\rangle$ slightly affects the EIT spectrum: the EIT signal is not symmetric. 
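The steady state of Eq. (\[master\]) can be computed by vectorizing the density matrix and solving a linear system; below is a self-contained Python/NumPy sketch of such a calculation, with the parameter values quoted above. As simplifying assumptions of this sketch (not statements of the text), we take $\Gamma_{31}=\Gamma_{32}=\Gamma_{41}=\Gamma_{42}=\Gamma_3/2$ and set all dephasing rates to zero.

```python
import numpy as np

G = 1.0                 # Gamma_3 = Gamma_31 + Gamma_32, used as frequency unit
w43 = 26.0 * G          # 5P_{3/2} F=1 - F=2 splitting: 157 MHz ~ 26 Gamma_3
Op = 0.001 * G          # Omega_p3 = Omega_p4 (weak probe)
Oc = 1.0 * G            # Omega_c3 = -Omega_c4

def sig(i, j):
    """Operator |i><j| in the basis |1>,...,|4>."""
    m = np.zeros((4, 4))
    m[i - 1, j - 1] = 1.0
    return m

def liouvillian(H, collapses):
    """Column-stacking vectorization of
    drho/dt = -i[H,rho] + sum_k gamma_k/2 (2 C rho C^+ - C^+C rho - rho C^+C),
    for real collapse operators C."""
    I = np.eye(4)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for gamma, C in collapses:
        CdC = C.T @ C
        L += gamma / 2 * (2 * np.kron(C, C) - np.kron(I, CdC) - np.kron(CdC.T, I))
    return L

def susceptibility(delta, Dc=0.0):
    """chi(omega_p) ~ rho_31/Omega_p3 + rho_41/Omega_p4 in steady state,
    at two-photon detuning delta = Delta_p - Delta_c."""
    Dp = delta + Dc
    H = -0.5 * np.array(
        [[0.0, 0.0,       Op,       Op],
         [0.0, 2 * delta, Oc,      -Oc],
         [Op,  Oc,        2 * Dp,   0.0],
         [Op, -Oc,        0.0,      2 * (Dp - w43)]])
    cols = [(G / 2, sig(1, 3)), (G / 2, sig(2, 3)),   # Gamma_31, Gamma_32
            (G / 2, sig(1, 4)), (G / 2, sig(2, 4))]   # Gamma_41, Gamma_42
    L = liouvillian(H, cols)
    # append the trace condition Tr(rho) = 1 and solve in least-squares sense
    A = np.vstack([L, np.eye(4).reshape(1, 16)])
    b = np.zeros(17, dtype=complex); b[-1] = 1.0
    v = np.linalg.lstsq(A, b, rcond=None)[0]
    rho = v.reshape(4, 4).T      # undo the column stacking
    return rho[2, 0] / Op + rho[3, 0] / Op
```

Scanning $\delta$ with this sketch reproduces the qualitative behavior of Fig. \[fig:chi:nodop\]: the imaginary part of $\chi$ nearly vanishes at $\delta=0$ and is largest near the Autler-Townes sidebands $\delta\approx\pm\Omega_c/2$.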
Next, we consider the effect of Doppler broadening. The distribution function of the frequency shift with respect to the center frequency $f_0$ can be simplified as $$P(f)\propto \exp \left( -\frac{m\lambda_{0}^{2}(f-f_{0})^{2}}{2kT} \right), \label{eq:dist}$$ where $k$ is the Boltzmann constant, $T$ is the temperature, $f$ is the frequency, $m$ is the mass of $^{87}$Rb, and $\lambda_{0}$ is the wavelength of the corresponding transition. Substituting the data of $^{87}$Rb and $T=320$ K into Eq. (\[eq:dist\]), the imaginary part of the susceptibility after the Doppler integration is shown in Fig. \[fig:chi\]. ![(Color online) Imaginary part of the susceptibility with Doppler integration.[]{data-label="fig:chi"}](chi1_26.eps){width="8cm"} This figure clearly shows that the EIT signal has been ruined completely by the Doppler broadening. Instead of transparency at $\delta=0$, there is an enhanced absorptive peak. This absorptive peak is very small compared with the background; therefore we have not observed it in the experiment yet. The disappearance of the transparency window makes the atomic ensemble opaque to the photon, and therefore coherent photons cannot be generated. When a normal cell is used, the exchange of atoms must be considered, and the decay time between the ground states is short. The atoms leaving and entering the light beam can be treated as an effective decay between the ground states, and the master equation for a normal cell can be written as $$\frac{d\rho}{dt}=M-r\rho+\frac{r}{2}(\hat\sigma_{11}+\hat\sigma_{22}), \label{eq:ma1}$$ or $$\begin{aligned} \frac{d\rho}{dt}&=&M+\frac{\gamma}{2}(2\hat{\sigma}_{12}\rho\hat{\sigma}_{21}-\hat{\sigma}_{22}\rho-\rho\hat{\sigma}_{22})\nonumber\\&&+\frac{\gamma}{2}(2\hat{\sigma}_{21}\rho\hat{\sigma}_{12}-\hat\sigma_{11}\rho-\rho\hat{\sigma}_{11}), \label{eq:ma2}\end{aligned}$$ where $M$ is the right-hand side of Eq. (\[master\]) and $r$ is the exchange rate of atoms. 
$\gamma$ is the effective decay between the ground states caused by the exchange of atoms. Equation (\[eq:ma1\]) gives a direct description of the atoms leaving and entering the field, and Eq. (\[eq:ma2\]) describes the effective decay between the ground states caused by the exchange of atoms. Although these two descriptions are different, they show a similar effect of the atom exchange on the EIT. The imaginary parts of the susceptibility with Doppler broadening at $r=0.01\Gamma_{3}$ and $\gamma=0.01\Gamma_{3}$ are shown in Fig. \[fig:decay\]. The figure shows that both simulations give similar results: the decay caused by the exchange of atoms can enhance the EIT of the system. This result agrees with our experimental result on the EIT of the D2 transition of $^{87}$Rb, and also agrees with the experimental results on the generation of photon pairs described above. ![(Color online) Im\[$\chi(\omega_{p})$\] versus $\delta$ when decay caused by the atom exchange is considered. Red solid line is the result of Eq. (\[eq:ma1\]), and green dashed line is the result of Eq. (\[eq:ma2\]).[]{data-label="fig:decay"}](chi1_26_decay_j.eps){width="8cm"} The reason why the decay can enhance the EIT is that the decay makes the EIT signal decrease very quickly as the detuning of the coupling increases. Therefore the interference between the two EIT paths is small enough, and the EIT can be preserved even when Doppler broadening exists. To support this point, we show the numerical comparison of the EIT with and without the decay for the cases in which the coupling is resonant with the $|2\rangle\to|3\rangle$ transition and in which it is at the center of the $|2\rangle\to|3\rangle$ and $|2\rangle\to|4\rangle$ transitions. The results are shown in Fig. \[fig:cmp\]. Figure \[fig:rsnt\] shows that the decay does not affect the EIT too much when the coupling is resonant with a transition. 
Figure \[fig:dtn\] shows that when the coupling is detuned from the transition, the interference of the two paths causes a large absorptive peak at $\delta\approx0$, which makes the EIT disappear once Doppler broadening is considered. The existence of decay between the ground states weakens this absorptive peak very quickly as the detuning of the coupling increases; this preserves the EIT even when Doppler broadening exists. In the case of the D1 transition of $^{87}$Rb, because the energy splitting of $5P_{1/2}$ is large enough, the EIT signal always survives the Doppler integration. That is the reason why the works reported in Refs. [@Wal:2003:196; @Eisaman:PRL:2004:233602; @Eisaman:N:2005:837; @manz:040101] could generate non-classical photon pairs successfully. In conclusion, we have made a detailed theoretical analysis of the EIT at the D2 transition of hot $^{87}$Rb, which shows that a long decay time between the ground states will ruin the EIT. This analysis gives a probable reason why a rubidium cell with a long decay time is not a useful candidate for the preparation of a non-classical photon pair via the D2 transition. The simulations agree well with the experimental results. We believe our finding provides very useful guidance for this kind of research. We thank Wei Jiang for some useful discussions. This work is funded by the National Fundamental Research Program (Grants No. 2006CB921900 and 2009CB929601), the National Natural Science Foundation of China (Grants No. 10674126 and No. 10874171), the Innovation Funds from the Chinese Academy of Sciences, the Program for NCET, and the International Cooperation Program from CAS. 
--- abstract: 'We study the density of states in monolayer and bilayer graphene in the presence of a random potential that breaks sublattice symmetries. While a uniform symmetry-breaking potential opens a uniform gap, a random symmetry-breaking potential also creates tails in the density of states. The latter can close the gap again, preventing the system from becoming an insulator. However, for a sufficiently large gap the tails contain localized states with nonzero density of states. These localized states allow the system to conduct at nonzero temperature via variable-range hopping. This result is in agreement with recent experimental observations in graphane by Elias [*et al.*]{}.' address: | $^1$Max-Planck-Institut für Physik Komplexer Systeme,\ Nöthnitzer Str. 38, 01187 Dresden, Germany\ $^2$Institut für Physik, Universität Augsburg author: - 'B. Dóra$^1$ and K. Ziegler$^2$' title: Gaps and tails in graphene and graphane --- Introduction ============ Graphene is a single sheet of carbon atoms forming a honeycomb lattice. A graphene monolayer as well as a stack of two graphene sheets (i.e. a graphene bilayer) are semimetals with remarkably good conducting properties [@novoselov05; @zhang05; @geim07]. These materials have been experimentally realized with external gates, which allow for a continuous change of the charge-carrier density. There exists a non-zero minimal conductivity at the charge neutrality point. Its value is very robust and almost unaffected by disorder or thermal fluctuations [@geim07; @tan07; @chen08; @morozov08]. Many potential applications of graphene require an electronic gap to switch between conducting and insulating states. A successful step in this direction has been achieved by recent experiments with hydrogenated graphene (graphane) [@elias08] and with gated bilayer graphene [@ohta06; @oostinga08; @gorbachev08]. 
These experiments take advantage of the fact that breaking a discrete symmetry of the lattice system opens a gap in the electronic spectrum at the Fermi energy. In the case of monolayer graphene (MLG), a staggered potential that depends on the sublattice of the honeycomb lattice plays the role of such a symmetry-breaking potential (SBP). For bilayer graphene (BLG), a gate potential that distinguishes between the two graphene layers plays a similar role. With these opportunities one enters a new field in graphene physics, where one can switch between conducting and insulating regimes of a two-dimensional material, either by a chemical process (e.g. oxidation or hydrogenation) or by applying an external electric field [@castro08]. The opening of a gap can be observed experimentally either by a direct measurement of the density of states (e.g., by scanning tunneling microscopy [@li09]) or indirectly by measuring transport properties. In the gapless case we observe a metallic conductivity $ \sigma\propto \rho D $, where $D$ is the diffusion coefficient (which is proportional to the scattering time) and $\rho$ is the density of states (DOS). This typically gives a conductivity of the order of $e^2/h$. The gapped case, on the other hand, has a strongly temperature-dependent conductivity due to thermal activation of charge carriers [@mott90] $$\sigma(T) = \sigma_0 e^{-T_0/T}$$ with some characteristic temperature scale $T_0$ which depends on the underlying model. A different behavior was found experimentally in the insulating phase of graphane [@elias08]: $$\sigma(T)\approx\sigma_0 e^{-(T_0/T)^{1/3}} \ ,$$ \[vrh\] which is known as 2D variable-range hopping [@mott69]. This behavior indicates the existence of well-separated localized states, even at the charge-neutrality point, where the parameter $T_0$ depends on the DOS at the Fermi energy $E_F$ as $T_0\propto 1/\rho(E_F)$. 
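The contrast between the two temperature laws can be made concrete in a few lines (illustrative only; $\sigma_0$ and $T_0$ are in arbitrary units here):

```python
import math

def sigma_activated(T, T0, sigma0=1.0):
    """Thermally activated conduction: sigma = sigma0 * exp(-T0/T)."""
    return sigma0 * math.exp(-T0 / T)

def sigma_vrh_2d(T, T0, sigma0=1.0):
    """2D Mott variable-range hopping, Eq. (vrh):
    sigma = sigma0 * exp(-(T0/T)^(1/3))."""
    return sigma0 * math.exp(-(T0 / T) ** (1.0 / 3.0))
```

At low temperature the VRH conductivity decays far more slowly: at $T=10^{-3}\,T_0$ the activated exponent is $-1000$ while the VRH exponent is only $-10$, which is why the two laws are easily distinguished experimentally.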
The experimental observation of a metal-insulator transition in graphane raises two questions: (i) what are the details that describe the opening of a gap, and (ii) what is the DOS in the insulating phase? In this paper we focus on the mechanism of the gap opening due to an SBP in MLG and BLG. It is crucial for our study that the SBP is not uniform in the realistic two-dimensional material. One reason for the latter is the fact that graphene is not flat but forms ripples [@morozov06; @meyer07; @castroneto07b]. Another reason is the incomplete coverage of a graphene layer with hydrogen atoms in the case of graphane [@elias08]. The spatially fluctuating SBP leads to interesting effects, including a second-order phase transition due to spontaneous breaking of a discrete symmetry and the formation of Lifshitz tails. Model {#sect2b} ===== Quasiparticles in MLG or in BLG are described in tight-binding approximation by a nearest-neighbor hopping Hamiltonian $$H=-\sum_{\langle r,r'\rangle}t_{r,r'} c^\dagger_r c_{r'} +\sum_r V_r c^\dagger_r c_r +h.c.,$$ \[ham00\] where $c_r^\dagger$ ($c_r$) are fermionic creation (annihilation) operators at lattice site $r$. The underlying lattice structure is either a honeycomb lattice (MLG) or two honeycomb lattices with Bernal stacking (BLG) [@castro08; @mccann06b]. We have an intralayer hopping rate $t$ and, for BLG, an interlayer hopping rate $t_\perp$. There are different forms of the potential $V_r$, depending on whether we consider MLG or BLG. Here we begin with potentials that are uniform on each sublattice, whereas random fluctuations are considered in subsection \[randomfluct\]. MLG --- $V_r$ is a staggered potential with $V_r=m$ on sublattice A and $V_r=-m$ on sublattice B. This potential obviously breaks the sublattice symmetry of MLG. Such a staggered potential can be the result of chemical absorption of non-carbon atoms in MLG (e.g. oxygen or hydrogen [@elias08]). 
A consequence of the symmetry breaking is the formation of a gap $\Delta_g=m$: the spectrum of MLG consists of two bands with dispersion $$E_k=\pm\sqrt{m^2+\epsilon_k^2}\ ,$$ where $$\epsilon_k^2=t^2[3+2\cos k_1+4\cos(k_1/2)\cos(k_2/2)]$$ \[specmlg\] for lattice spacing $a=1$. BLG --- $V_r$ is a biased gate potential that is $V_r=m$ ($V_r=-m$) on the upper (lower) graphene sheet. The potential in BLG has been realized as an external gate voltage, applied to the two layers of BLG [@ohta06]. The spectrum of BLG consists of four bands [@castro08], with two low-energy bands $$E_k^-(m)=\pm\sqrt{m^2+\frac{t_\perp^2}{2}+\epsilon_k^2-\sqrt{\frac{t_\perp^4}{4}+\epsilon_k^2(t_\perp^2+4m^2)}}\ ,$$ where $\epsilon_k$ is the monolayer dispersion of Eq. (\[specmlg\]), and two high-energy bands $$E_k^+(m)=\pm\sqrt{m^2+\frac{t_\perp^2}{2}+\epsilon_k^2+\sqrt{\frac{t_\perp^4}{4}+\epsilon_k^2(t_\perp^2+4m^2)}}\ .$$ The spectrum of the low-energy bands has nodes for $m=0$, where $E_k^-(0)$ vanishes in a $(k-K)^2$ manner; $K$ is the position of the nodes, which are the same as those of a single layer. For small $m\ll t_\perp$, a Mexican hat structure develops around $k=K$, with local extrema of the low-energy bands at $E_k^-(m)=\pm m$ and a global minimum/maximum of the upper/lower low-energy band at $E_k^-(m)=\pm mt_\perp/\sqrt{t_\perp^2+4m^2}$. For a small gating potential $V_r=\pm m$ we can expand $E_k^-(m)$ under the square root near the nodes and get $$E_k^-(m)\sim \pm\sqrt{[1-4\epsilon_k^2t_\perp^{-1}(t_\perp^2+4\epsilon_k^2)^{-1/2}]m^2+E_k^-(0)^2} \ .$$ $t_\perp$ apparently reduces the gap. Very close to the nodes we can approximate the factor in front of $m^2$ by 1 and obtain an expression similar to the dispersion of MLG: $ E_k^-(m)\sim \pm\sqrt{m^2+E_k^-(0)^2}$. Here we notice the absence of the Mexican hat structure in this approximation. The resulting spectra for MLG and BLG are shown in Fig. \[figspecmlg\]. ![The energy spectra of MLG (blue) and BLG (red) are shown, with and without a gap (dashed and solid line, respectively) for positive energies. 
Note the characteristic Mexican hat structure of gapped BLG. \[figspecmlg\]](specmlg.eps "fig:"){width="7cm" height="7cm"} Low-energy approximation ------------------------ The two bands of MLG and the two low-energy bands of BLG represent a spinor-1/2 wave function. This allows us to expand the corresponding Hamiltonian in terms of Pauli matrices $\sigma_j$ as $$H=h_1\sigma_1+h_2\sigma_2+m\sigma_3\ .$$ \[ham01\] Near each node the coefficients $h_j$ read in low-energy approximation [@mccann06] $$h_j=i\nabla_j \ \ ({\rm MLG}), \qquad h_1=\nabla_1^2-\nabla_2^2,\ \ h_2=2\nabla_1\nabla_2 \ \ ({\rm BLG})\ ,$$ \[elements\] where $(\nabla_1,\nabla_2)$ is the 2D gradient. Random fluctuations {#randomfluct} ------------------- In a realistic situation the potential $V_r$ is not uniform, neither in MLG nor in BLG, as discussed in the Introduction. As a result, electrons experience a randomly varying potential $V_r$ along each graphene sheet, and $m$ in the Hamiltonian of Eq. (\[ham01\]) becomes a random variable in space as well. For BLG it is assumed that the gate voltage is adjusted at the charge-neutrality point such that on average $m_r$ is exactly antisymmetric with respect to the two layers: $\langle m_1\rangle_m=-\langle m_2\rangle_m$. At first glance, the Hamiltonian in Eq. (\[ham00\]) is a standard hopping Hamiltonian with random potential $V_r$. This is a model frequently used to study the generic case of Anderson localization [@anderson58]. The dispersion, however, is special in the case of graphene due to the honeycomb lattice: at low energies it consists of two nodes (or valleys) $K$ and $K'$ [@castroneto07b; @mccann06]. It is assumed here that randomness scatters only at small momentum, such that intervalley scattering, which requires a large momentum transfer at least near the nodal points (NP), is not relevant and can be treated as a perturbation. Then each valley contributes separately to the DOS, and the contributions of the two valleys to the DOS $\rho$ are additive: $ \rho=\rho_K+\rho_{K'} $. 
This allows us to consider the low-energy Hamiltonian of Eqs. (\[ham01\]), (\[elements\]), even in the presence of randomness, for each valley separately. Within this approximation the term $m_r$ is a random variable with mean value $\langle m_r\rangle_m ={\bar m}$ and variance $\langle (m_r-{\bar m})(m_{r'}-{\bar m})\rangle_m=g\delta_{r,r'}$. The following analytic calculations will be based entirely on the Hamiltonian of Eqs. (\[ham01\]), (\[elements\]), and the numerical calculations on the lattice Hamiltonian of Eq. (\[ham00\]). In particular, the average Hamiltonian $\langle H\rangle_m$ can be diagonalized by Fourier transformation and reads $$\langle H\rangle_m = k_1\sigma_1+k_2\sigma_2+{\bar m}\sigma_3$$ for MLG, with eigenvalues $E_k=\pm\sqrt{{\bar m}^2+k^2}$. For BLG the average Hamiltonian is $$\langle H\rangle_m = (k_1^2-k_2^2)\sigma_1+2k_1k_2\sigma_2+{\bar m}\sigma_3$$ with eigenvalues $E_k=\pm\sqrt{{\bar m}^2+k^4}$.

Symmetries {#symmetry000}
----------

Low-energy properties are controlled by the symmetry of the Hamiltonian and of the corresponding one-particle Green’s function $G(i\epsilon)=(H+i\epsilon)^{-1}$. In the absence of sublattice-symmetry breaking (i.e. for $m=0$), the Hamiltonian $H=h_1\sigma_1+h_2\sigma_2$ has a continuous chiral symmetry $$H\to e^{i\alpha\sigma_3} He^{i\alpha\sigma_3}=H \label{contsymmetry}$$ with a continuous parameter $\alpha$, since $H$ anticommutes with $\sigma_3$. The term $m\sigma_3$ breaks the continuous chiral symmetry. However, the behavior under transposition, $h_j^T=-h_j$ for MLG and $h_j^T=h_j$ for BLG in Eq. (\[elements\]), provides a discrete symmetry: $$H\to-\sigma_n H^T\sigma_n =H \ , \label{discretesymm}$$ where $n=1$ for MLG and $n=2$ for BLG. This symmetry is broken for the one-particle Green’s function $G(i\epsilon)$ by the $i\epsilon$ term. To see whether or not the symmetry is restored in the limit $\epsilon\to0$, the difference between $G(i\epsilon)$ and the transformed Green’s function $-\sigma_nG^T(i\epsilon)\sigma_n$ must be evaluated: $$G(i\epsilon)+\sigma_nG^T(i\epsilon)\sigma_n=G(i\epsilon)-G(-i\epsilon) \ .$$
\[op\] For the diagonal elements this is the DOS at the NP, $\rho(E=0)\equiv\rho_0$, in the limit $\epsilon\to0$. Thus the order parameter for spontaneous symmetry breaking is $\rho_0$. According to the theory of phase transitions, the transition from a nonzero $\rho_0$ (spontaneously broken symmetry) to $\rho_0=0$ (symmetric phase) is a second-order phase transition and should be accompanied by a divergent correlation length at the transition point. Since our symmetry is discrete, such a phase transition can exist in $d=2$ and should be of Ising type. A calculation of $\rho_0$ using the SCBA indeed gives a second-order transition at the point where $\rho_0$ vanishes, with a divergent correlation length $\xi$ for the DOS fluctuations, $$\xi\sim \xi_0 (m_c^2-{\bar m}^2)^{-1}$$ for ${\bar m}^2\sim m_c^2$ with a finite coefficient $\xi_0$ [@ziegler97]. Whether this transition is an artefact of the SCBA or represents a physical effect due to the appearance of two types of spectra (localized for vanishing SCBA-DOS and delocalized for nonzero SCBA-DOS) is not obvious here and requires further studies.

Density of states
-----------------

Our focus in the subsequent calculation is on the DOS of MLG and BLG. In the absence of disorder, the DOS of 2D Dirac fermions opens a gap $\Delta\propto {\bar m}$ as soon as a nonzero term ${\bar m}$ appears in the Hamiltonian of Eq. (\[ham01\]), since the low-energy dispersion is $E_k=\pm\sqrt{{\bar m}^2+k^2}$ for MLG and $E_k=\pm\sqrt{{\bar m}^2+k^4}$ for BLG, respectively (cf. Fig. \[dosplot\]). Here we evaluate the DOS of MLG and BLG in the presence of a uniform gap. Given the energy spectrum, the DOS is defined as $$\rho(E)=\sum_k\delta(E-E_k).$$ For the MLG dispersion this reduces to $$\rho(E)=|E|\Theta(|E|-m),$$ where $\Theta(x)$ is the Heaviside function. For BLG, this gives $$\rho(E)=\frac{|E|}{2\sqrt{E^2-m^2}}\Theta(|E|-m), \label{dosbil}$$ both shown in Fig. \[dosplot\].
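The two closed forms can be cross-checked by histogramming $E_k$ on a dense momentum grid; the following minimal sketch (gap, cutoff, and normalization chosen for illustration, not taken from the text) verifies that above the gap $\rho\propto|E|$ for MLG and $\rho\propto|E|/\sqrt{E^2-m^2}$ for BLG:

```python
import numpy as np

m, L = 0.5, 2.0                       # illustrative gap and momentum cutoff
k = np.linspace(-L, L, 1201)
KX, KY = np.meshgrid(k, k)
k2 = KX**2 + KY**2

E_mlg = np.sqrt(m**2 + k2).ravel()    # MLG: E_k = sqrt(m^2 + k^2)
E_blg = np.sqrt(m**2 + k2**2).ravel() # BLG: E_k = sqrt(m^2 + k^4)

bins = np.linspace(0.0, 1.5, 40)
rho_mlg, edges = np.histogram(E_mlg, bins=bins, density=True)
rho_blg, _ = np.histogram(E_blg, bins=bins, density=True)
E = 0.5 * (edges[:-1] + edges[1:])

sel = E > m + 0.2                     # stay away from the gap edge
r1 = rho_mlg[sel] / E[sel]                              # ~ constant for MLG
r2 = rho_blg[sel] * np.sqrt(E[sel]**2 - m**2) / E[sel]  # ~ constant for BLG
print(r1.std() / r1.mean(), r2.std() / r2.mean())       # both small
```

Both ratios are flat up to discretization noise, confirming the analytic shapes above the gap.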
By retaining the full low-energy spectrum for BLG, $E_k^-$, the DOS can still be evaluated in closed form, with the result $$\begin{aligned} \fl \rho(E)=|E|\times \left\{\begin{array}{cc} \frac{(t_\perp^2+4m^2)}{\sqrt{(t_\perp^2+4m^2)E^2-t_\perp^2m^2}} & \textmd{for } m>|E|>\frac{mt_\perp}{\sqrt{t_\perp^2+4m^2}}\\ \left(\frac{(t_\perp^2+4m^2)}{2\sqrt{(t_\perp^2+4m^2)E^2-t_\perp^2m^2}}+1\right) & \textmd{for }|E|>m.\\ \end{array}\right.\end{aligned}$$ In the limit $t_\perp\gg E,m$, this reduces to Eq. (\[dosbil\]) after dividing by $t_\perp$ (which was set to 1 in the low-energy approximation), and the DOS saturates to a constant value after the initial divergence. For finite $t_\perp$, however, the Dirac nature of the spectrum reappears, and the high-energy DOS increases linearly even for BLG, similarly to the MLG case. For $m=0$ and $E\ll t_\perp$, this lengthy expression gives $$\rho(E\ll t_\perp)=\frac{t_\perp}{2}.$$

![Density of states for a uniform symmetry-breaking potential for monolayer graphene and bilayer graphene is shown in the left panel. The density of states for a uniform symmetry-breaking potential for BLG is shown for several values of $t_\perp$. For small $t_\perp$, the mexican hat structure influences the DOS by shifting the gap to lower values, and by developing a kink at $E=m$.[]{data-label="dosplot"}](dosmonobi.eps "fig:"){width="7cm" height="7cm"}

![Density of states for a uniform symmetry-breaking potential for monolayer graphene and bilayer graphene is shown in the left panel. The density of states for a uniform symmetry-breaking potential for BLG is shown for several values of $t_\perp$.
For small $t_\perp$, the mexican hat structure influences the DOS by shifting the gap to lower values, and by developing a kink at $E=m$.[]{data-label="dosplot"}](dosblg.eps "fig:"){width="7cm" height="7cm"}

An interesting question, from the theoretical as well as the experimental point of view, arises here: what is the effect of random fluctuations around ${\bar m}$? Previous calculations, based on the self-consistent Born approximation (SCBA), have revealed that those fluctuations can close the gap again, even for an average SBP term ${\bar m}\ne0$ [@ziegler09b]. Only if ${\bar m}$ exceeds a critical value $m_c$ (which depends on the strength of the fluctuations) does an open gap survive in these calculations. This describes a special transition from metallic to insulating behavior. In particular, the DOS at the Dirac point $\rho_0$ vanishes with ${\bar m}$ like a power law, $$\rho_0({\bar m})\sim (m_c^2-{\bar m}^2)^{1/2} \ .$$ The exponent 1/2 of the power law is probably an artefact of the SCBA, similar to the critical exponents in mean-field approximations.

Self-consistent Born approximation
==================================

The average one-particle Green’s function can be calculated from the average Hamiltonian $\langle H\rangle_m$ by employing the self-consistent Born approximation (SCBA) [@suzuura02; @peres06; @koshino06] $$\langle G(i\epsilon)\rangle_m\approx(\langle H\rangle_m+i\epsilon-2\Sigma)^{-1}\equiv G_0(i(\epsilon+\eta),{\bar m}+m_s) \ . \label{scba1}$$ The SCBA is also known as the self-consistent non-crossing approximation in the Kondo and superconductivity communities. The self-energy $\Sigma$ is a $2\times2$ tensor due to the spinor structure of the quasiparticles: $\Sigma=-(i\eta\sigma_0+m_s \sigma_3)/2$. Scattering by the random SBP produces an imaginary part of the self-energy, $\eta$ (i.e. a one-particle scattering rate), and a shift $m_s$ of the average SBP ${\bar m}$ (i.e. ${\bar m}\to m'\equiv {\bar m}+m_s$). $\Sigma$ is determined by the self-consistent equation $$\Sigma=-g\sigma_3\left[(\langle H\rangle_m+i\epsilon-2\Sigma)^{-1}\right]_{rr}\sigma_3 \ . \label{spe00}$$ The symmetry in Eq.
(\[discretesymm\]) implies that, together with $\Sigma$, $$\sigma_n\Sigma\sigma_n=-(i\eta\sigma_0-m_s \sigma_3)/2$$ is also a solution (i.e. $m_s\to -m_s$ creates a second solution). The average DOS at the NP is proportional to the scattering rate: $\rho_0=\eta/2g\pi$. This reflects that scattering by the random SBP creates a nonzero DOS at the NP if $\eta>0$. Now we assume that the parameters $\eta$ and $m_s$ are uniform in space. Then Eq. (\[spe00\]) can be written in terms of two equations, one for the one-particle scattering rate $\eta$ and another for the shift of the SBP $m_s$: $$\eta=\eta\, gI,\ \ \ m_s=-{\bar m}\, gI/(1+gI) \ . \label{scba2}$$ $I$ is a function of ${\bar m}$ and $\eta$ and also depends on the Hamiltonian. For MLG it reads, with momentum cutoff $\lambda$, $$I_{MLG}= \frac{1}{2\pi}\ln\left[ 1+\frac{\lambda^2}{{\eta}^2 +({\bar m}+m_s)^2}\right] \label{int1}$$ and for BLG $$I_{BLG}\sim \frac{1}{4\sqrt{{\eta}^2+({\bar m}+m_s)^2}}\ \ \ \ (\lambda\sim\infty) \ . \label{int2}$$ A nonzero solution $\eta$ requires $gI=1$ in the first part of Eq. (\[scba2\]), such that $m_s=-{\bar m}/2$ from the second part. Since the integrals $I$ are monotonically decreasing functions for large ${\bar m}$, a real solution with $gI=1$ exists only for $|{\bar m}|\le m_c$. For both MLG and BLG the solutions read $$\eta^2=(m_c^2-{\bar m}^2)\,\Theta(m_c^2-{\bar m}^2)/4 \ , \label{scattrate}$$ where the model dependence enters only through the critical average SBP $m_c$: $$m_c=\left\{\begin{array}{cc} 2\lambda/\sqrt{e^{2\pi/g}-1}\sim 2\lambda e^{-\pi/g} & {\rm MLG}\\ g/2 & {\rm BLG} \end{array}\right. \ . \label{gap11}$$ $m_c$ is much bigger for BLG, a result which indicates that the effect of disorder is much stronger in BLG. This is also reflected by the scattering rate at ${\bar m}=0$, which is $\eta=m_c/2$. A central assumption of the SCBA is a uniform self-energy $\Sigma$. The imaginary part of $\Sigma$ is the scattering rate $\eta$, created by the random fluctuations. Therefore, a uniform $\eta$ means that effectively the random fluctuations densely fill the lattice.
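The self-consistency condition $gI=1$ (with $m_s=-{\bar m}/2$ inserted) can be solved numerically by bisection, since $I$ decreases monotonically in $\eta$; a sketch for the MLG case (the values of $g$ and $\lambda$ are illustrative):

```python
import math

def scba_eta_mlg(mbar, g, lam=1.0):
    """Scattering rate eta solving the MLG SCBA condition g*I(eta) = 1,
    with the SBP shift m_s = -mbar/2 inserted (so m' = mbar/2)."""
    def f(eta):  # g*I(eta) - 1, monotonically decreasing in eta
        I = math.log(1.0 + lam**2 / (eta**2 + (mbar / 2.0)**2)) / (2.0 * math.pi)
        return g * I - 1.0
    m_c = 2.0 * lam / math.sqrt(math.exp(2.0 * math.pi / g) - 1.0)
    if abs(mbar) >= m_c:      # no real solution: the gap stays open
        return 0.0, m_c
    lo, hi = 1e-12, 10.0 * lam
    for _ in range(200):      # bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), m_c

eta, m_c = scba_eta_mlg(mbar=0.0, g=2.0)
# closed form of the solution above: eta = sqrt(m_c^2 - mbar^2)/2 = m_c/2 here
print(abs(eta - m_c / 2.0) < 1e-9)
```

The numerical root reproduces the closed-form scattering rate, and the solver returns $\eta=0$ once $|{\bar m}|$ exceeds $m_c$, i.e. in the gapped phase.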
If the distribution of the fluctuations is too dilute, however, there is no uniform nonzero solution of Eq. (\[spe00\]). Nevertheless, a dilute distribution can still create a nonzero DOS, as we discuss in the following: we study contributions to the DOS due to rare events, leading to Lifshitz tails.

![Schematic shape of the density of states: full curves are the bulk density of states for a uniform symmetry-breaking potential, dotted curves represent the broadening by disorder. Depending on the average symmetry-breaking potential ${\bar m}$, the broadened densities of states can overlap inside the gap for ${\bar m}<m_c$ (a) or not for ${\bar m}>m_c$ (b). $m_c$ is given in Eq. (\[gap11\]). []{data-label="dosplot1"}](gpdos0.eps){width="7cm" height="5cm"}

Lifshitz tails {#lifshitztails}
==============

In a system with uniform SBP the gap can be destroyed locally by a local change of the SBP, $m\to m+\delta m_r$, due to the creation of a bound state. We start with a translationally invariant system and add $\delta m_r$ on site $r$. To evaluate the corresponding DOS from the Green’s function $G=(H+i\epsilon +\delta m\sigma_3)^{-1}$, using the Green’s function $G_0=(H+i\epsilon)^{-1}$ with uniform $m$, we employ the lattice version of the Lippmann-Schwinger equation [@ziegler85] $$G=G_0-G_0T_{r}G_0=({\bf 1}-G_0T_{r})G_0 \label{id3}$$ with the $2\times2$ scattering matrix $$T_{r}=(\sigma_0+\delta m_{r}\sigma_3 G_{0,rr})^{-1}\sigma_3\,\delta m_{r} \ . \label{impurity00}$$ In the case of MLG we have $$\begin{aligned} G_{0,rr}=\left[(E+i\epsilon)\sigma_0-m\sigma_3\right]\frac{1}{2\pi} \int_0^\lambda \frac{k}{(\epsilon-iE)^2+m^2+k^2}dk\\ \sim (E\sigma_0-m\sigma_3)\frac{1}{4\pi}\log[1+\lambda^2/(m^2-E^2)]+o(i\epsilon) \equiv (g_0+i\epsilon s)\sigma_0+g_3\sigma_3 \ .\end{aligned}$$ (Remark: the local Green’s function of BLG has the same form.)
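The bound state announced above shows up as a pole of $T_r$ inside the gap. Writing $G_{0,rr}=g_0\sigma_0+g_3\sigma_3$ (dropping the $i\epsilon s$ term), the determinant $\det(\sigma_0+\delta m_r\sigma_3 G_{0,rr})=(1+\delta m_r(g_3+g_0))(1+\delta m_r(g_3-g_0))$ vanishes when, e.g., $1+\delta m_r(g_0+g_3)=0$, i.e. $(E-m)F(E)=-1/\delta m_r$ with $g_0=EF$, $g_3=-mF$ and $F(E)=\frac{1}{4\pi}\log[1+\lambda^2/(m^2-E^2)]$. Since $F$ diverges logarithmically at the band edges, a sufficiently strong $\delta m_r$ pulls a level deep into the gap; a minimal numerical sketch (all parameter values illustrative):

```python
import math

def F(E, m, lam):
    """Logarithmic factor of the local MLG Green's function, |E| < m."""
    return math.log(1.0 + lam**2 / (m**2 - E**2)) / (4.0 * math.pi)

def bound_state_energy(dm, m=0.5, lam=10.0):
    """Solve the T-matrix pole condition (E - m)*F(E) = -1/dm by bisection.
    The left-hand side is monotonically increasing on (-m, m), so a sign
    change brackets the in-gap impurity level uniquely."""
    h = lambda E: (E - m) * F(E, m, lam) + 1.0 / dm
    lo, hi = -m + 1e-9, m - 1e-9
    if h(lo) >= 0.0:
        return None       # impurity too weak: level stays near the band edge
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E_b = bound_state_energy(dm=2.0)
print(E_b is not None and -0.5 < E_b < 0.0)   # level lies inside the gap
```

For this branch a positive $\delta m_r$ pulls the level below the gap center; the conjugate branch $1+\delta m_r(g_3-g_0)=0$ behaves symmetrically.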
Then the imaginary part of the Green’s function reads $$\begin{aligned} Im[G(\eta)]=-\left(\begin{array}{cc} \delta_{\epsilon s}(g_0+g_3+\delta m_{r}) & 0 \\ 0 & \delta_{\epsilon s}(g_0-g_3-\delta m_{r}) \end{array}\right)\end{aligned}$$ with $$\delta_{\epsilon s}(x)=\frac{1}{\pi}\frac{\epsilon s}{x^2+(\epsilon s)^2} \ .$$ Thus the DOS is the sum of two Dirac delta peaks, $$\rho_r\sim\delta_{\epsilon s}(g_0+g_3+\delta m_{r})+ \delta_{\epsilon s}(g_0-g_3-\delta m_{r}) \ .$$ For a Gaussian distribution a Dirac delta peak appears with probability $\propto\exp(-(g_0\pm g_3)^2/g)$. This calculation can easily be generalized to $\delta m_r$ on a set of several sites $r$ [@ziegler85]. The probability for the appearance of several such Dirac delta peaks then decreases exponentially with their number. Moreover, these contributions are local and form localized states. For stronger fluctuations $\delta m_r$ (i.e., for increasing $g$) the localized states can start to overlap. This is a quantum analogue of classical percolation. The localized states in the Lifshitz tails can be taken into account by a generalization of the SCBA to non-uniform self-energies. The main idea is to search for space-dependent solutions $\Sigma_r$ of Eq. (\[spe00\]). In general, this is a difficult problem. However, we have found that it simplifies considerably when studied in terms of a $1/{\bar m}$ expansion. For a Gaussian distribution, this method gives Lifshitz tails of the form [@villain00] $$\rho_0({\bar m}) \sim e^{-{\bar m}^2/4g} \ .$$

Numerical approach
==================

To understand the behavior of random gap fluctuations in graphene, and also the limitations of the SCBA, we carried out extensive numerical simulations on the honeycomb lattice, allowing for various random gap fluctuations on top of a uniform gap $m$. These fluctuations are simulated by box and Gaussian distributions. Within the SCBA, a second-order phase transition emerges at a critical mean $m_c$ for a given variance.
This is best manifested in the behavior of the DOS, which stays finite for $\langle m\rangle <m_c$ and vanishes beyond it, thus serving as an order parameter. Does this picture indeed survive when higher-order corrections in the fluctuations are taken into account? To start with, we take a fixed random-mass configuration with a given variance on the honeycomb lattice (HCL) with the conventional hoppings ($t$), represented by $H_0$. Then we take a separate Hamiltonian, responsible for the uniform, non-fluctuating gap, denoted by $H_{gap}$, and study the evolution of the eigenvalues of $H_0+mH_{gap}$ by varying $m$ for a $600\times600$ lattice. By using Lanczos diagonalization, we focus our attention only on the 200 eigenvalues closest to the NP. Their evolution is shown in Fig. \[rmeigenval600\]. This supports the existence of a finite $m_c$, but since it originates from a single random disorder configuration, rare events can alter the result. As a possible definition of the rigid gap, we also show the maximum of the energy-level spacing of these eigenvalues as a function of $m$. As seen, it starts to increase abruptly at a given value of $m$, which can define $m_c$.

![(Color online) The evolution of the 200 lowest eigenvalues is shown for a given random-mass configuration with Gaussian distribution (with variance $g$) on a $600\times600$ HCL, by varying the uniform gap. The red line denotes the maximum of the level spacing of these eigenvalues, a possible definition of the average gap.
\[rmeigenval600\]](rmeigenval600.eps "fig:"){width="7cm" height="10cm"}

![(Color online) The density of states at the NP is plotted for Gaussian random mass on a $200\times200$ HCL for $g=0.9^2$, 1, $1.1^2$, $1.2^2$ and $1.3^2$ from bottom to top after 400 averages. The symbols denote the numerical data, solid lines are fits using $a\exp(-b m^c)$. The inset shows the obtained exponents $c$ as a function of $g$, which are close to 1.5. \[rmgauss\]](dosrandommassgauss.eps "fig:"){width="8cm" height="8cm"}

To investigate whether a finite critical $m_c$ survives, we take smaller systems and evaluate the averaged DOS directly from many disorder realizations. To achieve this, we take a $200\times200$ HCL, evaluate the 200 eigenvalues closest to the NP, and count their number in a given small interval $\Delta E$ (smaller than the maximal eigenvalue) around zero. This method was found to be efficient in studying other types of randomness [@ziegler08b]. We mention that large values of $\Delta E$ pick up contributions from higher-energy states, while too small values are sensitive to the discrete lattice and consequently to the discrete eigenvalue structure of the Hamiltonian. For lattices containing a few $10^4-10^5$ sites, $\Delta E/t\sim 10^{-2}-10^{-4}$ is convenient. The resulting DOS is plotted in Figs. \[rmgauss\] and \[rmbox\] for Gaussian (with variance $g$) and box distributions (within $[-W..W]$, variance $g=W^2/3$). It does not indicate a sharp threshold, but rather the development of long Lifshitz tails due to randomness, as predicted in the previous section. To analyze them, we fitted the numerical data by assuming exponential tails of the form $$\rho(0)=t\exp\left(-a-b m^c\right)$$ for a Gaussian and $$\rho(0)=t\exp\left(-a-b/|m-W|^c\right)$$ for a box distribution, as suggested by Ref. [@Cardy].
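The Gaussian tail fit is linear in $(a,b)$ once $c$ is fixed, so a robust way to extract the exponent is to scan $c$ and solve the remaining least-squares problem in log space; a sketch on synthetic data (with a known exponent, not the actual numerical data of the text):

```python
import numpy as np

def fit_tail(m, rho, t=1.0, c_grid=np.linspace(0.5, 3.0, 251)):
    """Fit rho(0) = t*exp(-a - b*m**c) by scanning the exponent c and
    solving the remaining linear problem for (a, b) in log space."""
    y = np.log(rho / t)
    best = None
    for c in c_grid:
        A = np.column_stack([-np.ones_like(m), -m**c])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.sum((A @ coef - y)**2)
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], c)
    return best[1:]  # (a, b, c)

# synthetic "measured" tail with known parameters (illustrative only)
m_vals = np.linspace(0.2, 2.0, 40)
rho_vals = np.exp(-0.5 - 1.2 * m_vals**1.5)
a, b, c = fit_tail(m_vals, rho_vals)
print(round(c, 2))   # recovers the exponent 1.5
```

The same scan with the ansatz $b/|m-W|^c$ handles the box-distribution fits.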
The obtained values of $c$ are visualized in the insets of Figs. \[rmgauss\] and \[rmbox\]. Given the good agreement, we believe that the DOS at the NP is made of states that are localized in a Lifshitz tail. We mention that these results are not sensitive to finite-size effects at these values of the disorder and uniform gap; only smaller systems (like the $30\times30$ HCL) require more averages ($\sim 10^4$), whereas for larger ones (such as the $200\times200$ HCL) 400 averages are sufficient. In Fig. \[dosenerg\], the energy-dependent DOS is shown for Gaussian random mass with $g=1$ and for several uniform gap values. With increasing $m$, the DOS diminishes rapidly at low energies and develops a pseudogap. The logarithmic singularity at $E=t$ is washed out for $g=1$. We also show the inverse of the DOS, proportional to $T_0$, the characteristic temperature scale of variable-range hopping, as a function of the carrier density (which is proportional to $E^2$).

![(Color online) The density of states at the NP is plotted for box-distributed ($[-W..W]$) randomness on a $200\times200$ HCL for $W=1.5$, 1.7 and 2 ($g=W^2/3$) from bottom to top after 400 averages. The symbols denote the numerical data, solid lines are fits using $a\exp(-b/|m-W|^c)$. The inset shows the obtained exponents $c$ as a function of $g$. \[rmbox\]](rmdosbox200.eps "fig:"){width="8cm" height="8cm"}

![(Color online) The energy-dependent density of states is plotted for Gaussian-distributed random mass on a $30\times30$ HCL after $10^4$ averages for $g=1$, $m=2$ (cyan), 1 (blue), 0.5 (red), 0.3 (black), 0.2 (magenta) and 0 (green) in the left panel.
The right panel visualizes the inverse of the density of states, proportional to $T_0$ in the variable-range-hopping model, as a function of the energy squared (proportional to the carrier density). \[dosenerg\]](dosenerg.eps "fig:"){width="7cm" height="7cm"} ![(Color online) The energy-dependent density of states is plotted for Gaussian-distributed random mass on a $30\times30$ HCL after $10^4$ averages for $g=1$, $m=2$ (cyan), 1 (blue), 0.5 (red), 0.3 (black), 0.2 (magenta) and 0 (green) in the left panel. The right panel visualizes the inverse of the density of states, proportional to $T_0$ in the variable-range-hopping model, as a function of the energy squared (proportional to the carrier density). \[dosenerg\]](dosenergt0.eps "fig:"){width="7cm" height="7cm"}

Discussion
==========

MLG and BLG consist of two bands that touch each other at two nodal points (or valleys). Near the nodes the spectrum of MLG is linear (Dirac-like) and the spectrum of BLG is quadratic. The application of a uniform SBP opens a gap in the DOS in both cases. For a random SBP, however, the situation is less obvious. First of all, it is clear that randomness leads to a broadening of the bands. If we have two separate bands due to a small uniform SBP, randomness can close the gap again due to broadening (cf. Fig. \[dosplot1\]a). The broadening of the bands depends on the strength of the fluctuations of the random SBP. In the case of a Gaussian distribution there are energy tails at all energies. Now we focus on the NP, i.e. we consider $E=0$ and $\rho_0$. Then we have two parameters with which to change the gap structure: the average SBP $\langle m\rangle\equiv{\bar m}$ and the variance $g$. ${\bar m}$ allows us to broaden the gap, and $g$ has the effect of closing it due to the broadening of the two subbands. Previous calculations have shown that the metallic behavior breaks down for ${\bar m}>m_c(g)$, with the critical value $m_c(g)$ given by Eq. (\[gap11\]) [@ziegler09b].
On the other hand, Gaussian randomness creates tails at all energies. Consequently, at the NP there are localized states for $|{\bar m}|\ge m_c(g)$ and delocalized states for $|{\bar m}|<m_c(g)$. The localized states in the tails are described, for instance, by the Lippmann-Schwinger equation (\[id3\]). The SCBA with uniform self-energy is not able to produce the localized tails. An extension of the SCBA with non-uniform self-energies does provide localized tails, however, as an approximation for large ${\bar m}$ has shown [@villain00]. This is also in good agreement with our exact diagonalization of finite systems up to $200\times200$ size. A possible interpretation of these results is that there are two different types of spectra. In a special realization of $m_r$ the tails of the DOS represent localized states. On the other hand, the DOS at the NP $E=0$, obtained from the SCBA with [*uniform*]{} self-energy, comes from extended states [@ziegler09b]. The localized and the delocalized spectra separate at the critical value $m_c$, where an Anderson transition takes place.

[*Conductivity*]{}: Transport, i.e. the metallic regime, is related to the DOS through the Einstein relation $\sigma\propto \rho D$, where $D$ is the diffusion coefficient. The latter was found in Ref. [@ziegler09b] for $E\sim0$ as $$D\propto a\sqrt{m_c^2-{\bar m}^2}\;\Theta(m_c^2-{\bar m}^2) \ , \label{diffcoeff3}$$ where $a=1$ ($a=2$) for MLG (BLG). Together with the DOS $\rho_0=\eta/2g\pi$ and the scattering rate $\eta$ in Eq. (\[scattrate\]), the Einstein relation gives us at the NP $$\sigma(\omega\sim0)\propto \rho_0 D\frac{e^2}{h} \approx \frac{a}{8\pi^2}\left(1-\frac{{\bar m}^2}{m_c^2}\right)\Theta(m_c^2-{\bar m}^2)\frac{e^2}{h} \ .$$ In the localized regime (i.e. for $|{\bar m}|\ge m_c$) the conductivity is nonzero only at positive temperatures $T>0$. There we can apply the formula for variable-range hopping in Eq. (\[vrh\]), which fits well the experimental results on graphane of Ref. [@elias08].
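The resulting zero-frequency NP conductivity decreases monotonically with $|{\bar m}|$ and vanishes continuously at $m_c$; a minimal sketch tabulating the formula above (prefactor as written, parameter values illustrative):

```python
import math

def sigma_np(mbar, m_c, a=1):
    """NP conductivity in units of e^2/h, following the SCBA form
    sigma ~ (a/8 pi^2)(1 - mbar^2/m_c^2) for |mbar| < m_c, else 0."""
    if abs(mbar) >= m_c:
        return 0.0
    return a / (8.0 * math.pi**2) * (1.0 - mbar**2 / m_c**2)

m_c = 0.4
values = [sigma_np(x, m_c) for x in (0.0, 0.2, 0.39, 0.5)]
print(values[0] > values[1] > values[2] > values[3] == 0.0)
```

Beyond $m_c$ the zero-temperature conductivity is strictly zero, which is where the variable-range-hopping description takes over.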
The parameter $T_0$ is related to the DOS at the Fermi level as [@mott69] $$k_B T_0\propto\frac{1}{\xi^2 \rho(E_F)} \ ,$$ where $\xi$ is the localization length. $T_0$ has its maximum at the NP $E_F=0$, as shown in Fig. \[dosenerg\], and decreases monotonically with increasing carrier density, as in the experiment on graphane [@elias08]. In conclusion, we have studied the density of states of MLG and BLG at low energies in the presence of a random symmetry-breaking potential. While a uniform symmetry-breaking potential opens a uniform gap, a random symmetry-breaking potential also creates tails in the density of states. The latter can close the gap again, preventing the system from becoming an insulator at the nodes. However, for a sufficiently large gap the tails contain localized states with nonzero density of states. These localized states allow the system to conduct at nonzero temperature via variable-range hopping. This result is in agreement with recent experimental observations [@elias08].

[**Acknowledgement:**]{} This work was supported by a grant from the Deutsche Forschungsgemeinschaft and by the Hungarian Scientific Research Fund under grant number K72613.

[99]{} K.S. Novoselov, A.K. Geim, S.V. Morozov, D. Jiang, M.I. Katsnelson, I.V. Grigorieva, S.V. Dubonos, A.A. Firsov, Nature [**438**]{}, 197 (2005) Y. Zhang, Y.-W. Tan, H.L. Stormer, P. Kim, Nature [**438**]{}, 201 (2005) A.K. Geim and K.S. Novoselov, Nature Materials [**6**]{}, 183 (2007) Y.-W. Tan, Y. Zhang, K. Bolotin, Y. Zhao, S. Adam, E.H. Hwang, S. Das Sarma, H.L. Stormer, P. Kim, Phys. Rev. Lett. [**99**]{}, 246803 (2007) J.H. Chen, C. Jang, M.S. Fuhrer, E.D. Williams, M. Ishigami, Nature Physics [**4**]{}, 377 (2008) S.V. Morozov, K.S. Novoselov, M.I. Katsnelson, F. Schedin, D.C. Elias, J.A. Jaszczak, A.K. Geim, Phys. Rev. Lett. [**100**]{}, 016602 (2008) D.C. Elias, R.R. Nair, T.M.G. Mohiuddin, S.V. Morozov, P. Blake, M.P. Halsall, A.C. Ferrari, D.W. Boukhvalov, M.I. Katsnelson, A.K.
Geim, K.S. Novoselov, Science [**323**]{}, 610 (2009) T. Ohta, A. Bostwick, T. Seyller, K. Horn, E. Rotenberg, Science [**313**]{}, 951 (2006) J.B. Oostinga, H.B. Heersche, X. Liu, A.F. Morpurgo, L.M.K. Vandersypen, Nature Materials [**7**]{}, 151 (2008) R.V. Gorbachev, F.V. Tikhonenko, A.S. Mayorov, D.W. Horsell and A.K. Savchenko, Physica E [**40**]{}, 1360 (2008) E.V. Castro, N.M.R. Peres, J.M.B. Lopes dos Santos, F. Guinea, and A.H. Castro Neto, J. Phys.: Conf. Ser. [**129**]{}, 012002 (2008) G. Li, A. Luican, E.Y. Andrei, Phys. Rev. Lett. [**102**]{}, 176804 (2009) N.F. Mott, [*Metal-Insulator Transitions*]{} (Taylor & Francis, London, 1990) N.F. Mott, Philos. Mag. [**19**]{}, 835 (1969) S.V. Morozov et al., Phys. Rev. Lett. [**97**]{}, 016801 (2006) J.C. Meyer et al., Nature [**446**]{}, 60 (2007) A.H. Castro Neto, F. Guinea, N.M.R. Peres, K.S. Novoselov, and A.K. Geim, Rev. Mod. Phys. [**81**]{}, 109 (2009) E. McCann and V.I. Fal’ko, Phys. Rev. Lett. [**96**]{}, 086805 (2006); E. McCann, Phys. Rev. B [**74**]{}, 161403(R) (2006) E. McCann et al., Phys. Rev. Lett. [**97**]{}, 146805 (2006) P.W. Anderson, Phys. Rev. [**109**]{}, 1492 (1958) K. Ziegler, Phys. Rev. B [**55**]{}, 10661 (1997) K. Ziegler, Phys. Rev. Lett. [**102**]{}, 126802 (2009); arXiv:0903.0740 H. Suzuura and T. Ando, Phys. Rev. Lett. [**89**]{}, 266603 (2002) N.M.R. Peres, F. Guinea, and A.H. Castro Neto, Phys. Rev. B [**73**]{}, 125411 (2006) M. Koshino and T. Ando, Phys. Rev. B [**73**]{}, 245403 (2006) K. Ziegler, J. Phys. A [**18**]{}, L801 (1985) S. Villain-Guillot, G. Jug, K. Ziegler, Ann. Phys. [**9**]{}, 27 (2000) K. Ziegler, B. Dóra, P. Thalmeier, arXiv:0812.2790 J.L. Cardy, J. Phys. C: Solid State Phys. [**11**]{}, L321 (1978).
--- author: - | V.N. Lukash and E.V. Mikheeva\ Profsoyuznaya 84/32, Moscow 117810, Russia title: '$\Lambda$-inflation and CMB anisotropy' ---

PACS 98.80.-k, 98.80.Bp, 98.80.Cq, 98.80.Es

**Abstract**

We explore a broad class of three-parameter inflationary models, called $\Lambda$-inflation, and their observational predictions: a high abundance of cosmic gravitational waves consistent with the Harrison-Zel’dovich spectrum of primordial cosmological perturbations, the non-power-law wing-like spectrum of matter density perturbations, the high efficiency of these models in meeting current observational tests, and others. We show that comparable contributions of the gravitational waves and adiabatic density perturbations to the large-scale temperature anisotropy, T/S $\sim 1$, are a common feature of $\Lambda$-inflation; the maximum values of T/S (basically not larger than 10) are reached in models where (i) the local spectrum shape of density perturbations is flat or slightly red ($n_S{}_\sim^< 1$), and (ii) the residual potential energy of the inflaton is near the GUT scale ($V_0^{\frac{1}{4}} \sim 10^{16} GeV$). The conditions for finding large T/S in the paradigm of cosmic inflation, and the relationship of T/S to the ratio of the power spectra, $r$, and to the inflationary $\gamma$ and Hubble parameters, are discussed. We argue that a simple estimate, T/S$\simeq 3r\simeq 12\gamma \simeq \left(\frac{H}{6\times 10^{13}{\rm GeV}}\right)^2$, is true for most known inflationary solutions and allows one to relate straightforwardly the important parameters of observational and physical cosmology.

Introduction
============

The situation in physical cosmology is currently governed by experiment (observations), which has made increasing progress in recent years. However, the theory of formation of [*Large Scale Structure*]{} in the Universe leaves something to be desired, though progress is still being made: the simplest versions of the dynamical [*Dark Matter*]{} are discarded (e.g.
sHDM, sCDM, cosmic strings), and the cosmological model has become multiparameter ($\Omega_{{\rm M}}$, $\Omega_{\Lambda}$, $\Omega_b$, $h$, $n_{{\rm S}}$, T/S, etc.), which hints at a complex nature of the dark matter in the Universe. Hopefully, the ongoing and oncoming measurements of the CMB anisotropy (both ground- and space-based), as well as the development of intermediate- and low-$z$ observations, will fix the DM/LSS model of the Universe to a few per cent in the nearest future. A theory of the very early Universe which meets most predictions and observational tests is inflation. It predicts small Gaussian [*Cosmological Density Perturbations*]{} (the [*Scalar*]{} adiabatic mode) responsible for the LSS formation in the observable Universe. The ultimate goal here would be the reconstruction of the DM parameters and CDP power spectrum directly from observational data (LSS [*vs*]{} $\Delta T/T$). A drama built into the basis of cosmic inflation is that it also provides a general ground for the fundamental production of [*Cosmic Gravitational Waves*]{} (the [*Tensor*]{} mode), which should contribute along with the S-mode to the $\Delta T/T$ anisotropy at large angular scale[^1]. Hence, a principal question on the way to the S-spectrum restoration remains the T/S problem – the fraction of the variance of the CMB temperature fluctuations on 10 degrees of arc generated by the CGWs: $$\left(\Delta T/T\right)^2 \vert_{10^0} = {\rm S} + {\rm T}.$$ Observational separation between the modes is postponed until polarization measurements of the CMB anisotropy become available (which requires a detector sensitivity ${}_{\sim}^{<} 1\mu$K). Today, we can investigate the T/S problem theoretically. A common suggestion created by [*Chaotic Inflation*]{} \[5\], that ’T/S [*is usually small*]{} (T/S ${}_{\sim}^{<} 0.1$)’, stems actually from a very specific property of the CI model (it inflates only at high values of the inflaton, $\varphi >1 $).
However, in general this is not true: any inflation inevitably produces [*both*]{} perturbation modes, and the ratio between them is not limited by unity but is set by the parameters of a given model[^2]. Nevertheless, people often relate this T/S-CI feature to another basic property of chaotic inflation with a smooth inflaton potential $V(\varphi)$ – the [*Harrison-Zel’dovich*]{} S-spectrum ($n_S\simeq 1$). Such a mythological statement, that ’T/S [*is small when*]{} $n_S\simeq 1$’, has even been strengthened by power-law inflation \[6\], \[7\], which has shown that T/S may become large only at the expense of departing from the HZ-spectrum in the S-mode: T/S ${}^{>}_\sim 1 $ when $n_S\leq 0.8$; obviously, [*vice versa*]{}, when $n_S\rightarrow 1$, the T/S tends to zero, in total accordance with the previous CI-assertion. The analytic approximation for T/S found in this model turns out to be essentially universal for any inflationary dynamics when related to the T-spectrum slope index (estimated at the appropriate scale $k_{COBE}\sim 10^{-3}h/$Mpc)[^3], $$\frac {{\rm T}}{{\rm S}} \simeq -6n_T \simeq 12\gamma.$$ Since the case of the [*red*]{} S-spectra suggested by power-law inflation ($n_S < 1$) has confirmed the above statement of the T/S smallness for HZ-CDPs, we are led to check the opposite situation – models where [*blue*]{} spectra ($n_S > 1$) are allowed, and T/S there. An example of the blue S-spectrum is provided by (i) the two-field hybrid inflation \[8\], \[9\], \[10\], \[11\] for a certain range of the model parameters, (ii) a single massive inflaton \[12\] ($V=V_0 + m^2\varphi^2/2$), and (iii) models producing a power-law S-spectrum \[13\], \[14\], \[15\]. However, the problem of blue S-spectra is more generic and requires a full investigation. In this paper we present such an analysis for the case of a [*single*]{} inflaton field. Below, we start by considering the inflationary requirements for the production of blue S-spectra.
We introduce a simple natural model of such an inflation with one scalar field $\varphi$, which we call [*$\Lambda$-inflation*]{}. It proceeds at any values of the inflaton and generates a typical feature in the S-spectrum: a blue branch at short wavelengths (small $\varphi$ values) and a red one at large wavelengths (high $\varphi$ values). Between these two asymptotics a broad transient spectral region settles in, where the ’T/S [*is close to its highest value (generally not more than 10) when the*]{} S [*spectrum (or the joint*]{} S+T [*metric fluctuation spectrum) is essentially the HZ one*]{}’. Further on, we analyse the physical reasons for the latter generic statement (CI being a measure-zero subset of the family of $\Lambda$-inflation models) and its place in the inflationary paradigm. Surprisingly, the phenomena of large T/S and blue S-spectra are two totally disconnected problems: both are realized in $\Lambda$-inflation but at different scales and field values. The large T/S is produced where inflation proceeds only marginally ($\gamma$-values just below unity), which occurs near $\varphi\sim 1$ where the S-spectrum tilt is slightly red, $n_S{}_\sim^< 1$. On the contrary, the blueness ($n_S > 1$) is gained for $\varphi \ll 1$ and thus has a different physical origin. We conclude by discussing the necessary and sufficient conditions for obtaining large T/S from inflation, and argue for a general estimate of T/S based on eq.(2). The $\Lambda$-Inflation ======================= We are looking for the simplest way to get a blue-kind spectrum of density perturbations generated at inflation driven by one scalar field $\varphi$. The minimal coupling of $\varphi$ to geometry is given by the action ($c = \hbar = 8\pi G = 1$): $$W \left[\varphi, g^{ik}\right] = \int \left(L - \frac 12 R\right) \sqrt{-g} \; d^4 x$$ where $g_{ik}$ and $R_{ik}$ are the metric and Ricci tensors respectively, with the signature $(+ - - -)$, $g = \det (g_{ik})$, and $R \equiv R^{i}_{i}$.
The field Lagrangian is an arbitrary function of two scalars, $$L = L\left(w, \varphi\right),$$ where $w^{2} = \varphi_{,i} \varphi^{,i}$ is the kinetic term of the $\varphi$-field. Actually, the latter can be simplified at inflation. Indeed, the inflationary condition (taken in the locally flat Friedmann geometry), $$\gamma \equiv - \frac{\dot{H}}{H^{2}} = \frac{3(\rho + p)}{2\rho} = \frac{3 w^{2} M^{2}}{2(w^{2} M^{2} - L)} < 1,$$ implies generally that $$w^2 M^{2} \equiv \frac{\partial L}{\partial (\ln w)} < - 2L,$$ which simply tells us that the Taylor expansion of (4) in small $w^{2}$ is valid: $$L=L(0,\varphi)+\frac{1}{2}w^{2}M^{2}(0,\varphi)+O(w^{4}),$$ where $\rho\equiv w^2M^2-L$ and $p\equiv L$ are the comoving density and pressure of the $\varphi$-field, and $H=\frac{g_{,i}\varphi^{,i}}{6w g}$ is the local Hubble factor. After redefining the field by a new one, $$\varphi \Rightarrow \int M (0, \varphi) d \varphi,$$ we come to a standard form for the Lagrangian density at inflation, which is assumed further on: $$L = - V (\varphi) + \frac{w^{2}}{2}.$$ Here $V = V(\varphi)$ is the potential energy of the $\varphi$-field. A simple guess at the condition necessary to arrange inflation with a blue S-spectrum arises when we address an example of the slow-roll approximation. Under this approach the spectrum of created scalar perturbations $q_{k}$ is straightforwardly related to the inflaton potential $V(\varphi) \simeq 3 H^{2}$ at the horizon-crossing: $$q_{k}\simeq \frac{H}{2\pi\sqrt{2\gamma}} = \frac{H^2}{4\pi H^{\prime}_{\varphi}},\;\;\;\; k=aH=\dot{a},$$ where $a$ is the scale factor and the dot denotes the time derivative. The wave number $k$ increases with time as $a$ grows faster than $H^{-1}$ in any inflationary expansion (see eq.(5)): $$\left(\ln\left(aH\right)\right)^{.}=\left(1-\gamma\right)H>0.$$ Eq.(7) evidently suggests that by decreasing $H^\prime_{\varphi}$ for $k>k_{cr}$ one gains power on short scales and thus realizes a blue spectral slope.
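The slow-roll estimate $q_k = H^2/(4\pi H^{\prime}_{\varphi})$ above is easy to evaluate numerically once a potential is assumed. A minimal sketch, using an illustrative quartic potential and field value (neither is fixed by the paper at this point) with $H^2 \simeq V/3$ in units $8\pi G = 1$:

```python
import math

# Hedged numerical sketch of the slow-roll amplitude
# q_k = H^2 / (4*pi*H'_phi), with H^2 ~ V/3 (units c = hbar = 8*pi*G = 1).
# The quartic potential and field value are illustrative assumptions.

def q_k(V, phi, h=1e-5):
    """Scalar amplitude at horizon crossing for field value phi."""
    H = lambda f: math.sqrt(V(f) / 3.0)
    dH = (H(phi + h) - H(phi - h)) / (2.0 * h)   # H'_phi, central difference
    return H(phi)**2 / (4.0 * math.pi * dH)

lam = 1e-12
V = lambda f: 0.25 * lam * f**4

phi = 20.0
# Closed form for this V: q_k = sqrt(lam) * phi^3 / (16*sqrt(3)*pi)
exact = math.sqrt(lam) * phi**3 / (16.0 * math.sqrt(3.0) * math.pi)
print(q_k(V, phi), exact)
```

The numerical derivative reproduces the closed form because $H(\varphi)$ is quadratic here, so the central difference is exact.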
Without loss of generality, we will assume that $V(\varphi)$ is a function growing with $\varphi(>0)$ and attaining its local minimum at $\varphi = 0$. It means that during the process of inflation the $\varphi$-field evolves to smaller values. Hence, the necessary condition for a blue spectrum could be any flattening of the potential shape at smaller $\varphi< \varphi_{cr}$ that provides for a rise of $H^{2}/H^\prime_{\varphi}$ while keeping the inflation still on ($H^{\prime}_{\varphi} < H/ \sqrt{2}$, cf. eqs.(5), (6)): $$1-n_S\simeq \frac{\gamma}{H} \left(\frac{H^{2}}{H^{\prime}_{ \varphi}} \right)^{\prime}_{\varphi} < 0.$$ The latter equation leads to a broad-brush requirement of positive potential energy at the local minimum point of the $\varphi$-field: $$V_{0}\equiv V(0) > 0,$$ which signals the existence of an effective $\Lambda$-term during the period of inflation dominated by the residual (constant) potential energy: $$V(\varphi<\varphi_{cr})\simeq V_{0}\equiv\Lambda\equiv 3H_0^2,$$ where the characteristic value $\varphi_{cr}$ is determined as follows [^4]: $$V (\varphi_{cr}) = 2 V_{0}.$$ This appearance of [*de Sitter*]{}-type inflation (for $\varphi < \varphi_{cr}$) results in a drastic difference from CI, which effectively assumed that $V_{0}=0$, making inflation at small $\varphi$ impossible in principle. Obviously, the latter hypothesis of vanishing potential energy at $\varphi=0$ has reduced the CI model to a very particular case (from the point of view of eq.(9)), restricting the inflationary dynamics to high values of the inflaton ($\varphi >1$). So, we may conclude that the $\Lambda$-inflation based on eq.(9) presents a general class of fundamental inflationary models. In this sense they are more natural models (CI being of measure zero in the $V_{0}$-parameter), allowing inflation also at small $\varphi$-values (below the Planckian one).
Summarising, we see that under condition (9) we have two qualitatively different stages of the inflationary dynamics separated by $\varphi\sim\varphi_{cr}$. We will call them: - the CI stage ($\varphi {}_{\sim }^{>}\varphi_{cr}$), where the evolution is not influenced by the $\Lambda $-term and looks essentially as in standard chaotic inflation, and - the dS stage ($\varphi <\varphi _{cr}$), dominated by the $V_0$-constant. The completion of the full inflation in this model is related to $V_{0}$-decay, which is supposed to happen at some $\varphi^{\ast} < \varphi_{cr}$ [^5]. So, we deal with the three-parameter model $(V_0, \varphi_{cr}, \varphi^{\ast})$ starting as CI ($\varphi >\varphi_{cr}$) and proceeding by dS-inflation at small $\varphi$ ($\varphi^{\ast} < \varphi < \varphi_{cr}$). As we know from CI theory, smooth $V$-potentials generally create the [*red*]{} $q_{k}$-spectra ($n_S<1$ for $\varphi > \varphi_{cr}$). On the other hand, eq.(9) provides physical grounds for the [*blue*]{} spectra generated at the dS period ($n_S>1$, cf. eq.(8)). Recall for comparison that the spectrum of gravitational waves produced in any inflationary regime is given by the universal formula (here both polarizations are taken into account): $$h_k=\frac H{\pi\sqrt 2}, \;\;\;\; k = a H,$$ which generally ensures the red-like T-spectra, as $H$ decreases in time for $\rho+p>0$: $n_T = -2\gamma <0$ (see eq.(5)). A trivial way to maintain eq.(9) is the introduction of an additive $\Lambda$-term in the inflation potential. Keeping in mind only the simplest dynamical terms, we easily come to a simple and rather general potential form: $$V=V_0+\frac 12 m^2\varphi^2+\frac 14\lambda\varphi^4,$$ which may also be understood as a decomposition of $V(\varphi)$ over small $\varphi$. Here such a decomposition is a reasonable approach since the inflation proceeds to small $\varphi \rightarrow 0$.
Obviously, eq.(11) can be explicitly inverted in this case: $$\varphi^2_{cr}=\frac{4 V_0}{m^2+\sqrt{m^4+4\lambda V_0}}.$$ Also, we will later use the power-law potential $$V=V_{0}+\frac{\lambda_{\kappa}}{\kappa}\varphi^{\kappa}= V_{0}\left(1+y^{\kappa}\right),$$ where $\kappa$ and $\lambda_{\kappa}$ are positive numbers ($\kappa\ge 2$, $\lambda_2\equiv m$, $\lambda_4\equiv\lambda$), $\varphi_{cr}^{\kappa}=\kappa V_0/\lambda_{\kappa}$, and $y=\varphi/\varphi_{cr}$. Let us turn to the evolution and spectral properties of $\Lambda$-inflation models. The background model ==================== Below, we consider the dynamics under condition (9). The background geometry is classical, employing the 6-parameter Friedmann group: $$ds^{2}=dt^{2}-a^{2}d\vec{x}^{2}=a^{2}(d\eta^{2}-d\vec{x}^{2}).$$ The functions of time $a$ and $\varphi$ are found either from the Einstein equations: $$H^{2} = \frac{1}{3} V + \frac{1}{6} \dot{\varphi}^{2},$$ $$\dot{H} = - \frac{1}{2} \dot{\varphi}^{2},$$ or, equivalently, from the $\varphi$-field equation (with $H$ taken from eq.(17)): $$\ddot{\varphi} + 3 H \dot{\varphi} + V^{\prime}_{\varphi} = 0.$$ Coming to the dimensionless quantities, $$h \equiv \frac{H}{H_{0}}, \;\;\;\; v = v(y) \equiv \left(\frac{V}{V_{0}} \right)^{1/2},$$ $$y \equiv \frac{\varphi}{\varphi_{cr}}, \;\;\;\; x \equiv H_0 \left(t - t_{cr}\right), \;\;\;\; \epsilon \equiv \frac {2}{\varphi_{cr}},$$ we can derive the first-order equation for the function $h=h(y)$[^6]: $$h=\frac{v}{\sqrt{1-\gamma/3}},\;\;\;\; \sqrt{2\gamma}=\epsilon \frac{h^{\prime}}{h},$$ and/or the second-order equation for $y = y(x)$: $$\ddot{y}+3h\dot{y}+\frac{3}{2}\epsilon^{2}vv^{\prime}=0.$$ Eq.(18) yields the relationship between the two functions: $$2\dot{y} = - \epsilon^{2} h^{\prime}.$$ The inflation condition (5) allows one to find the inflationary solution of eq.(21) via the decomposition in small $\gamma$: $$h=v\left(1+\frac 16{\gamma}+o(\gamma)\right),$$
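The inverted relation for $\varphi_{cr}$ quoted above can be checked directly against its defining condition $V(\varphi_{cr}) = 2V_0$. A minimal sketch with illustrative parameter values (not fit to any data):

```python
import math

# Sketch checking that phi_cr^2 = 4*V0 / (m^2 + sqrt(m^4 + 4*lam*V0))
# is the root of V(phi_cr) = 2*V0 for V = V0 + m^2*phi^2/2 + lam*phi^4/4.
# The parameter values are illustrative only.

V0, m, lam = 1.0, 0.5, 0.1

def V(phi):
    return V0 + 0.5 * m**2 * phi**2 + 0.25 * lam * phi**4

phi_cr = math.sqrt(4.0 * V0 / (m**2 + math.sqrt(m**4 + 4.0 * lam * V0)))
print(V(phi_cr))   # -> 2.0, i.e. 2*V0
```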
$$a=-\frac{1}{H\eta}\left(1+\gamma+o(\gamma)\right),$$ where $$\sqrt{2\gamma}=\frac{\epsilon v^{\prime}/v}{1-\vartheta/3},\;\;\;\; \vartheta\equiv\frac{\epsilon\left(\sqrt{\gamma/2} \right)^{\prime}}{1- \gamma/3}=\frac{\left(\sqrt{2\gamma}\right)^{ \prime}_{\varphi}}{1-\gamma/3}.$$ Making use of eqs.(23), (25), we may also present the derivatives of the $y$-function over the conformal time, $$\frac{dy}{d\ln\vert\eta\vert} = \epsilon\sqrt{\frac{\gamma}{2}} \left(1+\gamma+o(\gamma)\right),\;\;\;\; \vartheta=\frac{d\ln\sqrt{\gamma}}{ d\ln\vert\eta\vert} \left(1-\frac {2}{3}\gamma+o(\gamma )\right).$$ For further analysis we will also need the $\varphi$-derivatives at horizon-crossing[^7], $$\frac{d\varphi}{d\ln k}=-\sqrt{2\gamma}\left(1+\gamma+o(\gamma) \right),\;\;\;\; \frac{d\ln\gamma}{d\ln k}=-2\vartheta\left(1+\frac{2}{3} \gamma+ o\left(\gamma\right)\right),$$ and the scattering potentials (cf. eqs.(52)), $$U\equiv\frac{d^{2}\left(a\sqrt{\gamma}\right)}{a\sqrt{\gamma}d \eta^{2}}=a^{2}H^{2}\left(2-\gamma-3\vartheta\left(1-\frac{ \gamma}{3}\right)^{2} + \frac{1}{4}\epsilon^{2}\gamma^{\prime\prime}\right),$$ $$U^{\lambda}\equiv\frac{d^{2}a}{ad\eta^{2}}=a^{2}H^{2}\left(2- \gamma\right)= \frac{2}{\eta^{2}}\left(1+\frac {3}{2}\gamma+ o(\gamma)\right).$$ Actually, eqs.(24)-(29) are true during the whole period of inflation based on inequality (5); they describe the evolution along the attractor inflationary separatrix towards which any solution of eqs.(17)-(19) tends during the Universe expansion. However, there is an assumption additional to the inflation condition (5), known as the slow-roll approximation, $$\vert\vartheta\vert < 1,$$ that, when it works, simplifies the situation, allowing one to relate $\gamma$ and $y$ algebraically (see eqs.(26)) and thus to solve eqs.(21), (26) explicitly.
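The attractor behaviour described here can be probed by integrating eqs.(17)-(19) directly. A minimal sketch (illustrative potential and parameters, not the paper's fit) that checks the inflation condition $\gamma = \dot{\varphi}^2/(2H^2) < 1$ along the trajectory:

```python
import math

# Direct integration of the background eqs.(17)-(19) for an assumed
# potential V = V0 + lam*phi^4/4, checking that the solution settles
# onto the inflationary attractor with gamma = -Hdot/H^2 < 1 (cond.(5)).
# V0, lam and the initial field value are illustrative choices.

V0, lam = 1.0, 0.01
V  = lambda f: V0 + 0.25 * lam * f**4
dV = lambda f: lam * f**3

def H(f, fd):
    return math.sqrt(V(f) / 3.0 + fd * fd / 6.0)

def rhs(state):
    f, fd = state
    return (fd, -3.0 * H(f, fd) * fd - dV(f))

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs((state[0] + 0.5*dt*k1[0], state[1] + 0.5*dt*k1[1]))
    k3 = rhs((state[0] + 0.5*dt*k2[0], state[1] + 0.5*dt*k2[1]))
    k4 = rhs((state[0] + dt*k3[0], state[1] + dt*k3[1]))
    return (state[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
            state[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)

state, dt = (4.0, 0.0), 0.01      # start near y = phi/phi_cr ~ 0.9
gammas = []
for _ in range(2000):
    state = rk4(state, dt)
    f, fd = state
    gammas.append(fd * fd / (2.0 * H(f, fd)**2))
print(max(gammas))                 # stays well below 1: inflation holds
```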
Both inequalities (5) and (30) can be rewritten, respectively, as $$\epsilon\frac{v^{\prime}}{v}<1,\;\;\;\;{\rm and}\;\;\;\; \epsilon^{2}\frac{\vert v^{\prime\prime}\vert}v<1.$$ $\Lambda$-inflation proceeds with most difficulty near $y\sim 1$. Indeed, for the power-law potential (15), $v=\sqrt{1+y^{\kappa} }$, the first inequality (31) is satisfied at the worst point $y\sim y_1 = (\kappa -1)^{\frac{1}{\kappa}} \simeq 1$ only for small $\epsilon$, $$\epsilon<\epsilon_{0}=\frac {2}{\kappa-1},\;\;\;\;or\;\;\;\; \varphi_{cr}\;{}^>_\sim\;(\kappa-1)\ge 1,$$ which we assume hereafter. The second inequality (31) holds at any $y$ unless $\kappa<3$. In the latter case the slow-roll approximation is broken in the field interval $$\exp\left(-\frac{1}{\kappa-2}\right)<y<1,$$ where the left-hand side stays constant: $\frac{\epsilon^{2}v^{ \prime \prime}}{v}\sim\epsilon^{2}$ (hence the slow-roll approximation is restored in the limit $\epsilon\rightarrow 0$). So, for $\kappa=2$, the whole evolution for $y<1$ deviates strongly from the slow-roll approximation. Before coming to it, we write down the evolution for $\kappa\ge 3$.
The $\Lambda\lambda$-Inflation ------------------------------ The slow-roll approximation is met for $\kappa\ge 3$; then, under conditions (5) and (30), eq.(23) is integrated explicitly: $$a = \exp\left(-\int\frac{d\varphi}{\sqrt {2\gamma}}\right) \simeq \gamma^{\frac{1}{6}}\exp\left(-\frac{2}{\epsilon^{2}} \int\frac{vdy}{v^{ \prime}}\right).$$ Substituting here $v=\sqrt{1+y^\kappa}$, we have at the horizon-crossing: $$\kappa\ge 3:\;\;\;\; y^{2}\left(1-\left(\frac{y_{2}}{y}\right)^{\kappa} \right)=\Theta,$$ where $y_{2} = \left(\frac{2}{\kappa-2}\right)^{\frac{1}{\kappa}} \simeq 1$, $\Theta = -\frac{\kappa\epsilon^{2}}{2} \ln K = \frac{\kappa-4 }{\kappa-2} - \frac{\kappa\epsilon^{2}}{2} \ln K_{c}$, $K = \frac{a}{\gamma^{\frac{1}{6}}} = \left(\frac {k}{k_{2}} \right) \left( \frac{y_2}{y}\right)^{\frac{\kappa-1}{3}} \left( \frac{\kappa/(\kappa-2)}{1+ y^{\kappa}}\right)^{\frac {1}{6}} \sim \frac{k}{k_{2}}$, $K_{c} = \left(\frac{k}{k_{cr}}\right) \left(\frac{1}{y}\right)^{\frac{ \kappa-1}{3}} \left(\frac{2}{1+y^{\kappa}} \right)^{\frac {1}{6}} \sim \frac{k}{k_{cr}}$. Evidently, $$\frac{d\ln K_{(c)}}{d\ln k} = 1+\gamma + \vartheta /3+o(\gamma) +o(\vartheta)\simeq 1,$$ $$y\simeq\Biggl\{ \begin{array}{lcl} \Theta^{\frac {1}{2}}, & \; & y>y_{2}, \\ \left(\frac{2}{\left(\kappa-2\right)\vert\Theta\vert}\right)^{\frac{1}{ \kappa-2}}, & \; & y<y_{2}.
\end{array}$$ The transition period between these two asymptotics, $\vert\Theta \vert^{<}_{\sim}1$, is pretty small in $y$-space, $$\vert y-y_{2}\vert<\frac {1}{\kappa}:\;\;\;\; y\simeq y_{2} + \frac{1}{ \kappa y_{2}}\Theta\simeq 1 - \frac{\epsilon^{2}}{2}\ln K,$$ however, it is big in the corresponding frequency band (cf. eq.(32)): $$\vert\ln K\vert < \frac{2}{\kappa\epsilon^2} \left({}^{>}_{\sim}\frac{1}{ \epsilon}\right).$$ An interesting physical case here is that of a self-interacting field, which we call $\Lambda\lambda$-inflation: $$\kappa=4:\;\;\;\;\;y^{2}\simeq\sqrt{1+(\epsilon^{2}\ln K)^{2}} - \epsilon^{2}\ln K,$$ where $K=K_{c} = \frac{k}{k_{cr} y}(\frac {2}{1+y^{4}})^{\frac{1}{6}}$. Recall that the $\epsilon$-parameter should not exceed unity if we want to keep inflation everywhere. The $\Lambda m$-Inflation ------------------------- The case of a massive field ($\kappa=2$, $v=\sqrt{1+y^2}$) violates the slow-roll condition and requires a more careful investigation. The slow-roll approximation works well for $y^{>}_{\sim} 1$, but is broken at small $y$, as $\frac{v^{\prime\prime}}{v}=v^{-4}\sim 1$ for $y < 1$ (see eq.(33)). In the latter case $h\simeq 1$ and eq.(22) turns into a linear one, presenting the $y$-function as a linear superposition of the [*fast*]{} (+) and [*slow*]{} (-) exponents ($\sim e^{-1.5(1\pm p)x}\sim \vert\eta\vert^{1.5(1\pm p)}$). This allows for a straightforward, i.e. independent of the exponent amplitudes, derivation of the $U$-potential at the dS stage (see eqs.(27),(29)): $$y<1:\;\;\;\; U\equiv\frac{d^{2}(a\sqrt{\gamma})}{a\sqrt\gamma d \eta^{2}} \simeq \frac{d^{3}y}{d\eta^{3}}\left(\frac{dy}{d\eta} \right)^{-1}\simeq \frac{9p^{2}-1}{4\eta^{2}},$$ where $p=\sqrt{1-\frac{2\epsilon^{2}}{3}}$. The inflationary evolution proceeds in a non-oscillatory way for $\varphi < \varphi_{cr}$ if $$0<p<1,\;\;\;\;\varphi_{cr}\;{}^{>}_{\sim}1.6,$$ which we will assume further on.
With such a requirement, inflation is guaranteed for any $\varphi$ (cf. eqs.(32)). To find the exponent amplitudes for $y(<1)$ we have to match the full inflationary separatrix at $y\sim 1$. To do so, let us eliminate the first-derivative term in eq.(22), introducing a new variable $z = z(\eta) \equiv ya$: $$\frac{d^{2}z}{d\eta^{2}}-\tilde {U} z=0,$$ and then approximate the $\tilde {U}$-function by a simple step-function: $$\tilde {U}\equiv\left(aH\right)^{2}\left(2-\gamma- \frac{3\epsilon^{2}}{ 2h^{2}}\right) \simeq \frac{2}{\eta^{2}} \left(1-\frac{3\epsilon^{2}}{4v^{2}} \right)\simeq\frac{1}{\eta^{ 2}}\Biggl\{ \begin{array}{lcl} 2, & \; & \eta<\eta_{3}, \\ \frac{9p^{2}-1}{4}, & \; & \eta>\eta_{3}, \end{array}$$ where $\eta_{3}\simeq\eta_{cr}$. The solution of eq.(40) is then obtained explicitly; matching the $z$-function and its first derivative at $\eta=\eta_{3}$ and taking into account that $H_{0}z\eta \rightarrow -1$ for large $y$, we obtain at the dS stage (cf. eqs.(27)): $$\omega>1:\;\;\; y\simeq\omega^{-\frac{3}{2}}\left({\rm ch}\mu +\frac {1}{p}{\rm sh}\mu \right),\;\;\; \sqrt{\frac{\gamma}{2}}\simeq\frac{\epsilon}{p}\omega^{-\frac{3}{ 2}} {\rm sh}\mu,\;\;\; \vartheta \simeq\frac {3}{2}\left(1-p{\rm cth}\mu\right),$$ where $\mu=\frac {3}{2} p\ln\omega^{>}_{\sim} p$, $\omega = \frac{\eta_{3}}{ \eta}\simeq\frac{\eta_{cr}}{\eta}\simeq \frac{k}{k_{cr}}$. The fitting coefficients in eq.(41) describe the part ($y<1$) of the full inflationary separatrix extending from large to small values of the $\varphi$-field[^8].
We see that at the de Sitter stage the function $\vartheta = \vartheta(\omega)>0$ varies slowly, $$y<1:\;\;\;\;\;\;\;\vartheta\simeq\Biggl\{ \begin{array}{lcl} \frac {3}{2}-\frac{1}{\ln\omega}, &\; &1^<_\sim \ln\omega<\frac{2}{3p} \\ \frac{\epsilon^{2}}{1+p}, & \; & \ln\omega^>_\sim\frac{2}{3p} \end{array},$$ and $$\sqrt{2\gamma}=\frac{\epsilon y}{1-\frac{\vartheta}{3}}\;,\;\;\;\;\; y\simeq\Biggl\{ \begin{array}{lcl} \frac {3}{2}\omega^{-\frac {3}{2}}\ln\omega, & \; & 1_{\sim}^{<}\ln \omega< \frac{2}{3p} \\ \frac{1+p}{2p}\omega^{\frac {3}{2}(p-1)}, & \; & \ln\omega\;^{>}_{ \sim} \frac{2}{3p} \end{array}.$$ The field evolution approaches the slow exponent only for $\ln \omega> \frac{2}{3p}\;\left(y<\frac{\exp\left(-\frac{1}{p}\right) }{p}\right)$: $$y\ll 1:\;\;\;\; y\simeq \frac{1+p}{2p}\omega^{\frac {3}{2}(p-1)}, \;\;\;\; \sqrt{2\gamma}\simeq \frac{\epsilon}{p}\omega^{\frac{3}{2}(p-1)}.$$ For $p\in \left(\frac 23, 1\right)$ the true evolution at the dS stage is presented only by the bottom lines in eqs.(42), (43); this fact is used in the Appendix to restore the whole inflationary dynamics for $\epsilon^2<\frac 56$. Comparing eqs.(35) and (44) we see that at the dS stage $y$ decays as $\ln k$ for $\kappa\ge 3$, whereas it follows a power law for $\kappa=2$. For the intermediate case $2<\kappa <3$ the slow-roll approximation is violated only within the limited interval (33), where the solution can be matched by eq.(41) with $p=\sqrt{1 - \frac{\kappa\epsilon^{2}}{3}}$. The generation of primordial perturbations ========================================== Below, we introduce the S and T metric perturbation spectra and find them for $\Lambda$-inflation. The linear perturbations over the geometry (16) can be irreducibly represented in terms of the uncoupled Scalar, Vector and Tensor parts \[1\]. The vector perturbations are not induced in our case, as scalar fields are not their sources.
Under the action (3) we are left with only the S and T modes, and the new geometry looks as follows: $$ds^{2} = (1+h_{00})\;dt^{2}+2ah_{0\alpha}\;dtdx^{\alpha}-a^{2} (\delta_{\alpha\beta} + h_{\alpha\beta})\;dx^{\alpha}dx^{\beta},$$ $$\frac {1}{2}h_{\alpha\beta}=A\delta_{\alpha\beta}+B_{,\alpha \beta}+G_{\alpha\beta},\;\;\;\; h_{0\alpha}=C_{,\alpha},$$ where $G^{\alpha}_{\alpha}=G^{\beta}_{\alpha,\beta}=0$. The gravitational potentials $h_{00}$, $A$, $B$, $C$ are coupled to the perturbation of the scalar field $\delta\varphi$, whereas $G_{ \alpha \beta}$ is the free tensor field. The Lagrangian $L^{(2)}$ of the perturbation sector of the geometry (45) is obtained by decomposing the integrand of (3) up to second order in the perturbation amplitudes. While our further analysis of the S-sector follows the general theory of the $q$-field (\[4\], \[16\]), the gravitational waves are totally described by the gauge-independent 3D-tensor $G_{\alpha\beta}$ (\[3\], \[17\], \[18\]). Instead of considering the gauge-dependent potentials ($h_{00}$, $A$, $B$, $C$, $\delta\varphi$), we introduce the gauge-invariant canonical 4D-scalar $q$, uniquely fixed by the appearance of the S-part of the perturbative Lagrangian $L^{(2)}$ similar to a massless field: $$L^{(2)} = L(q,G_{\alpha\beta}) = \frac {1}{2}\alpha^{2}q_{,i} q^{,i}+ \frac{1}{2}G_{\alpha\beta,\gamma}G^{\alpha\beta,\gamma},$$ where $\alpha^{2}\equiv 2\gamma = \frac{\rho+p}{H^{2}} = \left(\frac{\dot{\varphi }}{H}\right)^{2}$, $\alpha = \frac{\dot{\varphi}}{H}$ (mind the choice of sign for $\alpha$, which we take to coincide with the sign of $\dot{\varphi}$).
The relation of $q$ to the original potentials takes the following form: $$\delta\varphi=\alpha\left(q+A\right),\;\;\;\; a^{2}\dot{B}+C= \frac{\Phi+A}{H},$$ $$\frac{1}{2}h_{00} = \gamma q + \left(\frac{A}{H}\right)^{.}, \;\;\;\; \Phi=\frac{H}{a}\int a\gamma q dt,$$ $$\frac{\delta\rho}{\rho+p} = \frac{\dot{q}}{H} - 3(q+A),\;\;\;\; 4\pi G\delta\rho_{c} \equiv\gamma H\dot{q}=a^{-2}\triangle\Phi,$$ where $a$, $\varphi$, $H$, $\alpha$, $\gamma$, $\rho=\frac{1}{2} w^{2}+V$ and $p=\frac{1}{2}w^{2}-V$ are the background functions of time, $\Phi$ is the “Newtonian” gauge-invariant gravitational potential related non-locally to $q$, and $\triangle\equiv\partial^{2}/\partial\vec{x}^{2}$ is the spatial Laplacian ($\triangle=- k^{2}$ in the Fourier representation; $\delta\rho_{c}$ is the comoving density perturbation). Any two potentials taken from the triple $A$, $B$, $C$ are arbitrary functions of all coordinates, which determines the gauge choice. All information on the physical scalar perturbations is contained in the $q=q(t,\vec{x})$ field, the dynamical 4D-scalar propagating in the unperturbed Friedmann geometry (i.e. independently of any gauge in eq.(45)).
The equations of motion of the $q$ and $G_{\alpha\beta}$ fields are those of two harmonic oscillators: $$\ddot{q}+\left(3H+\frac{\dot{\gamma}}{\gamma}\right)\dot{q}-a^{- 2}\triangle q=0,$$ $$\ddot{G_{\alpha\beta}}+3H\dot{G_{\alpha\beta}}-a^{-2}\triangle G_{\alpha\beta}=0.$$ A standard procedure to find the generated amplitudes is to perform the secondary quantization of the field operators, $$q =\int^{\infty}_{-\infty}d^{3}\vec {k}\left(a_{\vec k}q_{\vec k}+ a^{+}_{\vec{k}}q^{\ast}_{\vec k}\right),$$ $$G_{\alpha\beta}=\sum_{\lambda}\int^{\infty}_{-\infty}d^{3}\vec{k} \left(a^{\lambda}_{\vec k}h^{\lambda}_{\vec{k}\alpha\beta}+ a^{ \lambda +}_{\vec{k}}h^{\lambda \ast}_{\vec {k}\alpha\beta}\right ),$$ where $+/\ast$ denotes Hermitian/complex conjugation, the index $\lambda =+,\times$ runs over the two polarizations of gravitational waves with the polarization tensors $c_{\alpha \beta}^{\lambda}(\vec{k})$, and $$q_{\vec{k}}=\frac{\nu_{k}}{\left(2\pi\right)^{\frac{3}{2}}\alpha a}\; e^{i\vec {k}\vec {x}},$$ $$h^{\lambda}_{\vec{k}\alpha\beta} = \frac{\nu_{k}^{\lambda}}{\left(2\pi\right)^{\frac{3}{2}}a}\; e^{i\vec{k}\vec{x}}c^\lambda_{\alpha\beta}\left(\vec{k}\right),$$ $$\delta^{\alpha\beta}c^\lambda_{\alpha\beta}\left(\vec k\right)= k^\alpha c^\lambda_{\alpha\beta}\left(\vec k\right)=0, \;\;\;\; c_{\alpha\beta}^{\lambda}(\vec{k})c^{\alpha\beta\lambda^{\prime}} \left(\vec{k}\right)^\ast = \delta_{\lambda\lambda^{\prime}}.$$ The time-dependent $\nu$-functions satisfy the respective Klein-Gordon equations, $$\frac{d^{2}\nu_{k}^{(\lambda)}}{d\eta^{2}}+\left(k^{2}-U^{( \lambda)}\right)\nu^{(\lambda)}_{k}=0,$$ with $U=U(\eta)\equiv\frac{d^{2}\left(\alpha a\right)}{\alpha ad\eta^{2}}$ for the $q$-field and $U^{\lambda} = U^{\lambda} (\eta)\equiv\frac{d^{2} a}{ad\eta^{2}}$ for each polarization of the gravitational waves $\nu^{\lambda}_{k}$.
The standard commutation relations between the annihilation and creation operators, $$\left[ a_{\vec{k}}a^{+}_{\vec{k}^{\prime}}\right]=\delta\left( \vec{k}- \vec{k}^{\prime}\right),\;\;\;\; \left[ a^{\lambda}_{\vec {k}} a^{\lambda^{\prime}+}_{\vec {k}^{ \prime}}\right] = \delta\left(\vec{k} - \vec{k}^{\prime}\right) \delta_{\lambda \lambda^{\prime}},$$ require the following normalization condition for each of the $\nu$-functions: $$\nu_{k}^{(\lambda)}\frac{d\nu^{(\lambda)\ast}_{k}}{d\eta}- \nu_{k}^{(\lambda)\ast}\frac{d\nu_{k}^{(\lambda)}}{d\eta}= i.$$ Eqs.(46)-(52) specify the [*parametric amplification effect*]{}: the production of the perturbations – the phonons for the $S$-mode \[4\] and the gravitons for the $T$-mode \[3\] – in the process of the Universe expansion (the latter is imprinted in the non-zero scattering potentials $U^{(\lambda)}$ in eqs.(52)). From the inflationary condition (5) one always finds $k\eta \rightarrow -\infty$ for the early inflation (scales inside the horizon); therefore, the microscopic vacuum states of the $q$ and $G_{\alpha\beta}$ fields imply the positive-frequency choice for the initial $\nu$-functions: $$k\vert\eta\vert\gg 1:\;\;\;\; \nu_{k}^{(\lambda)} = \frac{\exp(-ik\eta)}{\sqrt{2k}}.$$ So, the problem of the spontaneous creation of density perturbations and gravitational waves is finally reduced to solving eqs.(52), (53) with the effective potentials $U^{(\lambda)}$ taken from the inflationary background regimes considered above.
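The mode-by-mode computation set up by eqs.(52), (53) can be sketched numerically. A minimal example (step sizes and the matching point are illustrative choices) integrates $\nu'' + (k^2 - U)\nu = 0$ with the de Sitter potential $U = 2/\eta^2$ (the $\gamma\rightarrow 0$ limit of eqs.(29)) and compares with the exact de Sitter mode $\nu = e^{-ik\eta}(1 - i/(k\eta))/\sqrt{2k}$:

```python
import cmath, math

# Hedged sketch of the parametric-amplification computation: integrate
# nu'' + (k^2 - U)*nu = 0 with U = 2/eta^2, starting from the
# positive-frequency behaviour deep inside the horizon, and compare
# with the exact de Sitter mode function.

k = 1.0

def U(eta):
    return 2.0 / eta**2

def exact(eta):
    return cmath.exp(-1j * k * eta) * (1.0 - 1j / (k * eta)) / math.sqrt(2.0 * k)

def rhs(eta, nu, dnu):
    return dnu, (U(eta) - k * k) * nu

# Initialize with the exact solution well inside the horizon (k|eta| >> 1).
eta, h = -30.0, 1e-3
nu = exact(eta)
dnu = (exact(eta + 1e-6) - exact(eta - 1e-6)) / 2e-6

while eta < -0.05:                       # RK4 on the pair (nu, nu')
    a1, b1 = rhs(eta, nu, dnu)
    a2, b2 = rhs(eta + h/2, nu + h/2*a1, dnu + h/2*b1)
    a3, b3 = rhs(eta + h/2, nu + h/2*a2, dnu + h/2*b2)
    a4, b4 = rhs(eta + h,   nu + h*a3,   dnu + h*b3)
    nu  += h * (a1 + 2*a2 + 2*a3 + a4) / 6.0
    dnu += h * (b1 + 2*b2 + 2*b3 + b4) / 6.0
    eta += h

print(abs(nu), abs(exact(eta)))          # super-horizon growth ~ 1/(k|eta|)
```

Outside the horizon $|\nu_k|$ grows as $1/(k|\eta|)$, which is the freezing of the perturbation amplitude discussed below.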
For the late inflation, $k\eta\rightarrow 0$ (scales outside the horizon), the perturbations become semiclassical, since the fields get frozen in time and thus acquire a fixed phase (only the ${\it {growing}}$ solutions of eqs.(48),(49) survive in time)[^9], $$k\vert\eta\vert\ll 1:\;\;\;\; q=q(\vec {x}),\;\;\;\; G_{\alpha\beta} = G_{\alpha\beta}(\vec {x}).$$ One can, therefore, treat these time-independent perturbation fields as realizations of classical random Gaussian fields with the following power spectra: $$\langle q^{2}\rangle = \int^{\infty}_{0}q^{2}_{k}\frac{dk}{k}, \;\;\;\; \langle G_{\alpha\beta}G^{\alpha\beta}\rangle=\int^{\infty}_{0} h^{2}_{k} \frac{dk}{k},$$ $$q_{k}=\frac{k^{\frac{3}{2}}\vert\nu_{k}\vert}{2\pi a\sqrt{\gamma }},\;\;\;\; h_{k}=\frac{k^{\frac{3}{2}}\sqrt{\vert\nu^{+}_{k}\vert^{2}+\vert \nu_{k}^{\times}\vert^{2}}}{\pi a\sqrt{2}}=\frac{k^{\frac{3}{2}} \vert\nu_{k}^{\lambda}\vert}{\pi a}.$$ Here the $\nu$-functions are taken in the limit $\vert\eta\vert\ll k^{-1}$, and the gravitational wave spectra in both polarizations are identical. The local slopes and the ratio of the power spectra are found as follows: $$n_{S}-1 \equiv 2\frac{d\ln q_{k}}{d\ln k},\;\;\;\; n_{T} \equiv 2\frac{d\ln h_{k}}{d\ln k}, \;\;\;\; r \equiv \left(\frac{h_{k}}{q_{k}}\right)^{2} = 4\left(\gamma\vert \frac{\nu_{k}^{\lambda}}{\nu_{k}}\vert^{2}\right)_{k\vert \eta\vert \ll 1}.$$ Note that the quantities $q_k$, $h_k$, $n_S$, $n_T$, $r$ are functions of the wavenumber only. For reference, we also recall the density perturbation and Newtonian potential linked to the $q$-field in the Friedmann Universe (cf.
eqs.(47), (54)), $$k<aH:\;\;\;\;\;\;\;\; \Delta_k=\frac{2}{3}\left(\frac{k}{aH}\right)^2\Phi_k,\;\;\;\; \Phi_k=\Gamma q_k,$$ where $\Delta_k$, $\Phi_k$ are the dimensionless spectra, respectively, $$\left\langle\left(\frac{\delta\rho_c}{\rho}\right)^2\right\rangle = \int_0^\infty\Delta_k^2\frac{dk}{k},\;\;\;\; \langle \Phi^2\rangle= \int_0^\infty\Phi_k^2\frac{dk}{k},$$ and $\Gamma=\frac{H}{a}\int a\gamma dt = 1-\frac{H}{a}\int a\,dt$ is a function of time ($\Gamma=(1+\beta)^{-1}=$ const for the power-law expansion, $a\sim t^\beta$). The power spectra ================= When it works, the slow-roll approximation allows for a simple derivation of the S-spectrum ($U^{(\lambda)}\simeq 2/\eta^2$, cf. eqs.(29)): $$q_{k}\simeq\frac{H}{2\pi\sqrt{2\gamma}},\;\;\;\; h_{k}=\frac{H}{\pi\sqrt{2}},\;\;\;\;k=aH,$$ where $H=H_{0}v$, $\sqrt{2\gamma}\simeq\epsilon\frac{v^{\prime}}{ v}$, $\vartheta\simeq\frac{1}{2}\epsilon^{2}\left(\frac{v^{ \prime}}{v}\right)^{ \prime}$. The spectra ratio and the local slopes are then the following (see eqs.(28), (32), (56)): $$r\simeq -2n_{T} = 4\gamma\simeq \frac{1}{2}\left(\frac{\epsilon \kappa y^{\kappa-1}}{1+y^{\kappa}}\right)^{2} \le r_{max}= \frac{1}{2}\left(\frac{\epsilon\left(\kappa-1\right)}{y_{1}} \right)^2 \simeq 2\left(\frac{\epsilon}{\epsilon_0}\right)^2,$$ $$n_S-1\simeq 2\left(\vartheta-\gamma\right)=f\left(y\right), \;\;\;\; f_{-}\le f\left(y\right)\le f_{+},$$ where $f\left(y\right) = \frac{\kappa}{2}y^{\kappa-2}\left(\frac{\epsilon }{1+ y^{\kappa}}\right)^2\left(\kappa-1-\frac{\kappa+2}{2}y^\kappa \right)$, $y_{\pm}=\left(\kappa-1\mp\kappa\sqrt{\frac{\kappa-1}{\kappa+2}} \right)^{\frac{1}{\kappa}}$, $f_{\pm}=f\left(y_{\pm}\right)=\frac{\left(\kappa-1\right)\left( \kappa+2 \right)}{12}\left(\frac{\epsilon}{y_{\pm}}\right)^2\left( \pm 2\sqrt{\frac{ \kappa-1}{\kappa+2}}-1\right)\simeq\pm\left( \frac{\epsilon}{\epsilon_0} \right)^2$.
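The closed form $f(y)$ quoted above can be checked against its building blocks $\gamma \simeq \frac{1}{2}(\epsilon v^{\prime}/v)^2$ and $\vartheta\simeq\frac{1}{2}\epsilon^2(v^{\prime}/v)^{\prime}$. A minimal sketch for $v=\sqrt{1+y^{\kappa}}$ (the $\kappa$ and $\epsilon$ values are illustrative):

```python
import math

# Sketch verifying the slow-roll tilt n_S - 1 = 2*(theta - gamma) = f(y)
# for v = sqrt(1 + y^kappa), against gamma = (eps*v'/v)^2/2 and
# theta = (eps^2/2)*(v'/v)'.  kappa and eps are illustrative choices.

kappa, eps = 4, 0.3

def dlnv(y):
    # (v'/v)(y) for v = sqrt(1 + y^kappa)
    return kappa * y**(kappa - 1) / (2.0 * (1.0 + y**kappa))

def f_closed(y):
    return (kappa / 2.0) * y**(kappa - 2) * (eps / (1.0 + y**kappa))**2 * \
           (kappa - 1.0 - 0.5 * (kappa + 2.0) * y**kappa)

def f_from_parts(y, h=1e-6):
    gamma = 0.5 * (eps * dlnv(y))**2
    theta = 0.5 * eps**2 * (dlnv(y + h) - dlnv(y - h)) / (2.0 * h)
    return 2.0 * (theta - gamma)

for y in (0.3, 1.0, 2.0):
    print(f_closed(y), f_from_parts(y))   # the two expressions agree
```

For $\kappa=4$ the tilt vanishes at $y=1$ (the point $y_4$), separating the blue and red branches.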
Eqs.(58) are true for $v=\sqrt{1+y^{\kappa}}$; the T-spectrum deviates maximally from HZ (and the spectrum ratio reaches its maximum) at $y_{1}\simeq (\kappa-1)^{\frac{1}{\kappa}}\simeq 1$; the S-spectrum achieves its minimum and becomes exactly the HZ one at $y_{4} = \left(\frac{\kappa-1}{1+\frac{\kappa}{2}}\right)^{\frac{ 1}{ \kappa}}= y_{1}(\frac{2}{\kappa+2})^{\frac {1}{\kappa}}\simeq 1$; it is the most red (blue) at $y_{-}$ ($y_{+}$); the points $y_1$ and $y_4$ always lie inside the interval $\left(y_{+},y_{-}\right)$, while the region (36) resides there only if $\kappa\le 8$. The equation $f\left(y\right)=$ const $\in\left[f_{-},f_{+}\right]$ has two solutions: one is located within the interval $\left[y_{+},y_{-}\right]$, where $r$ is large, $\frac{r}{r_{max}}^>_\sim\left(\frac{\kappa+1}{3\kappa}\right)^2$ and $r(n_S=1)\simeq r_{max}$; another is outside this interval, where $r$ is small, $\frac{r}{r_{max}}<1$ and $r(n_S=1)=0$. So, for $\kappa\ge 3$ we have from eq.(35) the following asymptotics for the power spectra: $$q_{k}^{2}\simeq\left( \frac{H_{0}}{\epsilon\pi\kappa}\right)^{2} \frac{ \left(1+y^{\kappa}\right)^{3}}{y^{2\kappa-2}}\simeq\frac{ \lambda_{\kappa}}{ 12\pi^{2}}\Biggl\{ \begin{array}{lcl} \kappa^{\frac{\kappa-4}{2}}\vert 2\ln K\vert^{\frac{\kappa+2}{2}}, & \; & K<\exp \left(-\frac{2}{\kappa\epsilon^{2}}\right) \\ \left(\frac{V_{0}}{\lambda_{\kappa}}\right)^{\frac{\kappa-4}{ \kappa-2} }\left(\left(\kappa-2\right)\ln K\right)^{\frac{2\kappa -2}{\kappa-2}}, & \; & K>\exp\left(\frac{2}{\kappa\epsilon^{2}} \right) \end{array} ,$$ $$h_{k}^{2}=\frac{H_{0}^{2}}{2\pi^{2}}\left(1+y^{\kappa}\right) = \frac{1}{6\pi^{2}}\Biggl\{ \begin{array}{lcl} \frac{\lambda_{\kappa}}{\kappa}\vert 2\kappa\ln K\vert^{\frac{ \kappa}{2}}, & \; & K<\exp\left(-\frac{2}{\kappa\epsilon^{2}} \right) \\ V_{0}, & \; & K>\exp\left(\frac{2}{\kappa\epsilon^{2}}\right) \end{array} .$$ In the transition region (36) the ratio of the spectra is approximately constant, independent of the $\kappa$-index:
$r\simeq 2\epsilon^{2}$ (it is a factor $\epsilon^{-2}_0$ less than $r_{max}$). For $\Lambda\lambda$-inflation the spectra are resolved explicitly (see eq.(37)): $$\kappa=4:\;\;\;\; \begin{array}{l} q_{k}\simeq\frac{1}{\pi}\sqrt{\frac{2\lambda}{3}}\left( \epsilon^{-4}+\ln^{2}K\right)^{\frac{3}{4}}, \\ h_{k}=\frac{H_{0}}{\pi}\left(1+\frac{\ln K}{\sqrt{\epsilon^{-4} +\ln^{2}K}} \right)^{-\frac{1}{2}}, \end{array}$$ and $y_{1}=3^{\frac 14}$, $K_1=\exp\left(-\frac{1}{\sqrt{3} \epsilon^2}\right)$, $y_{2}=y_{4}=1$, $y_{-}=\frac{1}{y_{+}}= \left(\sqrt{2}+1\right)^{\frac{1}{2}}$, $K_{\pm}=\exp\left(\mp \frac{1}{\epsilon^2}\right)$. An example of the power spectra for $\epsilon = 0.3$ is shown in Fig.1. Fig.2 clarifies the relation between $r$ and $n_S-1$ for any $\epsilon<1$. We see there is no correlation between the blueness and large $r$: the region of large $r$-values is located in the red and HZ sectors of the S-spectrum. Let us now turn to the case where the slow-roll approximation is broken. For $\Lambda m$-inflation eqs.(57) are true except for the blue part of the S-spectrum ($k>k_{cr}$), where they must be corrected. Here eqs.(52), (53) are solved explicitly, $$y<1:\;\;\;\; k^{\frac {3}{2}}\nu_{k}\simeq\frac{ik\sqrt{\pi x} }{2} H^{(1)}_{\frac{3}{2}p} (x)\;\; {}^{\longrightarrow}_{x\ll 1} \;\; \frac{caH_{0}}{\sqrt{2}p}x^{\frac {3}{2}(1-p)},$$ where $H^{(1)}_p(x)$ is the Hankel function, $x=k\vert\eta\vert$, $c=\frac{p}{\sqrt{2\pi}}\Gamma\left(\frac {3}{2}p\right)2^{\frac{3 }{2}p} =\frac{2^{3p/2}}{3\sqrt{\pi/2}}\Gamma(1+\frac{3}{2}p)$.
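The two quoted forms of the matching constant $c$ agree identically, since $\Gamma(1+x)=x\,\Gamma(x)$. A minimal sketch checking this, and evaluating the blue tilt $n_S = 4-3p$ that follows below, for illustrative $\epsilon$ values:

```python
import math

# Check that the two quoted forms of the matching constant c coincide,
# c = (p/sqrt(2*pi)) * Gamma(3p/2) * 2^{3p/2}
#   = 2^{3p/2} * Gamma(1 + 3p/2) / (3*sqrt(pi/2)),
# and evaluate the blue tilt n_S = 4 - 3p with p = sqrt(1 - 2*eps^2/3).
# The eps values are illustrative.

def c1(p):
    return p / math.sqrt(2.0 * math.pi) * math.gamma(1.5 * p) * 2.0**(1.5 * p)

def c2(p):
    return 2.0**(1.5 * p) * math.gamma(1.0 + 1.5 * p) / (3.0 * math.sqrt(math.pi / 2.0))

for eps in (0.1, 0.5, 0.8):
    p = math.sqrt(1.0 - 2.0 * eps**2 / 3.0)
    print(eps, c1(p), c2(p), 4.0 - 3.0 * p)   # c1 == c2; n_S > 1 for eps > 0
```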
Taking into account the field asymptotics for $y\ll 1$ (see eq.(44)) we obtain the following S-spectrum in the blue range: $$k>k_{cr}:\;\;\;\; q_{k}\simeq\frac{cH_{0}}{2\pi\epsilon}\left(\frac{k}{k_{cr} } \right)^{\frac 32(1-p)},\;\;\;\;n_S^{blue}=4-3p>1.$$ As we see, the spectrum amplitude remains finite for $p \rightarrow 0$ ($n_S\rightarrow 4$).[^10] In most applications we have $n_S <3\;\;(p>\frac 13)$; in this case the whole-spectrum approximation for $\Lambda m$-inflation looks as follows: $$q_k=\frac{H_0(1+y^2)^{\frac 12}(\tilde c+y^2)}{2\pi\epsilon y},$$ where $\tilde c = \frac{c(1+p)}{2p} = \frac{1+p}{\sqrt\pi}\Gamma \left( \frac {3}{2}p\right)2^{\frac 32(p-1)}$ and $y$ is taken at horizon crossing (see eq.(41)).

The T/S effect in $\Lambda$-inflation
=====================================

A large T/S $\sim 1$ (when $k_{cr}\in (10^{-4}, 10^{-3})$) is an intrinsic property of $\Lambda$-inflation. Below, we demonstrate it straightforwardly for the COBE angular scale (see \[19\], \[20\], eq.(1)). 
Then the S and T are written as follows: $${\rm {S} = \sum_{\ell=2}^{\infty} S_{\ell} \exp\left[- \left(\frac{2\ell + 1}{27}\right)^2\right],\;\;\; {T} = \sum_{\ell=2}^{\infty} T_{\ell} \exp\left[- \left(\frac{2\ell + 1}{27}\right)^2\right],}$$ where $S_{\ell}$, $T_{\ell}$ are the corresponding variances in the $\ell$th harmonic component of $\Delta T/T$ on the celestial sphere, $$S_{\ell} = \sum_{m=-\ell}^{\ell} \vert a_{\ell m}^{(S)} \vert^2, \;\;\;\; T_{\ell}=\sum_{m=-\ell}^{\ell}\vert a_{\ell m}^{(T)}\vert^2, \;\;\;\; \frac{\Delta T}{T}\left(\vec e\right)= \sum_{\ell,m,S,T}a_{\ell m}^{(S,T)}Y_{\ell m}\left(\vec e\right).$$ The calculations can be done for instantaneous recombination, $\eta=\eta_E$ \[2\], $$\frac{\Delta T}{T}\left(\vec e\right) = \left(\frac 14\delta_\gamma - \vec {e}\vec {v}_b+\frac {1}{2} h_{00} \right)_E +\frac12 \int^0_E \frac{\partial h_{ik}}{\partial\eta}e^ie^kdx, \;\; e^i =(1, -\vec e),\; x\equiv \vert\vec{x}\vert=\eta_0-\eta,$$ where the SW-integral makes the dominant contribution on large scales (see eq.(45)), and $\delta_\gamma$ and $\vec v_b$ are the photon density contrast and baryon peculiar velocity, respectively. The mean $S_\ell$ and $T_\ell$ values seen by an arbitrary observer in the matter-dominated Universe (e.g. \[21\]) are explicitly related to the respective power spectra (see eqs.(55)): $$S_\ell = \frac{2\ell+1}{25}\int^{\infty}_0q_k^2 j^2_\ell\left(\frac{k}{k_0} \right)\frac{dk}{k},$$ $$T_\ell =\frac{9\pi^2}{16}\left(2\ell+1\right) \frac{\left(\ell+ 2\right)!}{ \left(\ell-2\right)!}\int^{\infty}_0h_k^2 I^2_\ell\left(\frac{k}{k_0}\right) \frac{dk}{k},$$ where $k_0=\eta_0^{-1}=\frac{H_0}{2}\simeq 1.6\times 10^{-4}h$ Mpc${}^{-1}$, $$j_\ell\left(x\right)=\sqrt{\frac{\pi}{2x}}J_{\ell+1/2}\left(x \right),\;\;\;\; I_\ell (x) = \int_0^x\frac{J_{\ell+1/2}\left(x-y\right)}{ \left( x-y\right)^{5/2}}\frac{J_{5/2}\left(y\right)}{y^{3/2}}dy.$$ We have derived T/S for $\Lambda m$-inflation using the approximation (61). 
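The spherical Bessel identity entering $S_\ell$ above can be checked numerically; a minimal sketch (assuming SciPy's `spherical_jn` and `jv` are available):

```python
import numpy as np
from scipy.special import jv, spherical_jn

# Verify j_l(x) = sqrt(pi/(2x)) * J_{l+1/2}(x) over a range of x and l
x = np.linspace(0.5, 20.0, 200)
for ell in (2, 5, 10):
    lhs = spherical_jn(ell, x)
    rhs = np.sqrt(np.pi / (2 * x)) * jv(ell + 0.5, x)
    assert np.allclose(lhs, rhs)
```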
The result is presented in Fig.3 as a function of two parameters of the model: the spectrum index in the blue asymptotics $n_S^{blue}$ (see eq.(60)) and the critical scale $k_{cr}$ (in units of $k_0$). A similar behaviour of T/S is found for $\Lambda\lambda$-inflation. Actually, the two-arm structure of T/S is typical for any $\Lambda$-inflation model: T/S reaches its maximum at $k_{cr}\sim k_{COBE}\sim 10^{-3}h$ Mpc${}^{-1}$ and gradually decays in both the blue ($k_{cr}<k_{COBE}$) and the far red ($k_{cr}\gg k_{COBE}$) sectors of the S-mode. To be precise, the T/S-maximum is achieved at the location of the $r$-maximum (where $\gamma$ is the largest and thus $\vartheta=0$). There (and everywhere in this region) the S-spectrum slope is quite close to HZ for $\epsilon\ll \epsilon_0$ (cf. eqs.(32), (58)): $$1-n_S^{(r_{max})}\simeq -n_T^{(r_{max})}=2\gamma_{max}\simeq \frac {1}{2} r_{max}\simeq \left(\frac{\epsilon}{\epsilon_0}\right)^2 \ll 1.$$ It is important that T/S remains large in a broad $k$-region including the point where the S-spectrum is exactly HZ: $\frac{r_{n_S = 1}}{r_{max}}\simeq \frac{4}{9}\left(1+ \frac{\kappa}{2}\right)^{\frac{2}{\kappa}} >\frac 49$.

Discussion
==========

It may seem a paradox that T/S can be as large as 1 for such a simple model as $\Lambda$-inflation. However, it can be easily understood. In fact, the model recalls a case of double inflation where the large T/S is generated at the intermediate scales between the first and second stages. So, we can assume that it is sufficient to evaluate T/S by the end of the first stage ($\varphi \sim\varphi_{cr}\sim 1$) where the slow-roll condition is marginally applicable. Here (cf. eqs.(2), (32)) $$\frac TS \sim \varphi^{-2}\sim 1.$$ Often T/S is presented as a function of the gravitational-wave-spectrum index $n_T$ or the inflationary $\gamma$-parameter estimated at the given scale, see eq.(2) (e.g. \[22\], \[23\], \[24\], \[25\], and others). We think this formula is universal for most types of cosmic inflation. 
We can argue this by noting a simple physical relation, $$\frac TS\simeq 3r,$$ where $r$ is taken at the scale where the T/S is determined ($k_{COBE} \sim 10^{-3}h$ Mpc${}^{-1}$). The factor 3[^11] takes into account the higher ability of the T-mode to contribute to $\Delta T/T$. We now see from eqs.(52)-(56) that $r$ is a number found in the limit $k\vert\eta \vert\ll 1$: $$k\vert\eta\vert\ll 1:\;\;\;\; r=4\gamma \vert\frac{\nu_k^\lambda}{\nu_{k}}\vert^{2}.$$ Assuming that the r.h.s. stays frozen outside the horizon ($k\vert \eta\vert < 1$), we can estimate $r$ as the r.h.s. of eq.(69) at the inflation horizon crossing time. Thus, we may conclude that $$r\simeq 4\left(\gamma \vert\frac{\nu_k^\lambda}{\nu_k}\vert^2 \right)_{k\vert\eta\vert=1}\simeq 4\gamma{}_{_{k\vert\eta\vert =1}}.$$ The latter is due to the fact that the functions $\nu_k^\lambda$ and $\nu_k$ are close to each other at horizon crossing: they both start from the same initial conditions (53) and obey the same equations inside the horizon (see eqs.(52))[^12]. Notice this argument is more general than the slow-roll-condition validity: actually, according to eqs.(51), (70) the $r$-number just counts the difference between the phase space volumes of phonons and gravitons. So, we see that a large T/S is created each time the $\gamma$ factor approaches near-unity values. It may happen either at the end of inflation (note inflation stops for $\gamma=1$) or in numerous intermediate periods during the inflationary regime where one type of inflation is changed for another. Such transition periods can be caused by many reasons: e.g. a functional change of the dynamical potential in the course of inflation (e.g. $\Lambda$-inflation), a peculiarity in the potential-energy shape (e.g. a non-analyticity, a step, a plateau, or a break of the first or second derivative of $V(\varphi)$; see e.g. \[26\]), or a change of the inflaton field (e.g. 
double inflation), or any type of phase transitions or other evolutionary restructurings of the field Lagrangian that may slow down, terminate, or break up the process of inflation. Obviously, each particular way of inflation leaves its own imprints in the power spectra and requires special investigation. However, the issue of T/S is a matter of a very generic argument: the inflationary ($\gamma$, $H$) and/or spectral ($r$, $n_T$) parameters estimated in the appropriate energy/scale region. It (the T/S value) is totally independent of the local $n_S$ and, thus, has nothing to do with the particular S-spectrum shape produced in a given model. The principal quantity for estimating T/S becomes the energy of the inflaton: the Hubble parameter at the inflationary horizon-crossing time, $H$ $[GeV]$. The motivation is the following: as the CGW amplitude is always about $H$ (cf. eq.(57)) and $q_k\sim10^{-5}$ (from LSS originated from the adiabatic S-mode), we have $$\frac{{\rm T}}{{\rm S}}\simeq\frac 16\left(\frac{H}{q_{COBE}}\right)^2 =\left(\frac{H}{6\times 10^{13}GeV}\right)^2\left(\frac{10^{-5}}{ q_{COBE}} \right)^2,$$ where $q_{COBE}\equiv q_{k_{COBE}}$. So, measuring the T/S brings vital direct information on the physical energy scale where the cosmic perturbations have been created; a cosmologically noticeable T/S could be achieved only if the inflation occurred at sub-Planckian (GUT) energies, $H{}> 10^{13}GeV$. If the CDPs were generated at smaller energies (e.g. during the electroweak transition) then T/S would vanish. The point we emphasize in this paper is that $\Lambda$-inflation brings about two distinguished signatures – a wing-like S-spectrum and the possibility of a large T/S – under quite a simple and natural assumption on the potential energy of the inflaton: the existence in $V(\varphi )$ of a [*metastable dynamical constant*]{} in addition to an [*arbitrary functional*]{} $\varphi $-dependent term. 
Obviously, three independent parameters determine the degrees of freedom of any $\Lambda $-inflation model. They can be, for instance, T/S and the local $n_S$ (at the COBE scale) as well as $k_{cr}$ (the scale where the S-spectrum is at its minimum) or, alternatively, the $r$-maximum and its position (the $k_1$ scale) as well as $V_0$. If T/S is large, we find a quite definite prediction for the location of the $\Lambda $-inflation parameters near GUT energies (see eq.(15)): $$\frac{{\rm T}}{{\rm S}}>0.1:\;\;\;\; \begin{array}{rcl} \sqrt{V_0} & \in & \left( \frac{\zeta ^{-\frac \kappa 2}}{\sqrt{\kappa/2}}, \zeta \right) \left( \frac{q_{COBE}}{10^{-5}}\right) \left( 7\times 10^{15}GeV\right) ^2 \\ \frac{\sqrt{\lambda _\kappa /3}}{q_{COBE}} & \in & \left( 10^{-\frac \kappa 2},\frac{\kappa}{2}\zeta \right) \left(\kappa-1\right)^{\frac{1-\kappa }2} \left( 2\times 10^{18}GeV\right)^{2-\frac \kappa 2} \end{array} ,$$ where $\zeta \equiv 4\epsilon \left( \kappa -1\right) ^{\frac{\kappa -1} \kappa }\simeq \frac{2(\kappa -1)10^{19}GeV}{\varphi _{cr}}\in (1,10)$; recall these estimates assume only the condition T/S$>0.1$ (cf. eqs.(57), (58), (68)).

Conclusions
===========

Our conclusions are the following:

- We introduce a broad class of elementary inflaton models called $\Lambda $-inflation. The inflaton at its local minimum has a [*positive residual potential energy*]{}, $V_0>0$. The hybrid inflation model (at the intermediate evolutionary stage) is a special case of $\Lambda $-inflation; chaotic inflation is a measure-zero model in the family of $\Lambda $-inflation models.

- The S-perturbation spectrum generated in $\Lambda $-inflation has a non-power-law [*wing-like shape with a broad minimum*]{} where the slope is locally HZ ($n_S=1$); it is blue, $n_S>1$ (red, $n_S<1$), on short (large) scales. The T-perturbation spectrum always remains red, with the maximum deviation from HZ at the scale near the S-spectrum minimum. 
- The cosmic gravitational waves generated in $\Lambda$-inflation contribute [*maximally*]{} to the SW $\Delta T/T$-anisotropy, (T/S)${}_{max}{}_{\sim }^{<}10$, at scales where the S-spectrum is slightly red or nearly HZ ($k_{\sim }^{<}k_{cr}$). The T/S remains small ($\ll 1$) in both the blue ($k>k_{cr}$) and far red ($k\ll k_{cr}$) S-spectrum asymptotics.

- [*Three*]{} independent arbitrary parameters determine the fundamental $\Lambda $-inflation; they can be the T/S, $k_{cr}$ (the scale where $n_S=1$), and $\sqrt{V_0}$ (the CDP amplitude at the $k_{cr}$ scale; a large value of T/S is expected if $V_0^{\frac 14}\sim 10^{16}GeV$). This provides considerable flexibility in fitting the dark matter cosmologies based on $\Lambda $-inflation to various observational tests.

[*Acknowledgements*]{} The work was partly supported by the INTAS grant 97-1192 and Swiss National Science Foundation grant 7IP 050163.96/1.

APPENDIX: $\Lambda m$-inflation with $\epsilon ^2<0.9$ {#appendix-lambda-m-inflation-with-epsilon-20.9 .unnumbered}
======================================================

Here, we consider the inflation model with $\kappa =2$ and $p>\frac{2}{3}\;(\epsilon ^2<\frac 56)$. Under the latter restriction, $\vartheta \simeq {\rm const}=\frac 32(1-p)$ during the whole dS stage ($y<1$, cf. eq.(42)), and it decays as $\vartheta \simeq \frac{3f}4\simeq -\frac{\epsilon ^2}{2y^2}$ for $y>1$. Making use of eqs.(26) we find the following best fit for the whole $y$-evolution (analytically exact in the limit $\epsilon \rightarrow 0$): $$\vartheta \simeq \frac 32\left( 1-\sqrt{1-f}\right) = \frac{1.5f}{1+\sqrt{1-f}},\eqno(A1)$$ $$\sqrt{\frac{\gamma}{2}}\simeq \frac{\epsilon y}{\left( 1+y^2\right) \left( 1+\sqrt{1-f}\right) },\eqno(A2)$$ where $f\equiv \frac{2\epsilon ^2}3\frac{1-y^2}{(1+y^2)^2}$. 
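The two forms of eq.(A1) and the small-$f$ behaviour $\vartheta\simeq 3f/4$ quoted above can be verified symbolically; a minimal sketch (our own check):

```python
import sympy as sp

# eq.(A1): theta = (3/2)(1 - sqrt(1-f)) = 1.5 f / (1 + sqrt(1-f))
f = sp.symbols('f')
theta1 = sp.Rational(3, 2) * (1 - sp.sqrt(1 - f))
theta2 = sp.Rational(3, 2) * f / (1 + sp.sqrt(1 - f))

# the two forms agree identically for |f| < 1 (exact rational spot checks)
for val in (sp.Rational(1, 10), sp.Rational(1, 2), sp.Rational(-3, 10)):
    assert sp.simplify(theta1.subs(f, val) - theta2.subs(f, val)) == 0

# small-f behaviour: theta = 3f/4 + O(f^2)
assert sp.series(theta1, f, 0, 2).removeO() == 3 * f / 4
```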
The substitution of (A2) into eq.(27) brings about the explicit integration: $$\epsilon ^2\ln \left( \frac {v\eta}{\sqrt{2}\eta_{cr}}\right) \simeq J(\xi ), \eqno(A3)$$ where $\xi \equiv v^2\left( 1+\sqrt{1-f}\right) =v^2+\sqrt{v^4+\left( 1-p^2\right) \left( v^2-2\right) }$, $v^2=1+y^2$, $$J\left( \xi \right) \equiv \int_1\frac{\xi dy}y=\frac \xi 2-2+\frac 12\ln \left[ \left( \frac{\xi -1-p}{3-p}\right) ^{1+p}\left( \frac{\xi -1+p}{3+p} \right) ^{1-p}\left( \frac{2\xi +1-p^2}{9-p^2}\right) ^{\frac{1-p^2}2 }\right] .$$ Obviously, the evolution goes from large $\xi =2y^2\left( 1+\frac{1+\epsilon ^2/6}{y^2}+O\left( \frac 1{y^4}\right) \right) >4$ to small $\xi =\left( 1+p\right) \left( 1+\frac{3-p}{2p}y^2+O\left( y^4\right) \right) <4$, and $\xi _{cr}=4$. Accordingly, we have the following $y$-asymptotics from eq.(A3): $$y^2=\Bigg \{ \begin{array}{lcl} \epsilon ^2\ln \left( \frac{\omega _5\eta}{\eta_{cr}} \right) +\left( 1+p^2 \right)\ln \left( \frac{y_5}y\right) +1, & \; & y>1, \\ y_6^2\left( \frac{\omega _6\eta}{\eta_{cr}} \right) ^{3(1-p)}, & \; & y<1, \end{array} \eqno(A4)$$ where $\omega _5^{-1}=\sqrt{2}\exp \left( \frac 16\right)$, $y_5=\left(\frac{3-p}{2}\right)^{\frac{(1+p)(3-p)}{4(1+p^2)}} \left(\frac{3+p}2\right)^{\frac{(1-p)(3+p)}{4(1+p^2)}}$, $\omega _6=\frac{\eta_{cr}}{\eta_3}=\frac 1{\sqrt{2}}(3+p)^{\frac{3+p}{6(1+p) }}$, $y_6=(2p)^{\frac p{1+p}}(1+p)^{\frac{p-3}4}\exp\left[\frac{3-p}{2(1+p)} \right] $. In the allowed region $p>\frac 23$, the coefficients $\omega _6$ and $y_6$ remain close to unity. In the slow-roll limit $(p\rightarrow 1)$, $\omega _6=2^{\frac 16}$ and $y_6=\sqrt{e}$. References {#references .unnumbered} ========== 1\. E.M. Lifshitz, Zh. Eksp. Teor. Fiz. [**16**]{}, 587 (1946).\ 2. R.K. Sachs, A.M. Wolfe, ApJ [**147**]{}, 73 (1967).\ 3. L.P. Grishchuk, Zh. Eksp. Teor. Fiz. [**67**]{}, 825 (1974).\ 4. V.N. Lukash, Zh. Eksp. Teor. Fiz. [**79**]{}, 1601 (1980).\ 5. A.D. Linde, Phys. Lett. B [**129**]{}, 177 (1983).\ 6. F. Lucchin, S. 
Matarrese, Phys. Rev. D [**32**]{}, 1316 (1985).\ 7. R.L. Davis, H.M. Hodges, G.F. Smoot, et al., Phys. Rev. Lett. [**69**]{}, 1856 (1992).\ 8. A.D. Linde, Phys. Rev. D [**49**]{}, 748 (1994).\ 9. J. Garcia-Bellido, D. Wands, Phys. Rev. D [**54**]{}, 6040 (1996).\ 10. J. Garcia-Bellido, A. Linde, D. Wands, Phys. Rev. D [**54**]{}, 7181 (1996).\ 11. E.J. Copeland, A.R. Liddle, D.H. Lyth, et al., Phys. Rev. D [**49**]{}, 6410 (1994).\ 12. V.N. Lukash, E.V. Mikheeva, Gravitation and Cosmology [**2**]{}, 247 (1996).\ 13. B.J. Carr, J.H. Gilbert, Phys. Rev. D [**50**]{}, 4853 (1994).\ 14. J. Gilbert, Phys. Rev. D [**52**]{}, 5486 (1995).\ 15. A. Melchiorri, M.S. Sazhin, V.V. Shulga, N. Vittorio, to appear in ApJ, preprint astro-ph/9901220 (1999).\ 16. V.N. Lukash, in: [*Cosmology: The physics of the Universe*]{}, ed. by B.A. Robson et al., World Scientific, Singapore (1996), p.213.\ 17. V.A. Rubakov, M.V. Sazhin, A.M. Veryaskin, Phys. Lett. B [**115**]{}, 189 (1982).\ 18. A.A. Starobinskii, Zh. Eksp. Teor. Fiz. [**30**]{}, 719 (1979).\ 19. G.F. Smoot, C.L. Bennett, A. Kogut, et al., ApJ [**396**]{}, L1 (1992).\ 20. C.L. Bennett, A.J. Banday, K.M. Gorski, et al., ApJ [**464**]{}, L1 (1996).\ 21. F. Lucchin, S. Matarrese, S. Mollerach, ApJ [**401**]{}, L49 (1992).\ 22. M.S. Turner, Phys. Rev. D [**48**]{}, 5539 (1993).\ 23. E.W. Kolb, S.L. Vadas, Phys. Rev. D [**50**]{}, 2479 (1994).\ 24. J.E. Lidsey, A.R. Liddle, E.W. Kolb, et al., Rev. Mod. Phys. [**69**]{}, 373 (1997).\ 25. A.A. Starobinsky, Pis’ma Astron. Zh. [**11**]{}, 323 (1985).\ 26. J. Lesgourgues, D. Polarski, A.A. Starobinsky, to appear in MNRAS, preprint astro-ph/9807019 (1999). [^1]: Obviously, all three modes of the perturbations of the gravitational field – scalar, vector and tensor (see \[1\]) – induce the CMB anisotropy through the SW-effect \[2\]. However, most of the inflationary models considered by now are based on scalar inflaton fields, which cannot be a source for the vector mode. 
A general physical reason for the production of the T and S perturbations in the expanding Universe is the [*parametric amplification effect*]{} \[3\], \[4\]: the spontaneous creation of quantum physical fields in a non-stationary gravitational background. [^2]: There is no fundamental theorem restricting T/S relative to unity: the inflationary requirement, $\gamma\equiv - \dot{H}/H^2 < 1$, imposes only a weak constraint, T/S ${}_{\sim}^< 10$, obviously insufficient to discriminate the T-mode in the cosmological context (see eqs.(2),(5)). [^3]: This is because the CGW spectrum created in [*any*]{} inflation is intrinsically tied to the evolution of the Hubble factor at the horizon crossing time: $n_T \simeq - 2\gamma < 0$. Notice the T-spectrum always stays red in minimally coupled gravity because of the systematic decrease in time of the Hubble factor, cf. eq.(12). [^4]: In most applications $\varphi_{cr}\sim 1$, see eq.(32). [^5]: We do not discuss here possible mechanisms for such metastability (it may be the coupling to other physical fields, a way of double- or plateau-like inflations, etc.) and take the $\varphi^{\ast}$ value as an arbitrary parameter of our model (allowing one to recalculate $k_{cr}$ in Mpc). Note that in CI $\varphi^{\ast} \simeq \varphi_{cr}$. [^6]: Hereafter, the prime/dot will denote the derivative over $y/x$, i.e. the normalized $\varphi/t$, respectively. [^7]: Eq.(18) yields $$\frac{d\varphi}{d\ln a}=-\sqrt{2\gamma},\;\;\; \frac{d^{2}\varphi}{d(\ln a)^{2}}=\frac{d\gamma}{d\varphi},$$ the $(-)$ sign implies that $\varphi$ decreases with time. [^8]: The fitting accuracy is quite satisfactory. Say, in the slow-roll approximation $p \rightarrow 1$: $\frac{\eta_{3}}{ \eta_{cr}}=2^{-\frac {1}{6}}\simeq 1$ and $y\omega^{1.5(1-p)} = \sqrt {e} \sim 1$. See the Appendix for more detail. 
[^9]: Here, the transition from the quantum (squeezed) to the classical case occurs when one neglects the [*decaying*]{} solutions of eqs.(48),(49) for $\eta\rightarrow 0$: $$k\vert\eta\vert < 1:\;\;\;\; q_{d}\sim\int^{0}\frac{d\eta}{a^{2}\gamma} = \frac{1}{3\gamma} H^{2}\eta^{3}\left(1 + O(\gamma)\right),\;\;\;\; G_d\sim\int^{0}\frac{d\eta}{a^{2}} = \frac {1}{3}H^{2}\eta^{3} \left(1+O(\gamma)\right),$$ and thus is left only with the growing ones (see eq.(54)). This procedure turns the annihilation and creation operators into $c$-numbers (where the commutators vanish). [^10]: This corrects the wrong statement on the divergence of $q_k$ at $p\rightarrow 0$ made in some previous publications. [^11]: Or a number close to 3, to be found more accurately by a special investigation elsewhere. Ultimately, it is proportional to the ratio of the effective numbers of T and S spin projections on given spherical harmonics, see eqs.(64), (65). [^12]: The difference in their evolutions originates only from the different effective potentials $U^{(\lambda)}$ entering eqs.(52); however, both potentials vanish for $k\vert\eta\vert>1$.
---
abstract: 'Other than scattering problems where perturbation theory is applicable, there are basically two ways to solve problems in physics. One is to reduce the problem to harmonic oscillators, and the other is to formulate the problem in terms of two-by-two matrices. If two oscillators are coupled, the problem combines both two-by-two matrices and harmonic oscillators. This method then becomes a powerful research tool covering many different branches of physics. Indeed, the concept and methodology in one branch of physics can be translated into another through the common mathematical formalism. Coupled oscillators provide clear illustrative examples for some of the current issues in physics, including entanglement, decoherence, and Feynman’s rest of the universe. In addition, it is noted that the present form of quantum mechanics is largely a physics of harmonic oscillators. Special relativity is the physics of the Lorentz group, which can be represented by the group of two-by-two matrices commonly called $SL(2,c)$. The coupled harmonic oscillators can therefore play the role of combining quantum mechanics with special relativity. Both Paul A. M. Dirac and Richard P. Feynman were fond of harmonic oscillators, while they used different approaches to physical problems. Both were also keenly interested in making quantum mechanics compatible with special relativity. It is shown that the coupled harmonic oscillators can bridge these two different approaches to physics.'
---

[**Harmonic Oscillators as Bridges between Theories: Einstein, Dirac, and Feynman**]{}

Y. S. Kim[^1]\
Department of Physics, University of Maryland,\
College Park, Maryland 20742, U.S.A.\

Marilyn E. Noz [^2]\
Department of Radiology, New York University,\
New York, New York 10016, U.S.A.\

Introduction {#intro}
============

Because of its mathematical simplicity, the harmonic oscillator provides soluble models in many branches of physics. 
It often gives a clear illustration of abstract ideas. In many cases, the problems are reduced to the problem of two coupled oscillators. Soluble models in quantum field theory, such as the Lee model [@sss61] and the Bogoliubov transformation in superconductivity [@fewa71], are based on two coupled oscillators. More recently, the coupled oscillators have formed the mathematical basis for squeezed states in quantum optics [@knp91]. According to our experience, the present form of quantum mechanics is largely a physics of harmonic oscillators. Since the group $SL(2,C)$ forms the universal covering group of the Lorentz group, special relativity is a physics of two-by-two matrices. Therefore, the coupled harmonic oscillator can provide a concrete model for relativistic quantum mechanics. With this point in mind, Dirac and Feynman used harmonic oscillators to test their physical ideas. In this paper, we first examine Dirac’s attempts to combine quantum mechanics with relativity in his own style: to construct mathematically appealing models. We then examine how Feynman approached this problem. He insisted on his own style: observe the experimental world, tell the story of the real world, and then write down mathematical formulas as needed. In this paper, we use coupled harmonic oscillators to build a bridge between the two different attempts made by Dirac and Feynman. The coupled oscillator system not only connects the ideas of these two physicists, but also serves as an illustrative tool for some of the current ideas in physics, such as entanglement and decoherence. Feynman’s rest of the universe is a case in point. We shall show in this paper, using coupled harmonic oscillators, that this concept is a special case of entanglement. In their 1999 paper [@hkn99ajp], Han [*et al.*]{} used two coupled harmonic oscillators to interpret what Feynman said in his book. 
There, one oscillator played the role of the world in which we do physics, and the other oscillator that of the rest of the universe. We shall see in this paper that the concept of Feynman’s rest of the universe can be expanded to the concept of entanglement. Since the same coupled oscillators can be used both for illustrating entanglement and for the oscillator-based relativistic quantum mechanics, we are able to extend the concept of entanglement to the Lorentz-covariant world. In so doing, we arrive at the concept of space-time entanglement. Indeed, the space-time entanglement is one of the essential ingredients in the covariant formulation of relativistic quantum mechanics. In Sec. \[quantu\], we start with the classical Hamiltonian for two coupled oscillators. It is possible to obtain an explicit solution of the Schrödinger equation in terms of the normal coordinates. We then derive a convenient form of this solution from which the concept of entanglement can be studied thoroughly. In Sec. \[frest\], we construct the density matrix using the solution given in Sec. \[quantu\], and explain the effect of the rest of the universe which we are not able to observe. Section \[dirosc\] examines Dirac’s life-long attempt to combine quantum mechanics with special relativity. In Sec. \[adden\], we study some of the problems which Dirac left us to solve. In Sec. \[feyosc\], starting from Dirac’s work, we construct a covariant model of relativistic extended particles by combining Dirac’s oscillators with Feynman’s phenomenological approach to the relativistic quark model. It is shown that Feynman’s parton model can be interpreted as a limiting case of a covariant bound-state model.

Coupled Oscillators and Entangled Oscillators {#quantu}
=============================================

Two coupled harmonic oscillators serve many different purposes in physics. 
It is well known that this oscillator problem can be formulated as the problem of a quadratic form in two variables. The diagonalization of the quadratic form includes a rotation of the coordinate system. However, the diagonalization process requires additional transformations involving the scales of the coordinate variables [@hkn99ajp; @arav89]. Indeed, it was found that the mathematics of this procedure can be as complicated as the group theory of Lorentz transformations in a six-dimensional space with three spatial and three time coordinates [@hkn95jm]. However, in this paper, we start with a simple problem of two oscillators with equal mass. This contains enough physics for our present purpose. The Hamiltonian then takes the form $$\label{eq.1} H = {1\over 2}\left\{{1\over m} p^{2}_{1} + {1\over m}p^{2}_{2} + A x^{2}_{1} + A x^{2}_{2} + 2C x_{1} x_{2} \right\}.$$ If we choose coordinate variables $$\begin{aligned} \label{eq.3} &{}& y_{1} = {1\over\sqrt{2}}\left(x_{1} + x_{2}\right) , \nonumber\\[2ex] &{}& y_{2} = {1\over\sqrt{2}}\left(x_{1} - x_{2}\right) ,\end{aligned}$$ the Hamiltonian can be written as $$\label{eq.6} H = {1\over 2m} \left\{p^{2}_{1} + p^{2}_{2} \right\} + {K\over 2}\left\{e^{-2\eta} y^{2}_{1} + e^{2\eta} y^{2}_{2} \right\} ,$$ where $$\begin{aligned} \label{eq.5} &{}& K = \sqrt{A^{2} - C^{2}} , \nonumber \\[.5ex] &{}& \exp(2\eta) =\sqrt{\frac{A - C}{A + C} } .\end{aligned}$$ The classical eigenfrequencies are $\omega_{\pm} = \omega e^{\pm\eta}$ with $$\label{omega} \omega = \sqrt{\frac{K}{m}} .$$ If $y_{1}$ and $y_{2}$ are measured in units of $(mK)^{1/4}$, the ground-state wave function of this oscillator system is $$\label{eq.13} \psi_{\eta}(x_{1},x_{2}) = {1 \over \sqrt{\pi}} \exp{\left\{-{1\over 2}(e^{-\eta} y^{2}_{1} + e^{\eta} y^{2}_{2}) \right\} } .$$ The wave function is separable in the $y_{1}$ and $y_{2}$ variables. 
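The diagonalization just described can be verified directly; a short symbolic sketch (our own check, not from the paper) confirms that the normal coordinates remove the cross term and that the resulting coefficients match the definitions of $K$ and $\eta$:

```python
import sympy as sp
import math

# Check that y1 = (x1+x2)/sqrt(2), y2 = (x1-x2)/sqrt(2) diagonalize
# the potential A x1^2 + A x2^2 + 2C x1 x2 into (A+C) y1^2 + (A-C) y2^2.
x1, x2, A, C = sp.symbols('x1 x2 A C')
y1 = (x1 + x2) / sp.sqrt(2)
y2 = (x1 - x2) / sp.sqrt(2)

V = A * x1**2 + A * x2**2 + 2 * C * x1 * x2
V_diag = (A + C) * y1**2 + (A - C) * y2**2
assert sp.expand(V - V_diag) == 0

# Numeric spot check that K e^{-2 eta} = A + C and K e^{2 eta} = A - C,
# with K = sqrt(A^2 - C^2) and exp(2 eta) = sqrt((A-C)/(A+C)).
a_val, c_val = 5.0, 3.0
K = math.sqrt(a_val**2 - c_val**2)                   # K = 4
e2eta = math.sqrt((a_val - c_val) / (a_val + c_val))  # exp(2 eta) = 1/2
assert math.isclose(K / e2eta, a_val + c_val)  # A + C = 8
assert math.isclose(K * e2eta, a_val - c_val)  # A - C = 2
```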
However, for the variables $x_{1}$ and $x_{2}$, the story is quite different, and can be extended to the issue of entanglement. There are three ways to excite this ground-state oscillator system. One way is to multiply by Hermite polynomials for the usual quantum excitations. The second way is to construct coherent states for each of the $y$ variables. Yet another way is to construct thermal excitations. This requires density matrices and Wigner functions [@hkn99ajp]. The key question is how the quantum mechanics in the world of the $x_{1}$ variable is affected by the $x_{2}$ variable. If the $x_{2}$ space is not observed, it corresponds to Feynman’s rest of the universe. If we use two separate measurement processes for these two variables, these two oscillators are entangled. Let us write the wave function of Eq.(\[eq.13\]) in terms of $x_{1}$ and $x_{2}$: $$\label{eq.14} \psi_{\eta}(x_{1},x_{2}) = {1 \over \sqrt{\pi}} \exp\left\{-{1\over 4}\left[e^{-\eta}(x_{1} + x_{2})^{2} + e^{\eta}(x_{1} - x_{2})^{2} \right] \right\} .$$ When the system is decoupled with $\eta = 0$, this wave function becomes $$\label{eq.15} \psi_{0}(x_{1},x_{2}) = \frac{1}{\sqrt{\pi}} \exp{\left\{-{1\over 2}(x^{2}_{1} + x^{2}_{2}) \right\}} .$$ The system then becomes separable and disentangled. As was discussed in the literature for several different purposes [@knp91; @kno79ajp; @knp86], this wave function can be expanded as $$\label{expan} \psi_{\eta }(x_{1},x_{2}) = {1 \over \cosh(\eta/2)}\sum^{}_{k} \left(\tanh{\eta \over 2}\right)^{k} \phi_{k}(x_{1}) \phi_{k}(x_{2}) ,$$ where $\phi_{k}(x)$ is the harmonic oscillator wave function for the $k$-th excited state. This expansion serves as the mathematical basis for squeezed states of light in quantum optics [@knp91], among other applications. In addition, this expression clearly demonstrates that the coupled oscillators are entangled oscillators. Let us look at the expression of Eq.(\[expan\]). 
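This expansion can be checked numerically term by term; a minimal sketch (our own check) sums the series with normalization $1/\cosh(\eta/2)$ and expansion parameter $\tanh(\eta/2)$, which is what is required for the sum to reproduce the Gaussian of Eq.(\[eq.14\]):

```python
import numpy as np
from math import factorial, sqrt, pi, cosh, tanh
from numpy.polynomial.hermite import hermval

def phi(k, x):
    """Harmonic-oscillator eigenfunction phi_k(x) in dimensionless units."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0  # select the physicists' Hermite polynomial H_k
    return hermval(x, coef) * np.exp(-x**2 / 2) / sqrt(2**k * factorial(k) * sqrt(pi))

def psi_exact(eta, x1, x2):
    """The squeezed Gaussian of Eq.(eq.14)."""
    return np.exp(-0.25 * (np.exp(-eta) * (x1 + x2)**2
                           + np.exp(eta) * (x1 - x2)**2)) / sqrt(pi)

def psi_series(eta, x1, x2, kmax=60):
    """Truncated expansion: sum_k tanh^k(eta/2) phi_k(x1) phi_k(x2) / cosh(eta/2)."""
    t = tanh(eta / 2)
    return sum(t**k * phi(k, x1) * phi(k, x2) for k in range(kmax)) / cosh(eta / 2)

eta = 0.8
for x1, x2 in [(0.3, -0.5), (1.0, 1.2), (-0.7, 0.4)]:
    assert abs(psi_exact(eta, x1, x2) - psi_series(eta, x1, x2)) < 1e-10
```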
If the variables $x_{1}$ and $x_{2}$ are measured separately, the system is entangled. In Sec. \[dirosc\], we shall see that the mathematics of the coupled oscillators can serve as the basis for the covariant harmonic oscillator formalism where the $x_{1}$ and $x_{2}$ variables are replaced by the longitudinal and time-like variables, respectively. This mathematical identity will lead to the concept of space-time entanglement in special relativity.

Feynman’s Rest of the Universe {#frest}
==============================

In his book on statistical mechanics [@fey72], Feynman makes the following statement about the density matrix. [*When we solve a quantum-mechanical problem, what we really do is divide the universe into two parts - the system in which we are interested and the rest of the universe. We then usually act as if the system in which we are interested comprised the entire universe. To motivate the use of density matrices, let us see what happens when we include the part of the universe outside the system*]{}. We can use the coupled harmonic oscillators to illustrate what Feynman says in his book. Here we can use $x_{1}$ and $x_{2}$ as the variable we observe and the variable in the rest of the universe, respectively. By using the rest of the universe, Feynman does not rule out the possibility of other creatures measuring the $x_{2}$ variable in their part of the universe. Using the wave function $\psi_{\eta}(x_{1},x_{2})$ of Eq.(\[eq.14\]), we can construct the pure-state density matrix $$\rho(x_{1},x_{2};x_{1}',x_{2}') = \psi_{\eta}(x_{1},x_{2})\psi_{\eta}(x_{1}',x_{2}') ,$$ which satisfies the condition $\rho^{2} = \rho $: $$\rho(x_{1},x_{2};x_{1}',x_{2}') = \int \rho(x_{1},x_{2};x_{1}'',x_{2}'') \rho(x_{1}'',x_{2}'';x_{1}',x_{2}') dx_{1}'' dx_{2}'' .$$ If we are not able to make observations on $x_{2}$, we should take the trace of the $\rho$ matrix with respect to the $x_{2}$ variable. 
Then the resulting density matrix is $$\label{integ} \rho(x_{1}, x_{1}') = \int \rho (x_{1},x_{2};x'_{1},x_{2}) dx_{2} .$$ The above density matrix can also be calculated from the expansion of the wave function given in Eq.(\[expan\]). If we perform the integral of Eq.(\[integ\]), the result is $$\label{dmat} \rho(x,x') = \left({1 \over \cosh(\eta/2)}\right)^{2} \sum^{}_{k} \left(\tanh{\eta \over 2}\right)^{2k} \phi_{k}(x)\phi^{*}_{k}(x') ,$$ where we now use $x$ and $x'$ for $x_{1}$ and $x_{1}'$ for simplicity. The trace of this density matrix is $1$. It is also straightforward to compute the integral for $Tr(\rho^{2})$. The calculation leads to $$Tr\left(\rho^{2} \right) = \left({1 \over \cosh(\eta/2)}\right)^{4} \sum^{}_{k} \left(\tanh{\eta \over 2}\right)^{4k} .$$ This sum is $1/\cosh\eta$, which is less than one. This is of course due to the fact that we are averaging over the $x_{2}$ variable which we do not measure. The standard way to measure this ignorance is to calculate the entropy defined as [@wiya63] $$S = - Tr\left(\rho \ln(\rho) \right) ,$$ where $S$ is measured in units of Boltzmann’s constant. If we use the density matrix given in Eq.(\[dmat\]), the entropy becomes $$S = 2 \left\{\cosh^{2}\left({\eta \over 2}\right) \ln\left(\cosh{\eta \over 2}\right) - \sinh^{2}\left({\eta \over 2}\right) \ln\left(\sinh{\eta \over 2} \right)\right\} .$$ This expression can be translated into a more familiar form if we use the notation $$\tanh{\eta \over 2} = \exp\left(-{\hbar\omega \over 2kT}\right) ,$$ where $\omega$ is given in Eq.(\[omega\]). The ratio $\hbar\omega/kT$ is a dimensionless variable. In terms of this variable, the entropy takes the form $$S = \left({\hbar\omega \over kT}\right) \frac{1}{\exp(\hbar\omega/kT) - 1} - \ln\left[1 - \exp(-\hbar\omega/kT)\right] .$$ This is the familiar expression for the entropy of an oscillator state in thermal equilibrium. Thus, for this oscillator system, we can relate our ignorance to the temperature. 
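The purity and entropy results above can be verified numerically from the eigenvalues of the reduced density matrix, $p_k = \tanh^{2k}(\eta/2)/\cosh^2(\eta/2)$; a minimal sketch (our own check):

```python
import math

# Eigenvalues of the reduced density matrix of Eq.(dmat)
eta = 1.3
t2 = math.tanh(eta / 2) ** 2
p = [(1 - t2) * t2**k for k in range(400)]   # p_k = tanh^{2k}(eta/2)/cosh^2(eta/2)

# Tr(rho) = 1 and Tr(rho^2) = 1/cosh(eta) < 1
assert abs(sum(p) - 1.0) < 1e-12
assert abs(sum(pk * pk for pk in p) - 1 / math.cosh(eta)) < 1e-12

# S = -sum p_k ln p_k reproduces the closed form quoted above
S_series = -sum(pk * math.log(pk) for pk in p)
ch, sh = math.cosh(eta / 2), math.sinh(eta / 2)
S_closed = 2 * (ch**2 * math.log(ch) - sh**2 * math.log(sh))
assert abs(S_series - S_closed) < 1e-12
```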
It is interesting to note that the coupling strength measured by $\eta$ can be related to the temperature variable. Dirac’s Harmonic Oscillators {#dirosc} ============================ Paul A. M. Dirac is known to us through the Dirac equation for spin-1/2 particles. But his main interest was in foundational problems. First, Dirac was never satisfied with the probabilistic formulation of quantum mechanics. This is still one of the hotly debated subjects in physics. Second, if we tentatively accept the present form of quantum mechanics, Dirac insisted that it has to be consistent with special relativity. He wrote several important papers on this subject. Let us look at some of them. ![Space-time picture of quantum mechanics. There are quantum excitations along the space-like longitudinal direction, but there are no excitations along the time-like direction. The time-energy relation is a c-number uncertainty relation.[]{data-label="quantum"}](quantum.eps) During World War II, Dirac was looking into the possibility of constructing representations of the Lorentz group using harmonic oscillator wave functions [@dir45]. The Lorentz group is the language of special relativity, and the present form of quantum mechanics starts with harmonic oscillators. Presumably, therefore, he was interested in making quantum mechanics Lorentz-covariant by constructing representations of the Lorentz group using harmonic oscillators. In his 1945 paper [@dir45], Dirac considers the Gaussian form $$\exp\left\{- {1 \over 2}\left(x^2 + y^2 + z^2 + t^2\right)\right\} .$$ We note that this Gaussian form is in the $(x,~y,~z,~t)$ coordinate variables. Thus, if we consider a Lorentz boost along the $z$ direction, we can drop the $x$ and $y$ variables, and write the above equation as $$\label{ground} \exp\left\{- {1 \over 2}\left(z^2 + t^2\right)\right\} .$$ This is a strange expression for those who believe in Lorentz invariance.
The expression $$\exp\left\{- {1 \over 2}\left(z^2 - t^2\right)\right\}$$ is invariant, but Dirac’s Gaussian form of Eq.(\[ground\]) is not. On the other hand, this expression is consistent with his earlier papers on the time-energy uncertainty relation [@dir27]. In those papers, Dirac observes that there is a time-energy uncertainty relation, while there are no excitations along the time axis. He called this the “c-number time-energy uncertainty” relation. When one of us (YSK) was talking with Dirac in 1978, he clearly mentioned this point again. He said further that this is one of the stumbling blocks in combining quantum mechanics with relativity. This situation is illustrated in Fig. \[quantum\]. ![Lorentz boost in the light-cone coordinate system.[]{data-label="licone"}](licone.eps) Let us look at Fig. \[quantum\] carefully. This figure is a pictorial representation of Dirac’s Eq.(\[ground\]), with localization in both space and time coordinates. Then Dirac’s fundamental question would be how to make this figure covariant. This is where Dirac stops. However, this is not the end of the Dirac story. ![Effect of the Lorentz boost on the space-time wave function. The circular space-time distribution in the rest frame becomes Lorentz-squeezed into an elliptic distribution.[]{data-label="ellipse"}](ellipse.eps) Dirac’s interest in harmonic oscillators did not stop with his 1945 paper on the representations of the Lorentz group. In his 1963 paper [@dir63], he constructed a representation of the $O(3,2)$ deSitter group using two coupled harmonic oscillators. This paper contains not only the mathematics of combining special relativity with the quantum mechanics of quarks inside hadrons, but also forms the foundations of the two-mode squeezed states which are so essential in modern quantum optics [@knp91]. Dirac did not know this when he wrote his 1963 paper. Furthermore, the $O(3,2)$ deSitter group contains the Lorentz group $O(3,1)$ as a subgroup.
Thus, Dirac’s oscillator representation of the deSitter group essentially contains all the mathematical ingredients of what we are doing in this paper. Addendum to Dirac’s Oscillators {#adden} =============================== In 1949, the Reviews of Modern Physics published a special issue to celebrate Einstein’s 70th birthday. This issue contains Dirac’s paper entitled “Forms of Relativistic Dynamics” [@dir49]. In this paper, he introduced his light-cone coordinate system, in which a Lorentz boost becomes a squeeze transformation. When the system is boosted along the $z$ direction, the transformation takes the form $$\label{boostm} \pmatrix{z' \cr t'} = \pmatrix{\cosh(\eta/2) & \sinh(\eta/2) \cr \sinh(\eta/2) & \cosh(\eta/2) } \pmatrix{z \cr t} .$$ This is not a rotation, and people still feel strange about this form of transformation. In the same paper [@dir49], Dirac introduced his light-cone variables, defined as $$\label{lcvari} u = (z + t)/\sqrt{2} , \qquad v = (z - t)/\sqrt{2} .$$ In terms of these variables, the boost transformation of Eq.(\[boostm\]) takes the form $$\label{lorensq} u' = e^{\eta/2 } u , \qquad v' = e^{-\eta/2 } v .$$ The $u$ variable becomes expanded while the $v$ variable becomes contracted, as is illustrated in Fig. \[licone\]. Their product $$uv = {1 \over 2}(z + t)(z - t) = {1 \over 2}\left(z^2 - t^2\right)$$ remains invariant. In Dirac’s picture, the Lorentz boost is a squeeze transformation. If we combine Fig. \[quantum\] and Fig. \[licone\], then we end up with Fig. \[ellipse\]. In mathematical formulae, this transformation changes the Gaussian form of Eq.(\[ground\]) into $$\label{eta} \psi_{\eta }(z,t) = \left({1 \over \pi }\right)^{1/2} \exp\left\{-{1\over 2}\left(e^{-\eta }u^{2} + e^{\eta}v^{2}\right)\right\} .$$ Let us go back to Sec. \[quantu\] on the coupled oscillators. The above expression is the same as Eq.(\[eq.13\]). The $x_{1}$ variable has now become the longitudinal variable $z$, and the $x_{2}$ variable the time-like variable $t$.
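A few lines of Python (our illustration, using arbitrary sample values) confirm that the boost matrix of Eq.(\[boostm\]) acts on Dirac’s light-cone variables exactly as in Eq.(\[lorensq\]), and that the product $uv$ is invariant:

```python
import math

def boost(z, t, eta):
    """Lorentz boost of Eq. (boostm) acting on the (z, t) pair."""
    c, s = math.cosh(eta / 2), math.sinh(eta / 2)
    return c * z + s * t, s * z + c * t

def light_cone(z, t):
    """Dirac's light-cone variables of Eq. (lcvari)."""
    return (z + t) / math.sqrt(2), (z - t) / math.sqrt(2)

eta, z, t = 0.9, 1.3, -0.4          # sample values, chosen arbitrarily
zp, tp = boost(z, t, eta)
u, v = light_cone(z, t)
up, vp = light_cone(zp, tp)
```

The check works for any $(z,t,\eta)$, since $(c+s)=e^{\eta/2}$ and $(c-s)=e^{-\eta/2}$ for $c=\cosh(\eta/2)$, $s=\sinh(\eta/2)$.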
We can use the coupled harmonic oscillators as the starting point of relativistic quantum mechanics. This allows us to translate the quantum mechanics of two coupled oscillators defined over the space of $x_{1}$ and $x_{2}$ into the quantum mechanics defined over the space-time region of $z$ and $t$. This form becomes Eq.(\[ground\]) when $\eta$ becomes zero. The transition from Eq.(\[ground\]) to Eq.(\[eta\]) is a squeeze transformation. It is now possible to combine what Dirac observed into a covariant formulation of the harmonic oscillator system. First, we can combine his c-number time-energy uncertainty relation described in Fig. \[quantum\] and his light-cone coordinate system of Fig. \[licone\] into a picture of covariant space-time localization given in Fig. \[ellipse\]. In addition, there are two more homework problems which Dirac left us to solve. First, in defining the $t$ variable for the Gaussian form of Eq.(\[ground\]), Dirac did not specify the physics of this variable. If it were the calendar time, this form would vanish in the remote past and remote future. We do not deal with this kind of object in physics. What, then, is the physics of this time-like $t$ variable? ![Wigner in Einstein’s world. Einstein formulated special relativity, whose energy-momentum relation is valid for point particles as well as for particles with internal space-time structure. It was Wigner who formulated the framework for internal space-time symmetries by introducing his little groups, whose transformations leave the four-momentum of a given particle invariant.[]{data-label="dff22"}](dff22.eps) The Schrödinger quantum mechanics of the hydrogen atom deals with localized probability distributions. Indeed, the localization condition leads to the discrete energy spectrum. Here, the uncertainty relation is stated in terms of the spatial separation between the proton and the electron.
If we believe in Lorentz covariance, there must also be a time-separation between the two constituent particles, and an uncertainty relation applicable to this separation variable. Dirac did not say so in his papers of 1927 and 1945, but his “t” variable is applicable to this time-separation variable. This time-separation variable will be discussed in detail in Sec. \[feyosc\] for the case of relativistic extended particles. Second, as for the time-energy uncertainty relation, Dirac’s concern was how the c-number time-energy uncertainty relation without excitations can be combined with the uncertainties in the position space with excitations. Dirac’s 1927 paper was written before Wigner’s 1939 paper on the internal space-time symmetries of relativistic particles. Both of these questions can be answered in terms of the space-time symmetry of bound states in the Lorentz-covariant regime. In his 1939 paper, Wigner worked out the internal space-time symmetries of relativistic particles. He approached the problem by constructing the maximal subgroup of the Lorentz group whose transformations leave the given four-momentum invariant. As a consequence, the internal symmetry of a massive particle is like the three-dimensional rotation group. If we extend this concept to relativistic bound states, the space-time asymmetry which Dirac observed in 1927 is quite consistent with Einstein’s Lorentz covariance. The time variable can be treated separately. Furthermore, it is possible to construct a representation of Wigner’s little group for massive particles [@knp86]. As for the time-separation, it is also a variable governing the internal space-time symmetry, which can be linearly mixed when the system is Lorentz-boosted.
Feynman’s Oscillators {#feyosc} ====================== Quantum field theory has been quite successful in terms of Feynman diagrams based on the S-matrix formalism, but it is useful only for physical processes where a set of free particles becomes another set of free particles after interaction. Quantum field theory does not address the question of localized probability distributions and their covariance under Lorentz transformations. In order to address this question, Feynman [*et al.*]{} suggested harmonic oscillators [@fkr71]. Their idea is indicated in Fig. \[dff33\]. ![Feynman’s roadmap for combining quantum mechanics with special relativity. Feynman diagrams work for running waves, and they provide a satisfactory resolution for scattering states in Einstein’s world. For standing waves trapped inside an extended hadron, Feynman suggested harmonic oscillators as the first step.[]{data-label="dff33"}](dff33.eps) Before 1964 [@gell64], the hydrogen atom was used for illustrating bound states. These days, we use hadrons, which are bound states of quarks. Let us use the simplest hadron, consisting of two quarks bound together by an attractive force, consider their space-time positions $x_{a}$ and $x_{b}$, and use the variables $$X = (x_{a} + x_{b})/2 , \qquad x = (x_{a} - x_{b})/2\sqrt{2} .$$ The four-vector $X$ specifies where the hadron is located in space and time, while the variable $x$ measures the space-time separation between the quarks. According to Einstein, this space-time separation contains a time-like component which actively participates, as in Eq.(\[boostm\]), if the hadron is boosted along the $z$ direction. This boost can be conveniently described by the light-cone variables defined in Eq.(\[lcvari\]). Does this time-separation variable exist when the hadron is at rest? Yes, according to Einstein. In the present form of quantum mechanics, we pretend not to know anything about this variable.
Indeed, this variable belongs to Feynman’s rest of the universe. What do Feynman [*et al.*]{} say about this oscillator wave function? In their classic 1971 paper [@fkr71], Feynman [*et al.*]{} start with the following Lorentz-invariant differential equation: $$\label{osceq} {1\over 2} \left\{x^{2}_{\mu} - {\partial^{2} \over \partial x_{\mu }^{2}} \right\} \psi(x) = \lambda \psi(x) .$$ This partial differential equation has many different solutions depending on the choice of separable variables and boundary conditions. Feynman [*et al.*]{} insist on Lorentz-invariant solutions, which are not normalizable. On the other hand, if we insist on normalization, the ground-state wave function takes the form of Eq.(\[ground\]). It is then possible to construct a representation of the Poincaré group from the solutions of the above differential equation [@knp86]. If the system is boosted, the wave function becomes the one given in Eq.(\[eta\]). ![Lorentz-squeezed space-time and momentum-energy wave functions. As the hadron’s speed approaches that of light, both wave functions become concentrated along their respective positive light-cone axes. These light-cone concentrations lead to Feynman’s parton picture.[]{data-label="parton"}](parton.eps) This wave function becomes Eq.(\[ground\]) if $\eta$ becomes zero. The transition from Eq.(\[ground\]) to Eq.(\[eta\]) is a squeeze transformation. The wave function of Eq.(\[ground\]) is distributed within a circular region in the $u v$ plane, and thus in the $z t$ plane. On the other hand, the wave function of Eq.(\[eta\]) is distributed in an elliptic region with the light-cone axes as the major and minor axes, respectively. If $\eta$ becomes very large, the wave function becomes concentrated along one of the light-cone axes. Indeed, the form given in Eq.(\[eta\]) is a Lorentz-squeezed wave function. This squeeze mechanism is illustrated in Fig. \[ellipse\].
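The squeeze preserves the normalization of the wave function, since $du\,dv = dz\,dt$ and the product of the two Gaussian widths in Eq.(\[eta\]) is independent of $\eta$. A short numerical sketch (ours, with arbitrarily chosen grid parameters) checks this on a grid:

```python
import numpy as np

def psi_eta(z, t, eta):
    """Lorentz-squeezed ground-state wave function of Eq. (eta)."""
    u = (z + t) / np.sqrt(2)
    v = (z - t) / np.sqrt(2)
    return (1 / np.pi) ** 0.5 * np.exp(
        -0.5 * (np.exp(-eta) * u**2 + np.exp(eta) * v**2))

# grid over the (z, t) plane; half-width and resolution are our choices
zs = np.linspace(-12, 12, 1201)
Z, T = np.meshgrid(zs, zs)
dz = zs[1] - zs[0]

norms = [np.sum(np.abs(psi_eta(Z, T, eta)) ** 2) * dz * dz
         for eta in (0.0, 0.8, 1.6)]
```

All three norms come out equal to one, to within the accuracy of the Riemann sum.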
There are many different solutions of the Lorentz-invariant differential equation of Eq.(\[osceq\]). The solution given in Eq.(\[eta\]) is not Lorentz invariant, but it is covariant. It is normalizable in the $t$ variable, as well as in the space-separation variable $z$. It is indeed possible to construct Wigner’s $O(3)$-like little group for massive particles [@wig39], and thus the representation of the Poincaré group [@knp86]. Our next question is whether this formalism has anything to do with the real world. ![Parton distribution function. Theory and experiment.[]{data-label="hussar"}](hussar.eps) In 1969, Feynman observed that a fast-moving hadron can be regarded as a collection of many “partons” whose properties appear to be quite different from those of the quarks [@fey69]. For example, the number of quarks inside a static proton is three, while the number of partons in a rapidly moving proton appears to be infinite. The question then is how the proton, which looks like a bound state of quarks to one observer, can appear different to an observer in a different Lorentz frame. Feynman made the following systematic observations.

- a) The picture is valid only for hadrons moving with velocity close to that of light.

- b) The interaction time between the quarks becomes dilated, and partons behave as free independent particles.

- c) The momentum distribution of partons becomes widespread as the hadron moves fast.

- d) The number of partons seems to be infinite or much larger than that of quarks.

Because the hadron is believed to be a bound state of two or three quarks, each of the above phenomena appears as a paradox, particularly b) and c) together. In order to resolve this paradox, let us write down the momentum-energy wave function corresponding to Eq.(\[eta\]).
If we let the quarks have the four-momenta $p_{a}$ and $p_{b}$, it is possible to construct two independent four-momentum variables [@fkr71] $$P = p_{a} + p_{b} , \qquad q = \sqrt{2}(p_{a} - p_{b}) ,$$ where $P$ is the total four-momentum. It is thus the hadronic four-momentum. The variable $q$ measures the four-momentum separation between the quarks. Their light-cone variables are $$\label{conju} q_{u} = (q_{0} - q_{z})/\sqrt{2} , \qquad q_{v} = (q_{0} + q_{z})/\sqrt{2} .$$ The resulting momentum-energy wave function is $$\label{phi} \phi_{\eta }(q_{z},q_{0}) = \left({1 \over \pi }\right)^{1/2} \exp\left\{-{1\over 2}\left(e^{\eta}q_{u}^{2} + e^{-\eta}q_{v}^{2}\right)\right\} .$$ Because we are using harmonic oscillators here, the mathematical form of the above momentum-energy wave function is identical to that of the space-time wave function. The Lorentz-squeeze properties of these wave functions are also the same. This aspect of the squeeze has been exhaustively discussed in the literature [@knp86; @kn77par; @kim89]. When the hadron is at rest with $\eta = 0$, both wave functions behave like those for the static bound state of quarks. As $\eta$ increases, the wave functions become continuously squeezed until they become concentrated along their respective positive light-cone axes. Let us look at the $z$-axis projection of the space-time wave function. Indeed, the width of the quark distribution increases as the hadronic speed approaches the speed of light. The position of each quark appears widespread to the observer in the laboratory frame, and the quarks appear like free particles. The momentum-energy wave function behaves just like the space-time wave function, as is shown in Fig. \[parton\]. The longitudinal momentum distribution becomes widespread as the hadronic speed approaches the velocity of light.
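For the Gaussian forms of Eq.(\[eta\]) and Eq.(\[phi\]), the longitudinal widths can be computed in closed form: since $\langle u^{2}\rangle = e^{\eta}/2$ and $\langle v^{2}\rangle = e^{-\eta}/2$, one finds $\langle z^{2}\rangle = \langle q_{z}^{2}\rangle = \cosh\eta/2$, so the space-time and momentum-energy spreads grow together as the hadron is boosted. A small numerical sketch (our illustration, with arbitrarily chosen grid parameters):

```python
import numpy as np

def z_width_sq(eta, n=1601, half_width=14.0):
    """<z^2> under |psi_eta|^2 of Eq. (eta), evaluated on a grid."""
    zs = np.linspace(-half_width, half_width, n)
    Z, T = np.meshgrid(zs, zs)
    U, V = (Z + T) / np.sqrt(2), (Z - T) / np.sqrt(2)
    w = np.exp(-(np.exp(-eta) * U**2 + np.exp(eta) * V**2))
    return np.sum(Z**2 * w) / np.sum(w)

widths = {eta: z_width_sq(eta) for eta in (0.0, 0.5, 1.0, 1.5)}
```

By the symmetry of the two wave functions under $u \leftrightarrow q_{v}$, $v \leftrightarrow q_{u}$, the same numbers give the $q_{z}$-axis width of Eq.(\[phi\]).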
This is in contradiction with our expectation from non-relativistic quantum mechanics that the width of the momentum distribution is inversely proportional to that of the position wave function. Our expectation is that if the quarks are free, they must have sharply defined momenta, not a widespread distribution. However, according to our Lorentz-squeezed space-time and momentum-energy wave functions, the space-time width and the momentum-energy width increase in the same direction as the hadron is boosted. This is of course an effect of Lorentz covariance. This indeed is the key to the resolution of the quark-parton paradox [@knp86; @kn77par]. After these qualitative arguments, we are interested in whether Lorentz boosts of bound-state wave functions defined in the hadronic rest frame could lead to parton distribution functions. If we start with the ground-state Gaussian form for the three-quark wave function of the proton, the parton distribution function appears Gaussian, as indicated in Fig. \[hussar\]. This Gaussian form is compared with the experimental distribution in the same figure. In the large-$x$ region, the agreement is excellent, but it is not satisfactory for small values of $x$. In this region, there is a complication called the “sea quarks.” However, good sea-quark physics starts from good valence-quark physics. Figure \[hussar\] indicates that the boosted ground-state wave function provides good valence-quark physics. Feynman’s parton picture is one of the most controversial models proposed in the 20th century. The original model is valid only in Lorentz frames where the initial proton moves with infinite momentum. It is gratifying to note that this model can be obtained as a limiting case of a covariant model which reproduces the quark model in the frame where the proton is at rest.
Concluding Remarks {#concl .unnumbered} ================== The major strength of the coupled oscillator system is that its classical mechanics is known to every physicist. Not too well known is the fact that this simple device can serve as an analog computer for many of the current problems in physics. This oscillator system was very useful in illustrating Feynman’s rest of the universe [@hkn99ajp]. In this report, we have shown first that the coupled oscillator system can serve as an illustrative example of the concept of entanglement, and that Feynman’s rest of the universe is a special case of entanglement. Conversely, the rest of the universe can be extended to the concept of entanglement. It was also noted that the coupled-oscillator system provides the mathematical basis for the covariant harmonic oscillators. It can also translate the problems of entanglement to the space and time variables. It is well known that harmonic oscillators provide bridges between theories. In this paper, we have seen that the coupled harmonic oscillators can serve as a bridge between Dirac and Feynman, and as a bridge between coupled oscillators and harmonic oscillators in the Lorentz-covariant world. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank G. S. Agarwal, H. Hammer, and A. Vourdas for helpful discussions on the precise definition of the word “entanglement” applicable to coupled systems. [99]{} S. S. Schweber, [*An Introduction to Relativistic Quantum Field Theory*]{} (Row-Peterson, Elmsford, New York, 1961). A. L. Fetter and J. D. Walecka, [*Quantum Theory of Many Particle Systems*]{} (McGraw-Hill, New York, 1971). Y. S. Kim and M. E. Noz, [*Phase Space Picture of Quantum Mechanics*]{} (World Scientific, Singapore, 1991). D. Han, Y. S. Kim, and M. E. Noz, Am. J. Phys. [**67**]{}, 61 (1999). P. K. Aravind, Am. J. Phys. [**57**]{}, 309 (1989). D. Han, Y. S. Kim, and M. E. Noz, J. Math. Phys. [**36**]{}, 3940 (1995). Y. S. Kim, M. E. Noz, and S. H.
Oh, Am. J. Phys. [**47**]{}, 892 (1979). Y. S. Kim and M. E. Noz, [*Theory and Applications of the Poincaré Group*]{} (Reidel, Dordrecht, 1986). R. P. Feynman, [*Statistical Mechanics*]{} (Benjamin/Cummings, Reading, MA, 1972). E. P. Wigner and M. M. Yanase, Proc. Natl. Acad. Sci. U.S.A. [**49**]{}, 910 (1963). See also J. von Neumann, [*Mathematische Grundlagen der Quantenmechanik*]{} (Springer, Berlin, 1932). See also J. von Neumann, [*Mathematical Foundations of Quantum Mechanics*]{} (Princeton University, Princeton, 1955). P. A. M. Dirac, Proc. Roy. Soc. (London) [**A183**]{}, 284 (1945). P. A. M. Dirac, Proc. Roy. Soc. (London) [**A114**]{}, 234 and 710 (1927). P. A. M. Dirac, J. Math. Phys. [**4**]{}, 901 (1963). P. A. M. Dirac, Rev. Mod. Phys. [**21**]{}, 392 (1949). R. P. Feynman, M. Kislinger, and F. Ravndal, Phys. Rev. D [**3**]{}, 2706 (1971). M. Gell-Mann, Phys. Lett. [**13**]{}, 598 (1964). E. Wigner, Ann. Math. [**40**]{}, 149 (1939). R. P. Feynman, [*The Behavior of Hadron Collisions at Extreme Energies*]{}, in [*High Energy Collisions*]{}, Proceedings of the Third International Conference, Stony Brook, New York, edited by C. N. Yang [*et al.*]{}, pp. 237-249 (Gordon and Breach, New York, 1969). Y. S. Kim and M. E. Noz, Phys. Rev. D [**15**]{}, 335 (1977). Y. S. Kim, Phys. Rev. Lett. [**63**]{}, 348 (1989). [^1]: electronic address: yskim@physics.umd.edu [^2]: electronic address: noz@nucmed.med.nyu.edu
--- abstract: 'Shokurov conjectured that the set of all log canonical thresholds on varieties of bounded dimension satisfies the ascending chain condition. In this paper we prove that the conjecture holds for log canonical thresholds on smooth varieties and, more generally, on locally complete intersection varieties and on varieties with quotient singularities.' address: - 'Department of Mathematics, University of Utah, 155 South 1400 East, Salt Lake City, UT 84112-0090, USA' - 'Department of Mathematics, University of Illinois at Chicago, 851 South Morgan Street (M/C 249), Chicago, IL 60607-7045, USA' - 'Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA' author: - Tommaso de Fernex - Lawrence Ein - Mircea Mustaţă title: 'Shokurov’s ACC Conjecture for log canonical thresholds on smooth varieties' --- Introduction ============ Let $k$ be an algebraically closed field of characteristic zero. Log canonical varieties are varieties with mild singularities that provide the most general context for the Minimal Model Program. More generally, one considers the log canonicity condition on pairs $(X,\fra^t)$, where $\fra$ is a proper ideal sheaf on $X$ (most of the time, it is the ideal of an effective Cartier divisor), and $t$ is a nonnegative real number. Given a log canonical variety $X$ over $k$, and a proper nonzero ideal sheaf $\fra$ on $X$, one defines the [*log canonical threshold*]{} $\operatorname{lct}(\fra)$ of the pair $(X,\fra)$. This is the largest number $t$ such that the pair $(X,\fra^t)$ is log canonical. One makes the convention $\operatorname{lct}(0) = 0$ and $\operatorname{lct}(\O_X) = \infty$. The log canonical threshold is a fundamental invariant in birational geometry, see for example [@Kol2], [@EM2], or Chapter 9 in [@positivity].
Shokurov’s ACC Conjecture [@Sho] says that the set of all log canonical thresholds on varieties of any fixed dimension satisfies the ascending chain condition, that is, it contains no infinite strictly increasing sequences. This conjecture attracted considerable interest due to its implications for the Termination of Flips Conjecture (see [@Birkar] for a result in this direction). The first unconditional results on sequences of log canonical thresholds on smooth varieties of arbitrary dimension have been obtained in [@dFM], and they were subsequently reproved and strengthened in [@Kol1]. The main goal of this paper is to prove Shokurov’s ACC Conjecture for log canonical thresholds on smooth varieties and, more generally, on varieties that are locally complete intersection (l.c.i. for short). Our first result deals with the smooth case. \[thm:intro:T\_n\^sm\] For every $n$, the set $$\cT_n^{\rm sm} := \{\operatorname{lct}(\fra)\mid \text{$X$ is smooth, $\dim X = n$, $\fra\subsetneq\cO_X$} \}$$ of log canonical thresholds on smooth varieties of dimension $n$ satisfies the ascending chain condition. As we will see, every log canonical threshold on a variety with quotient singularities can be written as a log canonical threshold on a smooth variety of the same dimension. Therefore for every $n$ the set $$\cT_n^{\rm quot} := \{\operatorname{lct}(\fra)\mid \text{$X$ has quotient singularities, $\dim X = n$, $\fra\subsetneq\cO_X$} \}$$ is equal to $\cT_n^{\rm sm}$, and thus the ascending chain condition also holds for log canonical thresholds on varieties with quotient singularities. In order to extend the result to log canonical thresholds on locally complete intersection varieties, we consider a more general version of log canonical thresholds.
Given a variety $X$ and an ideal sheaf $\frb$ on $X$ such that the pair $(X,\frb)$ is log canonical, for every nonzero ideal sheaf $\fra\subsetneq\cO_X$ we define the *mixed log canonical threshold* $\operatorname{lct}_{(X,\frb)}(\fra)$ to be the largest number $c$ such that the pair $(X,\frb\cdot\fra^c)$ is log canonical. Note that when $\frb=\cO_X$, this is nothing but $\operatorname{lct}(\fra)$. Again, one sets $\operatorname{lct}_{(X,\frb)}(0) = 0$ and $\operatorname{lct}_{(X,\frb)}(\cO_X)=\infty$. The following is our main result. \[thm:intro:M\_n\^lci\] For every $n$, the set $$\cM_n^{\rm l.c.i.} := \{\operatorname{lct}_{(X,\frb)}(\fra)\mid \text{$X$ is l.c.i., $\dim X = n$, $\fra,\frb\subseteq\cO_X$, $\fra \ne \cO_X$, $(X,\frb)$ log canonical\,} \}$$ of mixed log canonical thresholds on l.c.i. varieties of dimension $n$ satisfies the ascending chain condition. By restricting to the case $\frb = \O_X$, we obtain the following immediate corollary. \[cor:intro:T\_n\^lci\] For every $n$, the set $$\cT_n^{\rm l.c.i.} := \{\operatorname{lct}(\fra)\mid \text{$X$ is log canonical and l.c.i., $\dim X = n$, $\fra\subsetneq\cO_X$} \}$$ of log canonical thresholds on log canonical l.c.i. varieties of dimension $n$ satisfies the ascending chain condition. We will use Inversion of Adjunction (in the form treated in [@EM3]) to reduce Theorem \[thm:intro:M\_n\^lci\] to the analogous statement in which $X$ ranges over smooth varieties. More precisely, we show that all sets $$\cM_n^{\rm sm} := \{\operatorname{lct}_{(X,\frb)}(\fra)\mid \text{$X$ is smooth, $\dim X = n$, $\fra,\frb\subseteq\cO_X$, $\fra \ne \cO_X$, $(X,\frb)$ log canonical} \}$$ satisfy the ascending chain condition. It follows by Inversion of Adjunction that every mixed log canonical threshold of the form $\operatorname{lct}_{(X,\frb)}(\fra)$, with $\fra$ and $\frb$ ideal sheaves on an l.c.i. variety $X$, can be expressed as a mixed log canonical threshold on a (typically higher dimensional) smooth variety. 
This is the step that requires us to work with mixed log canonical thresholds. The key observation that makes this approach work is that if $X$ is an l.c.i. variety with log canonical singularities, then $\dim_kT_xX\leq 2\dim X$ for every $x\in X$. This implies that the above reduction to the smooth case keeps the dimension of the ambient variety bounded. The proofs of the above results use a general method of associating to a sequence of ideals of polynomials over a field $k$, an ideal of power series over a field extension of $k$. The original construction considered in [@dFM] is a standard application of nonstandard methods, and relies on the use of ultrafilters. This construction was subsequently replaced in [@Kol1] by a purely algebro-geometric construction, that gives a *generic limit* by using a sequence of $\frm$-adic approximations and field extensions. The two constructions are similar in nature, and either construction can be employed to prove the results of this paper. We chose to present the proofs using the second construction, which is geometrically more explicit. A key ingredient is the following effective $\frm$-adic semicontinuity property for log canonical thresholds (that we will only use in the case when $X = \A^n$ and $E$ lies over a point of $\A^n$). \[thm:intro:m-adic-semicont:ideals\] Let $X$ be a log canonical variety, and let $\fra \subsetneq \O_X$ be a proper ideal. Suppose that $E$ is a prime divisor over $X$ computing $\operatorname{lct}(\fra)$, and consider the ideal sheaf $\frq := \{ h \in \O_X \mid \operatorname{ord}_E(h) > \operatorname{ord}_E(\fra) \}$. If $\frb\subseteq\cO_X$ is an ideal such that $\frb+\frq=\fra+\frq$, then after possibly restricting to an open neighborhood of the center of $E$, we have $\operatorname{lct}(\frb)=\operatorname{lct}(\fra)$. 
This result (for principal ideals) was first proven by Kollár in [@Kol1] using deep results in the Minimal Model Program from [@BCHM] and a theorem on Inversion of Adjunction from [@Kawakita]. We give an elementary proof of Theorem \[thm:intro:m-adic-semicont:ideals\] which only uses the Connectedness Theorem of Shokurov and Kollár (see Theorem 7.4 in [@Kol2]). We note that in the case of a divisor $E$ with zero-dimensional center, Kollár’s proof extends to cover also ideals in a power series ring, and this fact is important for his approach. In fact, as we will see, this version can be formally deduced from the statement of Theorem \[thm:intro:m-adic-semicont:ideals\] (see Corollary \[formal\_case\]). It is interesting to observe how, in the end, all the results of this paper only rely on basic facts in birational geometry, such as Resolution of Singularities and the Connectedness Theorem and, for the l.c.i. case, on Inversion of Adjunction. We expect however that new ideas and more sophisticated techniques will be necessary to tackle the ACC Conjecture in its general formulation. [**Acknowledgment**]{}. We are grateful to Shihoko Ishii and Angelo Vistoli for useful discussions and correspondence, and to János Kollár for his comments and suggestions on previous versions of our work. Furthermore, as we have already mentioned, two key ideas we use in this paper come from Kollár’s article [@Kol1]. Generalities on log canonical thresholds ======================================== Let $k$ be a field of characteristic zero. In what follows $X$ will be either a normal and $\QQ$-Gorenstein variety over $k$, or $\operatorname{Spec}\left(k{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}\right)$. We recall the definition of log canonical threshold in a slightly more general version, and discuss some of the properties that will be needed later. 
For the basic facts about log canonical pairs in the setting of algebraic varieties, see [@Kol2] or Chapter 9 in [@positivity], while for the case of the spectrum of a formal power series ring we refer to [@dFM]. The key point is that by [@Temkin], log resolutions exist also in the latter case, and therefore the usual theory of log canonical pairs carries through. Suppose that $X$ is as above. Let $\fra$ and $\frb$ be nonzero coherent sheaves of ideals on $X$ with $\fra \ne \cO_X$, and assume that the pair $(X,\frb)$ is log canonical. We consider the following relative version of the definition of log canonical threshold (there is an analogous definition in the language of $\Q$-divisors that is broadly used in the literature): we define the *mixed log canonical threshold* of $\fra$ with respect to the pair $(X,\frb)$ to be $$\operatorname{lct}_{(X,\frb)}(\fra):=\sup\{c\geq 0\mid \text{$(X,\frb\.\fra^c)$ is log canonical}\}.$$ Whenever the ambient variety $X$ is understood, we drop it from the notation, and simply write $\operatorname{lct}_\frb(\fra)$. Observe that in the case $\frb = \cO_X$, the mixed log canonical threshold $\operatorname{lct}_{\cO_X}(\fra)$ is nothing else than the usual [*log canonical threshold*]{} $\operatorname{lct}(\fra)$ of $\fra$. We make the convention $\operatorname{lct}_\frb(0) = 0$ and $\operatorname{lct}_\frb(\cO_X) = \infty$. The fact that log canonicity can be checked on a log resolution allows us to describe the mixed log canonical threshold in terms of any such resolution. Suppose that $\pi\colon Y\to X$ is a log resolution of $\fra\cdot\frb$, and write $\fra\cdot\cO_Y=\cO(-\sum_ia_iE_i)$, $\frb\cdot\cO_Y=\cO(-\sum_ib_iE_i)$, and $K_{Y/X}=\sum_ik_iE_i$. 
Still assuming that $\fra$ and $\frb$ are nonzero ideals, $\fra \ne \cO_X$, and $(X,\frb)$ is log canonical (that is, $\operatorname{lct}(\frb) \ge 1$), it follows from the characterization of log canonicity in terms of a log resolution that $$\label{eq0} \operatorname{lct}_{\frb}(\fra) =\min\left\{\frac{k_i+1-b_i}{a_i}\mid a_i>0\right\}.$$ We see from the above formula that the mixed log canonical threshold is a rational number. Note also that it is zero if and only if there is $i$ such that $a_i>0$ and $b_i = k_i+1$ (in other words, if $(X,\frb)$ is not Kawamata log terminal and there is a non-klt center contained in the zero-locus of $\fra$). It is convenient to use also a local version of the (mixed) log canonical threshold. For every point $p\in V(\fra)$ such that the pair $(X,\frb)$ is log canonical in some neighborhood of $p$, if in (\[eq0\]) we take the minimum only over those $i$ such that $p\in \pi(E_i)$, we get the *mixed log canonical threshold at $p$*, denoted $\operatorname{lct}_{(X,\frb),p}(\fra)$. This is the maximum of $\operatorname{lct}_{\frb\vert_U}(\fra\vert_U)$, when $U$ ranges over the open neighborhoods of $p$. When $\frb=\cO_X$, we simply write $\operatorname{lct}_p(\fra)$. \[rem0\] It follows from the description in terms of a log resolution that if $X=U_1\cup\ldots\cup U_r$, with $U_j$ open, then $\operatorname{lct}_{\frb}(\fra)=\min_j\operatorname{lct}_{\frb\vert_{U_j}}(\fra\vert_{U_j})$. \[rem1\] If $\frb$ and $\fra$ are as above and $c: =\operatorname{lct}_{\frb}(\fra)$, then $\operatorname{lct}(\frb\cdot \fra^{c})=1$ (where, of course, $\operatorname{lct}(\frb\cdot\fra^c)$ is the largest nonnegative $q$ such that the pair $(X,\frb^q\cdot\fra^{qc})$ is log canonical). Indeed, by assumption the pair $(X,\frb\cdot\fra^c)$ is log canonical, and for every $\alpha>1$ the pair $(X,(\frb\cdot\fra^c)^{\alpha})$ is not log canonical since $(X,\frb\cdot\fra^{c\alpha})$ is not. 
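To illustrate formula (\[eq0\]) with a standard computation (our illustration, not taken from the text), take $X={\mathbf A}^2$ and $\fra=\frb=(x,y)$; the blow-up of the origin is already a log resolution of $\fra\cdot\frb$:

```latex
% X = \mathbf{A}^2 = \operatorname{Spec} k[x,y], \quad \fra = \frb = (x,y).
% The blow-up of the origin has a single exceptional divisor E, with
%   a_E = \operatorname{ord}_E(\fra) = 1, \quad
%   b_E = \operatorname{ord}_E(\frb) = 1, \quad
%   k_E = \operatorname{ord}_E(K_{Y/X}) = 1.
% The pair (X,\frb) is log canonical, since
%   \operatorname{lct}(\frb) = (k_E+1)/b_E = 2 \ge 1,
% and formula (eq0) gives
\operatorname{lct}_{\frb}(\fra)
  \;=\; \frac{k_E + 1 - b_E}{a_E}
  \;=\; \frac{1+1-1}{1}
  \;=\; 1.
% Direct check: \frb\cdot\fra^c = (x,y)^{1+c} is log canonical
% if and only if 1+c \le k_E+1 = 2, that is, c \le 1.
```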
Note however that the converse of this property does not hold: in fact, if $\operatorname{lct}(\frb) = 1$ and the zero-locus of $\fra$ does not contain any non-klt center of $(X,\frb)$, then $c = \operatorname{lct}_\frb(\fra) > 0$ and $\operatorname{lct}(\frb\.\fra^t) = 1$ for every $0 < t \le c$. \[remark3\] Suppose that $X$, $\fra$ and $\frb$ are as above, with $X$ smooth. For every $p\in V(\fra)$, we have $\operatorname{lct}_{(X,\frb),p}(\fra)=\operatorname{lct}_{(X',\frb')}(\fra')$, where $X'=\operatorname{Spec}(\widehat{\cO_{X,p}})$, and $\fra'$, $\frb'$ are the pull-backs of the ideals $\fra$ and, respectively, $\frb$ to $X'$. The argument follows as in the case $\frb=\cO_X$, for which we refer to [@dFM Proposition 2.9]. We will adopt the following terminology. Let $X$ and $\fra,\frb \subseteq \O_X$ be as above. We say that a prime divisor $E$ over $X$ [*computes $\operatorname{lct}_\frb(\fra)$*]{} if there is a log resolution $\pi \colon Y \to X$ such that, with the above notation, $E$ induces the same valuation as a divisor $E_i$ on $Y$ for which $a_i > 0$ and the minimum in (\[eq0\]) is achieved for this $i$. Suppose now that $k$ is algebraically closed. For every $n \ge 0$, we consider the sets $\cT_n^{\rm sm}$, $\cT_n^{\rm quot}$, $\cT_n^{\rm l.c.i.}$, $\cM_n^{\rm sm}$ and $\cM_n^{\rm l.c.i.}$ defined in the Introduction. Note that for $n=0$ all these sets are equal to $\{0\}$. It is convenient to extend the definition to $n < 0$ by declaring all these sets to be empty in this range. We will use the basic fact (cf.
[@dFM Proposition 3.3]) that for every $n\geq 1$, $$\cT_n^{\rm sm} = \{\operatorname{lct}_0(\fra) \mid \fra \subseteq (x_1,\ldots,x_n)\subset k[x_1,\ldots,x_n] \}.$$ Similarly, for every $n\geq 1$ we have $$\cM_n^{\rm sm} = \{\operatorname{lct}_{({\mathbf A}^n,\frb),0}(\fra) \mid \fra,\frb \subseteq k[x_1,\dots,x_n], \fra\subseteq (x_1,\ldots,x_n), \operatorname{lct}_0(\frb) \ge 1 \}.$$ The proof is analogous to the non-mixed case, and is left to the reader. Effective $\frm$-adic semicontinuity of log canonical thresholds ================================================================ Let $X$ be a log canonical variety defined over an algebraically closed field $k$ of characteristic zero. We start by proving Theorem \[thm:intro:m-adic-semicont:ideals\] in the special case of principal ideals. \[thm:m-adic-semicont\] Let $E$ be a divisor over $X$, computing $\operatorname{lct}(f)$ for some $f\in\cO(X)$. If $g\in\cO(X)$ is such that $\operatorname{ord}_E(f-g)>\operatorname{ord}_E(f)$, then after possibly replacing $X$ by an open neighborhood of the center of $E$, we have $\operatorname{lct}(f)=\operatorname{lct}(g)$. The interesting inequality is $\operatorname{lct}(g)\geq\operatorname{lct}(f)$, the reverse one being trivial. Note that if the center of $E$ on $X$ is equal to a point $p\in X$, then whenever $\operatorname{mult}_p(f-g) > \operatorname{ord}_E(f)$, we have $\operatorname{ord}_E(f-g)>\operatorname{ord}_E(f)$, and the theorem gives $\operatorname{lct}_p(g)=\operatorname{lct}_p(f)$. As already explained in the Introduction, a proof of the theorem was given in [@Kol1] relying on deep results in the Minimal Model Program and on Inversion of Adjunction. We give an elementary proof, only using the Connectedness Theorem. The inequality $\operatorname{lct}(f)\geq\operatorname{lct}(g)$ is easy.
Indeed, since $\operatorname{ord}_E(f-g)>\operatorname{ord}_E(f)$, we have $\operatorname{ord}_E(g)=\operatorname{ord}_E(f)$, and therefore, if $Y$ is the model over $X$ on which $E$ lies, then $$\operatorname{lct}(g)\leq \frac{\operatorname{ord}_E(K_{Y/X})+1}{\operatorname{ord}_E(g)} =\frac{\operatorname{ord}_E(K_{Y/X})+1}{\operatorname{ord}_E(f)}=\operatorname{lct}(f).$$ The first step in the proof of the reverse inequality is to reduce to the case when $\operatorname{ord}_F(f-g)>\operatorname{ord}_F(f)$ for *all* divisors $F$ that compute $\operatorname{lct}(f)$ on some log resolution of $fg$. In order to do this, let us choose a log resolution $\pi \colon Y \to X$ of $fg(f-g)$ such that the divisor $E$ appears on $Y$. Let $E_1,\dots,E_t$ be the irreducible components of the divisor $K_{Y/X} + \pi^*(\operatorname{div}(fg(f-g)))$. After relabelling the indices, we may assume that $E=E_1$. In the following, we denote $$a_i := \operatorname{ord}_{E_i}(f), \quad b_i := \operatorname{ord}_{E_i}(g), \quad \text{and}\quad k_i := \operatorname{ord}_{E_i}(K_{Y/X}).$$ In order to prove the theorem, it is enough to show that for every $q\in \pi(E)$ we have $\operatorname{lct}_q(g)\geq\operatorname{lct}_q(f)$ (note that $\operatorname{lct}_q(f)=\operatorname{lct}(f)$). Fix such $q$. After possibly replacing $X$ by an open neighborhood of $q$, we may assume that $q\in\pi(E_i)$ for every $i$. For every $m\geq 1$, we consider $f_m:=f^mh$ and $g_m:=g^mh$, where $h=f-g$. Note that by assumption $\pi$ is a log resolution for both $f_m$ and $g_m$. \[lem:f\_m-g\_m\] If $m\gg 1$, then i) $E_i$ computes $\operatorname{lct}(f_m)$ if and only if it computes $\operatorname{lct}(f)$ and, in addition, $$\frac{\operatorname{ord}_{E_i}(f)}{\operatorname{ord}_{E_i}(h)}=\min\left\{\frac{\operatorname{ord}_{E_j}(f)}{\operatorname{ord}_{E_j}(h)}\mid E_j\,\text{computes}\,\operatorname{lct}(f)\right\}.$$ ii)
For every $i$ such that $E_i$ computes $\operatorname{lct}(f_m)$, we have $\operatorname{ord}_{E_i}(f_m-g_m)>\operatorname{ord}_{E_i}(f_m)$. We put $c_i=\operatorname{ord}_{E_i}(h)$. Since $m\gg 1$, we have $$\frac{k_i+1}{a_i+\frac{c_i}{m}}\leq\frac{k_j+1}{a_j+\frac{c_j}{m}}$$ if and only if $\frac{k_i+1}{a_i}\leq\frac{k_j+1}{a_j}$, and either this inequality is strict, or $\frac{k_i+1}{c_i}\leq \frac{k_j+1}{c_j}$. This shows that every divisor $E_i$ that computes $\operatorname{lct}(f_m)$ also computes $\operatorname{lct}(f)$. Furthermore, if $E_i$ computes $\operatorname{lct}(f)$, then it computes $\operatorname{lct}(f_m)$ if and only if $\frac{k_i+1}{c_i}\leq\frac{k_j+1}{c_j}$ for every $j$ such that $E_j$ computes $\operatorname{lct}(f)$. Note that this holds if and only if $\frac{a_i}{c_i}\leq \frac{a_j}{c_j}$ (since $k_i+1=\operatorname{lct}(f)a_i$ and $k_j+1=\operatorname{lct}(f)a_j$), hence i). Suppose now that $E_i$ computes $\operatorname{lct}(f_m)$. It follows from i) and our hypothesis that $\frac{a_i}{c_i}\leq\frac{a_1}{c_1}<1$. Since $f_m-g_m=(f^m-g^m)h$, in order to prove ii) it is enough to show that $\operatorname{ord}_{E_i}(f^m-g^m)>m\cdot\operatorname{ord}_{E_i}(f)$. Note that $a_i<c_i$ implies $\operatorname{ord}_{E_i}(f)=\operatorname{ord}_{E_i}(g)$ (recall that $g=f-h$). We write $$f^m-g^m=(g+h)^m-g^m=\sum_{\ell=1}^m{{m}\choose {\ell}}h^{\ell}g^{m-\ell}.$$ For every $\ell\geq 1$ we have $\operatorname{ord}_{E_i}(h^{\ell}g^{m-{\ell}})>m\cdot\operatorname{ord}_{E_i}(f)$, hence $\operatorname{ord}_{E_i}(f^m-g^m)>m\cdot\operatorname{ord}_{E_i}(f)$. This completes the proof of the lemma. Observe that $\operatorname{lct}(f)=\lim_{m\to\infty} m\cdot\operatorname{lct}(f_m)$ and $\operatorname{lct}(g) =\lim_{m\to \infty} m\cdot\operatorname{lct}(g_m)$. 
Indeed, it follows from the definition that $$\operatorname{lct}(f_m)=\min_i\frac{k_i+1}{ma_i+c_i} =\frac{1}{m}\cdot\min_i\frac{k_i+1}{a_i+\frac{c_i}{m}},$$ which gives the first equality, and the second one follows in the same way. Thus, if we can prove the theorem for $f_m$ and $g_m$ in place of $f$ and $g$, for all $m \gg 1$, then we deduce the statement for $f$ and $g$. Therefore, by Lemma \[lem:f\_m-g\_m\], we are reduced to proving Theorem \[thm:m-adic-semicont\] in the case when there is a log resolution $\pi\colon Y\to X$ for $fg$ such that for all divisors $E_i$ on $Y$ that compute $\operatorname{lct}(f)$ we have $\operatorname{ord}_{E_i}(f-g)>\operatorname{ord}_{E_i}(f)$. We shall thus assume that this is the case. We keep the notation previously introduced, so that in particular $a_i = \operatorname{ord}_{E_i}(f)$ and $b_i = \operatorname{ord}_{E_i}(g)$ for every $i$. Recall also that we may assume $q\in \pi(E_i)$ for all $i$. \[lem:E\_i-E\_j\] Under the above assumptions, if $E_i$ is a divisor computing $\operatorname{lct}(f)$, then $\operatorname{ord}_{E_j}(f)=\operatorname{ord}_{E_j}(g)$ for every $j$ such that $E_i\cap E_j\neq\emptyset$. Let $p \in E_i \cap E_j$ be a general point, and let $y_i, y_j \in \O_{Y,p}$ be part of a regular system of parameters generating the images in $\cO_{Y,p}$ of the ideals defining $E_i$ and $E_j$, respectively. We have in $\cO_{Y,p}$ $$\pi^*(f)=u y_i^{a_i} y_j^{a_j} \quad\text{and}\quad \pi^*(g)=v y_i^{b_i} y_j^{b_j},$$ where $u,v \in \O_{Y,p}$ are invertible elements. By assumption, $\pi^*(f-g)=y_i^{a_i+1}w$ for some $w \in \O_{Y,p}$. This has two consequences. The first is that $b_i=a_i$. Furthermore, we see that $y_i^{-a_i}\pi^*(f)$ and $y_i^{-a_i}\pi^*(g)$ have the same restriction to $E_i$. This implies that $b_j = a_j$, which is the assertion in the lemma.
Let $c=\operatorname{lct}(f)$, and for every $i$ let $$\alpha_i := ca_i - k_i \quad\text{and}\quad \beta_i := cb_i - k_i.$$ Note that $\alpha_i \le 1$ for every $i$, and equality holds precisely for those $i$ such that $E_i$ computes $\operatorname{lct}(f)$. The above lemma says that for every $i$ such that $\alpha_i=1$, we have $\beta_i=1$, and more generally $\alpha_j = \beta_j$ for every $j$ such that $E_i\cap E_j\neq\emptyset$. To finish, we apply the main ingredient of the proof, namely, the Connectedness Theorem of Shokurov and Kollár (see Theorem 7.4 in [@Kol2]), which in our case says that the union $\cup_{\beta_j\geq 1}E_j$ is connected in a neighborhood of $\pi^{-1}(q)$. Since $q\in\pi(E_i)$ for every $i$, this implies that $\cup_{\beta_j\geq 1}E_j$ is connected. Let us look at an arbitrary divisor $E_i$ that computes $\operatorname{lct}(f)$, so that $\alpha_i=1$. We have seen that in this case $\beta_i=1$. If $E_j$ is any other divisor that meets $E_i$ and such that $\beta_j\geq 1$, then we have $1 \geq \alpha_j = \beta_j\geq 1$ by Lemma \[lem:E\_i-E\_j\], and therefore $\alpha_j = \beta_j = 1$. This implies by induction on $s$ that for every sequence of divisors $E_i,E_{j_1},\ldots,E_{j_s}$ such that any two consecutive divisors intersect, and such that $\beta_{j_{\ell}}\geq 1$ for all $\ell$, we have $\alpha_{j_{\ell}}=\beta_{j_{\ell}}=1$ for every $\ell$. Since the set $\cup_{\beta_j\geq 1}E_j$ is connected, we conclude that $\beta_{j}\leq 1$ for every $j$, and thus $\operatorname{lct}(g)\geq c$. This completes the proof of Theorem \[thm:m-adic-semicont\]. The above proof also gives the following statement. Suppose that $f$ and $g$ are as in Theorem \[thm:m-adic-semicont\], such that for *all* divisors $E_i$ over $X$ computing $\operatorname{lct}(f)=c$, we have $\operatorname{ord}_{E_i}(f-g)>\operatorname{ord}_{E_i}(f)$ (it is easy to see that it is enough to check this condition only on the divisors on a fixed log resolution of $f$).
By the theorem, after restricting to an open neighborhood of the non-klt locus of $(X,f^c)$ (this is the union of the centers of the divisors $E_i$ computing $\operatorname{lct}(f)$), we have $\operatorname{lct}(g)=c$. In addition, the proof shows that every divisor over $X$ that computes $\operatorname{lct}(g)$ also computes $\operatorname{lct}(f)$. Theorem \[thm:m-adic-semicont\] can easily be extended to ideals, as stated in Theorem \[thm:intro:m-adic-semicont:ideals\], as follows. We may assume that $X$ is affine. Again, it is immediate to see that the hypothesis implies that $\operatorname{lct}(\frb) \le \operatorname{lct}(\fra)$. In order to prove the reverse inequality, let $N$ be an integer larger than $\operatorname{lct}(\fra)$, and choose $N$ general linear combinations $f_1,\dots,f_N$ of a fixed set of generators of $\fra$. Note in particular that $\operatorname{ord}_E(f_i)= \operatorname{ord}_E(\fra)$ for all $i$. Moreover, if $f:=f_1\dots f_N$, then $\operatorname{lct}(f)=\operatorname{lct}(\fra)/N$ and $E$ computes $\operatorname{lct}(f)$ (see, for example, [@positivity Proposition 9.2.26]). By assumption, we can write $f_i=g_i+h_i$, with $g_i\in\frb$ and $h_i\in \frq$. Note that we have $\operatorname{ord}_E(h_i)>\operatorname{ord}_E(\fra)$, and hence $\operatorname{ord}_E(g_i)=\operatorname{ord}_E(\fra)$, for every $i$. If $g:=g_1 \dots g_N$, then we can write $$f-g=h_1f_2\dots f_N + g_1h_2f_3\dots f_N + \dots + g_1g_2\dots g_{N-1}h_N.$$ Since all terms in the above sum have order along $E$ larger than $\operatorname{ord}_E(f)$, we conclude by Theorem \[thm:m-adic-semicont\] that after possibly replacing $X$ by an open neighborhood of the center of $E$, we have $\operatorname{lct}(g)\geq \operatorname{lct}(f)$. Since $g\in\frb^N$, it follows that $\operatorname{lct}(\frb)\geq\operatorname{lct}(\fra)$. 
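As a concrete illustration of Theorem \[thm:m-adic-semicont\] (a standard computation with the cuspidal cubic, added here for the reader and not taken from the text):

```latex
% f = x^2 + y^3 \in k[x,y]. The exceptional divisor E of the weighted
% blow-up of the origin with weights \operatorname{wt}(x)=3,
% \operatorname{wt}(y)=2 computes \operatorname{lct}_0(f), since
%   \operatorname{ord}_E(f) = \min(2\cdot 3,\; 3\cdot 2) = 6, \qquad
%   k_E = 3+2-1 = 4,
\operatorname{lct}_0(f) \;=\; \frac{k_E+1}{\operatorname{ord}_E(f)} \;=\; \frac{5}{6}.
% Every h \in (x,y)^4 satisfies \operatorname{ord}_E(h) \ge 8 > 6
% = \operatorname{ord}_E(f), so the theorem (with the center of E equal
% to the origin) gives
\operatorname{lct}_0(x^2 + y^3 + h) \;=\; \frac{5}{6}
  \qquad \text{for every } h \in (x,y)^4.
```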
\[formal\_case\] Let $X=\operatorname{Spec}(R)$, where $R=k{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}$, and let $\fra$ and $\frb$ be proper ideals in $R$. Suppose that $E$ is a divisor over $X$ with center equal to the closed point, such that $E$ computes $\operatorname{lct}(\fra)$. If $\frb+\frq=\fra+\frq$, where $\frq=\{h\in R\mid\operatorname{ord}_E(h)>\operatorname{ord}_E(\fra)\}$, then $\operatorname{lct}(\frb)=\operatorname{lct}(\fra)$. It is enough to show that $\operatorname{lct}(\frb+\frm^N)=\operatorname{lct}(\fra+\frm^N)$ for all $N\gg 0$, where $\frm$ denotes the maximal ideal in $R$ (we use the fact that $\operatorname{lct}(\frb)=\lim_{N\to\infty}\operatorname{lct}(\frb+\frm^N)$ and $\operatorname{lct}(\fra)=\lim_{N\to\infty}\operatorname{lct}(\fra+\frm^N)$, see [@dFM Proposition 2.5]). Since the center of $E$ is equal to the closed point, there is a divisor $F$ over ${\mathbf A}^n$ with center the origin such that $E$ is obtained from $F$ by base-change with respect to $\operatorname{Spec}(R)\to {\mathbf A}^n$. If $\widetilde{\fra}_N:=(\fra+\frm^N)\cap k[x_1,\ldots,x_n]$ and $\widetilde{\frb}_N:=(\frb+\frm^N)\cap k[x_1,\ldots,x_n]$, then $\fra+\frm^N=\widetilde{\fra}_N\cdot R$ and $\frb+\frm^N=\widetilde{\frb}_N\cdot R$. Hence $\operatorname{lct}(\fra+\frm^N)=\operatorname{lct}_0(\widetilde{\fra}_N)$ and $\operatorname{lct}(\frb+\frm^N)=\operatorname{lct}_0(\widetilde{\frb}_N)$ (see, for example, [@dFM Corollary 2.8]). On the other hand, we have $\operatorname{lct}(\fra+\frm^N)\geq \operatorname{lct}(\fra)$ for every $N$, and $\operatorname{lct}(\fra+\frm^N)\leq\operatorname{lct}(\fra)$ for $N>\operatorname{ord}_E(\fra)$. It follows that for such $N$ we have $\operatorname{lct}(\fra+\frm^N)=\operatorname{lct}(\fra)$, and furthermore, $E$ computes $\operatorname{lct}(\fra+\frm^N)$. Therefore $F$ computes $\operatorname{lct}_0(\widetilde{\fra}_N)$.
If $N>\operatorname{ord}_E(\fra)$, then $\operatorname{ord}_F(\widetilde{\fra}_N)=\operatorname{ord}_E(\fra)$, and $$(x_1,\ldots,x_n)^N\subseteq\widetilde{\frq}:=\{h\in k[x_1,\ldots,x_n]\mid \operatorname{ord}_F(h)>\operatorname{ord}_F(\widetilde{\fra}_N)\}=\frq\cap k[x_1,\ldots,x_n].$$ We deduce that $\widetilde{\frb}_N+\widetilde{\frq}=\widetilde{\fra}_N+\widetilde{\frq}$, hence by Theorem \[thm:intro:m-adic-semicont:ideals\] we have $\operatorname{lct}_0(\widetilde{\frb}_N)=\operatorname{lct}_0(\widetilde{\fra}_N)$. We conclude that $\operatorname{lct}(\frb+\frm^N)=\operatorname{lct}(\fra+\frm^N)$ for all $N\gg 0$, and therefore $\operatorname{lct}(\frb)=\operatorname{lct}(\fra)$. Generic limits of sequences of ideals {#sect:gen-limits} ===================================== In this section we review the construction from [@Kol1], extending it from sequences of power series to sequences of ideals. In fact, we will need a version dealing with several such sequences. The goal is to associate, to such sequences of ideals in a fixed polynomial ring or power series ring, corresponding “limit” ideals obtained through a sequence of $\frm$-adic approximations and field extensions. For the sake of notation we only treat the case of two sequences. This is the only case needed in the paper. It will, however, be clear that the construction can be carried out for any given number of sequences. We also note that by taking the second sequence to be constant, the construction given below reduces in particular to a construction of generic limits for just one sequence. Furthermore, the assertions in Proposition \[prop:gen-limit:lct-E\] and Corollary \[cor:lct=lim\] below reduce to statements about one sequence by taking $q=0$. Let $R = k {[\negthinspace[}x_1,\dots,x_n{]\negthinspace]}$ be the ring of formal power series in $n$ variables with coefficients in an algebraically closed field $k$, and let $\frm$ be its maximal ideal.
If $k\subset K$ is a field extension, then we put $R_K := K {[\negthinspace[}x_1,\dots,x_n{]\negthinspace]}$ and $\frm_K:= \frm\. R_K$. For every $d \ge 1$, we consider the quotient homomorphism $R \to R/\frm^d$. We identify the ideals in $R/\frm^d$ with the ideals in $R$ containing $\frm^d$. Let $\cH_d$ be the Hilbert scheme parametrizing the ideals in $R/\frm^d$, with the reduced scheme structure. Since $\dim_k(R/\frm^d)<\infty$, $\cH_d$ is an algebraic variety. Note that for every field extension $K$ of $k$, the $K$-valued points of $\cH_d\times\cH_d$ correspond to pairs of ideals in $R_K$ containing $\frm_K^d$. Mapping a pair of ideals in $R/\frm^{d}$ to the pair consisting of their images in $R/\frm^{d-1}$ gives a surjective map $t_d\colon\cH_d\times\cH_d\to \cH_{d-1}\times\cH_{d-1}$. This is not a morphism. However, by Generic Flatness we can cover $\cH_d\times\cH_d$ by disjoint locally closed subsets such that the restriction of $t_d$ to each of these subsets is a morphism. In particular, for every irreducible closed subset $Z\subseteq\cH_d\times\cH_d$, the map $t_d$ induces a rational map $Z\rat \cH_{d-1}\times\cH_{d-1}$. Suppose now that $(\fra_i)_{i \in I_0}$ and $(\frb_i)_{i\in I_0}$ are sequences of ideals in $R$ indexed by the set $I_0 = \Z_+$. We consider sequences of irreducible closed subsets $Z_d\subseteq \cH_d\times\cH_d$ for $d\geq 1$ such that $(\star)$ For every $d\geq 1$, the projection $t_{d+1}$ induces a dominant rational map ${\varphi}_{d+1}\colon Z_{d+1}\rat Z_{d}$. $(\star\star)$ For every $d\geq 1$, there are infinitely many $i$ with $(\fra_i+\frm^d,\frb_i+\frm^d)\in Z_d$, and the set of such $(\fra_i+\frm^d,\frb_i+\frm^d)$ is dense in $Z_d$. Given such a sequence $(Z_d)_{d\geq 1}$, we define inductively nonempty open subsets $Z^\o_d\subseteq Z_d$, and a nested sequence of infinite subsets $$I_0\supseteq I_1\supseteq I_2\supseteq\cdots,$$ as follows. We put $Z^\o_1=Z_1$ and $I_1=\{i\in I_0\mid(\fra_i+\frm,\frb_i+\frm)\in Z_1^{\o}\}$.
For $d\geq 2$, let $Z^\o_d={\varphi}_d^{-1}(Z^\o_{d-1}) \subseteq {\rm Domain}({\varphi}_d)$ and $I_d=\{i\in I_0\mid (\fra_i+\frm^d, \frb_i+\frm^d)\in Z^\o_d\}$. It follows by induction on $d$ that $Z^\o_d$ is open in $Z_d$, and condition $(\star\star)$ implies that each $I_d$ is infinite. Furthermore, it is clear that $I_d\supseteq I_{d+1}$. Sequences $(Z_d)_{d\geq 1}$ satisfying $(\star)$ and $(\star\star)$ can be constructed as follows. We first choose a minimal irreducible closed subset $Z_1 \subseteq \cH_1\times\cH_1$ with the property that it contains $(\fra_i + \frm,\frb_i+\frm)$ for infinitely many indices $i \in I_0$. We set $J_1 = \{ i \in I_0 \mid (\fra_i + \frm,\frb_i+\frm) \in Z_1 \}$. By construction, $J_1$ is an infinite set and $Z_1$ is the closure of $\{(\fra_i+\frm,\frb_i+\frm)\mid i\in J_1\}$. Next, we choose a minimal closed subset $Z_2 \subseteq \cH_2\times\cH_2$ that contains $(\fra_i + \frm^2,\frb_i+\frm^2)$ for infinitely many $i$ in $J_1$ (note that by minimality, $Z_2$ is irreducible). By construction, the set $J_2=\{i\in J_1\mid (\fra_i+\frm^2,\frb_i+\frm^2)\in Z_2\}$ is infinite, and $Z_2$ is the closure of $\{(\fra_i+\frm^2,\frb_i+\frm^2)\mid i\in J_2\}$. As we have seen, $t_2$ induces a rational map ${\varphi}_2\colon Z_2 \rat Z_1$. Note that by the minimality in the choice of $Z_1$, the rational map ${\varphi}_2$ is dominant. Repeating this process we select a sequence $(Z_d)_{d\geq 1}$ that satisfies $(\star)$ and $(\star\star)$ above. Suppose now that we have a sequence $(Z_d)_{d\geq 1}$ with these two properties. The rational maps ${\varphi}_d$ induce a nested sequence of function fields $k(Z_d)$. Let $K:=\bigcup_{d \ge 1} k(Z_d)$.
Each morphism $\operatorname{Spec}(K)\to Z_d\subseteq\cH_d\times\cH_d$ corresponds to a pair of ideals $\fra'_d$ and $\frb'_d$ in $R_K$ containing $\frm_K^d$, and the compatibility between these morphisms implies that there are (unique) ideals $\fra$ and $\frb$ in $R_K$ such that $\fra'_d=\fra+\frm_K^d$ and $\frb'_d=\frb+\frm_K^d$ for all $d$. With the above notation, we say that the pair of ideals $(\fra,\frb)$ is [*a generic limit*]{} of the sequence of pairs of ideals $(\fra_i,\frb_i)_{i \ge 1}$. The reader may compare the above construction with a similar one that can be used to show that every sequence $(x_i)_{i \geq 1}$, with all $x_i$ in a closed bounded interval $L_0=[a,b]$, contains a convergent subsequence. In that case, one also constructs by induction closed bounded intervals $L_d=[u_d,w_d]$ with $L_d\subseteq L_{d-1}$ and $(w_d-u_d)<{\varepsilon}_d$ (for some sequence ${\varepsilon}_d$ converging to zero), and infinite subsets $I_d\subseteq I_{d-1}\subseteq I_0=\Z_+$, such that $x_i\in L_d$ for all $i\in I_d$. With this notation, it is then clear that $(x_i)_{i\geq 1}$ contains a subsequence converging to $\sup_du_d=\inf_dw_d$. We list in the next lemma some easy properties of generic limits. The proof is straightforward, so we omit it. \[lemma:gen-limit:basic-properties\] Let $(\fra_i)_{i\geq 1}$ and $(\frb_i)_{i\geq 1}$ be sequences of ideals in $R$, and let $(\fra,\frb)$ be a generic limit as above, with $\fra,\frb\subseteq R_K$. 1. If $\frb_i = \frc$ for every $i$, where $\frc \subseteq R$ is a fixed ideal, then $\frb=\frc\cdot R_K$. 2. If $q\geq 1$ is such that $\fra_i \subseteq \frm^q$ for every $i$, then $\fra \subseteq \frm^q_K$. 3. If $q\geq 1$ is such that $\fra_i\not\subseteq\frm^q$ for every $i$, then $\fra\not\subseteq\frm_K^q$. 4. If $\fra=(0)$, then for every $q\geq 1$ there are infinitely many $i$ such that $\fra_i\subseteq\frm^q$. In the following proposition we keep the notation used in the definition of generic limits.
Recall that we have also defined the nested sequence of infinite sets $(I_d)_{d\geq 1}$. \[prop:gen-limit:lct-E\] Let $\fra,\frb \subseteq R_K$ be such that $(\fra,\frb)$ is a generic limit of the sequence $(\fra_i, \frb_i)_{i\geq 1}$ of pairs of ideals in $R$. Assume that $\fra_i,\frb_i \ne R$ for all $i$. For every $d$ there is an infinite subset $I_d^\o \subseteq I_d$ such that for all nonnegative integers $p$ and $q$ $$\operatorname{lct}((\fra + \frm_K^d)^p\cdot (\frb+\frm_K^d)^q) = \operatorname{lct}((\fra_i + \frm^d)^p\cdot(\frb_i+\frm^d)^q) \quad\text{for every $i \in I_d^\o$.}$$ Moreover, if $E$ is a divisor over $\operatorname{Spec}(R_K)$, with center at the closed point and computing $\operatorname{lct}(\fra^p\cdot\frb^q)$ for some nonnegative integers $p$ and $q$, then there is an integer $d_E$ such that for every $d \ge d_E$ the following holds: there is an infinite subset $I_d^E \subseteq I_d^\o$, and for every $i\in I_d^E$ a divisor $E_i$ over $\operatorname{Spec}(R)$ computing $\operatorname{lct}((\fra_i+\frm^d)^p\cdot (\frb_i+\frm^d)^q)$, such that $\operatorname{ord}_E(\fra+\frm_K^d)=\operatorname{ord}_{E_i}(\fra_i+\frm^d)$ and $\operatorname{ord}_{E}(\frb+\frm_K^d)= \operatorname{ord}_{E_i}(\frb_i+\frm^d)$. In the second assertion in the proposition, both $d_E$ and the sets $I_d^E$ also depend on $p$ and $q$, while $E_i$ also depends on $d$. Note that every ideal of the form $\frc+\frm^d$ can be considered as the ideal of a scheme on $\AAA^n_k$ supported at the origin, and the log canonical threshold computed in $\operatorname{Spec}(R)$ is the same as when computed in $\AAA_k^n$ (cf. [@dFM Corollary 2.8]). Of course, the same holds if we replace $k$ by $K$. Whenever we can, we adopt this alternative point of view, since base change works better in this setting (by base change an affine space becomes another affine space). On $\cH_d\times\AAA_k^n$ we have the universal family of ideals $\cI$.
Pulling this back via the two projections $\cH_d\times\cH_d\times\AAA_k^n\to \cH_d\times\AAA_k^n$, and then restricting to $Z_d\times\AAA_k^n$ gives the ideals $\cI_d$ and $\cJ_d$ on $Z_d\times\AAA_k^n$. Let $\mu_d \colon Y_d \to Z_d \times \AAA_k^n$ be a log resolution of the product $\cI_d\cdot\cJ_d$, and let ${\mathcal E}$ be the relevant simple normal crossings divisor on $Y_d$. By Generic Smoothness, there is a nonempty open subset $U_d \subseteq Z_d$ such that the induced map $Y_d\to Z_d$ is smooth over $U_d$, and furthermore, ${\mathcal E}$ has relative simple normal crossings over $U_d$. In this case, the fiber of $Y_d\to Z_d$ over a point in $U_d$ corresponding to a pair of ideals $(\frc_1,\frc_2)$ gives a log resolution of the ideal $\frc_1\cdot \frc_2$ in $\AAA^n_k$. It follows that for every $p$ and $q$, the log canonical threshold $\operatorname{lct}(\frc_1^p\cdot\frc_2^q)$ is independent of the point $(\frc_1,\frc_2)\in U_d$. Moreover, it is equal to the similar log canonical threshold computed for the pair of ideals parametrized by the generic point of $Z_d$. These are ideals in $k(Z_d)[x_1,\ldots,x_n]$ whose extensions to $K[x_1,\ldots,x_n]$ are $\fra+\frm^d_K$ and $\frb+\frm_K^d$. We thus take $I_d^\o \subset I_d$ to consist of those $i$ for which $(\fra_i + \frm^d,\frb_i+\frm^d)$ is in $U_d$. Condition $(\star\star)$ on the sequence $(Z_d)_{d\geq 1}$ implies that $I_d^\o$ is an infinite set. For the second assertion in the proposition, observe first that since $E$ has center equal to the closed point, there is a divisor $F$ over $\AAA_K^n$ with center at the origin, such that $E$ is obtained from $F$ by base-change with respect to $\operatorname{Spec}(R_K)\to\AAA_K^n$. Given an ideal $\frc+\frm_K^d\subset R_K$, the divisor $E$ computes the log canonical threshold of this ideal if and only if $F$ computes the log canonical threshold of the corresponding ideal in $K[x_1,\ldots,x_n]$. 
Note that the divisor $F$, a priori defined over $K$, is in fact defined over a subextension $L$ of $K/k$, of finite type over $k$. Let $d_E>\operatorname{ord}_E(\fra^p\cdot\frb^q)$ be an integer such that $F$ is defined over $k(Z_{d_E})$. For $d\geq d_E$, we have $\operatorname{lct}((\fra+\frm_K^d)^p\cdot (\frb+\frm_K^d)^q)= \operatorname{lct}(\fra^p\cdot\frb^q)$, and $E$ computes both these log canonical thresholds: for this one argues as in the beginning of the proof of Theorem \[thm:m-adic-semicont\], observing that in this case we have $\operatorname{lct}(\fra^p\cdot\frb^q)\leq\operatorname{lct}((\fra+\frm_K^d)^p\cdot (\frb+\frm_K^d)^q)$ due to the inclusion $\fra^p\cdot\frb^q\subseteq(\fra+\frm_K^d)^p\cdot(\frb+\frm_K^d)^q$. On the other hand, for every such $d$ we can find a nonempty open subset $W_d \subseteq Z_d$ and a log resolution $\nu_d\colon Y_d'\to W_d\times \AAA_k^n$ of the restriction of $\cI_d\cdot\cJ_d$ to $W_d\times\AAA^n_k$, such that $F$ is obtained from a divisor ${\mathcal F}'$ on $Y_d'$ by base-change with respect to the composition $$\AAA_K^n\to \AAA_{k(Z_d)}^n \to W_d\times\AAA_k^n.$$ Arguing as in the first part of the proof, we see that after possibly replacing $W_d$ by a smaller open subset, we may assume that $Y_d'$ is smooth over $W_d$, and furthermore, that the relevant divisor ${\mathcal E}'$ has relative simple normal crossings over $W_d$. Note that ${\mathcal F}'$ is a component of ${\mathcal E}'$. Let $I_d^E:=\{i \in I_d^\o \mid (\fra_i + \frm^d,\frb_i+\frm^d) \in W_d\}$. Again, condition $(\star\star)$ on the sequence $(Z_d)_{d\geq 1}$ implies that $I_d^E$ is infinite.
Since $F$ computes the log canonical threshold of the (extension to $K[x_1,\ldots,x_n]$ of the) suitable product corresponding to the pair of ideals parametrized by the generic point of $W_d$, it follows that if $i\in I_d^E$, and $F_i$ is a connected component of the fiber of ${\mathcal F}'$ over the point in $W_d$ representing $(\fra_i+\frm^d,\frb_i+\frm^d)$, then $F_i$ computes $\operatorname{lct}((\fra_i+\frm^d)^p\cdot (\frb_i+\frm^d)^q)$. Moreover, we have $\operatorname{ord}_F(\fra+\frm_K^d)=\operatorname{ord}_{F_i}(\fra_i+\frm^d)$ and $\operatorname{ord}_F(\frb+\frm_K^d)=\operatorname{ord}_{F_i}(\frb_i+\frm^d)$. If $E_i$ is obtained from $F_i$ by base-change via $\operatorname{Spec}(R)\to \AAA^n_k$, then $E_i$ satisfies the requirement in the proposition. \[cor:lct=lim\] With the above notation, for every sequence $(i_d)_{d \ge 1}$ with $i_d \in I_d^\o$, we have $\operatorname{lct}(\fra^p\cdot \frb^q) = \lim_{d \to \infty} \operatorname{lct}(\fra_{i_d}^p\cdot\frb_{i_d}^q)$ for all nonnegative integers $p$ and $q$. In particular, if the sequence $(\operatorname{lct}(\fra_i^p\cdot\frb_i^q))_{i \ge 1}$ is convergent, then it converges to $\operatorname{lct}(\fra^p\cdot\frb^q)$. Recall the following basic fact: if $\frc$ and $\frc'$ are proper ideals in $R$, with $\frc+\frm^d=\frc'+\frm^d$, then $$|\operatorname{lct}(\frc)-\operatorname{lct}(\frc')|\leq \frac{n}{d}$$ (see [@dFM Corollary 2.10]). Note that this inequality also holds when $\frc$ or $\frc'$ is zero. Of course, a similar result holds for ideals in $R_K$.
It follows from Proposition \[prop:gen-limit:lct-E\] that for every $d\geq 1$ we have $$|\operatorname{lct}(\fra^p\cdot \frb^q)-\operatorname{lct}(\fra_{i_d}^p\cdot\frb_{i_d}^q)| \leq |\operatorname{lct}(\fra^p\cdot\frb^q)-\operatorname{lct}((\fra+\frm_K^d)^p\cdot (\frb+\frm_K^d)^q)|$$ $$+ |\operatorname{lct}((\fra_{i_d}+\frm^d)^p\cdot (\frb_{i_d}+\frm^d)^q)-\operatorname{lct}(\fra_{i_d}^p\cdot\frb_{i_d}^q)| \leq\frac{2n}{d}.$$ The assertion in the corollary is an immediate consequence. It is clear that both the construction and the above results generalize in an obvious way to any finite number of sequences of ideals. Log canonical thresholds on smooth varieties ============================================ This section is devoted to the proof of Theorem \[thm:intro:T\_n\^sm\]. For completeness, we also include the proof of the smooth case of Kollár’s Accumulation Conjecture [@Kol2], which is already known by the results in [@dFM; @Kol1]: the case of limits of decreasing sequences was first treated in [@dFM], and the proof was completed in [@Kol1], where the case of (potential) limits of increasing sequences was also treated. \[thm:T\_n\^sm\] For every $n$, the set $\cT_n^{\rm sm}$ satisfies the ascending chain condition, and its set of accumulation points is $\cT_{n-1}^{\rm sm}$. We start with an easy lemma that can be used to replace an ideal by another ideal with the same log canonical threshold, and such that this log canonical threshold is computed by a divisor having a zero-dimensional center. Let $\fra$ be an ideal contained in the maximal ideal $\frm_K$ of $K{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}$. We put $q := \max\{ t \ge 0 \mid \operatorname{lct}(\fra\.\frm_K^t) = \operatorname{lct}(\fra) \}$. i) We have $q\in\QQ_{\geq 0}$. ii)
If we write $q=r/s$, for nonnegative integers $r$ and $s$, then $\operatorname{lct}(\fra^s\cdot\frm_K^r)=\frac{\operatorname{lct}(\fra)}{s}$, and this log canonical threshold is computed by a divisor with center equal to the closed point. 3. We have $q=0$ if and only if $\operatorname{lct}(\fra)$ is computed by a divisor with center over the closed point. Let $\pi\colon Y\to X=\operatorname{Spec}\left(K{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}\right)$ be a log resolution of $\fra\cdot\frm_K$, and write $\fra\cdot\cO_Y=\cO(-\sum_ia_iE_i)$, $\frm_K\cdot\cO_Y=\cO_Y(-\sum_ib_iE_i)$, and $K_{Y/X}=\sum_ik_iE_i$. Let $I$ denote the set of those $i$ for which $E_i$ has center equal to the closed point, that is, such that $b_i>0$. Let $c=\operatorname{lct}(\fra)$. Note that we have $\operatorname{lct}(\fra\cdot\frm_K^t)\leq c$ for every $t\geq 0$. Furthermore, $\operatorname{lct}(\fra\cdot\frm_K^t)\geq c$ if and only if $$k_i+1\geq c(a_i+tb_i)$$ for all $i$. If $i\not\in I$, then $b_i=0$ and this inequality holds for all $t$. We conclude that $$q=\min\left\{\frac{k_i+1-ca_i}{cb_i}\mid i\in I\right\}.$$ This shows that $q\in\QQ$. Moreover, if $i\in I$ is such that this minimum is achieved, then $E_i$ computes $\operatorname{lct}(\fra^s\cdot\frm_K^r)$, and $E_i$ has center equal to the closed point. The assertion in iii) is clear. Let $(c_i)_{i \ge 1}$ be a strictly monotone sequence with terms in $\cT_n^{\rm sm}$, and let $c = \lim_{i \to \infty} c_i$ (the limit is finite, since $\cT_n^{\rm sm}$ is bounded above by $n$). For every $i$ we can select an ideal $\widetilde{\fra}_i \subseteq (x_1,\ldots,x_n)\subset k[x_1,\dots,x_n]$ with $\operatorname{lct}_0(\widetilde{\fra}_i) = c_i$. Let $\fra_i=\widetilde{\fra}_i\cdot k{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}$. 
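To illustrate the lemma above (a standard computation added here for concreteness, not taken from the argument), consider a principal ideal in two variables:

```latex
\text{Let } n=2 \text{ and } \fra=(x^2)\subseteq K[\![x,y]\!],
\text{ so that } c=\operatorname{lct}(\fra)=\tfrac12.
\text{The blow-up of the closed point is a log resolution of } \fra\cdot\frm_K,
\text{ with exceptional divisor } E\ (k_E=1,\ a_E=2,\ b_E=1)
\text{ and strict transform } H \text{ of } \{x=0\}\ (k_H=0,\ a_H=2,\ b_H=0).
\text{Here } I=\{E\}, \text{ hence }
q=\frac{k_E+1-c\,a_E}{c\,b_E}=\frac{2-1}{1/2}=2,
\text{ and } \operatorname{lct}(\fra\cdot\frm_K^2)=\tfrac12
\text{ is computed by } E, \text{ whose center is the closed point.}
```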
Consider a generic limit $(\fra,\frb)$ of the sequence of pairs of ideals $(\fra_i,\frm)_{i\geq 1}$, constructed as in Section \[sect:gen-limits\], with $\fra,\frb\subseteq K{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}$. Note that by Lemma \[lemma:gen-limit:basic-properties\], we have $\fra\subseteq\frm_K$, and $\frb=\frm_K$. Since $\operatorname{lct}(\fra_i)=\operatorname{lct}_0(\widetilde{\fra}_i)$ (see, for example, [@dFM Proposition 2.9]), it follows from Corollary \[cor:lct=lim\] that $\operatorname{lct}(\fra) = c$. If $c=0$, then the sequence $(c_i)_{i\geq 1}$ cannot be strictly increasing. Furthermore, we have $0\in \cT_{n-1}^{\rm sm}$ (note indeed that $n>0$), hence this case is clear, and we may assume that $c>0$. In particular, $\fra\neq (0)$. Let $q$ be the rational number attached to $\fra$ as in the lemma, and write $q=r/s$, with $r$ and $s$ nonnegative integers. By construction, we have $$\operatorname{lct}(\fra^s\.\frm_K^r) = \frac 1s \operatorname{lct}(\fra).$$ On the other hand, we certainly have $$\operatorname{lct}(\fra_i^s\cdot\frm^r) \leq \frac 1s \operatorname{lct}(\fra_i) \quad\text{for every $i$}.$$ Note in particular that if $(c_i)_{i \ge 1}$ is a strictly increasing sequence, then $\operatorname{lct}(\fra_i^s\cdot\frm^r) < \operatorname{lct}(\fra^s\cdot\frm_K^r)$ for every $i$. By the choice of $q$, $\operatorname{lct}(\fra^s\cdot\frm_K^r)$ is computed by a divisor $E$ which lies over the closed point of $\operatorname{Spec}(K{[\negthinspace[}x_1,\dots,x_n{]\negthinspace]})$. Fix any $d \geq d_E$, with $d_E$ associated to $E,s,r$ and to the sequence $(\fra_i,\frm)_{i\geq 1}$ by Proposition \[prop:gen-limit:lct-E\]. As in the proof of that proposition, we may and will assume that $d_E>\operatorname{ord}_E(\fra^s\cdot\frm_K^r)$, so that for all $d\geq d_E$ we have $\operatorname{lct}(\fra^s\cdot\frm_K^r)=\operatorname{lct}((\fra+\frm_K^d)^s\cdot \frm_K^r)$, and $E$ computes both log canonical thresholds. 
By Proposition \[prop:gen-limit:lct-E\], there is an infinite set $I_d^E \subseteq \Z_+$ such that for every $i\in I_d^E$ we have $\operatorname{lct}((\fra+\frm_K^d)^s\cdot\frm_K^r) = \operatorname{lct}((\fra_i+ \frm^d)^s\cdot\frm^r)$, and moreover, there is a divisor $E_i$ over ${\rm Spec}\left(k{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}\right)$ computing $\operatorname{lct}((\fra_i+\frm^d)^s\cdot\frm^r)$, and such that $$\operatorname{ord}_{E_i}((\fra_i+\frm^d)^s\cdot\frm^r)=\operatorname{ord}_E((\fra+\frm_K^d)^s\cdot\frm_K^r)= \operatorname{ord}_E(\fra^s\cdot\frm_K^r).$$ Since $E_i$ is a divisor computing $\operatorname{lct}((\fra_i+\frm^d)^s\cdot\frm^r)$, its center is equal to the closed point. Furthermore, by our condition on $d$ we have $$\operatorname{ord}_{E_i}(\frm^d)\geq d>\operatorname{ord}_E(\fra^s\cdot\frm_K^r)=\operatorname{ord}_{E_i}((\fra_i+\frm^d)^s\cdot\frm^r),$$ hence Corollary \[formal\_case\] gives for every $i\in I_d^E$ $$\operatorname{lct}(\fra_i^s\cdot\frm^r)=\operatorname{lct}((\fra_i+\frm^d)^s\cdot\frm^r)=\operatorname{lct}((\fra+\frm_K^d)^s\cdot\frm_K^r)=\operatorname{lct}(\fra^s\cdot\frm_K^r).$$ It follows from the above discussion that $(c_i)_{i \ge 1}$ cannot be a strictly increasing sequence, which proves that $\cT_n^{\rm sm}$ satisfies the ascending chain condition. By exclusion, $(c_i)_{i \ge 1}$ has to be a strictly decreasing sequence. Since the sequence $(\operatorname{lct}(\fra^s_i\cdot\frm^r))_{i \ge 1}$ has infinitely many equal terms, while $(c_i)_{i\geq 1}$ is strictly decreasing, we deduce that $q > 0$ (if $q=0$, we could take $s=1$ and $r=0$, making the two sequences coincide). Equivalently, $\operatorname{lct}(\fra)$ is not computed by any divisor with center at the closed point. 
Therefore, if $F$ is a divisor over $\operatorname{Spec}(K{[\negthinspace[}x_1,\dots,x_n{]\negthinspace]})$ computing $\operatorname{lct}(\fra)$, then the center of $F$ in $\operatorname{Spec}(K{[\negthinspace[}x_1,\dots,x_n{]\negthinspace]})$ is positive-dimensional, and hence, after localizing at its generic point, we see that $\operatorname{lct}(\fra) \in \cT_{n-1}^{\rm sm}$ (cf. [@dFM Propositions 2.11 and 3.1]). Since it is easy to see, and well known, that conversely every element in $\cT_{n-1}^{\rm sm}$ is an accumulation point of $\cT_n^{\rm sm}$, we conclude that $\cT_{n-1}^{\rm sm}$ is equal to the set of accumulation points of $\cT_n^{\rm sm}$. The following proposition allows us to reduce log canonical thresholds on varieties with quotient singularities to log canonical thresholds on smooth varieties. We say that a variety $X$ has [*quotient singularities*]{} at $p\in X$ if there is a smooth variety $U$, a finite group $G$ acting on $U$, and a point $q\in V=U/G$ such that the two completions $\widehat{\cO_{X,p}}$ and $\widehat{\cO_{V,q}}$ are isomorphic as $k$-algebras. We say that $X$ has quotient singularities if it has quotient singularities at every point. In the above definition, one can assume that $U$ is an affine space and that the action of $G$ is linear. Furthermore, one can assume that $G$ acts with no fixed points in codimension one (otherwise, we may replace $G$ by $G/H$ and $U$ by $U/H$, where $H$ is generated by all pseudoreflections in $G$, and by Chevalley’s theorem [@Chevalley], the quotient $U/H$ is again an affine space). Using Artin’s approximation results (see Corollary 2.6 in [@Artin]), it follows that there is an étale neighborhood of $p$ that is also an étale neighborhood of $q$. In other words, there is a variety $W$, a point $r\in W$, and étale maps ${\varphi}\colon W\to X$ and $\psi\colon W\to V$, such that $p={\varphi}(r)$ and $q=\psi(r)$. 
After replacing ${\varphi}$ by the composition $$W\times_VU\to W\overset{{\varphi}}\to X,$$ we may assume that in fact we have an étale map $U/G\to X$ containing $p$ in its image, with $U$ smooth, and such that $G$ acts on $U$ without fixed points in codimension one. This reinterpretation of the definition of quotient singularities seems to be well-known to experts, but we could not find an explicit reference in the literature. \[reduction\_quotient\] Let $X$ be a variety with quotient singularities, and let $\fra$ be a proper nonzero ideal on $X$. For every $p$ in the zero-locus $V(\fra)$ of $\fra$, there is a smooth variety $U$, a nonzero ideal $\frb$ on $U$, and a point $q$ in $V(\frb)$ such that $\operatorname{lct}_p(X,\fra)=\operatorname{lct}_q(U,\frb)$. Let us choose an étale map ${\varphi}\colon U/G\to X$ with $p\in {\rm Im}({\varphi})$, where $U$ is a smooth variety, and $G$ is a finite group acting on $U$ without fixed points in codimension one. Let $\widetilde{{\varphi}}\colon U\to X$ denote the composition of ${\varphi}$ with the quotient map. Since $G$ acts without fixed points in codimension one, $\widetilde{{\varphi}}$ is étale in codimension one, hence $K_U=\widetilde{{\varphi}}^*(K_X)$. It follows from Proposition 5.20 in [@KM] that if $\frb=\fra\cdot\cO_U$, then the pair $(X,\fra^t)$ is log canonical if and only if the pair $(U,\frb^t)$ is log canonical (actually the result in *loc. cit.* only covers the case when $\fra$ is locally principal, but one can easily reduce to this case, by taking a suitable product of general linear combinations of the local generators of $\fra$). We conclude that there is a point $q\in V(\frb)$ such that $\operatorname{lct}_p(X,\fra)=\operatorname{lct}_q(U,\frb)$. 
It follows that $\cT_n^{\rm quot} = \cT_n^{\rm sm}$ for every $n$, and therefore we deduce by Theorem \[thm:T\_n\^sm\] that Shokurov’s ACC Conjecture and Kollár’s Accumulation Conjecture hold for log canonical thresholds on varieties with quotient singularities. \[quotient\] For every $n$, the set $\cT_n^{\rm quot}$ satisfies the ascending chain condition and its set of accumulation points is equal to $\cT_{n-1}^{\rm quot}$. \[usual\_definition\] At least over the complex numbers, one usually says that $X$ has quotient singularities at $p$ if the germ of analytic space $(X,p)$ is isomorphic to $M/G$, where $M$ is a complex manifold, and $G$ is a finite group acting on $M$. It is not hard to check that in this context this definition is equivalent to the one we gave above. Log canonical thresholds on l.c.i. varieties ============================================ In this section we prove that the ACC Conjecture holds for log canonical thresholds (and mixed log canonical thresholds) on l.c.i. varieties, and prove Theorem \[thm:intro:M\_n\^lci\]. We start with the case of mixed log canonical thresholds on smooth varieties. \[thm:M\_n\^sm\] For every $n$, the set $\cM_n^{\rm sm}$ satisfies the ascending chain condition. Suppose that $\cM_n^{\rm sm}$ contains a strictly increasing sequence $(c_i)_{i \ge 1}$. Let $c=\lim_{i \to \infty} c_i$ (which is finite, since $\cM_n^{\rm sm}$ is bounded above by $n$). We can find ideals $\widetilde{\fra}_i,\,\widetilde{\frb}_i\subseteq k[x_1,\dots,x_n]$, with $\widetilde{\fra}_i \subseteq (x_1,\ldots,x_n)$ and $\operatorname{lct}_0(\widetilde{\frb}_i) \ge 1$, such that $c_i=\operatorname{lct}_{({\mathbf A}^n,\widetilde{\frb}_i),0} (\widetilde{\fra}_i)$. If $\fra_i$ and $\frb_i$ are the ideals generated by $\widetilde{\fra}_i$ and, respectively, $\widetilde{\frb}_i$ in $k{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}$, then $c_i=\operatorname{lct}_{\frb_i}(\fra_i)$ by Remark \[remark3\]. 
Consider a generic limit $(\fra,\frb)$ of the sequence $(\fra_i,\frb_i)_{i\geq 1}$, constructed as in Section \[sect:gen-limits\], with $\fra,\frb\subseteq K{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}$. By Corollary \[cor:lct=lim\], $\operatorname{lct}(\frb)$ is a limit point of the sequence $(\operatorname{lct}(\frb_i))_{i\geq 1}$, hence $\operatorname{lct}(\frb)\geq 1$. Therefore $c':=\operatorname{lct}_\frb(\fra)$ is well defined. Consider first any positive integers $p$ and $q$ such that $p/q<c$. By assumption, we have $c_i>p/q$ for all $i\gg 1$. Let $X=\operatorname{Spec}\left(k{[\negthinspace[}x_1,\ldots,x_n{]\negthinspace]}\right)$. The pair $(X,\frb_i\.\fra_i^{p/q})$ is log canonical, hence $\operatorname{lct}(\frb_i^q\.\fra_i^p)\geq 1/q$, for all $i\gg 1$. It follows from Corollary \[cor:lct=lim\] that there is a sequence $(i_d)_{d \ge 1}$ in $\Z_+$ such that $$\operatorname{lct}(\frb^q\.\fra^p) = \lim_{d \to \infty} \operatorname{lct}(\frb_{i_d}^q\.\fra_{i_d}^p).$$ This implies in particular that $\operatorname{lct}(\frb^q\.\fra^p) \geq 1/q$, and therefore $c'\geq p/q$. As this holds for every $p/q<c$, we conclude that $c'\geq c$. On the other hand, since $c'\in\QQ$, we may write $c'=r/s$ for positive integers $r$ and $s$. It follows from Remark \[rem1\] that $\operatorname{lct}(\frb\.\fra^{r/s})=1$, and thus $\operatorname{lct}(\frb^s\.\fra^r)=1/s$. Applying again Corollary \[cor:lct=lim\], we find a sequence $(j_d)_{d \ge 1}$ in $\Z_+$ such that $$\operatorname{lct}(\frb^s\.\fra^r) = \lim_{d \to \infty} \operatorname{lct}(\frb_{j_d}^s\.\fra_{j_d}^r).$$ The fact that $\cT_n^{\rm sm}$ satisfies the ascending chain condition (cf. Theorem \[thm:T\_n\^sm\]) implies that there are infinitely many $d$ such that $\operatorname{lct}(\frb_{j_d}^s\.\fra_{j_d}^r)\geq 1/s$, hence $\operatorname{lct}_{\frb_{j_d}}(\fra_{j_d})\geq r/s$. For any such $d$ we have $$c' \geq c > c_{j_d}\geq \frac{r}{s}=c',$$ which is a contradiction. 
In order to extend the above result to the case of ambient varieties with l.c.i. singularities, we use the following application of Inversion of Adjunction. This is the key tool that allows us to replace mixed log canonical thresholds on locally complete intersection varieties with invariants of the same type on smooth ambient varieties. \[inversion\] Let $A$ be a smooth irreducible variety over $k$, and $X\subset A$ a closed subvariety of pure codimension $e$, that is normal and locally a complete intersection. Suppose that $\frb$ and $\fra$ are ideals on $A$, with $\fra\neq\cO_A$, and such that $X$ is not contained in the union of the zero-loci of $\frb$ and $\fra$. 1. The pair $(X,\frb\vert_X)$ is log canonical if and only if for some open neighborhood $U$ of $X$, the pair $(U,\frb\cdot \frp^e\vert_U)$ is log canonical, where $\frp$ is the ideal defining $X$ in $A$. 2. If $(X,\frb\vert_X)$ is log canonical, and if $X$ intersects the zero-locus of $\fra$, then for some open neighborhood $V$ of $X$ we have $$\operatorname{lct}_{\frb\vert_X}(X,\fra\vert_X)=\operatorname{lct}_{\frb\vert_V\cdot \frp^e\vert_V} (V, \fra\vert_V).$$ Both assertions follow from Inversion of Adjunction (see Corollary 3.2 in [@EM3]), as this says that for every nonnegative $q$, the pair $(X,(\frb\.\fra^q)\vert_X)$ is log canonical if and only if the pair $(A,\frb\.\fra^q\.\frp^e)$ is log canonical in some neighborhood of $X$. The next fact, which must be well known to experts, allows us to control the dimension of the ambient variety in the process of replacing a mixed log canonical threshold on an l.c.i. variety by one on a smooth variety. Given a closed point $x\in X$, we denote by $T_xX$ the Zariski tangent space of $X$ at $x$. \[bound\] Let $X$ be a locally complete intersection variety. If $X$ is log canonical, then $\dim_kT_xX\leq 2\dim X$ for every $x\in X$. Fix $x\in X$, and let $N=\dim\,T_xX$. 
After possibly replacing $X$ by an open neighborhood of $x$, we may assume that we have a closed embedding of $X$ in a smooth irreducible variety $A$, of codimension $e$, with $\dim A=N$. If $X=A$, then $N=\dim X$ and we are done. Suppose now that $e\geq 1$. Since $X$ is locally a complete intersection, it follows from Inversion of Adjunction (see Corollary 3.2 in [@EM3]) that the pair $(A,\frp^e)$ is log canonical, where $\frp$ is the ideal of $X$ in $A$. In particular, if $E$ is the exceptional divisor of the blow-up $A'$ of $A$ at $x$, and ${\rm ord}_E$ is the corresponding valuation, then we have $$N=1+\operatorname{ord}_E(K_{A'/A})\geq e\cdot \operatorname{ord}_E(\frp)\geq 2e=2(N-\dim X).$$ This gives $N\leq 2\dim X$. We are now ready to prove Theorem \[thm:intro:M\_n\^lci\], and hence Corollary \[cor:intro:T\_n\^lci\]. By Theorem \[thm:M\_n\^sm\], we know that $\cM_n^{\rm sm}$ satisfies the ascending chain condition for every $n$. Then it is clear that in order to prove that $\cM_n^{\rm l.c.i.}$ also satisfies the ascending chain condition for every $n$, it suffices to show that $$\cM_n^{\rm l.c.i.}\subseteq\cM_{2n}^{\rm sm}.$$ Suppose that $(X,\frb)$ is log canonical, with $X$ locally a complete intersection of dimension $ n$, and let $c=\operatorname{lct}_{\frb}(\fra)$. Let $x\in X$ be any point in the center of a divisor computing $\operatorname{lct}_{\frb}(\fra)$. For every open neighborhood $U$ of $x$ we have $\operatorname{lct}_{\frb\vert_U}(U,\fra\vert_U)=c$. Since $X$ is log canonical, it follows from Proposition \[bound\] that $\dim_kT_xX\leq 2n$. After replacing $X$ by a suitable neighborhood of $x$, we may assume that there is a closed embedding $X\hookrightarrow A$, where $A$ is a smooth variety of dimension $2n$. 
Proposition \[inversion\] implies that after possibly replacing $A$ by a neighborhood of $X$, we have $c=\operatorname{lct}_{\frb_1\cdot\frp^{e}}(\fra_1)$, where $\frp$ is the ideal defining $X$ in $A$, $e$ is the codimension of $X$ in $A$, and $\frb_1$ and $\fra_1$ are ideals in $A$ whose restrictions to $X$ give, respectively, $\frb$ and $\fra$. Thus $c \in \cM_{2n}^{\rm sm}$. It follows by Theorem \[thm:intro:M\_n\^lci\], since $\cT_n^{\rm l.c.i} \subseteq \cM_n^{\rm l.c.i}$. [BCHM]{} M. Artin, Algebraic approximation of structures over complete local rings, Inst. Hautes Études Sci. Publ. Math. **36** (1969), 23–58. C. Birkar, Ascending chain condition for log canonical thresholds and termination of log flips, Duke Math. J. **136** (2007), 173–180. C. Birkar, P. Cascini, C. Hacon and J. M$^{\rm c}$Kernan, Existence of minimal models for varieties of log general type, preprint available at [arXiv:math/0610203]{}. C. Chevalley, Invariants of finite groups generated by reflections, Amer. J. Math. **77** (1955), 778–782. T. de Fernex and M. Mustaţă, Limits of log canonical thresholds, Ann. Sci. École Norm. Sup. (4) **42** (2009), 493–517. L. Ein and M. Mustaţă, Inversion of adjunction for local complete intersection varieties, Amer. J. Math. **126** (2004), 1355–1365. L. Ein and M.  Mustaţă, Invariants of singularities of pairs, in *International Congress of Mathematicians*, Vol. II, 583–602, Eur. Math. Soc., Zürich, 2006. M. Kawakita, Inversion of adjunction on log canonicity, Invent. Math. [**167**]{} (2007), 129–133. J. Kollár, Singularities of pairs, in *Algebraic geometry, Santa Cruz 1995*, 221–286, Proc. Symp. Pure Math. 62, Part 1, Amer. Math. Soc., Providence, RI, 1997. J. Kollár, Which powers of holomorphic functions are integrable?, preprint available at [arXiv:0805.0756]{}. J. Kollár and S. Mori, *Birational geometry of algebraic varieties*, Cambridge Tracts in Mathematics 134, Cambridge University Press, Cambridge, 1998. R. 
Lazarsfeld, *Positivity in algebraic geometry* II, Ergebnisse der Mathematik und ihrer Grenzgebiete 49, Springer-Verlag, Berlin, 2004. V. V. Shokurov, Three-dimensional log perestroikas. With an appendix in English by Yujiro Kawamata, Izv. Ross. Akad. Nauk Ser. Mat. **56** (1992), 105–203, translation in [Russian Acad. Sci. Izv. Math.]{} **40** (1993), 95–202. M. Temkin, Desingularization of quasi-excellent schemes in characteristic zero, Adv. Math. **219** (2008), 488–522.
--- abstract: 'We investigate an integrated optical chip immersed in atomic vapor providing several waveguide geometries for spectroscopy applications. The narrow-band transmission through a silicon nitride waveguide and interferometer is altered when the guided light is coupled to a vapor of rubidium atoms via the evanescent tail of the waveguide mode. We use grating couplers to couple between the waveguide mode and the radiating wave, which allow for addressing arbitrary coupling positions on the chip surface. The evanescent atom-light interaction can be numerically simulated and shows excellent agreement with our experimental data. This work demonstrates a next step towards miniaturization and integration of alkali atom spectroscopy and provides a platform for further fundamental studies of complex waveguide structures.' author: - Ralf Ritter - Nico Gruhler - Wolfram Pernice - Harald Kübler - Tilman Pfau - Robert Löw title: Atomic vapor spectroscopy in integrated photonic structures --- Over the past decades, alkali atoms were not only subject to a broad spectrum of research fields but also found their way into technological applications. Unlike solid-state systems, atoms have dispersion-free properties that are ideal for sensing and referencing tasks. While ultracold atomic gases are best suited for ultraprecise measurements and atomic clocks [@Ye2012; @Muller2012; @Croin2009; @Bresson2013], they usually require a large apparatus. In contrast, devices based on thermal atomic vapors offer less precision, but have been successfully miniaturized and integrated for applications e.g. in magnetometry [@Weis2009; @Walker2012; @Sheng2013], frequency referencing [@Schawlow1971; @Millerioux1994], or atomic clocks [@Knappe2004]. 
Several approaches exist to achieve atom-light interaction on a microscopic scale, such as atoms in hollow-core fibers [@Epple2014; @Gaeta2010], nanofibers [@Rauschenbeutel2011; @Franson2010; @Shariar2008], micro- and nanocells [@Kuebler2010; @Keaveney2012], or anti-resonant reflecting optical waveguides (ARROW) on a chip [@Schmidt2007; @Schmidt2010]. Recently, an important step towards integration was made by surrounding solid-core optical waveguides on a chip by a rubidium vapor cladding [@Levy2013], where the evanescent tail of the light mode interacts with the atomic vapor in close vicinity to the waveguide. With the existing technology of photonic integrated circuits on an industrial scale, this approach offers ideal conditions for using atomic vapors in e.g. sensing and communication applications. Light sources, interconnections, photonic devices and detectors can all be contained in a single chip, allowing for complex network and multiplexing designs, potentially even at the single photon level. Due to the small mode area of the evanescent field, efficient atom-light coupling is achieved and saturation can be reached already at low power. ![Experimental setup and layout of the structures. (a) Schematic of the experimental setup. For details see main text. (b) Microscopy image of the waveguide and (c) MZI structures. (d) Sketch of the MZI design and layer composition. The coverage of the front coupler is depicted sliced to reveal the grating coupler.[]{data-label="fig:fig1"}](fig1) In this letter we increase the level of complexity compared to previous works [@Levy2013] by adding Bragg couplers, curved waveguides and beam splitters to our silicon nitride photonic structures. We show evanescent atom-light coupling by means of a simple waveguide strip as well as a Mach-Zehnder-type interferometer (MZI), providing both transmission and phase shift information. 
Our experimental data can be well described by a theoretical model stemming from total internal reflection spectroscopy [@Nienhuis1988]. The substrate of our optical chip consists of a 4 mm thick 1.5 inch diameter fused silica vacuum window, covered with a 180 nm thick layer of silicon nitride (Si$_3$N$_4$). The photonic structures are created in this layer by electron-beam lithography and subsequent dry etching. Details on the fabrication process can be found in our previous paper [@Gruhler2013]. Focusing grating couplers are used for in- and out-coupling of light. This type of coupler can be placed arbitrarily on the two-dimensional chip surface, therefore allowing a larger number of individual devices on a single chip and more flexibility as compared to e.g. butt coupling from the side of the chip, where only a one-dimensional distribution is possible. All devices are completely covered with a silicon oxide (SiO$_2$) layer, except for the regions where we want the atoms to interact with the light field. Additionally, we deposit a 100 nm thick opaque layer of aluminum on top of the SiO$_2$ layer above each grating coupler to avoid leakage of light through the coupler and the detection of fluorescence light from the atoms inside the chamber volume. This layer also acts as a local mirror and therefore increases the coupling efficiency. An overview of the layer composition is shown in figure \[fig:fig1\](d). The chip is mounted into a custom-made CF flange via a metal seal (Helicoflex) and connected to a UHV chamber with the structures facing the inside of the chamber. After pumping and baking the chamber to a pressure $<10^{-8}$ mbar, a rubidium ampule inside a bellow connected to the chamber is broken. We control the rubidium vapor pressure by the temperature of the bellow (reservoir), whereas we keep the chamber temperature always at a higher level (typically $\Delta T = 20$ K) to avoid condensation on the chip surface. 
In our experiments, we focus a 780 nm laser through the substrate onto the input grating coupler of a specific device (see figure \[fig:fig1\](a)). The output coupler of this device is imaged onto a 100 $\rm \mu{}m$ pinhole to suppress any background light coming from the vicinity of this port. After the pinhole we detect the signal with a photomultiplier tube (PMT). For the chip used here we utilized standard couplers which are not yet optimized for mode matching between a focused Gaussian beam and the waveguide mode but rather for direct coupling from a fiber tip. With direct coupling a transmission of up to 6 % is achieved for the presented structures, whereas a record coupling efficiency of -0.62 dB has been reported for grating couplers [@Berroth2014]. Using a focused Gaussian beam, the overall efficiency for the simple waveguide (the fraction of the light sent to the chip that is detected at the PMT) is $2.5\times10^{-3}$. The waveguide quality was measured in the NIR and a propagation loss as low as 21 dB/m was obtained [@Gruhler2013]. In the visible regime the losses are usually slightly increased; however, they are still negligibly small, especially compared to the absorption due to the atomic vapor. ![Absorptive features for the simple waveguide. (a) Absorption spectra of the Rb D$_2$ line for various atomic densities $n_g = 1.8\times10^{12}\textrm{cm}^{-3}$ (blue), $6.8\times10^{12}\textrm{cm}^{-3}$ (green), $1.3\times10^{13}\textrm{cm}^{-3}$ (red). The traces are normalized to the off-resonant transmission. The lines in darker color show the fit of the theory to the data. (b) Optical depth as a function of reservoir temperature and detuning calculated from our model. Dashed lines indicate the positions of the data from (a). 
At a temperature of $122^\circ$C an optical depth of 1 is achieved for the Rb$^{85}$ 5S$_{1/2}$, F$=3\rightarrow 5$P$_{3/2}$ transition.[]{data-label="fig:fig2"}](fig2) ![image](fig3) For a first characterization of our system and to examine solely its absorptive properties, we investigated a simple waveguide as shown in figure \[fig:fig1\](b) with a width of 1.1 $\rm \mu{}m$ and a height of 180 nm. By numerical simulation of the mode profile for this geometry, we infer that approximately 17% of the TE mode, which is the preferentially guided mode in our waveguide design, interacts with the atomic vapor. The uncovered length of the waveguide which is exposed to the atoms is $\sim$1.2 mm. In figure \[fig:fig2\](a) we show the normalized transmission spectrum of such a device while scanning over the Rb D$_2$ line for different atomic densities. For a reservoir temperature of $113^\circ$C we already achieve an optical density of 0.5. In principle, higher densities can be achieved by simply increasing the reservoir temperature. However, as described later, we observed additional losses in the devices, possibly due to some deposition of Rb on the waveguide surface. Therefore we kept the reservoir temperature well below the chip temperature to reduce the amount of condensation. The line shape of the spectrum exhibits some distinct deviation from the well-known conventional Rb D$_2$ spectrum. This is caused by the enhanced Doppler broadening due to the 1.6 times larger wave vector in the waveguide compared to free-space propagation, and by the limited transit time of the atoms traveling through the evanescent field. To model our experimental data, we start with a finite-element analysis (COMSOL) to obtain the mode profile and propagation constant of the waveguide. With this information we can calculate the effective susceptibility $\chi_{\textrm{eff}}$ of the atoms surrounding the waveguide, as described in previous works [@Levy2013; @Nienhuis1988; @Guo1994]. 
In this calculation we neglect any saturation effects, but include Doppler broadening and transit time broadening as well as self broadening. Next, we add a cladding material with the complex refractive index $n_{\textrm{Rb}}=\sqrt{1+\chi_{\textrm{eff}}}$ to the waveguide in the COMSOL simulation. By running a frequency sweep with the detuning $\Delta$ around the center of the D$_2$ line, we obtain the complex propagation constant $\beta_{\textrm{Rb}}$. The transmission spectrum of the waveguide can then be calculated as $$T = I_0 \times e^{-2\operatorname{Im}(\beta_{\textrm{Rb}}) L}, \label{eq:trans}$$ where $I_0$ is the intensity of the in-coupled light, and $L$ is the length of the interaction region. As shown in figure \[fig:fig2\](a), this model fits well to our experimental data, where the frequency scaling, the center of the detuning and the Rb density are free fit parameters. In figure \[fig:fig2\](b) we use our model to extrapolate the optical depth (O.D.) for a range of temperatures. From this it follows that we can reach an O.D. of 1 at $122^\circ$C for the Rb$^{85}$ 5S$_{1/2}$, F$=3\rightarrow 5$P$_{3/2}$ transition and a waveguide with a length of 1.2 mm. The second type of devices we investigated is a sub-mm Mach-Zehnder interferometer as shown in figure \[fig:fig1\](c) and (d). Here the in-coupled light is split by means of a 50/50 Y-branch into two arms with a path difference of 2.2 mm. The shorter arm (1) is completely covered with SiO$_2$, whereas the longer arm (2) is only partially covered, therefore offering a 2 mm long region for the guided mode to interact with the atoms. The modes from both arms are combined with a second Y-branch and guided to an output coupler. Besides the phase difference due to the unequal arm lengths, the light in the uncovered arm is picking up some additional phase caused by the real part of the susceptibility of the surrounding atoms. 
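The single-waveguide transmission model of equation \[eq:trans\] can be sketched numerically. In the toy version below, a Lorentzian stand-in replaces the full Doppler- and transit-time-broadened $\chi_{\textrm{eff}}$ from the COMSOL mode solve, and all parameter values are invented for illustration, not the fitted experimental values:

```python
import numpy as np

# Toy parameters (illustrative only; not the fitted experimental values)
L = 1.2e-3                         # uncovered interaction length [m]
beta0 = 2 * np.pi * 1.6 / 780e-9   # propagation constant for n_eff ~ 1.6 [1/m]
gamma = 2 * np.pi * 500e6          # effective linewidth (Doppler + transit) [rad/s]
chi0 = 3e-5                        # peak effective (mode-averaged) susceptibility

# Detuning axis around the line center [rad/s]
delta = 2 * np.pi * np.linspace(-4e9, 4e9, 801)

# Lorentzian stand-in for chi_eff(Delta): dispersive real part,
# absorptive imaginary part
chi = chi0 * (gamma / 2) * (-delta + 1j * gamma / 2) / (delta**2 + (gamma / 2)**2)

# Rb-clad propagation constant and transmission, T = I0 * exp(-2 Im(beta) L)
beta_rb = beta0 * np.sqrt(1 + chi)
T = np.exp(-2 * np.imag(beta_rb) * L)  # with I0 = 1
```

With these numbers the on-resonance optical depth $2\operatorname{Im}(\beta_{\textrm{Rb}})L$ comes out near 0.5, i.e. the same order as the O.D. quoted above for a $113^\circ$C reservoir; the full model replaces the Lorentzian by the simulated effective susceptibility.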
Also some of the light in the uncovered arm gets absorbed on resonance due to the imaginary part of the susceptibility. In total, this leads to a dispersive modulation of the MZI transmission as shown in figure \[fig:fig3\] for different atomic densities. The transmission of the MZI can be calculated as $$T_{\textrm{MZI}} = \left|U_1 e^{i(\beta_1l_1+\phi_0)}+U_2 e^{i(\beta_1(l_2-l_{\textrm{I}})+\beta_{\textrm{Rb}}l_{\textrm{I}})}\right|^2, \label{eq:MZI}$$ which describes the interference at the combining Y-branch. Here $U_1$, $l_1$ and $U_2$, $l_2$ are the light amplitudes and lengths of arm 1 and arm 2, respectively. The length of the interaction region is denoted by $l_{\textrm{I}}$. The propagation constant for a waveguide with SiO$_2$ cladding, as is the case for arm 1, is denoted with $\beta_1$, whereas $\beta_{\textrm{Rb}}$ is the complex propagation constant for a Rb cladding, as described earlier. With $\phi_0$ we account for a phase offset due to a temperature-dependent change of the arm lengths. A fit of this model to our data is also plotted in figure \[fig:fig3\](b) and shows excellent agreement, with the Rb density, the amplitudes and the phase offset as the only free fit parameters. Figure \[fig:fig3\](a) shows the MZI transmission for a larger detuning range with and without contribution from the atoms. From the fitted curves we deduce that the amplitude in arm 2 is approximately ten times smaller than the amplitude in arm 1 and decreasing during the course of the experiment, thus causing a smaller visibility than expected from a 50/50 beam splitter. We attribute this behavior, on the one hand, to the larger length of arm 2 and therefore higher propagation losses. On the other hand, it seems that some of the Rb atoms stick to the uncovered waveguide surface, additionally increasing the losses in this arm. A similar behavior was also found with the simple waveguide structures, where we observed decreasing transmission over time. 
The most intriguing feature of a Mach-Zehnder interferometer is of course its ability to measure phase shifts. ![Additional phase shift in the MZI. (a) The bright traces show the phase shift extracted from the data for atomic densities $n_g = 2.4\times10^{13}\textrm{cm}^{-3}$ (red), $1.4\times10^{13}\textrm{cm}^{-3}$ (green), $0.6\times10^{13}\textrm{cm}^{-3}$ (blue). The dark curves are the corresponding calculated phase shifts for the parameters obtained from the fits in figure \[fig:fig3\]. (b) Calculated additional phase shift by the atoms modulo $2\pi$ as a function of detuning and reservoir temperature. Dashed lines indicate the positions of the data from (a).[]{data-label="fig:fig4"}](fig4) We can now extract the phase shift $\Delta\varphi$ due to the presence of the atoms from our data by transforming equation \[eq:MZI\] and subtracting the phase shift of the bare MZI: $$\begin{aligned} \Delta\varphi&=&\cos^{-1}\left(\frac{T_{\textrm{MZI}}-\left|U_1\right|^2-\left|U_2\right|^2\exp\left(-2\operatorname{Im}(\beta_{\textrm{Rb}})l_{\textrm{I}}\right)}{2U_1U_2\exp\left(-\operatorname{Im}(\beta_{\textrm{Rb}})l_{\textrm{I}}\right)}\right)\nonumber\\ & &-\left[\beta_1l_1-(\beta_1(l_2-l_{\textrm{I}})+\beta_0l_{\textrm{I}})\right], \label{eq:phase}\end{aligned}$$ where $\beta_0$ is the propagation constant in the waveguide without cladding (vacuum). Figure \[fig:fig4\](a) shows the phase shifts corresponding to the data in figure \[fig:fig3\]. For the data with the highest atomic density of $n_g = 2.4\times10^{13}\textrm{cm}^{-3}$, the light experiences an additional phase shift of up to $0.15\times\pi$. In figure \[fig:fig4\](b) the calculated atomic phase shift for this particular device is shown as a function of the reservoir temperature. Again, we can extrapolate from our model that an additional phase of $\pi$ is reached at a temperature of 160$^\circ$C, corresponding to an atomic density of $1.7\times10^{14}\textrm{cm}^{-3}$.
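Inverting equation \[eq:phase\] for $\Delta\varphi$ amounts to normalizing the measured transmission by the interference term and taking an arccosine. A minimal round-trip sketch with invented parameters (not the fitted device values):

```python
import math

def atomic_phase(t_mzi, u1, u2, im_beta_rb, l_int, bare_phase):
    """Extract the additional atomic phase from a measured MZI transmission
    by inverting Eq. [eq:phase]. bare_phase is the phase difference of the
    bare interferometer, beta1*l1 - (beta1*(l2 - l_int) + beta0*l_int)."""
    att = math.exp(-im_beta_rb * l_int)          # field attenuation in arm 2
    cos_arg = (t_mzi - u1**2 - (u2 * att)**2) / (2.0 * u1 * u2 * att)
    cos_arg = max(-1.0, min(1.0, cos_arg))       # guard against noise
    return math.acos(cos_arg) - bare_phase

# Round trip: build T_MZI for a known total phase difference of 0.4 rad,
# then recover it (bare_phase set to zero for simplicity).
u1, u2, im_beta, l_int, phi = 1.0, 0.5, 100.0, 2e-3, 0.4
att = math.exp(-im_beta * l_int)
t = u1**2 + (u2 * att)**2 + 2.0 * u1 * u2 * att * math.cos(phi)
print(atomic_phase(t, u1, u2, im_beta, l_int, 0.0))   # ~0.4
```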
Naturally, a higher density is accompanied by stronger absorption, but since the absolute value of the real part of the susceptibility is largest in the wings of the absorption lines, the off-resonant phase shift persists without much attenuation. The phase sensitivity of these devices can easily be increased by lengthening the uncovered arm or by decreasing the mode confinement. In conclusion, we have presented a hybrid system consisting of thermal alkali vapor and integrated photonic structures on a chip. Our optical chip houses several devices featuring grating couplers for flexible addressing of the individual devices. In future chip designs we will match the design of these couplers to our experimental conditions to improve the coupling efficiency. The transmission spectra of the simple strip waveguide revealed absorption of the evanescent tail for various atomic densities and showed line broadening due to the Doppler effect and the short transit time of the atoms through the evanescent field. To reach narrower lines, two-photon spectroscopy (e.g. electromagnetically induced transparency, EIT) can be utilized to cancel the Doppler shift, whereas the transit-time broadening could be reduced by adding buffer gas. In addition to absorptive measurements we performed phase-sensitive measurements using an integrated Mach-Zehnder interferometer and could extract the atomic phase shift from our data for different Rb densities. By numerically simulating the light propagation in a waveguide surrounded by an atomic vapor cladding with a complex effective refractive index, we could reproduce the experimental data for both types of devices with excellent agreement. Over time we witnessed some degradation of the waveguide structures, possibly due to the build-up of a Rb layer on their surface. This process appears to be partially reversible: after we cool down the Rb reservoir but keep the chip at $\sim 200^\circ$C, the transmission increases again.
For future chip generations we will investigate alternative materials and protective coatings to increase the lifetime of the devices. Additionally, we aim to include the optical chips in a vapor cell using anodic bonding [@Daschner2014], which is a further step towards integration and miniaturization and also allows for better temperature control, thereby reducing the risk of Rb condensation on the waveguides. This hybrid system opens the door for future experiments with various waveguide geometries. The small mode area of the evanescent field, on the order of $\lambda$ along the waveguide, allows for systematic studies of interaction effects like self-broadening in the one-dimensional case, as has been shown for two dimensions in a thin vapor cell [@Keaveney2012]. These interactions could also be utilized to add nonlinearity, e.g. to a system of coupled ring resonators, which on their own create a synthetic gauge field for photons in the non-interacting regime [@Hafezi2013]. We acknowledge support by the ERC under contract number 267100 and the Deutsche Forschungsgemeinschaft (DFG) with the project number LO1657/2. R.R. acknowledges funding from the Landesgraduiertenförderung Baden-Württemberg.
[1] doi:10.1103/PhysRevLett.109.230801
[2] doi:10.1103/PhysRevLett.108.090402
[3] doi:10.1103/RevModPhys.81.1051
[4] doi:10.1063/1.4801756
[5] doi:10.1063/1.3255041
[6] doi:10.1364/OL.37.002247
[7] doi:10.1103/PhysRevLett.110.160802
[8] doi:10.1103/PhysRevLett.27.707
[9] doi:10.1016/0030-4018(94)90221-6
[10] doi:10.1063/1.1787942
[11] doi:10.1038/ncomms5132
[12] doi:10.1103/PhysRevA.81.053825
[13] doi:10.1007/s00340-011-4730-x
[14] doi:10.1103/PhysRevLett.105.173602
[15] doi:10.1103/PhysRevLett.100.233602
[16] doi:10.1038/nphoton.2009.260
[17] doi:10.1103/PhysRevLett.108.173601
[18] doi:10.1038/nphoton.2007.74
[19] doi:10.1002/lpor.200900040
[20] doi:10.1038/ncomms2554
[21] doi:10.1103/PhysRevA.38.5197
[22] doi:10.1364/OE.21.031678
[23] doi:10.1364/OE.22.001277
[24] doi:10.1016/0030-4018(94)90196-1
[25] doi:10.1063/1.4891534
[26] doi:10.1038/nphoton.2013.274
--- abstract: 'The specific heat and thermal conductivity of the insulating ferrimagnet Y$_3$Fe$_5$O$_{12}$ (Yttrium Iron Garnet, YIG) single crystal were measured down to 50 mK. The ferromagnetic magnon specific heat $C_m$ shows a characteristic $T^{1.5}$ dependence down to 0.77 K. Below 0.77 K, a downward deviation is observed, which is attributed to the magnetic dipole-dipole interaction with a typical magnitude of 10$^{-4}$ eV. The ferromagnetic magnon thermal conductivity $\kappa_m$ does not show the characteristic $T^2$ dependence below 0.8 K. To fit the $\kappa_m$ data, both the magnetic defect scattering effect and the dipole-dipole interaction are taken into account. These results complete our understanding of the thermodynamic and thermal transport properties of the low-lying ferromagnetic magnons.' author: - 'B. Y. Pan, T. Y. Guan, X. C. Hong, S. Y. Zhou, X. Qiu, H. Zhang, and S. Y. Li$^*$' title: Specific heat and thermal conductivity of ferromagnetic magnons in Yttrium Iron Garnet --- Introduction ============ Recently, the ferrimagnetic insulator Y$_3$Fe$_5$O$_{12}$ (Yttrium Iron Garnet, YIG) has drawn great attention due to its ability to transport spin current over long distances.[@YK; @HiK] In these experiments, electronic signals can be transferred by the spin angular momentum of the spin waves in insulating YIG, via the spin Hall and inverse spin Hall effects, which provides a new method to transfer information by pure spin waves.[@YK; @HiK] In this context, it is important to understand the thermodynamic and transport properties of the spin waves. In spin wave theory for magnets, the quanta of spin waves are magnons. There are antiferromagnetic (AFM) and ferromagnetic (FM) magnons, which have totally different dispersion relations, and distinct thermodynamic and transport properties.
The AFM magnons have a linear dispersion relation, so both their specific heat and boundary-limited thermal conductivity at low temperature obey a $T^3$ dependence.[@UR; @SYL] For FM magnons, the situation is more complex due to the existence of the magnetic dipole-dipole interaction (MDDI).[@IO; @SHC] At not very low temperature, the dispersion relation $E = Dk^2$ is a good approximation, and the specific heat and boundary-limited thermal conductivity of FM magnons show the characteristic $T^{1.5}$ and $T^2$ dependence, respectively.[@HS; @AIA; @AK] The equations are $$C_m(T)=\frac{15\zeta(5/2)k_B^{2.5}T^{1.5}}{32{\pi}^{1.5}D^{1.5}}$$ and $$\kappa_m(T)=\frac{\zeta(3)k_B^3LT^2}{{\pi}^2{\hbar}D},$$ where $\zeta$ is the Riemann zeta function and $L$ is the boundary-limited mean free path.[@UR; @HS; @AIA; @AK] However, in the sub-Kelvin temperature range, the MDDI $$H_{d-d}=2\mu_B^2\sum_{i\neq{j}}\frac{r_{ij}^2(\mathbf{S}_i\cdot\mathbf{S}_j)-3(\mathbf{r}_{ij}\cdot\mathbf{S}_i)(\mathbf{r}_{ij}\cdot\mathbf{S}_j)}{r_{ij}^5},$$ with a typical magnitude of order $10^{-4}$ eV, has to be considered for FM magnons.[@VC] It significantly modifies the dispersion relation of FM magnons below 1 K, and makes the approximate form $E = Dk^2$ no longer valid.[@TH] The MDDI is long range and anisotropic. It is a basic interaction in magnets and critical for many phenomena such as the demagnetization factors and the formation of domain walls in ferromagnets, the spin ice behavior in Ising pyrochlore magnets, and the spin anisotropy in ferromagnetic films.[@NWA; @BCDH; @JGG] Theoretically, when the effect of the MDDI is taken into account for FM magnons, both the $T^{1.5}$ dependence of $C_m$ and the $T^2$ dependence of $\kappa_m$ change.[@IO; @SHC; @VC; @DCM] Yet so far experimental verification of this effect on FM magnons is still lacking. YIG is an archetypical ferrimagnetic insulator.
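Eq. (1) can be checked numerically: its $T^{1.5}$ prefactor follows directly from $D$. A minimal sketch (the zeta value is hard-coded to avoid a special-function dependency; the stiffness used here is the value of order $5\times10^{-36}$ J cm$^2$ found for YIG later in this paper):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
ZETA_5_2 = 1.3414872573   # Riemann zeta(5/2)

def cm_coefficient(d):
    """Prefactor a in C_m = a * T^1.5 (J cm^-3 K^-2.5) from Eq. (1),
    for a spin-wave stiffness d in J cm^2."""
    return 15.0 * ZETA_5_2 * K_B**2.5 / (32.0 * math.pi**1.5 * d**1.5)

a = cm_coefficient(5.2e-36)
print(a * 1e6)   # ~6.7 (in uJ / cm^3 K^2.5)
```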
Its low-energy magnetic excitations are FM magnons, because at low temperature the important spin-wave branch in a ferrite has the same form as in a ferromagnet.[@HK; @DD] The study of this interesting and complex compound started in the late 1950s, and it has become indispensable for investigating the properties of magnets since then.[@VC] In fact, YIG is the first material in which thermal conductivity contributed by magnetic excitations was observed.[@BL] At low temperature, its magnon specific heat and thermal conductivity can even exceed those contributed by phonons.[@BL; @RLD; @DW] Therefore, YIG is an ideal compound to study the thermodynamic and transport properties of FM magnons. The previous lowest temperature for specific heat measurements on YIG was 1 K.[@TDE; @JEK] The magnon specific heat $C_m(T)$ between 1 and 4 K showed the characteristic $T^{1.5}$ dependence.[@TDE; @JEK] The thermal conductivity of YIG was roughly measured down to 0.23 K.[@DW] The magnon thermal conductivity $\kappa_m(T)$ between 0.23 and 1 K did not obey the characteristic $T^2$ dependence, and this was explained by considering the effect of magnetic defect scattering.[@DW; @JC; @JC2] In this paper, we present specific heat and thermal conductivity measurements of a YIG single crystal down to 50 mK. The magnon specific heat data deviate downward from the $T^{1.5}$ dependence below 0.77 K, which is attributed to the MDDI effect. Below 0.2 K, the magnon thermal conductivity data cannot be fitted by considering only the boundary and magnetic defect scatterings, and the MDDI effect has to be taken into account. To our knowledge, this is the first experimental observation of the MDDI effect on FM magnon specific heat and thermal conductivity, giving a complete understanding of the thermodynamic and thermal transport properties of the low-lying FM magnons. Experiment ========== ![(Color online) X-ray diffraction pattern for the (332) plane of YIG single crystal.
Inset: rocking curve of the (664) reflection. The two peaks are from the Cu $K_{\alpha 1}$ and $K_{\alpha 2}$ radiations, respectively.](Fig1.eps){width="8.5cm"} The YIG single crystal was grown in an optical floating-zone furnace. [@LLA; @SK; @SK2; @SK3; @AR] The single crystal grew along the \[332\] crystallographic direction, as characterized by X-ray diffraction. The ultra-low temperature specific heat measurement was carried out on a sample with mass 45.55 mg in a small dilution refrigerator adapted into a Physical Property Measurement System (PPMS, Quantum Design). Sample 1 (S1) was cut from the single crystal for thermal conductivity measurements. It is rectangular, with in-plane dimensions 2.08$\times$0.84 mm$^2$ and a thickness of 0.63 mm along the \[332\] direction (the sample growth direction). Sample 2 (S2) was obtained by thinning S1 to 0.23 mm. For both samples, the heat current was along the \[11$\overline{3}$\] direction. Ultra-low temperature thermal conductivity was measured in a dilution refrigerator (Oxford Instruments), using a standard four-wire steady-state method with two RuO$_2$ chip thermometers, calibrated $in$ $situ$ against a reference RuO$_2$ thermometer. Four contacts were made on the sample surface with silver epoxy. Magnetic fields were applied parallel to the heat current. Results and Discussion ====================== The quality of our YIG single crystal was characterized by X-ray diffraction (XRD), as shown in Fig. 1. The main panel is the XRD pattern of the (332) plane. The inset shows the rocking curve of the (664) reflection, with two peaks from the Cu $K_{\alpha 1}$ and $K_{\alpha 2}$ radiations, respectively. The full width at half maximum (FWHM) of the peak from Cu $K_{\alpha 1}$ is only 0.07$^{\circ}$, indicating the high quality of the crystal. ![(Color online) Specific heat of YIG single crystal. The data between 0.77 and 2.5 K can be fitted by the solid line $C = 6.7T^{1.5}+2.3T^3$.
Below 0.77 K the curve deviates from the solid line, which is attributed to the effect of the magnetic dipole-dipole interaction. Note that the upturn below 0.38 K is the Schottky anomaly from nuclear moments.](Fig2.eps){width="7.5cm"} The specific heat of the YIG single crystal below 2.5 K is shown in Fig. 2. In the figure, $C/T^{1.5}$ is plotted as a function of $T^{1.5}$ in order to separate the phonon and magnon contributions. Between 0.77 and 2.5 K, the total specific heat can be fitted by $C = aT^{1.5}+bT^3$, in which the first term is from the FM magnons and the second term is from the phonons. From the fitted coefficient $a$ = 6.7 $\mu$J/cm$^3$ K$^{2.5}$, we get $D$ = 5.2$\times$10$^{-36}$ J cm$^2$ according to Eq. (1). This value is very close to Edmonds and Petersen’s result $D$ = 5.1$\times$10$^{-36}$ J cm$^2$.[@TDE] Below 0.77 K, however, there is an apparent deviation from the solid fitting line. As described above, the MDDI affects the specific heat of FM magnons below 1 K by modifying the dispersion relation. The modified dispersion relation was proposed in the following form[@TH] $$E(k)=\sqrt{(Dk^2-N_z\hbar\omega_m)(Dk^2-N_z\hbar\omega_m+\hbar\omega_m\sin^2\theta_k)},$$ where $\theta_k$ is the angle between the magnon wave vector and the magnetization direction, $N_z$ is the $z$ demagnetization factor, and $\hbar\omega_m = g\mu_B 4\pi M$. The dispersion relation of Eq. (3) is plotted in Fig. 3. The approximate dispersion relation $E = Dk^2$ is only valid for $k_BT \gg \hbar\omega_m$. For YIG, $4\pi M$ = 2449 G,[@DTE; @MAG] $g$ = 2,[@EPW] so $\hbar\omega_m = g\mu_B 4\pi M = 0.32k_B$. Therefore, the $T^{1.5}$ dependence of $C_m$ should only hold for $T \gg 0.32$ K. From our experimental data in Fig. 2, $C_m$ shows the $T^{1.5}$ dependence above 0.77 K. The downward deviation below 0.77 K should come from the effect of the MDDI. Note that the upturn below 0.38 K is the Schottky anomaly from nuclear moments.
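The separation of the two contributions works because, plotted as $C/T^{1.5}$ vs $T^{1.5}$, the model $C = aT^{1.5}+bT^3$ is a straight line with intercept $a$ and slope $b$. A sketch on synthetic data generated from the fitted coefficients (not the measured curve):

```python
# Synthetic specific-heat data from the fitted coefficients
# a = 6.7 uJ/cm^3 K^2.5 (magnons) and b = 2.3 uJ/cm^3 K^4 (phonons).
a_true, b_true = 6.7, 2.3
temps = [0.8 + 0.1 * i for i in range(18)]          # 0.8 ... 2.5 K
c = [a_true * t**1.5 + b_true * t**3 for t in temps]

# In the C/T^1.5 vs T^1.5 representation the model is linear: y = a + b*x,
# so a simple least-squares line separates magnon and phonon terms.
x = [t**1.5 for t in temps]
y = [ci / xi for ci, xi in zip(c, x)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b_fit = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
a_fit = my - b_fit * mx
print(a_fit, b_fit)   # recovers 6.7 and 2.3
```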
![(Color online) The dispersion relation of FM magnons at low energy when considering the magnetic dipole-dipole interaction effect:[@IO] $E(k)$ = $\sqrt{(Dk^2-N_z\hbar\omega_m)(Dk^2-N_z\hbar\omega_m+\hbar\omega_m\sin^2\theta_k)}$. It strongly deviates from the $E = Dk^2$ approximation below the energy $\hbar\omega_m = g\mu_B4\pi M$. For YIG, $\hbar\omega_m$ = 0.32$k_B$.](Fig3.eps){width="9cm"} Next we discuss the thermal conductivity results. Figure 4(a) shows the ultra-low temperature thermal conductivity of YIG in magnetic fields up to 11 T, plotted as $\kappa/T$ vs $T$. One can see that $\kappa$ is strongly suppressed by the field, as previously reported.[@BL; @RLD; @DW] In insulating YIG, the total thermal conductivity can be expressed as $\kappa = \kappa_{ph} + \kappa_m$, in which $\kappa_{ph}$ and $\kappa_m$ are the phonon and magnon thermal conductivity, respectively. Since $\kappa_{ph}$ is usually not affected by a magnetic field, the rapid suppression of $\kappa$ with field in Fig. 4(a) should come from the reduction of $\kappa_m$. As we know, the external magnetic field $H$ opens a gap $\Delta = g\mu_B H$ in the magnon spectrum. When the external field is high enough to satisfy $g\mu_B H \gg k_BT$, there is no magnon contribution. The saturation of the thermal conductivity from $H$ = 4 to 11 T shown in Fig. 4(a) suggests that only the phonon contribution is left below 0.8 K in $H \geq$ 4 T, consistent with previous reports.[@RLD; @DW] Therefore the FM magnon thermal conductivity $\kappa_m$ in zero field can be extracted by subtracting $\kappa$(4 T) from $\kappa$(0 T). In Fig. 4(b), $\kappa_m$ is plotted as $\kappa_m/T$ vs $T$. For AFM magnons, a ballistic boundary-limited $\kappa_m = aT^3$ was observed below 0.5 K.[@SYL] Apparently, for FM magnons in Fig. 4(b), there is no such simple power-law temperature dependence. ![(Color online) (a) Thermal conductivity of YIG single crystal in magnetic fields up to 11 T.
In $H \geq$ 4 T, $\kappa$ tends to saturate. (b) Zero-field thermal conductivity of FM magnons obtained by subtracting $\kappa$(4 T) from $\kappa$(0 T). The dashed line is the fit of the data between 0.2 and 0.8 K, considering the boundary scattering and magnetic defect scattering. The deviation below 0.2 K is attributed to the MDDI effect. The solid line is the fit of the data below 0.12 K including the MDDI effect.](Fig4.eps){width="7.5cm"} ![(Color online) Thermal conductivity of YIG samples S1 and S2 in zero field and $H$ = 4 T. S2 was obtained by thinning S1 from 0.63 to 0.23 mm.](Fig5.eps){width="7.5cm"} Theoretically, the thermal conductivity of magnons is calculated by the equation $$\kappa=\frac{k_B}{24{\pi}^3\hbar}{\int}\left(\frac{E}{k_BT}\right)^2\frac{e^{E/k_BT}}{(e^{E/k_BT}-1)^2}(\nabla_kE){\ell}\,d^3k,$$ where $\ell$ is the mean free path of the magnons. At not very low temperature, if the approximate dispersion relation $E = Dk^2$ of FM magnons is taken and only boundary scattering is considered, we get the $T^2$ dependence of $\kappa_m$ in Eq. (2). Therefore, the anomalous temperature dependence of $\kappa_m$ in Fig. 4(b) must come from the MDDI effect or some other scattering mechanism. Previously, additional magnetic defect scattering was considered, and the $\kappa_m$ data between 0.23 and 1 K could be well fitted.[@DW] In this case, $\ell^{-1}$ can be expressed as the sum of two terms $$\ell^{-1}=L^{-1}+\ell_D^{-1},$$ where $L^{-1}$ is from boundary scattering and $\ell_D^{-1}$ is from magnetic defect scattering with $\ell_D^{-1} = \alpha k^4$. In this way, the magnon thermal conductivity is $$\kappa_m(T)=BT^2\int_{0}^{\infty}\frac{x^3\operatorname{csch}^2(\frac{1}{2}x)\,dx}{1+{\beta}T^2x^2},$$ where $x=E/k_BT$, $B = \frac{\zeta(3)k_B^3L}{{\pi}^2{\hbar}D}$, and $\beta = \frac{{\alpha}Lk^2_B}{D^2}$. By using this model, our $\kappa_m$ data between 0.2 and 0.8 K can also be well fitted, with the parameters $B$ = 0.056 mW/K$^3$ cm and $\beta$ = 0.15 K$^{-2}$.
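Eq. (6) is straightforward to evaluate numerically. A sketch using a simple trapezoidal rule with the fitted parameters $B$ and $\beta$ as defaults (the grid size and upper cutoff are arbitrary numerical choices; as a check, for $\beta = 0$ the integral reduces to $24\zeta(3)\approx 28.85$):

```python
import math

def kappa_m(t, b=0.056, beta=0.15, x_max=60.0, n=20000):
    """Eq. (6): kappa_m = B*T^2 times the integral over x = E/(k_B*T) of
    x^3 csch^2(x/2) / (1 + beta*T^2*x^2). Units follow B (here mW/K cm).
    Trapezoidal rule on a uniform grid; the integrand vanishes at x = 0."""
    dx = x_max / n
    total, prev = 0.0, 0.0          # integrand(0) = 0
    for i in range(1, n + 1):
        x = i * dx
        f = x**3 / (math.sinh(0.5 * x)**2 * (1.0 + beta * t * t * x * x))
        total += 0.5 * (prev + f) * dx
        prev = f
    return b * t * t * total

# Defect scattering (beta > 0) suppresses kappa_m relative to pure
# boundary scattering (beta = 0).
print(kappa_m(0.5))
print(kappa_m(1.0, b=1.0, beta=0.0))   # ~28.85 = 24*zeta(3)
```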
Using the obtained value of $B$, together with $D$ = 5.2$\times$10$^{-36}$ J cm$^2$, we get the boundary-limited mean free path $L$ = 32.7 $\mu$m, which we will discuss later. However, the magnetic defect scattering model cannot fit the $\kappa_m$ data below 0.2 K, as seen in Fig. 4(b). We further consider the effect of the MDDI. Since the dispersion relation Eq. (3) is too complex for calculation, Ortenburger and Sparks chose a simplified dispersion relation[@IO] $$E=Dk^2+ck_B.$$ Substituting this dispersion relation into Eq. (4) results in $$\kappa_m=BT\int_{0}^{\infty}\frac{x^2\operatorname{csch}^2(\frac{1}{2}x)(Tx+c)}{1+\beta(Tx+c)^2}\,dx.$$ With the $B$ and $\beta$ values obtained above, the $\kappa_m$ data can be fitted well below 0.12 K, as shown in Fig. 4(b). We want to emphasize that we tried to fit the $\kappa_m$ data below 0.8 K with only boundary scattering and the MDDI effect, without considering the magnetic defect scattering, but it did not work. Therefore, the magnetic defect scattering mechanism is necessary to explain the low-temperature $\kappa_m$ data in YIG. Comparing the results of $C_m$ and $\kappa_m$, the MDDI starts to affect the thermal conductivity at a lower temperature than the specific heat. This may be related to the involvement of magnetic defect scattering in the thermal conductivity. Finally, we discuss the boundary-limited mean free path $L$ of FM magnons in YIG. From Fig. 4(b), $L$ = 32.7 $\mu$m is obtained by the fitting. This value is one order of magnitude smaller than the expected $L$ = 2$\sqrt{A/\pi}$ = 727 $\mu$m for sample S1, with $A$ being the cross-section area.[@SYL2] Such a phenomenon has been observed previously.[@BL; @RLD; @SAF] Friedberg and Harris speculated that it is due to the inner boundaries of thin layers rich in Fe$^{\rm{2+}}$ inside the sample.[@SAF] To test this idea, we performed the same thermal conductivity measurements on sample S2, obtained by thinning S1 from 0.63 to 0.23 mm. In Fig.
5, the 0 and 4 T data of S2 are almost identical to those of S1, indicating the same $\kappa_m$ in S1 and S2. This result shows that $\kappa_m$ indeed does not change with the sample boundary; therefore the actual boundaries are inside the sample, likely the thin layers proposed by Friedberg and Harris.[@SAF] These thin layers may form during the growth process. Summary ======= In summary, by extending the measurements of the specific heat and thermal conductivity down to 50 mK, we investigate the thermodynamic and transport properties of the low-lying ferromagnetic magnons in a YIG single crystal. The deviation of the magnon specific heat $C_m(T)$ from the characteristic $T^{1.5}$ dependence below 0.77 K is attributed to the effect of the magnetic dipole-dipole interaction. The magnon thermal conductivity $\kappa_m(T)$ is extracted by subtracting $\kappa$(4 T) from $\kappa$(0 T). Below 0.8 K, $\kappa_m(T)$ does not obey the characteristic $T^2$ dependence due to the magnetic defect scattering. With further decreasing temperature, the magnetic dipole-dipole interaction also affects $\kappa_m(T)$ below 0.2 K. Our work provides a complete understanding of the thermodynamic and transport properties of the low-lying ferromagnetic magnons. [**ACKNOWLEDGEMENTS**]{} This work is supported by the Natural Science Foundation of China, the Ministry of Science and Technology of China (National Basic Research Program No: 2009CB929203 and 2012CB821402), and the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning.\ $^*$ E-mail: shiyan$\_$li@fudan.edu.cn [99]{} Y. Kajiwara, K. Harii, S. Takahashi, J. Ohe, K. Uchida, M. Mizuguchi, H. Umezawa, H. Kawai, K. Ando, K. Takanashi, S. Maekawa, and E. Saitoh, Nature **464**, 262 (2010). H. Kurebayashi, O. Dzyapko, V. E. Demidov, D. Fang, A. J. Ferguson, and S. O. Demokritov, Nature Mater. **10**, 660 (2011). U. R$\ddot{\rm{o}}$ssler, $Solid$ $State$ $Theory$ (Springer, Berlin, 2009). S. Y.
Li, Louis Taillefer, C. H. Wang, and X. H. Chen, Phys. Rev. Lett. **95**, 156603 (2005). I. Ortenburger and M. Sparks, Phys. Rev. **133**, A784 (1964). S. H. Charap, Phys. Rev. Lett. **13**, 237 (1964). H. Sato, Progr. Theoret. Phys. **13**, 119 (1955). A. I. Akhieser and L. A. Shishkin, Soviet Phys. -JETP **7**, 875 (1958). A. Kumar, Phys. Rev. B **25**, 3369 (1982). V. Cherepanov, I. Kolokolov, and V. L’vov, Phys. Rep. **229**, 81 (1993). T. Holstein and H. Primakoff, Phys. Rev. **58**, 1098 (1940). N. W. Ashcroft and N. D. Mermin, $Solid$ $State$ $Physics$ (Thomson Learning, London, 1976). B. C. den Hertog and M. J. P. Gingras, Phys. Rev. Lett. **84**, 3430 (2000). J. G. Gay and R. Richter, Phys. Rev. Lett. **56**, 2728 (1986). D. C. McCollum, R. L. Wild, and J. Callaway, Phys. Rev. **136**, A426 (1964). H. Kaplan, Phys. Rev. **86**, 121 (1952). D. Douthett and S. A. Friedberg, Phys. Rev. **121**, 1662 (1961). B. L$\ddot{\rm{u}}$thi, J. Phys. Chem. Solids **23**, 35 (1962). R. L. Douglass, Phys. Rev. **129**, 1132 (1963). D. Walton, J. E. Rives, and Q. Khalid, Phys. Rev. B **8**, 1210 (1973). D. T. Edmonds and R. G. Petersen, Phys. Rev. Lett. **2**, 499 (1959). J. E. Kunzler, L. R. Walker, and J. K. Galt, Phys. Rev. **119**, 1609 (1960). J. Callaway, Phys. Rev. **132**, 2003 (1963). J. Callaway and R. Boyd, Phys. Rev. **134**, A1655 (1964). L. L. Abernethy, T. H. Ramsey, and J. W. Ross, J. Appl. Phys. **32**, 376S (1961). S. Kimura and I. Shindo, J. Crystal Growth **41**, 192 (1977). S. Kimura, I. Shindo, K. Kitamura, Y. Mori, and H. Takamizawa, J. Crystal Growth **44**, 621 (1978). S. Kimura, K. Kitamura, and I. Shindo, J. Crystal Growth **65**, 543 (1983). A. Revcolevschi, U. Ammerahl, and G. Dhalene, J. Crystal Growth **198/199**, 593 (1999). D. T. Edmonds and R. G. Petersen, Phys. Rev. Lett. **4**, 92 (1960). M. A. Gilleo and S. Geller, Phys. Rev. **110**, 73 (1958). E. P. Wohlfarth and K. H. J.
Buschow, $Ferromagnetic$ $Materials$$:$ $A$ $Handbook$ $on$ $The$ $Properties$ $of$ $Magnetically$ $Ordered$ $Substances$, $Vol.$ $2$ (North Holland, Amsterdam, 1999). S. Y. Li, J.-B. Bonnemaison, A. Payeur, P. Fournier, C. H. Wang, X. H. Chen, and Louis Taillefer, Phys. Rev. B **77**, 134501 (2008). S. A. Friedberg and E. D. Harris, Proceedings of the Eighth International Conference on Low Temperature Physics (Butterworths, London, 1962), p. 302.
--- abstract: | The acoustic-to-word model based on the Connectionist Temporal Classification (CTC) criterion is a natural end-to-end (E2E) system directly targeting the word as its output unit. Two issues exist in the system: first, the current output of the CTC model relies on the current input and does not account for context-weighted inputs. This is the hard alignment issue. Second, the word-based CTC model suffers from the out-of-vocabulary (OOV) issue. This means it can model only frequently occurring words while tagging the remaining words as OOV. Hence, such a model is limited to recognizing only a fixed set of frequent words. In this study, we propose addressing these problems using a combination of attention mechanism and mixed-units. In particular, we introduce Attention CTC, Self-Attention CTC, Hybrid CTC, and Mixed-unit CTC. First, we blend attention modeling capabilities directly into the CTC network using Attention CTC and Self-Attention CTC. Second, to alleviate the OOV issue, we present Hybrid CTC, which uses a word CTC and a letter CTC with shared hidden layers. The Hybrid CTC consults the letter CTC when the word CTC emits an OOV. Then, we propose a much better solution by training a Mixed-unit CTC which decomposes all the OOV words into sequences of frequent words and multi-letter units. Evaluated on a 3400-hour Microsoft Cortana voice assistant task, our final acoustic-to-word solution using attention and mixed-units achieves a 12.09% relative reduction in word error rate (WER) over the vanilla word CTC. Such an E2E model without using any language model (LM) or complex decoder also outperforms a traditional context-dependent (CD) phoneme CTC with a strong LM and decoder by 6.79% relative.
author: - 'Amit Das,  Jinyu Li,  Guoli Ye,  Rui Zhao,  and Yifan Gong, [^1]' bibliography: - 'strings.bib' - 'refs.bib' title: 'Advancing Acoustic-to-Word CTC Model with Attention and Mixed-Units' --- CTC, OOV, acoustic-to-word, attention, end-to-end system, speech recognition Introduction {#sec: Intro} ============ In automatic speech recognition (ASR), we are given a sequence of acoustic feature vectors ${\mathbf{x}}$. The objective is to decode a sequence of words ${\mathbf{y}}$ from ${\mathbf{x}}$ with minimum probability of error. With the 0-1 loss function, the optimal solution uses the Bayesian Maximum A Posteriori (MAP) rule $$\begin{aligned} \hat{{\mathbf{y}}} &= \operatorname*{arg\,max}_{{\mathbf{y}}} \ P({\mathbf{y}}|{\mathbf{x}}; \Theta_{\text{ASR}}), \label{eq:asr_map} \\ &= \operatorname*{arg\,max}_{{\mathbf{y}}} \ P({\mathbf{x}}|{\mathbf{y}}; \Theta_{\text{AM}}) P({\mathbf{y}}; \Theta_{\text{LM}}). \label{eq:asr_am_lm}\end{aligned}$$ However, to reduce complexity, practical ASR systems often use the sub-optimal solution $$\begin{aligned} \hat{{\mathbf{y}}} &\approx \operatorname*{arg\,max}_{{\mathbf{y, l}}} \ P({\mathbf{x}}|{\mathbf{l}}; \Theta_{\text{AM}}) P({\mathbf{l}}|{\mathbf{y}}; \Theta_{\text{PM}}) P({\mathbf{y}}; \Theta_{\text{LM}}). \label{eq:asr_am_pm_lm}\end{aligned}$$ Here, ${\mathbf{l}}$ is a sequence of phonemes and $\Theta_{\text{ASR}} = \{\Theta_{\text{AM}}, \Theta_{\text{PM}}, \Theta_{\text{LM}}\}$ is the set of parameters to be estimated during training. The first term $P({\mathbf{x}}|{\mathbf{l}}; \Theta_{\text{AM}})$ in Eq. \eqref{eq:asr_am_pm_lm} is the likelihood of the features given the phoneme sequence and is obtained from an acoustic model (AM). The second term $P({\mathbf{l}}|{\mathbf{y}};\Theta_{\text{PM}})$ is the likelihood of the phoneme sequence given the word sequence and is obtained from a lexicon or pronunciation model (PM).
The third term $P({\mathbf{y}}; \Theta_{\text{LM}})$ is the prior probability of the word sequence and is obtained from a language model (LM). In theory, all of $\{\Theta_{\text{AM}}, \Theta_{\text{PM}}, \Theta_{\text{LM}}\}$ should be estimated jointly. However, in practice, they are estimated separately, and hence training an ASR system becomes a complex disjoint learning problem. Moreover, decoding at test time involves a complex graph search procedure which is intensive in both time and memory. This often makes traditional ASR systems cumbersome for deployment in real-world devices. In contrast, an end-to-end (E2E) ASR system [@Yu-RecentProgDeepLearningAcousticModels; @sak2015learning; @miao2015eesen; @Chan-LAS; @prabhavalkar2017comparison; @battenberg2017exploring; @sak2017recurrent; @hadiantowards; @chiu2018state; @sainath2017improving] circumvents the disjoint learning problem by directly transducing a sequence of features ${\mathbf{x}}$ to a sequence of words ${\mathbf{y}}$. Some widely used contemporary neural network based E2E approaches for sequence-to-sequence transduction are: (a) Connectionist Temporal Classification (CTC) [@Graves-CTCFirst; @Graves-E2EASR], (b) Recurrent Neural Network (RNN) Encoder-Decoder (ED) [@Cho-RNNEncDecSMT; @Bahdanau-RNNEncDecAlignTranslate; @Bahdanau-AttentionASR; @Chorowski-AttentionASR], and (c) RNN Transducer (RNN-T) [@Graves-RNNSeqTransduction]. These approaches have been successfully applied to large scale ASR [@sak2015learning; @miao2015eesen; @Lu2015StudyRNNED; @Chan-LAS; @soltau2016neural; @prabhavalkar2017comparison; @battenberg2017exploring; @rao2017exploring; @chiu2018state; @masumura2019largecontext; @moritz2019triggeredattention; @bahar2019onusing2ds2s; @xiang2019crfbased]. In this study, we confine ourselves to the CTC approach.
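The factorized decoding rule above can be illustrated with a toy scorer over an explicit hypothesis list; real decoders search a graph, and the log-probabilities below are invented for illustration only:

```python
def map_decode(hypotheses):
    """Pick the hypothesis maximizing log P(x|l) + log P(l|y) + log P(y),
    i.e. the log of the factorized AM * PM * LM score."""
    def score(h):
        return h["am_logp"] + h["pm_logp"] + h["lm_logp"]
    return max(hypotheses, key=score)

# Invented scores for two candidate word sequences: the LM penalizes the
# acoustically plausible but unlikely word sequence.
hyps = [
    {"words": "recognize speech",
     "am_logp": -10.0, "pm_logp": -1.0, "lm_logp": -2.0},
    {"words": "wreck a nice beach",
     "am_logp": -9.5, "pm_logp": -1.2, "lm_logp": -6.0},
]
print(map_decode(hyps)["words"])   # recognize speech
```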
CTC, first introduced in [@Graves-CTCFirst; @Graves-E2EASR], involves training a stack of underlying RNNs and minimizing the sequence-level cross-entropy (CE) loss $-\text{log}\ P({\mathbf{y}}|{\mathbf{x}})$. In contrast, RNN training minimizes the frame-level CE loss. Moreover, CTC networks offer the versatility to model output units of different sizes such as monophones, characters, words, or other sub-word units. Owing to this simplicity in the training structure and versatility of output units, CTC is regarded as one of the most popular E2E methods [@Hannun-DeepSpeech; @sak2015learning; @sak2015fast; @miao2015eesen; @kanda2016maximum; @soltau2016neural; @Zweig-AdvancesNeuralASR; @liu2017gram; @audhkhasi2017direct; @Li17CTCnoOOV; @Yu-RecentProgDeepLearningAcousticModels; @Li2018Speaker]. In ASR, the number of output labels in ${\mathbf{y}}$ is usually smaller than the number of input speech frames in ${\mathbf{x}}$. However, since a CTC network is essentially an RNN, it is forced to predict a label for every frame in ${\mathbf{x}}$. Since some frames may not be associated with any label, (a) CTC introduces a special *blank* label as an additional output label which acts as a filler, and (b) it allows for repetition of labels (both blank and non-blank). As a result, CTC frame-level outputs are usually dominated by blank labels. The outputs corresponding to the non-blank labels usually occur as spikes in their posteriors because of their high confidence. Thus, an easy way to convert intermediate frame-level outputs to final ASR outputs using CTC involves a simple two-step procedure. In the first step, generate a sequence of labels corresponding to the highest posteriors and merge consecutive duplicate labels. In the second step, remove the blank labels and concatenate the remaining non-blank labels into words. This is known as greedy decoding. It is a very attractive feature for E2E modeling as there is neither any LM nor any complex decoding involved. 
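The two-step greedy decoding procedure just described can be sketched in a few lines (a minimal sketch; the per-frame posteriors below are invented for illustration):

```python
# Greedy CTC decoding: (1) take the argmax label per frame and merge
# consecutive duplicates, (2) drop blanks and keep the remaining labels.
# The frame posteriors below are invented toy values.
BLANK = "<b>"

def greedy_decode(frame_posteriors):
    # Step 1: best label per frame, then collapse runs of repeated labels.
    best = [max(frame, key=frame.get) for frame in frame_posteriors]
    collapsed = [lab for i, lab in enumerate(best) if i == 0 or lab != best[i - 1]]
    # Step 2: remove blanks and keep the non-blank labels.
    return [lab for lab in collapsed if lab != BLANK]

frames = [
    {"hey": 0.7, BLANK: 0.3},
    {"hey": 0.6, BLANK: 0.4},  # duplicate "hey" merges with the previous frame
    {BLANK: 0.8, "you": 0.2},
    {"you": 0.9, BLANK: 0.1},
]
print(greedy_decode(frames))  # ['hey', 'you']
```

Because greedy decoding involves only a per-frame argmax, a merge, and a filter, it adds negligible cost on top of the network's forward pass.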
This makes it easy for deployment in real-world devices. The E2E ASR developed in this study uses greedy decoding. As the goal of ASR is to generate a word sequence from speech acoustics, the word is the most natural output unit compared to other output units such as monophones or characters. A big challenge in the word-based CTC model, a.k.a. acoustic-to-word (A2W) CTC or word CTC, is the out-of-vocabulary (OOV) issue [@bazzi2002modelling; @decadt2002transcription; @yazgan2004hybrid; @bisani2005open]. In [@sak2015fast; @soltau2016neural; @audhkhasi2017direct], only the most frequent words in the training set were used as output targets whereas the remaining words were lumped together as OOVs. These OOVs can neither be modeled nor recognized correctly. For example, consider an utterance containing the sequence “have you been to newyorkabc” in which “newyorkabc” is an OOV (infrequent) word. For an OOV-based model, a likely output for this utterance would be “have you been to OOV”. Despite it being the expected output from the OOV-based model, the presence of the OOV tag in the sentence degrades the end-user experience. Another disadvantage of OOV modeling is that the data related to those infrequent words are wasted, resulting in reduced modeling power. To underscore this issue, [@sak2015fast] trained a word CTC with up to 25 thousand (k) word targets. However, the ASR accuracy of the word CTC was far below the accuracy of a context-dependent (CD) phoneme CTC model with LM, partially due to the high OOV rate when using only around 3k hours of training data. The accuracy gap between a word CTC and a CD phoneme CTC can be attributed to multiple reasons. First, training a word CTC requires orders of magnitude more training data than a CD phoneme CTC because words which qualify as non-OOVs (frequent words) require a sufficient number of training examples. Words which do not meet this sufficiency requirement are simply tagged as OOVs. 
Hence, such words can neither be modeled as valid words during training nor recognized during evaluation. Second, even in the presence of large training data, it is difficult to capture the entire vocabulary of a language. For example, a word CTC cannot handle unfamiliar nouns or emerging hot-words (e.g. selfie, meme, unfriend) which gradually become popular after an acoustic model has been built. Several studies in the past have attempted to address these issues. In [@soltau2016neural], it was shown that by using 100k words as output targets and by training the model with 125k hours of data, a word CTC was able to outperform a CD phoneme CTC. However, easy accessibility to such large databases is rare. Usually, at most a few thousand hours of data are available. In [@audhkhasi2018building], the authors were able to train a word CTC model with only 2k hours of data, achieving ASR accuracy comparable to that of a CD phoneme CTC. Their proposed training regime included initializing the word CTC with a well-trained phoneme CTC, curriculum learning [@Bengio2009Curriculum], Nesterov momentum-based stochastic gradient descent, dropout, and low-rank matrix factorization [@Sainath2013LRMF]. To address the hot-words issue, [@audhkhasi2018building] also proposed a spell and recognize (SAR) model which has a combination of words and characters as output targets. The SAR model learns to first spell a word as a sequence of characters and then recognize it as a whole word. Whenever an OOV is detected, the decoder consults the letter sequence from the speller. Thus, the hypothesis displayed to the end-user contains words (for non-OOVs) and characters (for OOVs). Spelling out the characters for OOVs is more meaningful to the users than simply displaying “OOV”. However, it was reported that the overall recognition accuracy of the SAR model improved only marginally over a word-only CTC. 
In [@Chen2018OnModular], the authors proposed training two CTC models separately - an acoustics-to-phoneme model from acoustic data and a phoneme-to-word model from text data. Then, the two models were jointly optimized resulting in an A2W model. In this study, we propose four solutions to improve the recognition accuracy of the all-neural word CTC using only 3400 hours of training data while also alleviating the OOV issue. - First, in Section \[sec: CTCAttn\], we propose *Attention CTC* [@Das18CTCAttention] to address the inherent hard alignment problem in CTC. Since CTC relies on the hidden feature vector at the current time to make predictions, it does not directly attend to feature vectors of the neighboring frames. This is the hard alignment problem, which makes CTC’s output independence assumption worse. Our proposed solution generates new hidden features that carry attention-weighted context information. We achieve this by blending some concepts from RNN-ED into CTC modeling. - Second, in Section \[sec: SelfAttnCTC\], we investigate another attention mechanism called *Self-Attention* [@vaswani2017selfattention] in CTC networks. - Third, we propose *Hybrid CTC* [@Li17CTCnoOOV] which is a single CTC consisting of a word CTC and a letter CTC trained jointly using multi-task learning (MTL) [@Caruana-MTL; @Seltzer-MTLPhonemeRecog]. We train the word CTC first and then add a letter CTC as an auxiliary task by sharing the hidden layers of the word CTC. During recognition, the word and letter CTCs generate sequences of words and letters respectively. However, the letter CTC is consulted for the letter sequence only when the word CTC emits an OOV token. This makes the Hybrid CTC capable of recognizing OOVs and thereby reducing errors introduced by OOVs. - Finally, we further improve the word CTC and reduce OOV errors by introducing *Mixed-unit CTC* [@Li18CTCnoOOV]. 
Here, during training, the OOV word is decomposed into a sequence of frequent words and letters (which we refer to as *mixed-units*). During testing, we perform greedy decoding for the whole E2E system in a single step without needing the two-stage process (OOV-detection and then letter-sequence-consulting) as in Hybrid CTC. We will later show that a CTC with mixed-units outperformed a CTC with wordpieces, which have become popular in recent RNN-ED frameworks [@chiu2018state]. Our final proposed word CTC achieved a relative WER reduction (WERR) of about 12.09% over the vanilla word CTC [@Graves-CTCFirst]. Furthermore, the same word CTC outperformed the traditional CD phoneme CTC with a strong LM and decoder by 6.79% relative. The remainder of the article is organized as follows. In Section \[sec: E2E\], we give a brief overview of CTC and RNN-ED. In Sections \[sec: CTCAttn\], \[sec: SelfAttnCTC\], \[sec: hybCTC\], \[sec: multimixCTC\], we explain the proposed Attention CTC, Self-Attention CTC, Hybrid CTC, and Mixed-unit CTC respectively. In Section \[sec: Expts\], we provide experimental evaluations of our proposed algorithms. Finally, we summarize our study and draw conclusions in Section \[sec: Conclusions\]. The terms letter and character have been used interchangeably in this study. End-to-End Speech Recognition {#sec: E2E} ============================= An E2E ASR system models the posterior distribution $p({\mathbf{y}}|{\mathbf{x}})$ by transducing an input sequence of acoustic feature vectors ${\mathbf{x}}$ to an output sequence of tokens ${\mathbf{y}}$ (phonemes, characters, words, etc.). 
More specifically, for an input sequence of feature vectors ${\mathbf{x}} = ({\mathbf{x}}_{1}, \cdots, {\mathbf{x}}_{T})$ of length $T$ with ${\mathbf{x}}_{t} \in {\mathbb{R}}^{m}$, an E2E ASR system transduces the input sequence to an intermediate sequence of hidden feature vectors ${\mathbf{h}} = ({\mathbf{h}}_{1}, \cdots, {\mathbf{h}}_{L})$ of length $L$ with ${\mathbf{h}}_{l} \in {\mathbb{R}}^{n}$. The sequence ${\mathbf{h}}$ undergoes another transduction resulting in an output sequence ${\mathbf{y}}$ whose posterior probability is $\tilde{p}({\mathbf{y}}|{\mathbf{x}})$. Here, ${\mathbf{y}} = (y_{1}, \cdots, y_{U})$ is of length $U$ with $y_{u} \in {\mathbb{L}}$, ${\mathbb{L}}$ being the label set. Usually $U \leq T$ and $L = T$ in E2E ASR systems. Thus, an E2E neural network, parameterized by $\mathbf{W}$, learns a many-to-one function ${\mathbf{f}}_{\mathbf{W}}: {\mathbf{x}} \mapsto \tilde{p}({\mathbf{y}}|{\mathbf{x}})$ where $\tilde{p}({\mathbf{y}}|{\mathbf{x}})$ closely resembles the true $p({\mathbf{y}}|{\mathbf{x}})$. Connectionist Temporal Classification (CTC) {#ssec: CTC} ------------------------------------------- A CTC network uses a recurrent neural network (RNN) and the CTC error criterion [@Graves-CTCFirst; @Graves-E2EASR] which directly optimizes the prediction of a transcription sequence. As the length of the output labels is shorter than the length of the input speech frames, a CTC path is introduced to make their lengths equal by adding the blank symbol $\phi$ as an additional label and allowing repetition of labels. Thus, the new label set becomes ${\mathbb{L}}^{\prime} = {\mathbb{L}} \cup \{\phi\}$. Let $K = \left\vert{\mathbb{L}}^{\prime}\right\vert$ be the cardinality of the label set ${\mathbb{L}}^{\prime}$. 
Denote $\bm\pi = (\pi_{1}, \cdots, \pi_{T})$ as the CTC path (or alignment) with $\pi_{t} \in {\mathbb{L}}^{\prime}$, $\bf{y}$ as the target label sequence (transcription) we want to recognize, and $B^{-1}(\bf{y})$ as the preimage of ${\mathbf{y}}$, i.e., the set of all possible CTC paths $\bm\pi$ that result in $\bf{y}$. Then, the CTC loss function is defined as the negative log of the sum of the probabilities of all possible CTC paths $\bm\pi$ that result in $\bf{y}$. This is given by $$L_{CTC} = - \ln p( {\bf{y}|\bf{x}} ) = - \ln \sum_{{\bm\pi} \in B^{-1}(\bf{y})} p( {\bm\pi} | \bf{x} ) \label{eq:ctcloss}.$$ With the conditional independence assumption ($\pi_{t} \Perp \pi_{\ne t}|{\mathbf{x}}$), $p( {\bm\pi} | \bf{x} )$ can be further decomposed into a product of posteriors of each frame as $$p( {{\bm\pi} | \bf{x}} ) = \prod_{t=1}^T p( \pi_{t}| \bf{x}) \label{eq:ctcpathprob}.$$ During decoding, it is very simple to generate the decoded sequence using greedy decoding: concatenate the labels corresponding to the highest posteriors and merge the duplicate labels; then remove the blank labels. Thus, there is neither a language model nor any complex graph search in greedy decoding. RNN Encoder-Decoder (RNN-ED) {#ssec: RNNEncDec} ---------------------------- An RNN-ED [@Cho-RNNEncDecSMT; @Bahdanau-RNNEncDecAlignTranslate; @Bahdanau-AttentionASR; @Chorowski-AttentionASR] uses two distinct networks - an RNN encoder network that transforms ${\mathbf{x}}$ into ${\mathbf{h}}$ and an RNN decoder network that transforms ${\mathbf{h}}$ into ${\mathbf{y}}$. Using these, an RNN-ED models $p({\mathbf{y}}|{\mathbf{x}})$ as $$\begin{aligned} p({\mathbf{y}}|{\mathbf{x}}) &= \prod_{u=1}^{U} p(y_{u}|{\mathbf{y}}_{1:u-1}, {\mathbf{c}}_{u}), \label{eq:RNNED-transcriptprob}\end{aligned}$$ where ${\mathbf{c}}_{u}$ is the context vector at time $u$ and is a function of ${\mathbf{x}}$. There are two key differences between CTC and RNN-ED. First, $p({\mathbf{y}}|{\mathbf{x}})$ in Eq. 
is generated using a product of ordered conditionals. Thus, RNN-ED is not impeded by the conditional independence constraint of Eq. . Second, the decoder output $y_{u}$ at time $u$ is dependent on ${\mathbf{c}}_{u}$ which is a weighted sum of all its inputs (soft alignment), i.e., ${\mathbf{h}}_{t}, t = 1, \cdots, T$. In contrast, CTC generates $y_{u}$ using only ${\mathbf{h}}_{t}$ (hard alignment). The decoder network of RNN-ED has three components: a multinomial distribution generator Eq. , an RNN decoder Eq. , and an attention network Eq. - [@Chorowski-AttentionASR; @Bahdanau-AttentionASR] as follows: $$\begin{aligned} p(y_{u}|{\mathbf{y}}_{1:u-1}, {\mathbf{c}}_{u}) &= \text{Generate}(y_{u-1}, {\mathbf{s}}_{u}, {\mathbf{c}}_{u}), \label{eq:RNNED-generate} \\ {\mathbf{s}}_{u} &= \text{Recurrent}({\mathbf{s}}_{u-1}, {\mathbf{y}}_{u-1}, {\mathbf{c}}_{u}), \label{eq:RNNED-recurrent} \\ {\mathbf{c}}_{u} &= \text{Annotate}(\bm\alpha_{u}, {\mathbf{h}}) = \sum_{t=1}^{T} \alpha_{u,t} {\mathbf{h}}_{t}, \label{eq:RNNED-annotate} \\ \alpha_{u,t} &= \text{Attend}({\mathbf{s}}_{u-1}, \bm\alpha_{u-1}, {\mathbf{h}}_{t}), \quad t = 1, \cdots, T. \label{eq:RNNED-attend}\end{aligned}$$ Here, ${\mathbf{h}}_{t}, {\mathbf{c}}_{u} \in {\mathbb{R}}^{n}$, and $\bm\alpha_{u} = [\alpha_{u,1} \cdots \alpha_{u,T}]$ is a probability distribution. Hence, $\alpha_{u,t} \in {\mathbb{U}}$ with ${\mathbb{U}} = [0,1]$ such that $\sum_t \alpha_{u,t} = 1$. Also, for simplicity assume ${\mathbf{s}}_{u} \in {\mathbb{R}}^{n}$. $\text{Generate}(.)$ is a feedforward network with a softmax operation generating the ordered conditional $p(y_{u}|{\mathbf{y}}_{1:u-1}, {\mathbf{c}}_{u})$. Recurrent(.) is an RNN decoder operating on the output time axis indexed by $u$ and has hidden state ${\mathbf{s}}_{u}$. Annotate(.) computes the context vector ${\mathbf{c}}_{u}$ (also called the soft alignment) using the attention probability vector $\bm\alpha_{u}$ and the hidden sequence ${\mathbf{h}}$. Attend(.) 
computes the attention weight $\alpha_{u,t}$ using a single layer feedforward network (Score(.) function) followed by softmax normalization as follows: $$\begin{aligned} e_{u,t} &= \text{Score}({\mathbf{s}}_{u-1}, \bm\alpha_{u-1}, {\mathbf{h}}_{t}), \quad t = 1, \cdots, T, \label{eq:RNNED-score} \\ \alpha_{u, t} &= \frac{ \text{exp}(e_{u, t}) } { \sum_{t^{\prime}=1}^{T} \text{exp}(e_{u, t^{\prime}}) }, \quad t = 1, \cdots, T. \label{eq:RNNED-normalizedscore}\end{aligned}$$ Here, $e_{u, t} \in {\mathbb{R}}$ and $\text{Score}(.)$ can either be a content-based or hybrid-based function. The latter encodes both content (${\mathbf{s}}_{u-1}$) and location ($\bm\alpha_{u-1}$) information. $\text{Score}(.)$ is computed using $$\begin{aligned} \hspace{-2mm} e_{u, t} &= \begin{cases} {\mathbf{v}}^{T}\text{tanh}\ ({\mathbf{U}} {\mathbf{s}}_{u-1} + {\mathbf{W}} {\mathbf{h}}_{t} + {\mathbf{b}}), \ \mbox{(content)} \\ {\mathbf{v}}^{T}\text{tanh}\ ({\mathbf{U}} {\mathbf{s}}_{u-1} + {\mathbf{W}} {\mathbf{h}}_{t} + {\mathbf{V}} {\mathbf{f}}_{u} + {\mathbf{b}}), \ \mbox{(hybrid)} \end{cases} \label{eq:RNNED-ContentHybrid} \\ &\text{where,} \quad {\mathbf{f}}_{u} = {\mathbf{F}} \ast \bm\alpha_{u-1} \label{eq:RNNED-locfeat}.\end{aligned}$$ The operation $\ast$ denotes convolution. Thus, in the hybrid case, the dependence on $\bm\alpha_{u-1}$ is through ${\mathbf{f}}_{u}$. Attention parameters ${\mathbf{U}}, {\mathbf{W}}, {\mathbf{V}}$, ${\mathbf{F}}, {\mathbf{b}}, {\mathbf{v}}$ are learned while training RNN-ED. Attention CTC {#sec: CTCAttn} ============= In this section, we outline various steps required to model attention directly within CTC. In the past, several attempts have been made to apply attention on E2E models. For example, attention-based RNN-ED [@Chorowski-AttentionASR; @Bahdanau-AttentionASR] network was used to predict word outputs in [@lu2016training]. 
Other studies have investigated using CTC as an auxiliary task to improve attention-based RNN-ED using an MTL framework. For example, CTC was used either at the top layer [@Kim-JointCTCRNNEncDecUsingMTL; @hori2017advances] or at an intermediate layer [@Toshniwal-MTLLowLevelRNNED] in the MTL framework. Extensions of CTC such as RNN-T [@Graves-RNNSeqTransduction; @rao2017exploring] and RNN aligner [@sak2017recurrent] either change the objective function or the training process to relax the frame independence assumption of CTC. However, none of these approaches used attention directly within the CTC network. The proposed Attention CTC model is different from all these approaches since we use an attention mechanism to improve the hidden layer representations with more context information without changing the CTC objective function and the training process. Our primary motivation in this work is to address the hard alignment problem of CTC, as outlined earlier in Section \[sec: Intro\], by modeling attention directly within the CTC framework. An example of the proposed Attention CTC network is shown in Figure \[fig:CTCAttn\]. We propose the following key ideas to blend attention into CTC. (a) First, we derive context vectors using *time convolution features* (Section \[ssec: CTCAttn-conv\]) and apply attention weights on these context vectors (Section \[ssec: CTCAttn-attn\]). This makes it possible for CTC to be trained using soft alignments instead of hard ones. (b) Second, to improve attention modeling, we incorporate a *pseudo language model* (Section \[ssec: CTCAttn-LM\]) during CTC training. (c) Finally, we improve our attention modeling further by introducing *component attention* (Section \[ssec: CTCAttn-Comp\]) where context vectors are produced as a result of applying attention on hidden features across both time and their individual components. We explain each of these ideas separately with illustrations in the following subsections. 
We will use the indices $t$ and $u$ to denote the time step for input ${\mathbf{h}}$ and output ${\mathbf{c}}$ respectively of the attention block to maintain notational consistency with RNN-ED. Time Convolution (TC) Features {#ssec: CTCAttn-conv} ------------------------------ First, we construct TC features from the hidden outputs ${\mathbf{h}}$ of the last LSTM layer. This is illustrated in Fig. \[fig:TC\]. Consider a subsequence of ${\mathbf{h}}$ rather than the entire sequence. We refer to this subsequence, $({\mathbf{h}}_{u-\tau}, \cdots, {\mathbf{h}}_{u}, \cdots, {\mathbf{h}}_{u+\tau})$, as the *attention window*. Each ${\mathbf{h}}_{t} \in {\mathbb{R}}^{n}$. The attention window is centered around the current time $u$ with $\tau$ being the length of the attention window on either side of $u$. Thus, the total length of the attention window is $C = 2\tau + 1$. Now consider $C$ time convolution kernels $({\mathbf{W}}^{\prime}_{u-\tau}, \cdots, {\mathbf{W}}^{\prime}_{u}, \cdots, {\mathbf{W}}^{\prime}_{u+\tau})$ where ${\mathbf{W}}^{\prime}_{t} \in {\mathbb{R}}^{n \times n}$ and ${\mathbf{W}}^{\prime}_{t_1} \ne {\mathbf{W}}^{\prime}_{t_2}$ for $t_{1} \ne t_{2}$. Then the context vector ${\mathbf{c}}_{u}$ is computed using time convolution as, $$\begin{aligned} {\mathbf{c}}_{u} & = \sum_{t =u-\tau}^{u+\tau} {\mathbf{W}}^{\prime}_{u - t} {\mathbf{h}}_{t} \nonumber \\ &\stackrel{\Delta}{=} \sum_{t =u-\tau}^{u+\tau} {\mathbf{g}}_{t} \nonumber \\ &= \gamma \sum_{t =u-\tau}^{u+\tau} \alpha_{u,t} {\mathbf{g}}_{t}. \label{eq:CTCAttn-TimeConvolution}\end{aligned}$$ Here, ${\mathbf{g}}_{t}, {\mathbf{c}}_{u} \in {\mathbb{R}}^{n}$, with ${\mathbf{g}}_{t}$ representing the *filtered* signal at time $t$. The last step Eq.  holds when $\alpha_{u,t} = \frac{1}{C}$ and $\gamma = C$. Since Eq.  is similar to Eq.  in structure, ${\mathbf{c}}_{u}$ represents a special case context vector with uniform attention weights $\alpha_{u,t} = \frac{1}{C}$, $t \in [u-\tau, \ u+\tau]$. 
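The uniform-attention special case above ($\alpha_{u,t} = 1/C$, $\gamma = C$) can be checked numerically. The sketch below uses toy $2 \times 2$ kernels and $\tau = 1$; all dimensions and values are invented for illustration.

```python
# Verify that the time-convolution context vector sum_t W'_{u-t} h_t equals
# the uniform-attention form gamma * sum_t alpha_{u,t} g_t with alpha = 1/C
# and gamma = C. Toy values: n = 2, tau = 1, so C = 3.

def matvec(m, v):
    # Multiply an n x n matrix (list of rows) by an n-vector.
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in m]

tau = 1
C = 2 * tau + 1
# One distinct n x n kernel per offset in the attention window.
kernels = [[[1, 0], [0, 1]], [[2, 1], [0, 2]], [[0, 1], [1, 0]]]
h = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # h_{u-1}, h_u, h_{u+1}

g = [matvec(w, x) for w, x in zip(kernels, h)]          # filtered signals g_t
c_direct = [sum(gt[j] for gt in g) for j in range(2)]   # sum over the window
alpha = 1.0 / C
c_uniform = [C * sum(alpha * gt[j] for gt in g) for j in range(2)]

print(c_direct, c_uniform)  # both are approximately [17.0, 15.0]
```

Both forms agree, confirming that the TC feature is a context vector with uniform attention weights.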
Moreover, ${\mathbf{c}}_{u}$ is a result of convolving features ${\mathbf{h}}$ with ${\mathbf{W}}^{\prime}$ in time. Thus, ${\mathbf{W}}^{\prime}$ and ${\mathbf{c}}_{u}$ represent *time convolution kernel* and *time convolution feature* respectively. Content Attention (CA) and Hybrid Attention (HA) {#ssec: CTCAttn-attn} ------------------------------------------------ To incorporate non-uniform attention in Eq. , we need to compute a non-uniformly distributed $\bm{\alpha}_{u}$ where $\bm{\alpha}_{u} = (\alpha_{u-\tau}, \cdots, \alpha_{u}, \cdots, \alpha_{u+\tau})$ using an attention network similar to Eq. . However, since there is no explicit decoder like Eq.  in CTC, there is no decoder state ${\mathbf{s}}_{u}$. Therefore, we use ${\mathbf{z}}_{u}$ instead of ${\mathbf{s}}_{u}$. The term ${\mathbf{z}}_{u} \in {\mathbb{R}}^{K}$ is the logit to the softmax and is given by $$\begin{aligned} {\mathbf{z}}_{u} &= {\mathbf{W}}_{\text{soft}}{\mathbf{c}}_{u} + {\mathbf{b}}_{\text{soft}}, \nonumber \\ {\mathbf{p}}(\pi_{u}|{\mathbf{x}}) &= \text{Softmax}({\mathbf{z}}_{u}), \label{eq:CTCAttn-generate}\end{aligned}$$ where ${\mathbf{W}}_{\text{soft}} \in {\mathbb{R}}^{K \times n}, {\mathbf{b}}_{\text{soft}} \in {\mathbb{R}}^{K}$. The term ${\mathbf{p}}(\pi_{u}|{\mathbf{x}}) = [p(\pi_{u}=1|{\mathbf{x}}) \ p(\pi_{u}=2|{\mathbf{x}}) \cdots p(\pi_{u}=K|{\mathbf{x}})]^{\text{T}}$ is the vector of probabilities of labels in the alignment at time $u$. Thus, Eq.  is similar to the Generate(.) function in Eq.  but lacks the dependency on ${\mathbf{y}}_{u-1}$ and ${\mathbf{s}}_{u}$. Consequently, the Attend(.) function in Eq.  becomes $$\begin{aligned} \alpha_{u,t} &= \text{Attend}({\mathbf{z}}_{u-1}, \bm{\alpha}_{u-1}, {\mathbf{g}}_{t}), \quad t = u-\tau, \cdots, u+\tau \label{eq:CTCAttn-attend}\end{aligned}$$ where ${\mathbf{h}}_{t}$ in Eq.  is replaced with ${\mathbf{g}}_{t}$. The Attend(.) function is illustrated in Fig. 
\[fig:CA\_HA\] and is simply a single layer neural network with a softmax. A scoring function Score(.), similar to Eq. , computes the layer activations. However, here the Score(.) function uses the filtered signal ${\mathbf{g}}_{t}$ instead of the raw signal ${\mathbf{h}}_{t}$ in Eq. . Thus, the new Score(.) function becomes $$\begin{aligned} e_{u,t} &= \text{Score}({\mathbf{z}}_{u-1}, \bm\alpha_{u-1}, {\mathbf{g}}_{t}), \quad t = u-\tau, \cdots, u+\tau \label{eq:CTCAttn-score} \\ &= \begin{cases} {\mathbf{v}}^{T}\text{tanh}({\mathbf{U}} {\mathbf{z}}_{u-1} + {\mathbf{W}} {\mathbf{g}}_{t} + {\mathbf{b}}), \ \mbox{(content)} \\ {\mathbf{v}}^{T}\text{tanh}({\mathbf{U}} {\mathbf{z}}_{u-1} + {\mathbf{W}} {\mathbf{g}}_{t} + {\mathbf{V}} {\mathbf{f}}_{u} + {\mathbf{b}}) \ \mbox{(hybrid)} \end{cases} \label{eq:CTCAttn-ContentHybrid}\end{aligned}$$ with ${\mathbf{f}}_{u}$ a function of $\bm\alpha_{u-1}$ through Eq. . The content and location information are encoded in ${\mathbf{z}}_{u-1}$ and $\bm\alpha_{u-1}$ respectively. Thus, the hybrid function in Eq.  includes both content and location information. Scores from Eq.  can be normalized using the softmax operation (as in Eq. ) to generate non-uniform $\alpha_{u, t}$ for $t \in [u-\tau, \ u+\tau]$. Now, $\bm\alpha_{u}$ can be plugged into Eq. , along with ${\mathbf{g}}$ to generate the context vector ${\mathbf{c}}_{u}$. This completes the attention network. We found that excluding the scale factor $\gamma$ in Eq. , even for non-uniform attention, was detrimental to the final performance. Therefore, we continue to use $\gamma = C$. Pseudo Language Model (PLM) {#ssec: CTCAttn-LM} --------------------------- The performance of the attention model can be improved further by providing more reliable content information from the past. This is possible by introducing another recurrent network, which we refer to as PLM, that can utilize content from several time steps in the past instead of just one. 
This network, in essence, would learn an LM-like model implicitly. This is illustrated in Fig. \[fig:PLM\]. To build the PLM network, we follow an architecture similar to RNN-LM [@Mikolov-RNNLM]. As illustrated in the PLM block of Fig. \[fig:CTCAttn\], the input to the PLM network is computed by stacking the previous output ${\mathbf{z}}_{u-1}$ with the context vector ${\mathbf{c}}_{u-1}$ and feeding it to a recurrent function $\mathcal{H}(.)$. The output of $\mathcal{H}(.)$ is ${\mathbf{z}}^{\text{LM}}_{u-1}$ which, instead of ${\mathbf{z}}_{u-1}$, is fed to the Attend(.) block in Eq. . This is represented as $$\begin{aligned} {\mathbf{z}}^{\text{LM}}_{u-1} &= \mathcal{H}({\mathbf{x}}_{u-1}, {\mathbf{z}}^{\text{LM}}_{u-2}), \quad {\mathbf{x}}_{u-1} = \begin{bmatrix} {\mathbf{z}}_{u-1} \\ {\mathbf{c}}_{u-1} \end{bmatrix}, \label{eq:CTCAttnLM-LSTM} \\ \alpha_{u,t} &= \text{Attend}({\mathbf{z}}^{\text{LM}}_{u-1}, \bm{\alpha}_{u-1}, {\mathbf{g}}_{t}), \quad t = u-\tau, \cdots, u+\tau . \label{eq:CTCAttn-attendLM}\end{aligned}$$ We model $\mathcal{H}(.)$ using a single-layer long short-term memory (LSTM) unit [@Hochreiter1997long] with $n$ memory cells and input and output dimensions set to $K + n$ (since ${\mathbf{x}}_{u-1} \in {\mathbb{R}}^{K+n}$) and $n$ (since ${\mathbf{z}}^{\text{LM}}_{u-1} \in {\mathbb{R}}^{n}$) respectively. Notice that ${\mathbf{z}}^{\text{LM}}_{u-1}$ encodes the content of a pseudo LM rather than a true LM since CTC outputs are interspersed with blank symbols by design. Also, ${\mathbf{z}}^{\text{LM}}_{u-1}$ is a real-valued vector instead of a one-hot vector. Hence, the PLM is not a true LM. Component Attention (COMA) {#ssec: CTCAttn-Comp} -------------------------- In the previous sections, $\alpha_{u,t}$ is a scalar term weighting the contribution of the entire $n$-dimensional vector ${\mathbf{g}}_{t}$ to generate the output ${\mathbf{p}}(\pi_{u}|{\mathbf{x}})$. 
This means all $n$ components (or dimensions) of the vector ${\mathbf{g}}_{t}$ are weighted by the same scalar $\alpha_{u,t}$. In this section, we consider weighting each component (dimension) of ${\mathbf{g}}_{t}$ using a separate weight. Therefore, we need an $n$-dimensional weight vector $\bm\alpha_{u,t} \in {\mathbb{U}}^{n}$ instead of the scalar $\alpha_{u,t} \in {\mathbb{U}}$. The vector $\bm\alpha_{u,t}$ can be generated as follows. First, compute an $n$-dimensional score ${\mathbf{e}}_{u, t}$ for each $t$. This is easily achieved using the Score(.) function in Eq.  but without taking the inner product with ${\mathbf{v}}$. For example, in the case of hybrid, the scoring function becomes $$\begin{aligned} \hspace{-8pt}{\mathbf{e}}_{u, t}&=\text{tanh}({\mathbf{U}} {\mathbf{z}}_{u-1} + {\mathbf{W}} {\mathbf{g}}_{t} + {\mathbf{V}} {\mathbf{f}}_{u} + {\mathbf{b}}), \ t = u-\tau, \cdots, u+\tau. \label{eq:CTCAttn-Comp-score}\end{aligned}$$ Now, we have $C$ column vectors $[{\mathbf{e}}_{u, u-\tau}, \cdots, {\mathbf{e}}_{u, u+\tau}]$ where each vector is of dimension $n$. Stacking them column-wise, we have an $n \times C $ scoring matrix ${\mathbf{E}}$ $$\begin{aligned} {\mathbf{E}} &= \begin{bmatrix} {\rule[-0.5ex]{0.5pt}{2.5ex}}& {\rule[-0.5ex]{0.5pt}{2.5ex}}& & {\rule[-0.5ex]{0.5pt}{2.5ex}}\\ {\mathbf{e}}_{u, u-\tau} & {\mathbf{e}}_{u, u - \tau + 1} & \ldots & {\mathbf{e}}_{u, u+\tau} \\ {\rule[-0.5ex]{0.5pt}{2.5ex}}& {\rule[-0.5ex]{0.5pt}{2.5ex}}& & {\rule[-0.5ex]{0.5pt}{2.5ex}}\end{bmatrix}_{n \times C}. \label{eq:coma-score-splice}\end{aligned}$$ Let $e_{u, t}(j) \in (-1,1)$ be the $j^{\text{th}}$ component of the vector ${\mathbf{e}}_{u, t}$. To compute $\alpha_{u, t}(j)$ from $e_{u, t}(j)$, we normalize $e_{u, t}(j)$ across $t$ (columns) keeping $j$ (row) fixed. Thus, $\alpha_{u, t}(j)$ is computed as $$\begin{aligned} \alpha_{u, t}(j) &= \frac{\text{exp}(e_{u, t}(j))}{\sum_{t^{\prime}=u-\tau}^{u+\tau} \text{exp}(e_{u, t^{\prime}}(j))}, \quad j=1,\cdots,n. 
\label{eq:CTCAttn-Comp-scoresoftmax}\end{aligned}$$ Since $\text{exp}(.)$ and $\text{tanh}(.)$ are both one-to-one functions, their composition is also one-to-one. Thus, there is a one-to-one correspondence between the input $g_{t}(j)$ and output $\alpha_{u, t}(j)$ through the composite function. Consequently, $\alpha_{u, t}(j)$ can be interpreted as the amount of contribution of $g_{t}(j)$ in computing $c_{u}(j)$. Now, from Eq. , we know the values of the vectors $\bm\alpha_{u,t}$, $t \in [u-\tau, \ u+\tau]$. Hence, under the COMA formulation, the context vector ${\mathbf{c}}_{u}$ can be computed from $\bm\alpha_{u,t}$ and ${\mathbf{g}}_{t}$ using $$\begin{aligned} {\mathbf{c}}_{u} &= \text{Annotate}(\bm\alpha_{u}, {\mathbf{g}}, \gamma) = \gamma \sum_{t=u-\tau}^{u+\tau} \bm\alpha_{u,t} \odot {\mathbf{g}}_{t}, \label{eq:CTCAttn-Comp-annotate}\end{aligned}$$ where $\odot$ is the Hadamard product. One attractive feature of the COMA formulation is that it does not introduce any additional training parameters. Finally, we highlight the differences between the attention mechanism in this work and in [@prabhavalkar2017comparison]. First, we apply attention across time (past, present, future) on the time convolution features extracted from the final layer of the recurrent network (encoder). Moreover, we attend only to a small context window. In contrast, [@prabhavalkar2017comparison] attends to the entire output sequence of the encoder in addition to the state of the decoder. There is no time convolution applied on the encoder sequence either. Second, to improve attention modeling, we make use of the logit from the previous time ${\mathbf{z}}_{u-1}$ (or ${\mathbf{z}}^{\text{LM}}_{u-1}$) as an additional input to our attention block. The attention mechanism in [@prabhavalkar2017comparison] does not make use of logit due to the presence of an explicit decoder. Finally, our COMA formulation yields additional gains without introducing any additional training parameters. 
There is no such formulation in [@prabhavalkar2017comparison]. Self-Attention CTC {#sec: SelfAttnCTC} ================== In this section, we investigate another attention-based paradigm known as Self-Attention (SA) [@vaswani2017selfattention] in the context of CTC training. There are some key differences in the way the attention weights are computed between SA-CTC and Attention CTC (Section \[sec: CTCAttn\]). In Attention CTC, the attention weights are computed using the hidden features and the output prediction from the previous time step (${\mathbf{z}}_{u-1}$). This is evident from the scoring function in Eq. . In contrast, in SA-CTC, the weights are computed from the hidden features only. It does not use any past output predictions. Another difference is that the attention weights are computed using additive operations in Attention CTC whereas multiplicative operations (inner products) are used in SA-CTC. Moreover, the matrix-vector multiplications used in Attention CTC are computationally slower than the inner products in SA-CTC. We highlight only the most important steps in the formulation of SA-CTC. First, the hidden features are converted into input projections using the projection matrix ${\mathbf{W}}_{p}$ as $$\begin{aligned} {\mathbf{b}}_{t} &= {\mathbf{W}}_{p} {\mathbf{h}}_{t}, \quad t = u-\tau, \cdots, u+\tau\end{aligned}$$ where $u$ denotes the current time step. The inputs to the attention block of SA-CTC consist of three kinds of vectors - keys, values, and a query. These are derived using $$\begin{aligned} {\mathbf{q}}_{t} &= {\mathbf{Q}} {\mathbf{b}}_{t}, \quad t = u, \\ {\mathbf{k}}_{t} &= {\mathbf{K}} {\mathbf{b}}_{t}, \quad t = u-\tau, \cdots, u+\tau, \\ {\mathbf{v}}_{t} &= {\mathbf{V}} {\mathbf{b}}_{t}, \quad t = u-\tau, \cdots, u+\tau,\end{aligned}$$ where ${\mathbf{Q}}, {\mathbf{K}}, {\mathbf{V}}$ are the query, key, and value matrices respectively. 
Here, the dimensions of ${\mathbf{q}}_{t}, {\mathbf{k}}_{t}, {\mathbf{v}}_{t}$ are $d_{k}, d_{k}, d_{v}$ respectively. Note that while there is a single query vector corresponding to the current time step $u$, there are multiple key and value vectors corresponding to the context window $[u-\tau, u+\tau]$. Following this, scores are evaluated between the query and the keys by taking their dot products and scaling them with $\frac{1}{\sqrt{d_{k}}}$. This is given by $$\begin{aligned} e_{u, t} &= \frac{{\mathbf{q}}^{T}_{u} {\mathbf{k}}_{t}}{\sqrt{d_{k}}}, \quad t = u-\tau, \cdots, u+\tau.\end{aligned}$$ The scores reflect the correlation between the current input and the neighboring inputs. These scores are then converted into probabilities (attention weights) using the softmax operation. A linear combination of the value vectors using these attention weights generates a context vector ${\mathbf{c}}_{u}$ as follows: $$\begin{aligned} \alpha_{u,t} &= \frac{\text{exp}(e_{u, t})}{\sum_{t^{\prime}=u-\tau}^{u+\tau} \text{exp}(e_{u, t^{\prime}})}, \quad t = u-\tau, \cdots, u+\tau \\ {\mathbf{c}}_{u} &= \sum_{t=u-\tau}^{u+\tau} \alpha_{u,t} {\mathbf{v}}_{t}.\end{aligned}$$ This is followed by a residual connection [@he2017deepresidual] and layer normalization, i.e., $\text{LayerNorm}({\mathbf{c}}_{u} + {\mathbf{b}}_{u})$. The output of this is fed to a single-layer feed-forward network, which is followed by another round of residual connection and layer normalization. This is the uni-head attention architecture of SA-CTC since it computes a single scalar weight $\alpha_{u,t}$ for the entire value vector ${\mathbf{v}}_{t}$. This can be easily extended to multi-head attention where ${\mathbf{v}}_{t}$ is fragmented into smaller sub-vectors and each sub-vector is weighted using a distinct scalar weight. For more details on the SA architecture, readers may refer to [@vaswani2017selfattention]. 
Hybrid CTC {#sec: hybCTC} ========== In this and the next section, our primary motivation is to mitigate the OOV issue of the A2W model as mentioned in Section \[sec: Intro\]. First, we describe the Hybrid CTC network. The Hybrid CTC network uses a word CTC as the primary task and a letter CTC as the auxiliary task in an MTL framework. The output units of the word CTC correspond to frequently used words and an OOV token. Infrequent words in the training set are lumped together and tagged as OOV. Given an input sequence of features, the word and letter CTCs emit a word sequence and a letter sequence, respectively. If the word sequence contains only frequent words, then the letter sequence from the letter CTC is completely ignored. However, if the word sequence contains the OOV token, the letter CTC is consulted at the segment that generated the OOV token. In the consultation process, the letter sequence from the letter CTC is merged to form a word. Finally, this newly constructed word from the letter CTC is used to replace the OOV token. Since the word CTC and letter CTC are time synchronized through the shared hidden layers of the MTL network, it is possible to find a correspondence between the outputs of the two CTCs. An illustration of this method is shown in Fig. \[fig:hybCTC\]. Here, the word CTC generates the sequence “*play artist OOV*". The word sequence generated after merging the letters from the letter CTC is “*play artist ratatat*". Since the segment containing “*ratatat*" from the letter CTC has the most time overlap with the segment containing “*OOV*" from the word CTC, the OOV token is replaced with “*ratatat*". Thus, the final output of the Hybrid CTC is “*play artist ratatat*". The detailed steps for building the Hybrid CTC model are described as follows: - Build an LSTM-CTC model of $L$ layers with its output units mapped to frequently occurring words in the training corpus. 
Map all the remaining infrequent words (occurring fewer than $N$ times) to the OOV token. Thus, the output units in this LSTM-CTC model correspond to (a) the frequent words, (b) the OOV token, and (c) blank and silence (two additional tokens). - Freeze the bottom $L-1$ hidden layers of the word-CTC, add one LSTM hidden layer and one softmax layer to build a new LSTM-CTC model with letters as its output units. - During testing, generate the word output sequence using greedy decoding. If the output word sequence contains an OOV token, replace the OOV token with the word generated from the letter CTC that has the largest time overlap with the OOV token. Mixed-unit CTC {#sec: multimixCTC} ============== In this section, we briefly explain multi-letter CTC and compare the past implementation of multi-letter CTC with ours. Based on this foundation, we then explain our proposed Mixed-unit CTC. Although single-letter units in CTCs perform well, they are prone to a high degree of variability across training examples due to their short temporal context. As we will see later in Table \[Tab:WER\_CTC\_multiletter\], multi-letter units tend to perform better than single-letter units since they capture context information, exhibit a lower degree of variability, and thereby offer more stability during training. Improving letter CTCs can help improve the accuracy of word CTCs. For example, a stronger letter CTC can lower the WER of the Hybrid CTC since the OOV token may be replaced by more precise words generated by the letter CTC. Gram CTC [@liu2017gram] and multi-phone CTC [@siohan2017ctc] are multi-letter CTCs based on letters and phonemes respectively. They allow a variable number of letters (or grams) and phonemes to be output at each time step. The sizes of the units in gram CTC and multi-phone CTC are learned automatically with a modified forward-backward algorithm accounting for all decompositions. 
However, in the test phase, their decoding procedure is more complex than the simple greedy decoding procedure used in single-letter CTC models. To reduce the decoding complexity, the authors in [@Chen2017PhoneSynchronous] proposed phone synchronous decoding. In contrast, we offer a facile implementation of our multi-letter CTC. We simply decompose every word (which includes both frequent and OOV words) into a sequence of one or more letter units. Examples are shown in the first three rows of Table \[Tab:units\] where each word, frequent or OOV, is decomposed into single-letter, double-letter, or triple-letter units. The advantages of doing this are three-fold. First, our decomposition is straightforward. Second, it does not change the CTC forward-backward algorithm. Finally, during the test phase, our method is able to retain the same greedy decoding procedure used in single-letter CTC models. \[Tab:units\] In Hybrid CTC, the shared-hidden-layer constraint is used to aid the time synchronization of word outputs between the word and letter CTCs. However, the blank symbol dominates most of the frames. The unit boundaries from CTC are also notoriously arbitrary. Therefore, time synchronization may not be very reliable with the two CTCs running in parallel. A direct solution is to forgo the MTL framework and train a single CTC model comprising a mixture of frequent words and letters. The letters arise as a result of decomposing the infrequent words in the training set into letters before CTC training begins. The operation of this CTC is illustrated in Fig. \[fig:mixCTC\]. If the word is a frequent word, then we just keep it in the output token list. If it is an OOV, then we decompose it into a letter sequence. As shown in the fifth row of Table \[Tab:units\], the OOV “newyorkabc" is decomposed into “n e w y o r k a b c” for single-letter decomposition. However, the word “newyork" is not decomposed any further because it is a frequent word. 
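The word-or-letters decomposition just described can be sketched as follows. The helper name and the fixed-width chunking for multi-letter units are our own assumptions (the paper does not spell out the chunking rule for multi-letter units):

```python
def decompose(word, frequent, k=1):
    """Keep frequent words whole; split OOVs into k-letter units (sketch).

    word:     the word to decompose
    frequent: set of words kept as whole-word output units
    k:        letter-unit width (1 = single-letter, 3 = triple-letter)
    """
    if word in frequent:
        return [word]          # frequent word: emit as a single unit
    # OOV: fixed-width chunks of k letters (last chunk may be shorter)
    return [word[i:i + k] for i in range(0, len(word), k)]
```

For example, with `frequent = {"newyork"}`, `decompose("newyorkabc", frequent)` yields the single letters `n e w y o r k a b c`, while `"newyork"` itself stays whole.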
Therefore, the output units of the CTC are both words (for frequent words) and letters (for OOVs). However, we note that artificially decomposing OOVs into sequences of single-letters only may confuse CTC training because the network output modeling units are frequent words and letters. To solve such a potential issue, we decompose the OOVs into a combination of frequent words and letters. We refer to this combination as *mixed units*. For example, in the last two rows of Table \[Tab:units\], the OOV “newyorkabc" is decomposed into “newyork a b c” if we use words and single-letter units or “newyork abc” if we use words and triple-letter units. In addition, for mixed units, we use “\$” to separate each word in the sentence. For example, the sentence “have you been to newyorkabc” is decomposed into “\$ have \$ you \$ been \$ to \$ newyork abc \$”. The “\$” symbol acts as a word separator (like the space symbol) and is essential for finding word boundaries of the mixed-units. During training, since the OOVs are decomposed into mixed units, there is no “OOV" output node in the Mixed-unit CTC model. Consequently, during testing, the model emits mixed units instead of “OOV" while still emitting frequent words. Experiments {#sec: Expts} =========== In this section, we compare the performance of the proposed CTCs with the baseline CTC. We evaluated the proposed methods using Microsoft’s Cortana voice assistant task. The training and test sets consist of approximately 3400 hours ($\sim$ 3.3 million utterances) and 6 hours ($\sim$ 5600 utterances) of audio spoken in American English respectively. All CTC models were trained using either unidirectional LSTMs (ULSTM) or bidirectional LSTMs (BLSTM). The ULSTM is a 5-layer LSTM with 1024 memory cells in each layer. Similarly, the BLSTM is a 6-layer LSTM with 512 memory cells in each direction (therefore resulting in 1024 output dimensions when combining outputs from both directions). 
The cell outputs are linearly projected to 512 dimensions. The base feature vector is an 80-dimensional vector containing log filterbank energies computed every 10 ms. Eight frames of base features were stacked together ($m = 80 \times 8 = 640$) as the input to the unidirectional CTC, while three frames were stacked together ($m = 80 \times 3 = 240$) as the input to the bidirectional CTC. The skip rate for both unidirectional and bidirectional CTCs was three frames as in [@sak2015fast]. The dimension $n$ of vectors ${\mathbf{h}}_{t}, {\mathbf{g}}_{t}, {\mathbf{c}}_{u}$ was set to 512. For decoding, the greedy decoding procedure (no complex beam search decoder or external LM) was used. This makes our E2E ASR systems purely all-neural. We focus on letter CTC first and then move on to word CTC. This is because improvements in the letter CTC increase the accuracy of the word CTC, especially when encountering an OOV word during test time. Thus, we evaluated the performance of Attention CTC (Section \[sec: CTCAttn\]), SA-CTC (Section \[sec: SelfAttnCTC\]), and multi-letter CTC using letter units. Then, we evaluated the performance of our proposed Hybrid CTC (Section \[sec: hybCTC\]) and Mixed-unit CTC (Section \[sec: multimixCTC\]) using both word and letter units. Experiments With Letter-Based CTCs {#ssec: Expts1} ---------------------------------- We experimented with different sizes of letter units. The sizes are represented by the cardinality $K$ of the label set (defined in Section \[ssec: CTC\]). For single-letter units, $K$ was set to 30. This corresponds to 26 English letters \[a-z\], ’, \*, \$, and a blank symbol. For double and triple-letter units, $K$ was set to 763 and 8939, respectively, covering all double-letter and triple-letter occurrences in the training set. 
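The frame stacking and frame skipping described above can be sketched as follows; the exact striding and edge handling are our own assumptions, since the paper does not detail them:

```python
import numpy as np

def stack_and_skip(feats, n_stack=8, skip=3):
    """Stack n_stack consecutive base frames, then keep every skip-th
    stacked frame (frame skipping, as in Sak et al., 2015) -- a sketch.

    feats: (T, d) log filterbank features
    returns: (T', d * n_stack) stacked, subsampled features
    """
    T, d = feats.shape
    usable = T - n_stack + 1
    # concatenate each window of n_stack frames into one long vector
    stacked = np.stack([feats[i:i + n_stack].reshape(-1)
                        for i in range(usable)])
    return stacked[::skip]
```

With 80-dimensional base features and `n_stack=8`, each stacked frame has $m = 640$ dimensions, matching the unidirectional CTC input above.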
### Attention CTC (Section \[sec: CTCAttn\]) {#sssec: exp_CTCAttn_letter} In the first set of experiments, we evaluated Vanilla CTC [@Graves-CTCFirst] and the proposed Attention CTC models trained using our 5-layer ULSTM with single-letter units. We experimented with $\tau = 4$ (length of one-sided attention window, defined in Section \[ssec: CTCAttn-conv\]) considering the training efficiency with this setting. The results are tabulated in the second column of Table \[Tab:WER\_CTCAttn\_ULSTM\_BLSTM\_letter\]. The top row presents the WER for Vanilla CTC. All subsequent rows under “Attention CTC" present the WER for the proposed Attention CTC models when attention modeling capabilities were gradually added in a stage-wise fashion. The best proposed model is in the last row. It includes component attention (COMA) along with all the other enhancements above it (i.e., TC, HA, PLM). It may be recalled, from Eq. , that hybrid attention (HA) is a combination of both content and location attention. The best proposed model outperformed Vanilla CTC by 22.72% relative. We found that the gains are marginal when going from CA to HA. Our conjecture is that the benefits of adding location information in HA could become more pronounced with smaller frame sizes and larger attention windows. However, smaller frame sizes lead to an exponential increase in the number of CTC paths resulting in instability during CTC training. \[Tab:WER\_CTCAttn\_ULSTM\_BLSTM\_letter\] Next we evaluated Attention CTC models trained with our 6-layer BLSTM. The results are tabulated in the third column of Table \[Tab:WER\_CTCAttn\_ULSTM\_BLSTM\_letter\]. Similar to the unidirectional case, the best proposed model outperformed Vanilla CTC by 18.89% relative. This shows that the proposed Attention CTC models continue to perform well even with stronger baselines like BLSTMs. 
As an additional experiment, we compared RNN-T models trained with 5-layer ULSTM or 6-layer BLSTM transcription networks along with a 1-layer ULSTM prediction network and 30 letters as output units. The transcription networks have the same structure as our baseline CTC models. We observed 21.07% and 16.96% WER for ULSTM and BLSTM transcription networks respectively. While this outperforms the baseline CTC error rates reported in Table \[Tab:WER\_CTCAttn\_ULSTM\_BLSTM\_letter\], it could not outperform our final Attention CTC model (last row in Table \[Tab:WER\_CTCAttn\_ULSTM\_BLSTM\_letter\]). ### Self-Attention CTC (Section \[sec: SelfAttnCTC\]) {#sssec: exp_SACTC_letter} In the next set of experiments, we evaluated the performance of SA-CTC models using our ULSTM and BLSTM with attention window size $\tau = 4$. We used 1024-dimensional vectors for both key/query and value vectors. Thus, $d_{k} = 1024, d_{v} = 1024$. This is in accordance with the number of memory cells used in Attention CTC. We experimented with other dimensions but they performed worse. Furthermore, we experimented with both single and multi-head attention (4 and 8 heads). The results are tabulated in Table \[Tab:WER\_SelfAttnCTC\_letter\]. SA-CTC with 8 heads performed the best for each case. The relative WERRs over Vanilla CTC are 21.56% and 16.59% using ULSTM and BLSTM respectively. Comparing the best models from Attention CTC and SA-CTC, we find that Attention CTC performed slightly better than SA-CTC by about 1.2% (22.72-21.56) and 2.3% (18.89-16.59) for ULSTM and BLSTM respectively. \[Tab:WER\_SelfAttnCTC\_letter\] ### Multi-letter CTC {#sssec: exp_letter} In the next set of experiments, we evaluated the performance of various CTC models trained using our 6-layer BLSTM with multi-letter units as outputs. We evaluated three kinds of CTC models: Vanilla CTC, Attention CTC, and Attention CTC sharing 5 hidden layers with a word CTC. 
In the third CTC model, we applied attention only to the letter CTC. As shown in the third column of Table \[Tab:WER\_CTC\_multiletter\], the WER of Vanilla CTC drops significantly when the output units become larger (and hence more stable). The letter CTC using triple-letter units achieved 13.28% WER which is a relative WERR of 25.56% compared to the letter CTC using single-letter units. As shown in the fourth column of Table \[Tab:WER\_CTC\_multiletter\], Attention CTC improves hugely over the Vanilla CTC. It achieves about 18.89%, 20.88%, and 14.46% relative WERR over Vanilla CTC using single-letter, double-letter, and triple-letter units respectively. In the last column of Table \[Tab:WER\_CTC\_multiletter\], the shared Attention CTC performed better than the Vanilla CTC but worse than its non-sharing counterpart. This indicates one shortcoming of the shared Attention CTC – it sacrifices the accuracy of the letter CTC because of the shared-hidden-layer constraint with the word CTC. \[Tab:WER\_CTC\_multiletter\] Experiments With Word-Based CTCs {#ssec: Expts2} -------------------------------- In this section, we evaluate the performance of the Hybrid CTC (Section \[sec: hybCTC\]) and the Mixed-unit CTC (Section \[sec: multimixCTC\]) using both words and letters as targets. We refer to these CTCs as word CTCs since a majority of the output nodes in these CTCs directly correspond to words. We are primarily interested in recognizing the OOVs as accurately as possible while also boosting the accuracy of recognizing non-OOVs. All attention models in this section are based on Attention CTC (Section \[sec: CTCAttn\]) instead of SA-CTC (Section \[sec: SelfAttnCTC\]) owing to the superior results of the former (Section \[sssec: exp\_SACTC\_letter\]). Our Vanilla CTC [@Graves-CTCFirst] is a 6-layer BLSTM with approximately 27k output nodes consisting of frequent words and the OOV token. 
We defined frequent words as those which occurred at least 10 times in the training corpus. All the remaining words were tagged as OOV. This is the mapping scheme described in the fourth row of Table \[Tab:units\]. Thus, within the family of word CTCs, the Vanilla CTC is a CTC with a 6-layer BLSTM whose output units model words and the OOV token. The Vanilla CTC achieved 9.84% WER (Table \[Tab:WER\_HybCTC\_word\]), of which the OOVs contributed 1.87% WER. \[Tab:WER\_HybCTC\_word\] ### Hybrid CTC (Section \[sec: hybCTC\]) {#sssec: exp_hybrid} Our Hybrid CTC model has both word and letter CTCs operating in parallel in an MTL framework. They share 5 hidden BLSTM layers. An additional LSTM layer was added for each task (word and letter CTC) and fine-tuned. Thus, the underlying structure of Hybrid CTC is still a 6-layer BLSTM which has the same number of hidden layers as that of the Vanilla CTC. Results are tabulated in Table \[Tab:WER\_HybCTC\_word\]. Both hybrid models achieved 9.66% WER, which is a marginal improvement over the Vanilla CTC. Several factors contribute to such a small improvement. First, the shared-hidden-layer constraint degrades the performance of the letter CTC, potentially affecting the final hybrid system performance. Second, although the shared-hidden-layer constraint helps to synchronize the word outputs from the word and letter CTC, we still observed that the time synchronization can fail at times. In such cases, the OOV token was replaced with its neighboring word because of word segment misalignments. Because of these factors, the triple-letter CTC did not improve over the double-letter CTC. ### Mixed-unit CTC (Section \[sec: multimixCTC\]) {#sssec: exp_mix} \[Tab:WER\_mixCTC\_word\] \[Tab:WER\_summary\_wordCTC\] In the next set of experiments, we compared the performance of CTCs by changing their output units to mixed-letter units or wordpieces [@Senrich2016NMT; @Wu2016WordPiece]. 
Wordpieces are commonly occurring sub-word units that can be merged to form whole words. Similar to mixed-units, wordpieces offer the flexibility to generate open-vocabulary words. Previous studies [@chan2016latent; @rao2017exploring] have explored using wordpieces. To build a wordpiece model (WPM), each word in a training corpus is first segmented into a sequence of individual characters and an end-of-word symbol. Following this, the most frequently occurring character pair is merged to form a new symbol or wordpiece. This process is iterated until a predefined number of wordpieces has been generated. The outcome of this is that the corpus is now redefined using those wordpieces that result in the minimal number of whole-word segmentations. However, our approach of building mixed-units is different from building wordpieces since we decompose *only* OOVs while still retaining the high frequency words as whole word units. Results are tabulated in Table \[Tab:WER\_mixCTC\_word\]. As before, the Vanilla CTC achieved a WER of 9.84%. In the next experiment, we decomposed only the OOVs in the training set into single-letters. Thus, the output nodes consist of both single-letters and 27k frequent words. Indeed, there is no clear decomposition boundary between these two distinct sets of basic units. As mentioned in Section \[sec: multimixCTC\], having a mixture of words and single-letters confuses CTC training as the network does not know why the frequent words cannot be decomposed into letters. Therefore, this model achieved 20.10% WER, which is far worse than Vanilla CTC. Analyzing the posterior spikes of this model, we observed that the word spikes and letter spikes are interspersed with each other, which supports our hypothesis. However, when we decomposed OOVs into mixed-units (frequent word + single-letters), the WER dropped sharply to 10.17%, though still a little worse than the Vanilla CTC. This is again because of the mixture of words with single-letters. 
Next, we decomposed the OOVs into a combination of frequent words and double-letters. The WER dropped further to 9.58%. When triple-letters and frequent words were used (33k outputs in total), the WER dropped even more to 9.32%. This is a 5.28% relative WERR over Vanilla CTC. Then we applied attention to this model. To save computational cost, given the large number of output units, we excluded the PLM network in Eq. . This model achieved a WER of 8.65%, which is about 12.09% relative WERR over the Vanilla CTC. This is our final word CTC model (mixed-units with triple-letters + attention). As an additional experiment, instead of mixed-units, we used wordpieces as targets. This model achieved a WER of 9.73%, which is a little better than that achieved with Vanilla CTC but worse than the results obtained with mixed-unit CTC. This indicates that building A2W models using mixed-units or WPMs is a better choice than simply using words and OOV (as in Vanilla CTC). Finally, we compared our final word CTC model with a traditional CD phoneme CTC in Table \[Tab:WER\_summary\_wordCTC\]. We trained a CD phoneme 6-layer BLSTM with the CTC criterion, modeling around 9000 tied CD phonemes. It has the same structure as other CTC models except that it uses different output units (phonemes instead of mixed-units or words). This CD phoneme CTC model achieved 9.28% WER when decoding with a well-trained 5-gram LM with around 100 million (M) N-grams in total. Despite a strong CD phoneme CTC model and LM, the mixed-unit + Attention CTC model (without any LM or complex decoder) was still able to outperform it by about 6.79% relative. Note that the proposed model not only reduces the WER of the word CTC but also improves the end-user experience. The proposed model provides more meaningful outputs without emitting any OOV token, which can be distracting to users. 
Moreover, we observed that even when the proposed model failed to recognize the OOVs accurately, it still produced words that were a close match to the ground-truth words. For example, the proposed method recognized “text fabine” as “text fabian” and “call zubiate” as “call zubiat”. However, the Vanilla CTC recognized these words as “text OOV” and “call OOV” respectively. Conclusions {#sec: Conclusions} =========== We proposed improving letter and word CTC models using Attention CTC, Self-Attention CTC, Hybrid CTC, and Mixed-unit CTC. In attention-based CTCs, we generated new hidden features that carry attention-weighted context information, which is more useful than hidden features without context information. To solve the OOV issue in word CTC, we presented Hybrid CTC, which uses a word and letter CTC as primary and auxiliary tasks in an MTL framework. Finally, to boost the performance of Hybrid CTC, we introduced Mixed-unit CTC whose output units contain both words and multi-letters. While the frequent words are treated as whole word units, the OOVs are decomposed into a sequence of frequent words and multi-letters. We evaluated all these methods on a 3400-hour Microsoft Cortana voice assistant task. The proposed word-based Mixed-unit CTC model with triple-letters, when combined with attention, improved over the word-based Vanilla CTC model by 12.09% relative. Such an acoustic-to-word CTC model is a pure end-to-end model without using any LM or complex decoder. It also outperformed a traditional CD phoneme CTC model equipped with a strong LM and complex decoder by 6.79% relative. Code {#sec: Code} ==== The CNTK script for Attention CTC described in Section \[sec: CTCAttn\] is available online at: <https://github.com/microsoft/CNTK/tree/vadimma/CTC/Examples/Speech/AttentionCTC>. Acknowledgment {#acknowledgment .unnumbered} ============== The authors would like to thank Kastubh Kalgaonkar, during his time at Microsoft, for his help in building WPMs. 
[^1]: The authors are with Microsoft Corporation, USA (email: amitdas@illinois.edu; jinyli@microsoft.com; guoye@microsoft.com; ruzhao@microsoft.com; yifan.gong@microsoft.com).
--- abstract: 'The Axelrod model has been widely studied since its proposal for social influence and cultural dissemination. In particular, the community of statistical physics focused on the presence of a phase transition as a function of its two main parameters, $F$ and $Q$. In this work, we show that the Axelrod model undergoes a second order phase transition in the limit of $F \rightarrow \infty $ on a complete graph. This transition is equivalent to the Erd[ő]{}s-R[é]{}nyi phase transition in random networks when it is described in terms of the probability of interaction at the initial state, which depends on a scaling relation between $F$ and $Q$. We also found that this probability plays a key role in sparse topologies by collapsing the transition curves for different values of the parameter $F$. We explore the extent of this collapse and the dynamical mechanisms that lead to this.' author: - 'Sebastián Pinto [^1]' - Pablo Balenzuela title: 'Erd[ő]{}s-R[é]{}nyi phase transition in the Axelrod model on complete graphs' --- Introduction ============ The Axelrod model, originally proposed for cultural dissemination [@axelrod1997dissemination], is grounded in two key dynamical features: Social influence, through which people become more similar when they interact; and homophily, which is the tendency of individuals to interact preferentially with similar ones. Specifically, the agents are described by a vector of $F$ components called cultural features, which can take one of $Q$ integer values called cultural traits. The dynamics of the model is based on an imitation rule: A random agent adopts a cultural trait of another one with a probability proportional to the number of shared features. Despite its simplicity, the Axelrod model attracted the attention of the statistical physics community due to the emergence of a phase transition from a monocultural to a multicultural state [@castellano2000nonequilibrium; @klemm2003nonequilibrium]. 
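A minimal sketch of one step of the imitation rule described above (the data structures and function name are our own choice):

```python
import random

def axelrod_step(culture, neighbors):
    """One update of the Axelrod imitation rule (sketch).

    culture:   dict node -> list of F cultural traits (integers in 0..Q-1)
    neighbors: dict node -> list of neighboring nodes
    """
    i = random.choice(list(culture))
    j = random.choice(neighbors[i])
    shared = sum(a == b for a, b in zip(culture[i], culture[j]))
    F = len(culture[i])
    # homophily: interact with probability = fraction of shared features;
    # nothing happens if the agents share no feature or are identical
    if 0 < shared < F and random.random() < shared / F:
        # social influence: agent i copies one differing trait of agent j
        k = random.choice([f for f in range(F)
                           if culture[i][f] != culture[j][f]])
        culture[i][k] = culture[j][k]
```

Iterating this step drives connected agents who share at least one feature toward a common culture, while agents sharing no feature never interact directly.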
The phase transition takes place by varying the number of cultural traits $Q$ for a given fixed $F$. If the number of cultural traits is low, the probability of interaction is high, leading the system to a monocultural state. If $Q$ is high, the mentioned probability is low and, after a few interactions, the system evolves to a stationary multicultural state. This phase transition is usually studied by taking the size of the biggest fragment as the order parameter. The transition was reported to be continuous for one-dimensional networks and discontinuous for two dimensions when $F > 2$ [@klemm2003role], although a continuous transition is recovered when the topology becomes small-world [@klemm2003nonequilibrium]. On the other hand, for $F = 2$, the type of the transition is the opposite: Continuous for 2-D, and discontinuous for small-world networks [@reia2016effect]. The case of $F = 2$ is important due to the possibility of taking an analytical approach to study the model [@vazquez2007non]. Several scaling relationships have been found in the Axelrod model. For instance, in [@peres2015nature] and [@reia2016phase] a finite-size scaling analysis is performed for $F=2$ in square-lattices and small-world networks. A scaling relation between $Q$ and size $N$ can be found in scale-free networks for $F=10$ [@klemm2003nonequilibrium], a scaling relationship between the density of active bonds and time in one-dimensional networks in [@vilone2002ordering], and an effective noise rate is explored in [@klemm2003global]. Among the reported scaling relations, there is a particular one reported in [@klemm2005globalization] and [@lanchier2013fixation] where the transition curves in one-dimensional networks collapse when the control parameter is $F/Q$. 
This ratio has an immediate interpretation as the mean number of shared features between two agents in the initial state, suggesting that the initial distribution contains key information about the final outcome of the Axelrod model. In this work, we review the Axelrod model in terms of the initial interaction probability between agents. In particular, we found that the second order phase transition in the limit of $F \rightarrow \infty $ on a complete graph is equivalent to the phase transition observed in Erd[ő]{}s-R[é]{}nyi random networks [@erdHos1960evolution]. In this model, a set of $N$ initially disconnected nodes are linked with a probability $p$, and for $p>p_c$, a fragment which scales with the size of the system emerges [@erdHos1960evolution; @newman2003structure]. Notably, the initial interaction probability between agents also plays a key role in describing the Axelrod model on sparse topologies, by making the transition curves collapse for different values of $F$. We end our work by discussing the mechanisms which lead to this collapse in the transition curves. Results ======= We studied the transition of the Axelrod model for a wide range of $F$ and $Q$ values. We explore the transition between mono and multicultural states on two different topologies: Complete graphs and the classical two-dimensional lattice. Axelrod model in complete graphs -------------------------------- We analyze the transition in the Axelrod model for a $N=1024$ complete graph. In Fig. \[figure1\] we show the relative size of the biggest fragment in the initial and final state, for different values of $F$. A fragment is defined as a group of agents topologically connected by active links, where an active link connects two agents with at least one feature in common. We can observe that the biggest fragment in the final state is equal to or smaller than that in the initial state for low values of $F$. When $F$ increases, this difference approaches zero. 
This result suggests that in the limit of $F \rightarrow \infty$, the size of the stationary biggest fragment is fully determined by the initial condition. The importance of the initial condition is reflected in the fact that two agents who initially do not share any feature cannot interact (at least, until other interactions take place and eventually change their cultural states). Given two agents, the parameters $F$ and $Q$ set their initial number of shared features by sampling this quantity from a binomial distribution with parameters $F$ and $1/Q$. If we define the interaction probability $p_{int}$ as the probability of having an active link between them: $$p_{int} = 1 - (1 - \frac{1}{Q})^F, \label{eq:pint}$$ the initial state of the Axelrod model on a complete graph is equivalent to the Erd[ő]{}s-R[é]{}nyi model with parameter $p_{int}$. In the limit of $F \rightarrow \infty$ the stationary sizes of the biggest fragment converge to their initial state, as can be seen in Fig. \[figure1\]. This suggests that the transition in this limit of the Axelrod model is similar to the Erd[ő]{}s-R[é]{}nyi one. In fact, Fig. \[figure2\] shows that this happens by taking $p_{int}$ as the control parameter. This can be observed both for the biggest fragment (panel (a)) and the average finite-fragment size (panel (b)). ![[**Biggest fragment $S_{max}$ and average fragment size $\langle s \rangle$ as function of $p_{int}$.**]{} Both figures show that the Axelrod transition tends to the Erd[ő]{}s-R[é]{}nyi transition for increasing $F$. Dashed lines in panel (b) point out the critical values of $p_{int}$ for different values of $F$.[]{data-label="figure2"}](Figure2.pdf){width="\columnwidth"} The definition of $p_{int}$ (Eq. (\[eq:pint\])) allows us to estimate the critical value of the transition, $Q_c$, in the limit of $F \rightarrow \infty$. 
Since the biggest fragment emerges in an Erd[ő]{}s-R[é]{}nyi network when $Np_{int}^c = 1$ [@newman2003structure], this gives: $$Q_c = (1 - (1 - \frac{1}{N})^\frac{1}{F})^{-1},$$ for the Axelrod model. This analogy provides a good estimation of $Q_c$ for large values of $F$. For instance, if $N = 1024$ and $F = 100$, $Q_c\sim 10^5$, as can be seen in Fig. \[figure1\]. The equivalence between both phase transitions can be completed by the calculation of the critical exponents. We estimate them for the case of $F=100$, given the closeness of its curves to the Erd[ő]{}s-R[é]{}nyi transition (see Fig. \[figure2\]). The critical exponents [@newman2003structure] are introduced following the usual relationships: $$\begin{aligned} S_{max} \sim (p_{int} - p_{int}^c)^\beta \\ \langle s \rangle \sim |p_{int} - p_{int}^c|^{-\gamma}, \end{aligned}$$ where $p_{int}^c$ is the critical probability, $S_{max}$ is the biggest fragment and $\langle s \rangle$ is the average finite-fragment size, which is a measure of the fluctuations of the order parameter. For finite systems, the critical exponents can be calculated by performing finite-size scaling following [@newman1999monte], where the authors propose the following scaling relationships: $$\begin{aligned} S_{max} = N^{-\frac{\beta}{\nu}} F_1[(p_{int}-p_{int}^c) N^{\frac{1}{\nu}}] \label{eq:finite_size1}\\ \langle s \rangle = N^{\frac{\gamma}{\nu}} F_2[(p_{int}-p_{int}^c) N^{\frac{1}{\nu}}], \label{eq:finite_size2}\end{aligned}$$ where $F_1$ and $F_2$ are unknown scaling functions with the property that $F_{1(2)}(x) \rightarrow \text{constant}$ when $x \rightarrow 0$ (that is, near the critical point). This implies that exactly at the critical point, $S_{max} \sim N^{-\frac{\beta}{\nu}}$ and $\langle s \rangle \sim N^{\frac{\gamma}{\nu}}$. These expressions can be used to estimate the relation between exponents without knowledge of $F_1$ and $F_2$.
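The closed-form estimate of $Q_c$ can be evaluated directly, and the exponent $\tau$ discussed later is obtained here via the continuous maximum-likelihood estimator underlying the Clauset–Shalizi–Newman method; this is a hypothetical sketch with our own helper names, not the authors' code:

```python
import math
import random

def Q_c(N, F):
    """Critical Q from the Erdos-Renyi condition N * p_int^c = 1,
    i.e. Q_c = (1 - (1 - 1/N)^(1/F))^(-1)."""
    return 1.0 / (1.0 - (1.0 - 1.0 / N) ** (1.0 / F))

def powerlaw_tau_mle(samples, s_min=1.0):
    """Continuous MLE of tau for f(s) ~ s^(-tau):
    tau = 1 + n / sum(log(s_i / s_min)) over the tail s_i >= s_min."""
    tail = [s for s in samples if s >= s_min]
    return 1.0 + len(tail) / sum(math.log(s / s_min) for s in tail)

# Q_c for the parameters quoted in the text: N = 1024, F = 100 -> ~1e5
qc = Q_c(1024, 100)

# sanity check of the tau estimator on synthetic power-law data, tau = 2.5
rng = random.Random(1)
samples = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(200_000)]
tau_hat = powerlaw_tau_mle(samples)
```

The synthetic samples use inverse-CDF sampling, $s = (1-u)^{-1/(\tau-1)}$ with $s_{min}=1$, so the estimator should recover $\tau = 2.5$ to within a few parts in a thousand at this sample size.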
The argument of these scaling functions defines another relationship, between the exponent $\nu$ and the critical value $p_{int}^c$, which reads: $$p_{int}^c(N) = p_{int}^c - b N^{-\frac{1}{\nu}}, \label{eq:pc_scaling}$$ where $p_{int}^c(N)$ is the pseudo-critical value at which $\langle s \rangle$ takes its maximum value, as shown in panel (b) of Fig. \[figure2\] for different values of $F$. Finally, the fragment size distribution $f(s)$ near the critical point follows a power-law distribution with parameter $\tau$, i.e. $f(s) \sim s^{-\tau}$. This relation defines the last critical exponent. Fig. \[figure3\] shows the scaling relationships for $F=100$ as a function of $N$. Panel (a) shows the scaling relationship derived from Eq. (\[eq:pc\_scaling\]). Panels (b) and (c) show the scaling relationships at the critical point for both the biggest fragment and the average fragment size. Finally, panel (d) shows that the fragment size distribution follows a power law at the critical point. The estimated exponents are shown both in Fig. \[figure3\] and Table \[tab:table1\]. The exponent $\tau$ was calculated following the methodology sketched in [@clauset2009power]. Table \[tab:table1\] shows that the estimations are consistent with the theoretical values predicted for the Erd[ő]{}s-R[é]{}nyi model [@newman2003structure]. Given that the equivalence holds in the limit of $F \rightarrow \infty$, we expect the matching between both sets of exponents to improve for larger values of $F$. ![[**Finite-size scaling and critical behaviour for $F=100$ on a complete network.**]{} Full lines point out the fitted curves. The estimated critical exponents are also shown.
Panel (d) shows the fragment size distribution at the critical point with $N=1024$.[]{data-label="figure3"}](Figure3.pdf){width="\columnwidth"}

  Exponent       Estimated                      Theoretical
  -------------- ------------------------------ -------------
  $p_{int}^c$    $(1 \pm 0.7) \times 10^{-4}$   $0$
  $\nu$          $0.89 \pm 0.02$                $1$
  $\beta/\nu$    $0.35 \pm 0.05$                $1/3$
  $\gamma/\nu$   $0.34 \pm 0.02$                $1/3$
  $\tau$         $2.4 \pm 0.2$                  $2.5$

  : [**Critical exponents**]{}. Estimations from the Axelrod model with $F=100$ and the predicted theoretical values for the Erd[ő]{}s-R[é]{}nyi phase transition.[]{data-label="tab:table1"}

It should be noticed that the equivalence between both phase transitions (described by similar critical exponents) does not extend to other topological features, because of the dynamical evolution of the Axelrod model. This can be understood as follows: given an active link between two agents, the Axelrod model always tends to increase their similarity. An active link can only become inactive through third-party interactions. When the value of $F$ increases, the probability that an active link becomes inactive decreases, going to zero when $F \rightarrow \infty$. Then, the initial active links define the sizes of the connected components (as in the Erd[ő]{}s-R[é]{}nyi model), and the only effect of the dynamics is to transform the connected components into cliques of the same size. Axelrod model in 2D lattices ---------------------------- Let us now explore the Axelrod transition in classical 2D lattices as a function of the new control parameter $p_{int}$. Fig. \[figure4\] shows the transition curves for different $F$ as a function of $p_{int}$, in addition to the initial size of the biggest fragment. The inset of this figure shows the same curves as a function of $Q$, where it can be seen that the transition shifts to larger values of $Q$ when $F$ increases. Fig.
\[figure4\] also shows that all transition curves collapse to one. This collapse is also found in other sparse topologies, such as random regular networks with equivalent mean degree (not shown). However, in contrast to the observed behavior in complete networks (see Fig. \[figure1\]), the collapsed curve does not match the one corresponding to the initial state. The collapse as a function of $p_{int}$ is essentially the same as that pointed out by [@klemm2005globalization] for one-dimensional networks. As was mentioned above, $F$ and $Q$ set the number of shared features for a pair of agents, by sampling this quantity from a binomial distribution with parameters $F$ and $1/Q$. These quantities also set the value of $p_{int}$. In the limit of large $F$ and $Q$, this binomial distribution is well approximated by a Poisson distribution with parameter $F/Q$, which is the mean number of features shared by two random agents, and it is the control parameter introduced in [@klemm2005globalization] for one-dimensional networks. To understand the mechanisms underlying the collapse of the curves as a function of $p_{int}$, we look for a dynamical observable that tracks the activation and deactivation of links during the dynamics. We define the fraction of deactivated links $f_{dl}$: $$f_{dl} = \frac{d}{d + c}, \label{eq:fraction_of_active_links}$$ where $d$ is the number of active links which became inactive during the dynamics and $c$ the number of links which were inactive and became active. This quantity can be measured both in complete networks and in lattices. In lattices, we also discriminate between arbitrary pairs of agents (homophilic links) and pairs of connected agents (physical links). This distinction is unnecessary for complete networks. Fig. \[figure5\] shows, in panels (a) and (b), the value of $f_{dl}$ as a function of $p_{int}$ near the critical point for both homophilic and physical links in a lattice.
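The bookkeeping behind $f_{dl}$ can be sketched with a minimal simulation, assuming the standard Axelrod update (pick an agent and a random lattice neighbor, interact with probability overlap$/F$, copy one differing feature) and tracking activity flips of the updated agent's physical links only; the lattice size, step count, and parameter values are illustrative choices, not the authors' settings:

```python
import random

def axelrod_fdl(L=8, F=3, Q=5, steps=10000, seed=2):
    """Minimal Axelrod model on an L x L periodic lattice, returning the
    fraction of deactivated links f_dl = d / (d + c) for physical links."""
    rng = random.Random(seed)
    state = {(x, y): [rng.randrange(Q) for _ in range(F)]
             for x in range(L) for y in range(L)}
    def neighbors(v):
        x, y = v
        return [((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L)]
    def active(u, v):  # a link is active iff the agents share >= 1 feature
        return any(a == b for a, b in zip(state[u], state[v]))
    d = c = 0
    sites = list(state)
    for _ in range(steps):
        i = rng.choice(sites)
        j = rng.choice(neighbors(i))
        shared = sum(a == b for a, b in zip(state[i], state[j]))
        if 0 < shared < F and rng.random() < shared / F:
            before = {nb: active(i, nb) for nb in neighbors(i)}
            t = rng.choice([x for x in range(F) if state[i][x] != state[j][x]])
            state[i][t] = state[j][t]  # i adopts one of j's differing features
            for nb in neighbors(i):
                now = active(i, nb)
                if before[nb] and not now:
                    d += 1           # active link deactivated (third party)
                elif not before[nb] and now:
                    c += 1           # inactive link activated
    return d / (d + c) if d + c else float('nan')
```

Note that the copied feature can only make the link $(i,j)$ more similar; deactivations are produced by the third-party effect on the other links of $i$, as described above.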
Panel (a) shows that the homophilic links have the same rate of deactivation, independently of the value of $F$. Panel (b) shows the same behaviour for larger values of $p_{int}$, but the curves differ when $p_{int}$ goes to zero. However, in this case there are few events associated with the activation or deactivation of a physical link, and the fluctuations of $f_{dl}$ make the differences among the curves unclear. Within the error bars, we conclude that the dynamics has a similar effect on the initial state regardless of the value of $F$, leading to the collapse of the transition curves observed in Fig. \[figure4\]. In contrast, panel (c) of Fig. \[figure5\] shows that $f_{dl}$ on a complete network is systematically lower for $F=100$ with respect to other values of $F$, over a range of values of $p_{int}$. A low value of $f_{dl}$ means that more links become active than inactive, which in particular leads to the preservation of most of the initial active links in the final state. This preservation is consistent with the observed equivalence between phase transitions sketched in the previous section. Conclusions =========== In this work, we show that the Axelrod model on complete graphs displays a phase transition equivalent to the one observed in the Erd[ő]{}s-R[é]{}nyi model when the order parameter is plotted as a function of $p_{int}$, which is the probability that two agents share at least one feature in the initial state. This happens in the limit of $F \rightarrow \infty$. This claim is supported by the calculation of critical exponents following the finite-size scaling approach sketched in [@newman1999monte]. Despite this similarity, the Axelrod dynamics leads to a stationary state where the connected components are also cliques, in contrast to what happens in the Erd[ő]{}s-R[é]{}nyi model. When the same scaling relationship is analyzed for the Axelrod model on sparse graphs, we find a collapse of the transition curves for different values of $F$.
These collapsed curves do not coincide with the initial state, as they did in complete networks for $F \rightarrow \infty$. Both behaviors can be understood in terms of the fraction of links that become inactive during the dynamics. We have observed that this quantity is the same for different values of $F$ in a lattice and tends to zero when $F \rightarrow \infty$ in a complete network. Summarizing, the dynamics of the Axelrod model produces different stationary states depending on the underlying connectivity. For a complete network, all agents with non-zero homophily (active links) are able to interact. In the particular case of $F \rightarrow \infty$, the active links have a low probability of becoming inactive and are therefore preserved in the final state (see Fig. \[figure5\]). Here, the main effect of the dynamics is to transform the connected components (fixed by the set of initial active links) into cliques of the same size. On the other hand, the lack of this kind of physical connectivity in sparse networks dramatically changes the effect of the dynamics. Most of the agents which share at least one feature are not able to interact, and the final state cannot be obtained by simply completing cliques. However, we observed that the rate of activation and deactivation of links is the same for different values of $F$, leading to the curve collapse observed in Fig. \[figure4\]. Acknowledgements ================ We thank Juan Pablo Pinasco, Lucía Pedraza, and Ignacio Sticco for providing a critical revision of the manuscript. [10]{} Robert Axelrod. The dissemination of culture: A model with local convergence and global polarization. , 41(2):203–226, 1997. Claudio Castellano, Matteo Marsili, and Alessandro Vespignani. Nonequilibrium phase transition in a model for social influence. , 85(16):3536, 2000. Konstantin Klemm, Víctor M Eguíluz, Raúl Toral, and Maxi San Miguel. Nonequilibrium transitions in complex networks: A model of social interaction.
, 67(2):026120, 2003. Konstantin Klemm, Víctor M Eguíluz, Raúl Toral, and Maxi San Miguel. Role of dimensionality in axelrod’s model for the dissemination of culture. , 327(1-2):1–5, 2003. Sandro M Reia and José F Fontanari. Effect of long-range interactions on the phase transition of axelrod’s model. , 94(5):052149, 2016. Federico V[á]{}zquez and Sidney Redner. Non-monotonicity and divergent time scale in axelrod model dynamics. , 78(1):18002, 2007. Lucas R Peres and Jos[é]{} F Fontanari. The nature of the continuous non-equilibrium phase transition of axelrod’s model. , 111(5):58001, 2015. Sandro M Reia and Jos[é]{} F Fontanari. The phase transition of axelrod’s model revisited. , 2016. Daniele Vilone, Alessandro Vespignani, and Claudio Castellano. Ordering phase transition in the one-dimensional axelrod model. , 30(3):399–406, 2002. Konstantin Klemm, Victor M Egu[í]{}luz, Ra[ú]{}l Toral, and Maxi San Miguel. Global culture: A noise-induced transition in finite systems. , 67(4):045101, 2003. Konstantin Klemm, Víctor M Eguíluz, Raul Toral, and Maxi San Miguel. Globalization, polarization and cultural drift. , 29(1-2):321–334, 2005. Nicolas Lanchier, Stylianos Scarlatos, et al. Fixation in the one-dimensional axelrod model. , 23(6):2538–2559, 2013. Paul Erd[ő]{}s and Alfr[é]{}d R[é]{}nyi. On the evolution of random graphs. , 5(1):17–60, 1960. Mark EJ Newman. The structure and function of complex networks. , 45(2):167–256, 2003. M Newman and G Barkema. . Oxford University Press: New York, USA, 1999. Aaron Clauset, Cosma Rohilla Shalizi, and Mark EJ Newman. Power-law distributions in empirical data. , 51(4):661–703, 2009. [^1]: spinto@df.uba.ar
--- abstract: 'A biperiodic planar network is a pair $(G,c)$ where $G$ is a graph embedded on the torus and $c$ is a function from the edges of $G$ to non-zero complex numbers. Associated to the discrete Laplacian on a biperiodic planar network is its spectrum: a triple $(C,S,\nu)$, where $C$ is a curve, $S$ is a divisor on it, and $\nu$ is a parameterization of its points at infinity. We give a complete classification of networks (modulo a natural equivalence) in terms of their spectral data. The space of networks has a large group of cluster automorphisms arising from the $Y-\Delta$ transformations. We show that the spectrum provides action-angle coordinates for the discrete cluster integrable systems defined by these automorphisms.' author: - Terrence George title: Spectra of biperiodic planar networks --- Introduction ============ A planar resistor network is a pair $(\tilde{G},\tilde{c})$ where $\tilde{G}$ is a planar graph and $\tilde{c}$ is a conductance function that assigns a non-zero complex number to each edge of $\tilde G$, defined up to multiplication by a global constant. It is said to be biperiodic if translations by $\mathbb Z^2$ act on $(\tilde{G},\tilde{c})$ by isomorphisms. This is equivalent to the data of the quotient $(G,c):=(\tilde{G},\tilde{c})/\mathbb Z^2$, where $G$ is a graph on a torus. Hereafter we assume that our networks are on a torus.\ The fundamental operator in the study of networks is the discrete Laplacian. It has a certain spectrum, defined below, and the main goal of this paper is to show that the spectral map is a birational isomorphism onto a certain moduli space of curves and divisors, and therefore provides a way to classify networks.
While in typical geometric or probabilistic applications the conductances are always positive real numbers, the algebraic nature of the problem leads us to consider general (nonzero) complex conductances.\ There is a natural equivalence relation on networks, defined by certain local rearrangements of the graph and its conductances, which preserves the spectrum. To define this equivalence relation, let us start by defining a zig-zag path. A zig-zag path on $G$ is a path that alternately turns maximally left or right. A resistor network $G$ is minimal ([@CdV94], [@CIM98]) if lifts of any two zig-zag paths to $\tilde{G}$ do not intersect more than once and any lift of a zig-zag path has no self-intersections. Minimality is a mild assumption on networks since any network may be reduced to a minimal one by certain elementary moves without affecting its electrical properties. The Newton polygon of a minimal resistor network is the unique integral polygon whose primitive edge vectors are given by the homology classes of zig-zag paths in cyclic order. Since zig-zag paths come in pairs related by flipping the orientation, the Newton polygon of a network is always centrally symmetric.\ We say that two minimal networks $(G_1,c_1)$ and $(G_2,c_2)$ are topologically equivalent if there is a sequence of $Y-\Delta$ moves that takes the underlying graph $G_1$ to the graph $G_2$. Topological equivalence classes of networks are parameterized by centrally symmetric Newton polygons ([@GK12]). In particular, any two minimal resistor networks with the same Newton polygon are related by a sequence of elementary transformations called $Y-\Delta$ transformations.\ Two networks $(G_1,c_1)$ and $(G_2,c_2)$ are electrically equivalent if there is a sequence of $Y-\Delta$ moves that takes the network $(G_1,c_1)$ to the network $(G_2,c_2)$.
Goncharov and Kenyon ([@GK12]) constructed the resistor network cluster variety $\mathcal R^0_N$ that parameterizes electrical equivalence classes of resistor networks that lie in the same topological equivalence class associated to the polygon $N$ as follows: A centrally symmetric integral polygon $N$ determines a finite collection of minimal resistor networks whose Newton polygon is $N$, related by $Y-\Delta$ transformations. To each minimal resistor network $G$ is associated a complex torus $(\mathbb C^*)^{\text{number of edges of }G-1}$, which parameterizes conductance functions on $G$. The $Y-\Delta$ move $G_1 {\rightarrow}G_2$ induces a birational map between the complex tori associated to $G_1$ and $G_2$. $\mathcal{R}^0_N$ is obtained by gluing the complex tori using these birational maps.\ Goncharov and Kenyon further showed that $\mathcal{R}^0_N$ is a Lagrangian subvariety of an algebraic completely integrable Hamiltonian system $\mathcal X_N^0$ associated to the dimer model. Let $\mathcal S_N$ be the moduli space of triples $(C,S,\nu)$ where $C$ is the vanishing locus of a Laurent polynomial $P(z,w)$ with Newton polygon $N$, $S$ is a degree $g$ effective divisor on $C$ (where $g$ is the number of interior lattice points in $N$) and $\nu$ is a parameterization of the points at infinity of $C$. Goncharov and Kenyon constructed the spectral map $\mathcal X_N^0 {\rightarrow}\mathcal S_N$ and showed that it is a birational isomorphism. Fock ([@Fock15]) constructed an explicit inverse map in terms of theta functions on the Jacobian of $C$. In this construction, the elementary transformation in the dimer model (the spider move) is described by Fay’s trisecant identity.\ Associated to the Laplacian on a biperiodic planar network is its spectrum $\mathcal R^0_N {\rightarrow}\mathcal{S}_N$, where $\mathcal S_N$ is defined as in the previous paragraph, but with the divisor $S$ now of degree one less than the number of interior lattice points in $N$.
Let $\mathcal S_N'$ be the subspace where $P(z,w)$ satisfies 1. $P(1,1)=0$ and the point $(1,1)$ is a node; 2. $\sigma:(z,w) \mapsto (\frac{1}{z},\frac{1}{w})$ is an involution on $C$, and the divisor $S$ satisfies $$S+\sigma(S)-q_1-q_2 \equiv K_{\hat{C}},$$ where $\hat{C}$ is the normalization of $C$, $q_1,q_2$ are the points in the fiber of the node at $(1,1)$ and $K_{\hat{C}}$ is the canonical divisor class on $\hat{C}$. Our main result is the following complete classification of biperiodic planar resistor networks in terms of their spectral data: \[thm1\] The spectral map is a birational isomorphism $\mathcal{R}_N^0 {\rightarrow}\mathcal S_N'$. Along the way, we provide an explicit description of oriented cycle rooted spanning forests of $G$ (OCRSFs) whose homology classes are boundary lattice points of $N$ (Lemmas \[crsfextremal\], \[crsfexternal\]), analogous to results for dimers in [@Bro12], [@GK12]. In particular, we show that every OCRSF corresponding to a boundary lattice point is a union of cycles (Corollary \[cyc\]).\ We construct an explicit inverse spectral map (see (\[invmap\])). A key player in the construction is the theta function on the Prym variety of $\hat{C}$. The $Y-\Delta$ transformation is described by Fay’s quadrisecant identity ([@Fay89]). Further, we show that the inverse map is compatible with $Y-\Delta$ transformations (Theorem \[ydcomp\]).\ ![The divisor $S$ on the amoeba of the spectral curve.[]{data-label="amoeba"}](amoeba.png "fig:"){width="50.00000%"} Since the $Y-\Delta$ move involves subtraction-free rational expressions, the set of positive real-valued points of the cluster variety is well defined, which we denote by $\mathcal R_N({{\mathbb R}}_{\geq 0})$. This subspace is important for probabilistic applications. For a positive real-valued point, the spectrum $(C,S,\nu)$ has the following additional properties (see [@K17]): 1. $C$ is a simple Harnack curve ([@M]).
Compact ovals (connected components) of $C$ are in bijection with interior lattice points of $N$. 2. The oval corresponding to the origin is degenerated to a real node. 3. $S$ has a point in each of the other compact ovals. The spectral curves of genus zero correspond to the isoradial networks studied in [@K02]. In this case, the inverse spectral map recovers Kenyon’s results expressing the conductances in terms of tangents, and the quadrisecant identity reduces to the triple tangent identity. For a different generalization of isoradial networks to the case of the massive Laplacian on isoradial graphs, see [@BdeTR17].\ Consider the map $C({{\mathbb C}}) {\rightarrow}\mathcal A(C), (z,w) \mapsto (\log |z|,\log |w|) \subset {{\mathbb R}}^2$ from the ${{\mathbb C}}-$valued points of $C$ to its amoeba $\mathcal A(C)$. For a simple Harnack curve, this map is a homeomorphism from the compact ovals to the boundaries of the holes of the amoeba, and therefore provides a way to depict the divisor $S$ (see Figure \[amoeba\] for an example, where the network is a $2 \times 1$ fundamental domain of the triangular lattice).\ A sequence of $Y-\Delta$ moves that takes a graph $G$ to itself gives rise to a birational automorphism (called a cluster modular transformation) of $\mathcal R_N$, where $N$ is the Newton polygon of $G$. A cluster modular transformation provides a discrete integrable system on $\mathcal R_N$. For example, if we consider the honeycomb lattice, and do the $Y-\Delta$ move at the downward triangles, we obtain the cube recurrence studied by Carroll and Speyer ([@CS04], see also [@GK12] section 6.3). We show that cluster modular transformations are linearized on the Prym variety of $C$ (Theorem \[lin\]). In the case of positive real conductances, we may view this as moving each point along the boundary of the corresponding hole in the amoeba.\ **Acknowledgements.** We thank Giovanni Inchiostro, Rick Kenyon and Xufan Zhang for helpful discussions. 
The dimer model =============== Line bundles with connection ---------------------------- A *surface graph* $\Gamma$ on a torus ${{\mathbb T}}$ is a graph embedded on ${{\mathbb T}}$ such that each face is contractible. A *line bundle with connection* $(V,\phi)$ on $\Gamma$ is the data of a complex line $V_v \cong {{\mathbb C}}$ at each vertex of $\Gamma$ along with isomorphisms called *parallel transport* $\phi_{v v'}:V_v {\rightarrow}V_{v'}$ for each edge $\langle v,v' \rangle$ such that $\phi_{v' v}=\phi_{v v'}^{-1}$. Two line bundles with connection $(V,\phi)$ and $(V',\phi')$ are *gauge equivalent* if there exist isomorphisms $\psi_v:V_v {\rightarrow}V'_v$ such that for all edges, the following diagram commutes. $$\begin{tikzcd} V_v \arrow[r, "\phi_{vv'}"] \arrow[d,"\psi_v"] & V_{v'} \arrow[d, "\psi_{v'}" ] \\ V'_{v} \arrow[r, "\phi'_{vv'}" ] & V'_{v'} \end{tikzcd}$$ If $L$ is an oriented loop in $\Gamma$, the *monodromy* $m(L)$ of $(V,\phi)$ around $L$ is the composition of the parallel transports around $L$. A line bundle with connection is *flat* if the monodromy around the boundary of any face of $\Gamma$ is trivial.\ The moduli space of line bundles with connection on $\Gamma$ modulo gauge equivalence is denoted $\mathcal{L}_\Gamma$. Let $\mathcal L_\Gamma^{\text{flat}}$ be the subspace of flat connections. The monodromies around loops in $\Gamma$ give rise to isomorphisms such that the following diagram commutes: $$\begin{tikzcd} \mathcal L_\Gamma^{\text{flat}} \arrow[hookrightarrow,r] \arrow[d,"\cong"] & \mathcal{L}_\Gamma \arrow[d, "\cong" ] \\ H^1({{\mathbb T}},{{\mathbb C}}^*) \arrow[hookrightarrow,r] & H^1(\Gamma,{{\mathbb C}}^*) \end{tikzcd}$$ A *dimer cover* (or *perfect matching*) of $\Gamma$ is a collection of edges of $\Gamma$ such that every vertex is adjacent to a unique edge. A dimer cover $M$ on $\Gamma$ gives a $1$-chain $\omega_M$ on $\Gamma$.
If $M_0$ is another dimer cover, $\omega_M-\omega_{M_0}$ is a cycle and therefore determines a homology class in $H_1(\Gamma,{{\mathbb Z}})$. Under the projection $H_1(\Gamma,{{\mathbb Z}}) {\rightarrow}H_1({{\mathbb T}},{{\mathbb Z}})$, we obtain a homology class $[M]\in H_1({{\mathbb T}},{{\mathbb Z}}).$ The Newton polygon of the dimer model is $$N:=\text{Conv }\{[M] \in H_1({{\mathbb T}},{{\mathbb Z}}): M \text{ is a dimer cover}\}.$$ $N$ depends on the choice of reference dimer cover $M_0$. Changing the reference matching corresponds to translating the polygon $N$. $M \mapsto [M]$ gives a well-defined map from the set of dimer covers to the integer lattice points in $N$. Zig-zag paths on bipartite graphs and minimality ------------------------------------------------ A *zig-zag path* on a bipartite torus graph $\Gamma$ is a path that turns maximally right at black vertices and maximally left at white vertices. Let us denote by $\mathcal Z_\Gamma$ the set of all zig-zag paths in $\Gamma$. We say that $\Gamma$ is *minimal* if in the universal cover $\tilde{\Gamma}$, zig-zag paths have no self-intersections and no pairs of zig-zag paths oriented in the same direction meet twice.\ Suppose $\Gamma$ is a minimal bipartite graph on a torus. Each path $\alpha \in \mathcal Z_{\Gamma}$ gives us a homology class $[\alpha] \in H_1({{\mathbb T}},{{\mathbb Z}})$ which is an integral primitive vector on a side of the Newton polygon $N$. The zig-zag paths taken in cyclic order correspond to cyclically ordered primitive integral vectors in the boundary of the Newton polygon. Therefore an edge of $N$ corresponds to a family of zig-zag paths, each with homology class equal to the primitive integral edge vector of the edge. External dimer covers --------------------- In this section, we collect some results about dimer covers from [@Bro12], [@GK12]. Let $\Gamma$ be a minimal bipartite graph on a torus. We say that a dimer cover $M$ is *extremal* if $[M]$ is a vertex of the Newton polygon.
If $b$ is any black vertex in $\Gamma$, we define the *local zig-zag fan* $\Sigma_b$ at $b$ to be the complete fan of strongly convex rational polyhedral cones in $H_1({{\mathbb T}},{{\mathbb Z}})$ whose rays are generated by homology classes of those zig-zag paths in $\Gamma$ that contain $b$. \[span\] The rays corresponding to two families of zig-zag paths span a two-dimensional cone in $\Sigma_b$ if and only if there is an edge incident to $b$ at which the two families intersect. The *global zig-zag fan* $\Sigma$ of $\Gamma$ is the fan whose rays are generated by the homology classes of all zig-zag paths on $\Gamma$. The identity map in $H_1({{\mathbb T}},{{\mathbb Z}})$ defines a map of fans $i_b:\Sigma {\rightarrow}\Sigma_b$. If $\sigma$ is any two-dimensional cone in $\Sigma$, $i_b(\sigma)$ is contained in a unique two-dimensional cone in $\Sigma_b$ which we call $\sigma_b$. $\sigma_b$ corresponds to a unique edge incident to $b$, given by the intersection of the two zig-zag paths through $b$ whose rays in $\Sigma_b$ form the boundary of $\sigma_b$. Define the 1-chain $\omega(\sigma_b)$ to be $1$ on the edge $\langle w,b \rangle$ and $0$ on all other edges. Define $$\omega(\sigma)=\sum_{b \in V(\Gamma) \text{ black }}\omega(\sigma_b).$$ Two-dimensional cones in $\Sigma$ are in bijection with vertices of the Newton polygon: If $\sigma$ is a two-dimensional cone in $\Sigma$, let $E_1$ and $E_2$ be the edges of $N$ whose associated rays form the boundary of $\sigma$ in $\Sigma$. Then $E_1$ and $E_2$ occur in cyclic order and therefore there is a vertex $V$ between them in $N$. \[extremaldimer\] $\omega_V:=\omega(\sigma)$ is the unique extremal dimer cover associated to the vertex $V$ of $N$ that corresponds to $\sigma$. We say that a dimer cover $M$ is *external* if $[M]$ is a boundary lattice point of $N$.
To a zig-zag path $\alpha$ we associate a 1-form $\omega_\alpha$ that is $1$ on edges $e$ in $\alpha$ that are oriented the same way as $\alpha$ and $0$ on edges not in $\alpha$. If $M$ is external, $[M]$ lies on an edge $E$ of $N$, which corresponds to a family of zig-zag paths $\{\alpha_k\}$. Let $E=\langle V_1,V_2\rangle$, where $V_1,V_2$ are vertices of $N$ such that $V_2$ is the vertex after $V_1$ when the boundary of $N$ is traversed counterclockwise. \[externaldimer\] Let $A$ be a subset of the family of zig-zag paths $\{\alpha_k\}$ corresponding to $E$. The external dimer covers on $E$ are of the form $$\omega_A:=\omega_{V_1}+\sum_{\alpha_k \in A}\omega_{\alpha_k}.$$ In particular, $\omega_{V_2}=\omega_{V_1}+\sum_{k}\omega_{\alpha_k}$, and the number of dimer covers corresponding to a boundary lattice point of $N$ is a binomial coefficient. Resistor networks ================= A *resistor network* is a pair $(G,c)$ where $G$ is a surface graph on ${{\mathbb T}}$ and $c:E(G){\rightarrow}{{\mathbb C}}^*$ is a function defined modulo global multiplication by a non-zero scalar. Associated to $G$ is a bipartite graph $\Gamma_G$ obtained by superposing $G$ and its dual graph $G^\vee$. The vertices and faces of $G$ become the black vertices of $\Gamma_G$ and the edges of $G$ become the white vertices of $\Gamma_G$. Applying Euler’s formula on ${{\mathbb T}}$ to $G$ we see that $\Gamma_G$ has equal number of white and black vertices. The resistor network cluster variety ------------------------------------ A conductance function $c$ determines a line bundle with connection $V(c)$ on $\Gamma_G$ as follows:\ The weight assigned to an edge of $\Gamma_G$ incident to a vertex of $G$ is the conductance of that edge in $G$. The weight of an edge incident to a face of $G$ is $1$. 
Composing the isomorphisms $\mathcal L^{\text{flat}}_G \cong H^1({{\mathbb T}},{{\mathbb C}}^*) \cong \mathcal L^{\text{flat}}_{\Gamma_G}$, we see that the moduli spaces of line bundles with flat connections on $G$ and $\Gamma_G$ are canonically isomorphic. The subset $\mathcal{R}_G \subset \mathcal L_G$ consisting of line bundles of the form $V(c)\otimes i$, where $c$ is a conductance function on $G$ and $i\in \mathcal L^{\text{flat}}_G$, is a closed subvariety. A $Y-\Delta$ transformation ([@Kenn1899]) $G_1 {\rightarrow}G_2$ is given by replacing a $Y$ in the graph $G_1$ with a triangle as shown in Figure \[et\]. Any two minimal resistor networks with Newton polygon $N$ are related by $Y-\Delta$ moves. A $Y-\Delta$ move $G_1 {\rightarrow}G_2$ induces a birational map $\mathcal{L}_{\Gamma_{G_1}} {\rightarrow}\mathcal{L}_{\Gamma_{G_2}}$. Gluing the $\mathcal{L}_{\Gamma_{G}}$ using these birational maps, we obtain a cluster Poisson variety $\mathcal{X}_N$. The birational map $\mathcal{L}_{\Gamma_{G_1}} {\rightarrow}\mathcal{L}_{\Gamma_{G_2}}$ restricted to $\mathcal{R}_{G_1} {\rightarrow}\mathcal{R}_{G_2}$ is given in the notation of Figure \[et\] by $$\begin{aligned} A=\frac{bc}{a+b+c},\quad B=\frac{ac}{a+b+c},\quad C=\frac{ab}{a+b+c}.\end{aligned}$$ Gluing the subvarieties $\mathcal{R}_{G}$ using these birational isomorphisms, we obtain a cluster subvariety $\mathcal{R}_N$ of $\mathcal{X}_N$. Quotienting by the moduli space of the flat connections, we get $\mathcal{R}_N^0 \subset \mathcal{X}_N^0$, called the *resistor network cluster variety*. The line bundle Laplacian ------------------------- Let $(G,c)$ be a resistor network and let $i \in \mathcal L^{\text{flat}}_G$.
The line bundle Laplacian is the linear operator $\Delta=\Delta(c,i):\mathbb C^{V(G)} {\rightarrow}\mathbb C^{V(G)}$ defined by $$\Delta(f)(v):=\sum_{v' \sim v }c(v,v')(f(v)-i_{v'v}f(v')).$$ An *oriented cycle rooted spanning forest* (OCRSF) $\gamma$ of $G$ is a collection of edges of $G$ such that each connected component of $\gamma$ has the same number of vertices and edges (so that each connected component has a unique cycle), along with a choice of orientation for each cycle in $\gamma$. Since two distinct cycles in $\gamma$ cannot intersect, if $\eta$ is a cycle in $\gamma$, every cycle has homology class $\pm [\eta]$. The weight of an $OCRSF$ $\gamma$ is defined to be $wt(\gamma)=\prod_{e \in \gamma}c(e)$. \[pfnlap\] $$\text{det }\Delta = \sum_{ \text{OCRSFs } \gamma}wt(\gamma)\prod_{\text{Cycles }\eta \in \gamma}(1-m(\eta)),$$ where $m(\eta)$ is the monodromy of $i$ along the cycle $\eta$. An OCRSF $\gamma^\vee$ on $G^\vee$ is *dual* to an OCRSF $\gamma$ on $G$ if no edge of $\gamma^\vee$ crosses an edge of $\gamma$. It is easy to see that $\gamma^\vee$ has the same number of cycles as $\gamma$ and each cycle has homology class $\pm [\eta]$, where $\eta$ is any cycle in $\gamma$. An OCRSF $\gamma$ has $2^k$ duals where $k$ is the number of cycles in $\gamma$, one for each choice of orientation of the dual cycles.\ Given a pair $(\gamma,\gamma^\vee)$ of dual OCRSFs, define its weight to be $wt(\gamma,\gamma^\vee):=wt(\gamma)$. 
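Two facts from this section admit quick numerical sanity checks: the $Y-\Delta$ conductance rule above preserves effective resistances between the three terminals, and for the simplest torus network (one vertex with a horizontal and a vertical loop edge, conductances $c_1,c_2$, monodromies $z,w$) the line-bundle Laplacian is $1\times 1$, so $\det \Delta = c_1(2-z-z^{-1})+c_2(2-w-w^{-1})$, which vanishes at $(1,1)$. This is an illustrative sketch; the star/triangle labeling convention ($\Delta$ edge $A$ opposite the leg with conductance $a$) is an assumption consistent with the formulas above:

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian from (i, j, conductance) triples."""
    L = np.zeros((n, n))
    for i, j, g in edges:
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    return L

def eff_resistance(L, i, j):
    """Effective resistance between nodes i and j via the pseudoinverse."""
    e = np.zeros(L.shape[0]); e[i], e[j] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)

# --- Y-Delta: A = bc/s, B = ac/s, C = ab/s preserves the response ---
a, b, c = 2.0, 3.0, 5.0
s = a + b + c
LY = laplacian(4, [(0, 3, a), (1, 3, b), (2, 3, c)])              # star, center = 3
LD = laplacian(3, [(1, 2, b*c/s), (0, 2, a*c/s), (0, 1, a*b/s)])  # triangle
resY = [eff_resistance(LY, i, j) for i, j in [(0, 1), (0, 2), (1, 2)]]
resD = [eff_resistance(LD, i, j) for i, j in [(0, 1), (0, 2), (1, 2)]]

# --- det Delta for the 1-vertex square-lattice fundamental domain ---
def char_poly(c1, c2, z, w):
    """Delta is 1x1 here: the vertex sees its own translates with
    monodromies z, 1/z, w, 1/w, so det Delta is the single entry."""
    return c1 * (2 - z - 1/z) + c2 * (2 - w - 1/w)
```

The characteristic polynomial of this toy network is also invariant under $(z,w) \mapsto (1/z,1/w)$, the involution $\sigma$ appearing in the definition of $\mathcal S_N'$.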
To each pair we associate a homology class, $$[(\gamma,\gamma^\vee)]:=\frac{1}{2}\sum_{ \text{Cycles }\eta \text{ in }\gamma \cup \gamma^\vee}[\eta] \in H_1({{\mathbb T}},{{\mathbb Z}}).$$ Newton polygon of the resistor network -------------------------------------- The *Newton polygon* of the resistor network is $$N=\text{Conv }\{[(\gamma,\gamma^\vee)] \in H_1({{\mathbb T}},{{\mathbb Z}}): (\gamma,\gamma^\vee) \text{ is a pair of OCRSFs}\}.$$ The map $(\gamma,\gamma^\vee) \mapsto [(\gamma,\gamma^\vee)]$ associates to each pair of dual OCRSFs an integer lattice point in the Newton polygon. $N$ is always centrally symmetric and therefore we can center it at the origin.\ Since $\mathcal L_G^{\text{flat}} \cong H^1({{\mathbb T}},{{\mathbb C}}^*)$, we have the natural pairing between homology and cohomology, $$\begin{aligned} (\cdot,\cdot):H_1({{\mathbb T}},{{\mathbb Z}}) \times \mathcal L_G^{\text{flat}}&{\rightarrow}{{\mathbb C}}^*.\end{aligned}$$ We can rephrase Theorem \[pfnlap\] as $$\text{det }\Delta=\sum_{(\gamma,\gamma^\vee)}wt(\gamma)([(\gamma,\gamma^\vee)],i).$$ $P(i):=\text{det }\Delta$ is called the *characteristic polynomial*. The Newton polygon of the characteristic polynomial is $$\text{Conv}\{h \in H_1({{\mathbb T}},{{\mathbb Z}}): \text{Coefficient of }(h,i) \text{ is non-zero in }P(i)\},$$ and it coincides with the Newton polygon of the resistor network.\ If we fix a basis for $H_1({{\mathbb T}},{{\mathbb Z}})$, we get isomorphisms $H_1({{\mathbb T}},{{\mathbb Z}})\cong {{\mathbb Z}}^2$ and $\mathcal L_G^{\text{flat}} \cong ({{\mathbb C}}^*)^2$. If $i \mapsto (z,w) \in ({{\mathbb C}}^*)^2$, $P(i)=P(z,w)$ is a Laurent polynomial in $z,w$. Temperley’s bijection on the torus ---------------------------------- Let $G$ be a resistor network and let $\Gamma_G$ be the associated bipartite graph. The Newton polygon $N$ of the resistor network $G$ coincides with the Newton polygon of the dimer model on $\Gamma_G$.
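To make the Newton polygon concrete, consider a hypothetical one-vertex torus graph with two loop edges of conductances $c_1,c_2$ (an assumption for illustration, not a construction from the text). Its characteristic polynomial is $P(z,w)=c_1(2-z-z^{-1})+c_2(2-w-w^{-1})$, and the support and Newton polygon can be read off from the exponent vectors:

```python
# Support of P(z, w) = c1*(2 - z - 1/z) + c2*(2 - w - 1/w): each monomial
# z^i w^j is recorded by its exponent vector (i, j) in H_1(T, Z) = Z^2.
c1, c2 = 1.3, 0.7
support = {
    (0, 0): 2 * c1 + 2 * c2,
    (1, 0): -c1, (-1, 0): -c1,
    (0, 1): -c2, (0, -1): -c2,
}

# The Newton polygon is the convex hull of the support; here it is the
# square with vertices (+-1, 0) and (0, +-1), centered at the origin.
vertices = sorted(p for p in support if p != (0, 0))

# Central symmetry of N, forced by the symmetry P(z, w) = P(1/z, 1/w):
is_symmetric = all((-i, -j) in support for (i, j) in support)
assert is_symmetric
```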
Given a pair of dual OCRSFs $F=(\gamma,\gamma^\vee)$ on $G$, we can construct a dimer cover $M_F$ on $\Gamma_G$ using the rule: The oriented edge $e=\langle u,v \rangle$ is in $F$ if and only if the edge $\langle u,e\rangle$ is in $M_F$. \[temperley\] Let $(G,c)$ be a resistor network on a torus. $F \mapsto M_F$ is a weight preserving bijection from pairs of dual OCRSFs on $G$ to dimer covers on $\Gamma_G$ such that $[F]=[M_F]$ in $N$. Zig-zag paths and minimality for resistor networks -------------------------------------------------- An *oriented zig-zag path* on a resistor network $G$ is a path that alternately turns maximally right or left at each vertex. Zig-zag paths on $G$ come in pairs with opposite orientations. We denote the set of zig-zag paths on $G$ by $\mathcal Z_G$. We say that $G$ is *minimal* if the lift of any zig-zag path to the universal cover does not intersect itself and if the lifts of two different zig-zag paths intersect at most once.\ If $G$ is a minimal resistor network, associated to each $\alpha \in \mathcal Z_G$ is its homology class $[\alpha] \in H_1({{\mathbb T}},{{\mathbb Z}})$. They correspond to integral primitive vectors on the boundary of the Newton polygon $N$ in cyclic order.\ There is a natural bijection between $\mathcal Z_G$ and $Z_{\Gamma_G}$ that preserves the homology class. External OCRSFs =============== We say that a pair of dual OCRSFs $F$ is *external* if $[F]$ is a boundary lattice point of $N$. It is *extremal* if $[F]$ is a vertex of $N$. We note that if $F=[(\gamma,\gamma^\vee)]$ is external, then the orientations of $\gamma$ and $\gamma^\vee$ are uniquely determined by $[F]$ and $[\gamma]=[\gamma^\vee]=[F]$. 
Therefore we can equivalently define external and extremal OCRSFs on $G$ instead of pairs of dual OCRSFs.\ For a vertex $v \in G$, we define the *local zig-zag fan* $\Sigma_v$ at $v$ to be the complete fan of strongly convex rational polyhedral cones in $H_1({{\mathbb T}},{{\mathbb R}})$ whose rays are generated by the homology classes of zig-zag paths through $v$ that turn maximally right at $v$.\ The fan $\Sigma$ whose rays are generated by the homology classes of all zig-zag paths on $G$ is called the *global zig-zag fan* of $G$. We have the natural map of fans $i_v:\Sigma {\rightarrow}\Sigma_v$ for each $v \in G$. If $\sigma$ is a 2-dimensional cone in $\Sigma$, $i_v(\sigma)$ is contained in a unique two-dimensional cone in $\Sigma_v$, which we shall denote by $\sigma_v$. $\sigma_v$ determines a unique edge $e$ adjacent to $v$ that is oriented away from $v$. Let $\gamma_{\sigma_v}$ be the 1-chain that is $1$ on $e$, $-1$ on $-e$ and $0$ on all other edges. We define $$\gamma_\sigma:=\sum_{v\in V(G)}\gamma_{\sigma_v}.$$ From Temperley’s bijection (Theorem \[temperley\]) applied to Theorem \[extremaldimer\], we obtain: \[crsfextremal\] $\gamma_V:=\gamma_\sigma$ is the unique extremal OCRSF on $G$ such that $[\gamma_V]$ is the vertex $V$ of $N$ that corresponds to $\sigma$. To a zig-zag path $\alpha \in \mathcal Z_G$ we associate a 1-chain $\omega_\alpha$ that is $1$ on edges $e$ in $\alpha$ that are oriented in the same direction as $\alpha$ and $0$ on edges not in $\alpha$. If $\gamma$ is external, $[\gamma]$ lies on an edge $E$ of $N$, which corresponds to a family of zig-zag paths $\{\alpha_k\}$.
Let $E=\langle V_1,V_2\rangle$, where $V_1,V_2$ are vertices of $N$ such that $V_2$ is the vertex after $V_1$ when the boundary of $N$ is traversed counterclockwise.\ Using Temperley’s bijection (Theorem \[temperley\]), Theorem \[externaldimer\] and the bijection between zig-zag paths on $G$ and $\Gamma_G$, we obtain: \[crsfexternal\] Let $A$ be a subset of the family of zig-zag paths $\{\alpha_k\}$ corresponding to $E$. The external OCRSFs on $E$ are of the form $$\gamma_A:=\gamma_{V_1}+\sum_{\alpha_k \in A}\omega_{\alpha_k}.$$ In particular, $\gamma_{V_2}=\gamma_{V_1}+\sum_{k}\omega_{\alpha_k}$, and the number of OCRSFs corresponding to a boundary lattice point of $N$ is a binomial coefficient. \[cyc\] Every external OCRSF is a union of cycles. Suppose $\gamma_\sigma$ is an external OCRSF and let $v$ be a vertex of $G$. By construction, there is a single outgoing edge from $v$. We show that there is also a single incoming edge. Consider the fan $-\Sigma_v$ whose rays are generated by homology classes of zig-zag paths that turn maximally left at $v$ and let $i_v':\Sigma {\rightarrow}-\Sigma_v$ be the natural map. $i_v'(\sigma)$ is contained in a unique two dimensional cone $\sigma_v'$ which corresponds to a unique edge $e$ oriented towards $v$. Define the 1-chain $\gamma_{\sigma_v}'$ to be $1$ on $e$ and $0$ on all other edges and define the 1-chain $$\gamma_\sigma':=\sum_{v\in V(G)}\gamma_{\sigma_v}'.$$ Let $e=\langle u,v \rangle$ be an edge in $G$ and let $\alpha_1$ and $\alpha_2$ be the two zig-zag paths through $e$ that turn maximally left at $v$. Then $\alpha_1$ and $\alpha_2$ turn maximally right at $u$ and therefore using Theorem \[span\], we have $\sigma'_v=\sigma_u$ which implies $\gamma_{\sigma_v}'=\gamma_{\sigma_u}$. Summing over all vertices, we get $\gamma_{\sigma}'=\gamma_{\sigma}$. It is clear from the definition of $\gamma_{\sigma}'$ that every vertex has a unique incoming edge. 
It follows that $\gamma_\sigma$ is a union of cycles.\ By Lemma \[crsfexternal\], every external OCRSF is obtained from an extremal OCRSF $\gamma_V$ by adding cycles corresponding to some zig-zag paths and therefore is also a union of cycles. Spectral data ============= A convex integral polygon $N$ determines a toric surface $\mathcal N$ along with an ample line bundle $\mathcal L$ on it. The global sections of $\mathcal L$ can be canonically identified with Laurent polynomials with Newton polygon $N$. Let $|\mathcal{L}|$ be the linear system of curves on $\mathcal N$ given by the vanishing loci of global sections of $\mathcal L$. Let $g=(\text{number of interior lattice points of }N)-1$.\ Let $\mathcal{S}_N$ be the moduli space of triples $(C,S,\nu)$ such that $C$ is a curve in $|\mathcal{L}|$, $S$ is a degree $g$ effective divisor on $C$ and $\nu$ is a parameterization of the points at infinity of $C$. Let $G$ be a minimal resistor network associated to $N$ and $v$ a vertex of $G$. We have a natural rational map $$\kappa_{G,v}: \mathcal R_N^0 {\rightarrow}\mathcal S_N,$$ described on the affine chart $\mathcal R_G$ as follows:\ $C_0$ is the spectral curve $\{(z,w) \in (\mathbb{C}^*)^2:\text{det }\Delta(z,w)=0\}$. Let $i:C_0 \hookrightarrow (\mathbb{C}^*)^2$ denote the inclusion. The Laplacian sits in the following exact sequence on $(\mathbb{C}^*)^2$: $$\label{es1} \bigoplus_{v \in V} \mathcal{O}_{({{\mathbb C}}^*)^2} \xrightarrow[]{\Delta} \bigoplus_{v \in V}\mathcal{O}_{({{\mathbb C}}^*)^2} {\rightarrow}\text{Coker }\Delta {\rightarrow}0.$$ \[linebundle\] $i^*\text{Coker }\Delta$ is a line bundle on $C_0$. $i^*\text{Coker }\Delta$ has one-dimensional fibers over the non-singular points of $C_0$ (see [@CT79], Theorem 2.2). The fiber of $i^*\text{Coker }\Delta$ at $(1,1)$ is the space of harmonic functions on $G$, which is one-dimensional because the only harmonic functions are the constants.
Since $C_0$ is reduced and $i^*\text{Coker }\Delta$ is a coherent sheaf of constant fiber dimension, it is locally free. The image of the section $\delta_{v}$ gives a section of $i^*\text{Coker }\Delta$. $S$ is the divisor of zeroes of this section. $\nu$ is the parameterization of the points at infinity by zig-zag paths on $G$ such that the coordinate of the point at infinity associated to a zig-zag path is given by the monodromy around that zig-zag path.\ Let $W \subset |\mathcal L|$ be the linear system of curves defined by sections $P(z,w)$ of $\mathcal L$ satisfying the following: - $P(1,1)=0$ and the point $(1,1)$ is a node. - $\sigma:(z,w) \mapsto (\frac{1}{z},\frac{1}{w})$ is an involution on $\{P(z,w)=0\}$. Let $\mathcal{S}_N'$ be the moduli space of triples $(C,S,\nu)$ such that $C$ is a curve in $W$, $S$ is a degree $g$ effective divisor on $C \setminus (1,1)$ satisfying $$\label{divcondition} S+\sigma(S)-q_1-q_2 \equiv K_{\hat{C}},$$ where $\hat{C}$ is the normalization of $C$ and $\nu$ is a parameterization of the points at infinity. $\kappa_{G,v}(\mathcal{R}^0_N) \subseteq \mathcal S_N'$. The rest of this section is devoted to the proof of this theorem. Consider the following commuting diagram: $$\begin{array}{ccccc} \hat{C_0} & \xrightarrow{\ \phi\ } & C_0 & \hookrightarrow & ({{\mathbb C}}^*)^2\\ \downarrow & & \downarrow & & \downarrow\\ \hat{C} & \xrightarrow{\ \pi\ } & C & \hookrightarrow & \mathcal N, \end{array}$$ where $\phi$ and $\pi$ are the normalization maps. We pull back (\[es1\]) using $\phi^* i^*$ and use right-exactness of pullback to get the following exact sequence on $\hat{C_0}$: $$\label{es2} \bigoplus_{v \in V} \mathcal{O}_{\hat{C_0}} \xrightarrow[]{\phi^* i^*\Delta} \bigoplus_{v \in V}\mathcal{O}_{\hat{C_0}} {\rightarrow}\text{Coker }\phi^* i^*\Delta {\rightarrow}0.$$ \[harnackthm\] For points in the space $\mathcal{R}^0_N(\mathbb{R}_{>0})$ of positive real-valued points of $\mathcal{R}^0_N$, we have $(C_0,S,\nu)\in \mathcal S_N'$. Moreover $C_0$ is a simple Harnack curve. $P(z,w)=P(\frac{1}{z},\frac{1}{w})$ follows from $\Delta(z,w)=\Delta(\frac{1}{z},\frac{1}{w})^T$.
$P(1,1)=0$ follows from the observation that the constant functions are discrete harmonic, that is, they are in the kernel of $\Delta(1,1)$. Differentiating the expression $P(z,w)=\sum_{\text{CRSFs }\gamma}wt(\gamma)(2-z^iw^j-z^{-i}w^{-j})$ (where $(i,j)=[\gamma]$), we see that $$\frac{\partial{P}(1,1)}{\partial z}=\frac{\partial{P}(1,1)}{\partial w}=0,$$ so $(1,1)$ is a singular point. For all positive real points, Theorem \[harnackthm\] tells us that $(1,1)$ is a node. Since nodes are characterized by non-vanishing of the Hessian, an open condition, $(1,1)$ is a node for all points in an open subset of $\mathcal{R}^0_N$.\ Let $\hat{C}$ be the normalization of $C$ and let $q_1,q_2 \in \hat{C}$ be the two points in the fiber over the node $(1,1)$. The divisor $S$ satisfies $$S+\sigma(S)-q_1-q_2 \equiv K_{\hat{C}}.$$ Let $Q(z,w)$ be the minor of $\Delta(z,w)$ with the row and column corresponding to $v_0$ removed. Consider the meromorphic 1-form $$\omega=\frac{Q(z,w)dz}{zw\frac{\partial P(z,w)}{\partial w}}.$$ For smooth $(z,w) \in C$, we have $\text{corank }\Delta(z,w)=1$. Therefore we can write $\text{adj }\Delta(z,w)=U(z,w)V(z,w)^T$ for some $U(z,w) \in \text{Ker }\Delta(z,w),V(z,w) \in \text{Coker }\Delta(z,w)$. By definition, $S$ is the set of points in $C_0$ where the component $V(z,w) \cdot \delta_{v_0}$ of $V(z,w)$ vanishes. We have $\text{Ker }\Delta(z,w)\cong \text{Coker }\Delta(z,w)^T=\text{Coker }\Delta(\frac{1}{z},\frac{1}{w})$, so $\sigma(S)$ are the points where the component $U(z,w) \cdot \delta_{v_0}$ vanishes. Since $Q(z,w)= (U(z,w) \cdot \delta_{v_0})( V(z,w) \cdot \delta_{v_0})$, we have $$\text{div}_{C_0} Q(z,w)=S+\sigma(S).$$ Since $C$ has a node at $(1,1)$, $\frac{\partial P(z,w)}{\partial w}$ has a simple zero at $(1,1)$ and so $\omega$ has simple poles at $q_1,q_2$. Therefore, the divisor of $\omega$ on the complement of the points at infinity is $S+\sigma(S)-q_1-q_2$, which has degree $2g-2$.
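As an aside, the behaviour at $(1,1)$ used above, namely $P(1,1)=0$ with both partial derivatives vanishing, is easy to confirm numerically on the toy characteristic polynomial $P(z,w)=c_1(2-z-z^{-1})+c_2(2-w-w^{-1})$ of a hypothetical one-vertex torus graph (an illustrative assumption, not a construction from the text):

```python
# P and its exact partial derivatives; all three quantities vanish at (1, 1),
# so (1, 1) is a singular point of the spectral curve {P = 0}.
c1, c2 = 1.3, 0.7

def P(z, w):
    return c1 * (2 - z - 1 / z) + c2 * (2 - w - 1 / w)

def dP_dz(z, w):
    return c1 * (-1 + 1 / z**2)

def dP_dw(z, w):
    return c2 * (-1 + 1 / w**2)

vals = (P(1.0, 1.0), dP_dz(1.0, 1.0), dP_dw(1.0, 1.0))
assert all(abs(v) < 1e-12 for v in vals)
```

In this toy case the Hessian of $P$ at $(1,1)$ is diagonal with entries $-2c_1,-2c_2$ and mixed term $0$, hence non-degenerate for positive conductances, consistent with $(1,1)$ being a node.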
It remains to identify the zeroes and poles of $\omega$ at the points at infinity.\ The order of vanishing of the 1-form $$\omega_{ij}:=\frac{z^{i-1}w^{j-1}dz}{\frac{\partial P(z,w)}{\partial w}}$$ at the point at infinity corresponding to the primitive integral edge $E$ is given by twice the signed area of the triangle formed by $E$ and the point $(i,j)$ minus one (where area is positive for points $(i,j)$ inside $N$). $Q(z,w)$ is the partition function of CRSFs on the graph $G'$ obtained from $G$ by deleting the vertex $v_0$. By Corollary \[cyc\], the Newton polygon of $Q(z,w)$ is strictly contained in $N$. Therefore the order of vanishing of $\omega$ must be non-negative at all points at infinity, that is, $\omega$ has no poles at these points. The divisor of $\omega$ on the complement of the points at infinity has degree $2g-2$, which is the degree of the canonical class. Therefore $\omega$ must have an equal number of zeroes and poles at the points at infinity, so $\omega$ has no zeroes at infinity either. Discrete Abel-Prym map ---------------------- Let $\mathcal Z=\{\alpha_1,...,\alpha_{2n}\}$ be an enumeration of oriented zig-zag paths in $G$ such that $\nu(\alpha_i)$ correspond to the primitive integral edges of the Newton polygon in cyclic order. We have $\sigma(\alpha_i)=\alpha_{n+i}$. Define $d':V(\tilde{G})\cup F(\tilde{G}) {\rightarrow}\mathbb Z^{\mathcal Z}$ as follows:\ Set $d'(v)=0$ for some vertex $v$. For any vertex or face $u$, let $\tilde{\gamma}$ be a path from $v$ to $u$ in $\tilde{G}$ and let $\gamma$ be its image under the universal covering map $\tilde{G} {\rightarrow}G$.
Let $$d'(u)=d'(v)+\sum_{\alpha \in \mathcal Z} ([\gamma],[\alpha])_\mathbb T \alpha,$$ where $(\cdot,\cdot)_\mathbb T$ is the intersection pairing on $H_1(\mathbb T,{{\mathbb Z}})$.\ Define the inclusion $$\begin{aligned} H_1({{\mathbb T}},{{\mathbb Z}}) &\hookrightarrow {{\mathbb Z}}^{\mathcal Z}\\ h &\mapsto \sum_{\alpha \in \mathcal Z}(h,[\alpha])_{{\mathbb T}}\alpha.\end{aligned}$$ Abusing notation, we will denote the homology class $h$ and its image in ${{\mathbb Z}}^{\mathcal Z}$ by the same letter $h$. Observe that $d'$ is equivariant with respect to the $H_1({{\mathbb T}},{{\mathbb Z}})$ action, that is, $$d'(h \cdot u)=h \cdot d'(u),$$ for all $u \in V(\tilde{G})\cup F(\tilde{G})$. Define the discrete Abel map ([@Fock15]) $d:V(\tilde{G})\cup F(\tilde{G}) {\rightarrow}Cl(\hat{C})$ as the composition $\nu \circ d'$. For a homology class $h=(i,j)$ we have $\text{div}_{\hat{C}}z^iw^j=\nu(h)$, so $d$ descends to a well-defined map $d:V(G)\cup F(G) {\rightarrow}Cl(\hat{C})$. We also define the *discrete Abel-Prym map* $$\begin{aligned} d_P: V(\tilde{G})\cup F(\tilde{G}) &{\rightarrow}\text{Pr}(\hat{C},\sigma) \\ d_P&=\frac{1}{2} I_P \circ d.\end{aligned}$$ The discrete Abel map provides the following consistent way to extend (\[es2\]): $$\label{es3} \bigoplus_{v \in V} \mathcal{O}_{\hat C}(d(v)-\sum_{\alpha_i \in \mathcal Z:v \in \alpha_i}\alpha_i -d(v_0)) \xrightarrow[]{\phi^*i^*\Delta} \bigoplus_{v \in V}\mathcal{O}_{\hat C}(d(v)-d(v_0)) {\rightarrow}\text{Coker }\phi^*i^*\Delta {\rightarrow}0.$$ We wish to identify $\text{Coker }\phi^*i^*\Delta$. The divisor of the image of the section $\delta_{v_0}$ in $\text{Coker }\phi^*i^*\Delta$ in (\[es3\]) restricted to $C_0$ is $S$. The pullback $\phi^* i^*$ preserves zeroes and poles of sections. So we only have to identify the zeroes and poles of the image of $\delta_{v_0}$ at the points at infinity. $\delta_{v_0}$ has no zeroes or poles at infinity. Let $\alpha$ be an oriented zig-zag path.
Let $x$ be a local parameter in a neighborhood $U$ of $\alpha$ disjoint from the other points at infinity with a simple zero at $\alpha$. We trivialize the line bundles in (\[es3\]) as follows: $$\begin{aligned} \mathcal O(-k \alpha )(U) & \xrightarrow[]{\cong}\mathcal O(U)\\ f & \mapsto x^{-k} f\end{aligned}$$ Let $z=a x^m + O(x^{m+1})$ and $w=b x^n+O(x^{n+1})$ be the expansions in the local coordinate $x$. Let us order the vertices so that the vertices on the zig-zag path appear first. Then the Laplacian matrix at $\alpha$ has the following block form: $$\Delta=\begin{pmatrix} \Delta_1 & B \\ 0 & \Delta_2 \end{pmatrix}+O(x),$$ where $\Delta_1$ is the restriction of the Laplacian to the zig-zag path $\alpha$ and $\Delta_2$ is the restriction to the rest of the graph, and where $z$ and $w$ are replaced with $a$ and $b$ respectively. Since we are at $\alpha$, $\Delta_1$ is singular. Generically $\text{dim Ker }\Delta_1=1$ and $\Delta_2$ is invertible. In particular, the fiber of $\text{Coker }\phi^*i^*\Delta$ at $\alpha$ is one dimensional. Combined with Lemma \[linebundle\], we get that $\text{Coker }\phi^*i^*\Delta$ is a line bundle.\ Let $v \in \text{Ker }\Delta_1^*$. Then we have $$\text{Ker }\Delta^*=(v,-(\Delta_2^*)^{-1}B^*v)+O(x).$$ Since generically none of the entries in $\text{Ker }\Delta^*$ is $0$, and since these entries are the cofactors of $\Delta$, we see that $\delta_{v_0}$ has no poles or zeros at $\alpha$. Since $\alpha$ was arbitrary, $\delta_{v_0}$ has no zeroes or poles at infinity. $\text{Coker }\phi^*i^*\Delta=\mathcal{O}(S)$. For any other vertex $v$, let $S_v$ denote the divisor of the image of the rational section $\delta_v$ restricted to $\hat{ C_0}$. Then we have $$\text{div}_{\hat{C}} \delta_v =S_v+ d(v)-d(v_0) \equiv S.$$ Let $e=\frac{1}{2}\pi_1(I(S)-I(q_1)-I(q_2)-\pi^*\Delta_C)+d_P(v_0)$. 
Define for each vertex $v\in G$, $$\psi_v(x):=\frac{\eta(x+d_P(v)-e)}{\eta(d_P(v)-e)}E_{d(v)-d(v_0)}(x).$$ By Theorem \[prt\], $\psi_v$ is a rational section of $\mathcal{O}(S)$ with divisor $S_v+d(v)-d(v_0)$. \[lemcoker\] The cokernel map is given by $\delta_v \mapsto \psi_v$. If $D$ is a generic degree $g$ effective divisor, the Riemann-Roch theorem tells us that $H^0(\hat{C},\mathcal O(D))$ is 1-dimensional. The cokernel map in (\[es3\]) is given by a collection of global sections of $\text{Hom}(\mathcal O(d(v)-d(v_0)),\mathcal O(S)) \cong \mathcal O(S+d(v_0)-d(v))\cong \mathcal O(S_v)$, and therefore uniquely determined up to scaling each component once we specify the image of $\delta_v$ for all $v$. The scaling is fixed by the requirement that the cokernel at $q_1$ and $q_2$ should be $(1,1,...,1)$. Inverse spectral map ==================== We now describe the normalization map $\pi$ explicitly. The following diagram commutes: $$\begin{array}{ccccc} \hat{C_0} & \xrightarrow{\ \phi\ } & C_0 & \hookrightarrow & ({{\mathbb C}}^*)^2\\ \downarrow & & \downarrow & & \downarrow\\ \hat{C} & \xrightarrow{\ \pi\ } & C & \hookrightarrow & \mathcal N \end{array}$$ The functions $z$ and $w$ on $(\mathbb{C}^*)^2$ restrict to rational functions on $C$, which pull back to rational functions $\pi^*z$ and $\pi^*w$ on $\hat{C}$. We have $$\text{div}_{\hat{C}}\pi^*z=\text{div}_{\hat{C}}E_{(1,0)}(x),$$ so they agree up to multiplication by a constant. Since $E_{(1,0)}(q_1)=\pi^*z(q_1)=1$, the constant is $1$, and therefore we have $\pi^*z=E_{(1,0)}(x)$. By the same argument applied to $w$, we get $\pi^*w=E_{(0,1)}(x)$. \[condfig\] ![Vertices, faces and zig-zag paths in the definition of the conductance function.[]{data-label="condfig"}](defnzig1.png "fig:"){width="30.00000%"} Let $uv$ be an edge in $\tilde{G}$, let $f_1$ and $f_2$ be the faces adjacent to $uv$ and let $\alpha,\beta$ be the zig-zag paths as shown in Figure \[condfig\]. Define the conductance function $$\label{invmap} c_{u,v}:=\frac{\eta(d_P(u)-e)\eta(d_P(v)-e)}{\eta(d_P(f_1)-e)\eta(d_P(f_2)-e)}\frac{E(\alpha,\beta)}{E(\alpha,\beta')}.$$ $c_{u,v}$ has the following properties: 1. $c_{u,v}=c_{v,u};$ 2.
$c_{u,v}$ is compatible with taking the dual graph, that is, $c_{f_1,f_2}=1/c_{u,v}$. 3. $c_{u,v}$ is $H_1({{\mathbb T}},{{\mathbb Z}})$-periodic and therefore descends to a conductance function $c$ on $G$. 1. Follows from the symmetry $E(\alpha,\beta)=E(\alpha',\beta')$. 2. Clear. 3. Let $h \in H_1({{\mathbb T}},{{\mathbb Z}})$. We have $$\begin{aligned} d_P(h \cdot u)-d_P(u)&=\frac{1}{2}\pi_1 I(h)=0,\end{aligned}$$ since for $h=(i,j)$, $\nu(h)=\text{div}_{\hat{C}}z^iw^j$ is a principal divisor. The rational map $\rho_{G,v_0}:(C,S,\nu)\mapsto V(c)$ is the inverse of $\kappa_{G,v_0}$. Therefore $\mathcal{R}^0_N$ is birational to $\mathcal S_N'$. \[condfig2\] ![Local configuration near a vertex $u$.[]{data-label="condfig2"}](defnzig3.png "fig:"){width="50.00000%"} 1. $\kappa_{G,v_0} \circ \rho_{G,v_0}=\text{id}:$\ Let $u$ be a vertex in $G$ and let $v_1,...,v_n$ be the vertices adjacent to $u$ in $G$. Let $\alpha_1,...,\alpha_n$ be the zig-zag paths as shown in Figure \[condfig2\]. Note that $$i_{v,u}^{-1}\psi_v(x)=\psi_u(x).$$ Using Theorem \[fqi\] with $z=q_1,t=d_P(u)-e,x_k=\alpha_k$, we get $$\label{condsum} \sum_{v_k \sim u}c_{u,v_k}=\frac{\eta\left(d_P(u)-e-\sum_{k=1}^n \alpha_k\right)\eta(d_P(u)-e)^2}{\prod_{k=1}^n\eta(d_P(u)-e-\alpha_k)}\prod_{k=1}^n \frac{E(\alpha_k,\alpha_{k+1})}{E(\alpha_k,\alpha'_{k+1})}.$$ Using Theorem \[fqi\] with $z=x,t=d_P(u)-e,x_k=\alpha_k$ and (\[condsum\]), we get $$\sum_{v_k \sim u}c_{u,v_k} (\psi_{u}(x)-i_{v_k,u}^{-1}\psi_{v_k}(x))=0,$$ so the following sequence is exact: $$0 \rightarrow \text{Ker }\phi^*i^* \Delta^T \xrightarrow[]{1 \mapsto (\psi_v)_{v}}\bigoplus_{v \in V}\mathcal{O}_{\hat C}(-d(v)+d(v_0)) \xrightarrow[]{\phi^*i^*\Delta^T} \bigoplus_{v \in V} \mathcal{O}_{\hat C}(-d(v)+\sum_{\alpha_i \in \mathcal Z:v \in \alpha_i}\alpha_i +d(v_0)).$$ Since this is the transpose of (\[es3\]), the cokernel map in (\[es3\]) is $\delta_v \mapsto \psi_v$ and we recover $S=\text{div}_{\hat{C_0}}\psi_{v_0}$ as the divisor. 2.
$\rho_{G,v_0} \circ \kappa_{G,v_0}=\text{id}$:\ Suppose $c'$ is a conductance function such that $\kappa_{G,v_0}(c')=(C,S,\nu)$. By Lemma \[lemcoker\], the cokernel map is determined by $S$ and is given by $\delta_v \mapsto \psi_v$. Taking transpose, the equation of $\phi^* i^* \Delta^T$ becomes $$\sum_{v_k \sim u}c'_{u,v_k} (\psi_{u}(x)-i_{v_k,u}^{-1}\psi_{v_k}(x))=0.$$ Since the coefficients of the quadrisecant identity are uniquely determined up to a constant, comparing with Theorem \[fqi\] with $z=x,t=d_P(u)-e,x_k=\alpha_k$, we see that $c'$ agrees with $c$ up to a multiplicative constant. Compatibility with $Y-\Delta$ transformations ============================================= \[et\] ![Y-Delta transformation.[]{data-label="et"}](ydeltazig.png){width="70.00000%"} A $Y-\Delta$ transformation is induced by sliding a zig-zag path through the crossing of two other zig-zag paths as shown in Figure \[et\]. Therefore discrete Abel and discrete Abel-Prym maps $d,d_P$ on $G_1$ induce discrete Abel and discrete Abel-Prym maps on $G_2$, which we will also denote by $d,d_P$ respectively. \[ydcomp\] Let $G_1 {\rightarrow}G_2$ be a $Y-\Delta$ transformation and let $v_1$ and $v_2$ be vertices of $G_1$ and $G_2$ respectively. The following diagram commutes: $$\begin{array}{ccc} & \mathcal{R}^0_{N} & \\ \nearrow & & \nwarrow \\ \mathcal{S}'_N & \xrightarrow{\ s\ } & \mathcal{S}'_N \end{array}$$ where the diagonal maps are $\kappa_{G_1,v_1}^{-1}$ and $\kappa_{G_2,v_2}^{-1}$. The birational map $s$ is defined as $(C,S_1,\nu_1) \mapsto (C,S_2,\nu_2)$, where 1. There is a natural bijection between zig-zag paths on $G_2$ and $G_1$ induced by the $Y-\Delta$ transformation. $\nu_2$ is obtained by composing this bijection with $\nu_1$. 2. $S_2$ is the generically unique degree $g$ effective divisor satisfying $S_2\equiv S_1 + d(v_1)-d(v_2).$ The $Y-\Delta$ transformation preserves the spectral curve. The local picture is shown in Figure \[et\]. Let $e=\frac{1}{2}\pi_1(I(S_1)-I(q_1)-I(q_2)-\pi^*\Delta_C)+d_P(v_1)$. We show that $\kappa_{G_1,v_1}^{-1}= \kappa_{G_2,v_2}^{-1} \circ s$.
We have $$\begin{aligned} a&=\kappa_{G_1,v_1}^{-1}(C,S_1,\nu_1)_{u v_1}=\frac{\eta(d_P(u)-e)\eta(d_P(v_1)-e)}{\eta(d_P(f_2)-e)\eta(d_P(f_3)-e)}\frac{E(\beta,\gamma)}{E(\beta,\gamma')};\\ b&=\kappa_{G_1,v_1}^{-1}(C,S_1,\nu_1)_{u v_2}=\frac{\eta(d_P(u)-e)\eta(d_P(v_2)-e)}{\eta(d_P(f_1)-e)\eta(d_P(f_3)-e)}\frac{E(\gamma,\alpha)}{E(\gamma,\alpha')};\\ c&=\kappa_{G_1,v_1}^{-1}(C,S_1,\nu_1)_{u v_3}=\frac{\eta(d_P(u)-e)\eta(d_P(v_3)-e)}{\eta(d_P(f_1)-e)\eta(d_P(f_2)-e)}\frac{E(\alpha,\beta)}{E(\alpha,\beta')}.\end{aligned}$$ Note that by the definition of $s$, $$\begin{aligned} &\frac{1}{2}\pi_1(I(S_2)-I(q_1)-I(q_2)-\pi^*\Delta_C)+d_P(v_2)\\ &=\frac{1}{2}\pi_1(I(S_1+d(v_1)-d(v_2))-I(q_1)-I(q_2)-\pi^*\Delta_C)+d_P(v_2)\\ &=e.\end{aligned}$$ Therefore $$A=\kappa_{G_2,v_2}^{-1} \circ s(C,S_1,\nu_1)_{v_2v_3}=\frac{\eta(d_P(v_2)-e)\eta(d_P(v_3)-e)}{\eta(d_P(f_0)-e)\eta(d_P(f_1)-e)}\frac{E(\gamma,\alpha')}{E(\gamma,\alpha)}.$$ Equation (\[condsum\]) becomes $$a+b+c=\frac{\eta(d_P(u)-e)^2\eta(d_P(f_0)-e)}{\eta(d_P(f_1)-e)\eta(d_P(f_2)-e)\eta(d_P(f_3)-e)}\frac{E(\alpha,\beta)E(\beta,\gamma)E(\gamma,\alpha)}{E(\alpha,\beta')E(\beta,\gamma')E(\gamma,\alpha')}.$$ Plugging in these expressions, we see that $\frac{bc}{a+b+c}=A$, which is the transition map between the $G_1$ and $G_2$ affine charts. Discrete integrable systems from $Y-\Delta$ moves ================================================= Let $T$ be a sequence of $Y-\Delta$ moves on a graph $G$ such that the resulting graph $T \cdot G$ is isomorphic to $G$ as graphs. Let $\phi_T:G {\rightarrow}T \cdot G$ be the isomorphism. The composition $$\begin{aligned} \mathcal R^0_N \supset \mathcal R^0_G {\rightarrow}\mathcal R^0_{T\cdot G} \xrightarrow[]{\simeq} \mathcal R^0_G \subset \mathcal R^0_N \end{aligned}$$ defines a birational automorphism of $\mathcal R^0_N$, which we denote by $\mu_T$. It is a cluster modular transformation as defined in [@FG03b].
Using Theorem \[ydcomp\], we construct the following commuting diagram: $$\begin{array}{ccccc} \mathcal R^0_N \supset \mathcal R^0_G & \longrightarrow & \mathcal R^0_{T\cdot G} & \xrightarrow{\ \simeq\ } & \mathcal R^0_{G} \subset \mathcal R^0_N\\ \uparrow & & \uparrow & & \uparrow\\ \mathcal S'_N & \xrightarrow{\ s\ } & \mathcal S'_N & \xrightarrow{\ t\ } & \mathcal S'_N, \end{array}$$ where $s$ is the map in Theorem \[ydcomp\] and $t$ is the natural map induced by the graph isomorphism $\phi_T$, that is $(C,S,\nu) \mapsto (C,S,\nu')$, where $\nu'$ is obtained from $\nu$ by composing with $\phi_T$. We have shown: \[lin\] The following diagram commutes: $$\begin{array}{ccc} \mathcal R^0_N & \xrightarrow{\ \mu_T\ } & \mathcal R^0_N\\ \uparrow & & \uparrow\\ \mathcal S'_N & \xrightarrow{\ s_T\ } & \mathcal S'_N, \end{array}$$ where the birational map $s_T$ is defined as $(C,S,\nu) \mapsto (C,S_T,\nu_T)$ where $S_T$ is the (generically) unique degree $g$ effective divisor satisfying $S_T \equiv S +d(v)-d(\phi_T^{-1}(v))$ and $\nu_T=\nu \circ \phi_T^{-1}$. For a fixed curve $C$, the fiber of the projection $(C,S,\nu) \mapsto C$ over $C$ is a cover of the space of degree $g$ effective divisors on $C$ satisfying (\[divcondition\]), which is birational to a cover of $\text{Prym}(\hat{C},\sigma)$. Therefore Theorem \[lin\] tells us that the discrete integrable system arising from $T$ is linearized on a cover of $\text{Prym}(\hat{C},\sigma)$. A conjecture ============ Let $G$ be a minimal resistor network and let $\Gamma_G$ be the associated bipartite graph. Recall the dimer spectral data $\kappa_{\Gamma_G,v}:\mathcal{X}_N^0 {\rightarrow}\mathcal{S}_N$ as defined in [@GK12] Proposition 7.2. By [@GK12] Theorem 1.4 and [@Fock15], $\kappa_{\Gamma_G,v}$ is a birational isomorphism. We conjecture that the map $t$ that makes the diagram below commute is $(C,S,\nu) \mapsto (C,S+(1,1),\nu)$: $$\begin{array}{ccc} \mathcal R^0_N & \longrightarrow & \mathcal S_N'\\ \downarrow & & \downarrow \scriptstyle{t}\\ \mathcal X_N^0 & \xrightarrow{\ \kappa_{\Gamma_G,v}\ } & \mathcal S_N \end{array}$$ Appendix ======== For background on the material collected here, see [@Fay73], [@Fay89], [@Tata1], [@Tata2], [@Taim97]. Let $\pi: \hat{C} {\rightarrow}C$ be a ramified double covering of genus $\hat{g}$ of a smooth curve of genus $g$ with branch points $q_1,q_2$. By the Riemann-Hurwitz theorem, $\hat{g}=2g$.
Let $\sigma: \hat{C} {\rightarrow}\hat{C}$ be the involution permuting the branches of the covering with fixed points at $q_1,q_2$ and let $x'=\sigma(x)$ denote the conjugate point of $x \in \hat{C}$. We can choose a canonical homology basis for $H_1(\hat{C},{{\mathbb Z}})$ $$A_1,B_1,A_2,B_2,...,A_{2g},B_{2g},$$ such that $(\pi_*(A_i),\pi_*(B_i))_{i=1}^{g}$ is a basis for $H_1(C,{{\mathbb Z}})$ and such that $$\sigma(A_k)+A_k=\sigma(B_k)+B_k=0, \quad 1 \leq k \leq g.$$ If the dual basis of holomorphic differentials on $\hat{C}$ is $$u_1,...,u_{2g},$$ then for $1 \leq k \leq g$ we have $$\sigma^* u_{k}+u_{g+k}=0.$$ A holomorphic differential $\omega$ on $\hat{C}$ is called a Prym differential if $\sigma^*(\omega)+\omega=0$. For $1 \leq k \leq g$, the differentials $$\omega_k=\sigma^* u_k + u_k$$ form a basis for the Prym differentials on $\hat{C}$. Let $\Pi$ be the matrix of periods of the Prym differentials around the $B$-cycles of $\hat{C}$: $$\Pi_{jk}=\int_{B_k} \omega_j.$$ The *Prym variety* $\text{Pr}(\hat{C},\sigma)$ is defined to be $$\frac{\mathbb{C}^g}{{{\mathbb Z}}^g + \Pi {{\mathbb Z}}^g}.$$ Let $E(x,y)$ denote the prime form on $\hat{C}$. $E(x,y)$ has the symmetry $E(x,y)=E(x',y')$ for all $x,y \in \hat{C}$. Let $Cl(\hat{C})$ denote the divisor class group of $\hat{C}$. For a divisor $D=\sum_i a_i-\sum_j b_j \in Cl(\hat{C})$, define $$E_D(x):= \frac{\prod_i E(x,a_i)}{\prod_j E(x,b_j)}.$$ It is a section of the line bundle associated to $D$ with divisor $D$. Let $\hat{J},J$ be the Jacobians of $\hat{C},C$ respectively and let $I: \hat{C} {\rightarrow}\hat{J}$ be the Abel map with base-point $p_0\in \hat{C}$. By Riemann’s theorem, we have $\hat{J} =I(\text{Symm}^{2g}\hat{C})$ and the involution $\sigma$ induces an involution $\sigma_*:\hat{J} {\rightarrow}\hat{J}$: Given $\zeta \in \hat{J}$, let $D \in \text{Symm}^{2g}(\hat{C})$ such that $I(D)=\zeta$ and let $\sigma_*(\zeta)=I(\sigma(D))$.
In coordinates, $\sigma_*$ is given by $$(z_1,...,z_{2g})\mapsto (-z_{g+1},...,-z_{2g},-z_1,...,-z_g).$$ The Prym variety is embedded by $\phi:\text{Pr}(\hat{C},\sigma) \hookrightarrow \hat{J}:$ $$(z_1,...,z_g) \mapsto (z_1,...,z_g,z_1,...,z_g).$$ We also have projections $\pi_1:\hat{J}{\rightarrow}\text{Pr}(\hat{C},\sigma)$ and $\pi_2:\hat{J} {\rightarrow}J$ given by $$\begin{aligned} \pi_1(z_1,...,z_{2g})&=(z_1+z_{g+1},...,z_g+z_{2g})\\ \pi_2(z_1,...,z_{2g})&=(z_1-z_{g+1},...,z_g-z_{2g}).\end{aligned}$$ Define the Abel-Prym map with base-point $q_1$: $$\begin{aligned} I_{P}:\hat{C} &{\rightarrow}\text{Pr}(\hat{C},\sigma)\\ x &\mapsto \left( \int_{q_1}^{x} \omega_1,...,\int_{q_1}^{x} \omega_g \right), \text{ for } x \in \hat{C}.\end{aligned}$$ Note that $I_P=\pi_1 \circ I$. Let $\eta(z)$ be the theta function on $\text{Pr}(\hat{C},\sigma)$. Note that for $e \in \text{Pr}(\hat{C},\sigma)$, we have $$e=\frac{1}{2}\pi_1(\phi(e)).$$ \[prt\] If $e \in \text{Pr}(\hat{C},\sigma)$, then either $\eta(I_P(x)-e) \equiv 0$ for all $x \in \hat{C}$ or $\text{div}_{\hat{C}}\eta(I_P(x)-e)=D$ is a degree $\hat{g}$ effective divisor satisfying $$\phi(e) = I(D)-I(q_1)-I(q_2)-\pi^* \Delta_C \quad \text{in }\hat{J},$$ where $\Delta_C \in J$ is the vector of Riemann constants on $C$, and $$D+\sigma(D)-q_1-q_2 \sim K_{\hat{C}},$$ where $K_{\hat{C}}$ is the canonical class of $\hat{C}$. Moreover, $D$ is determined by these conditions. \[fqi\] Let $t \in \text{Pr}(\hat{C},\sigma)$, $z\in \hat{C}$ and suppose $x_k \in \hat{C}$ for $k \in {{\mathbb Z}}/n {{\mathbb Z}}$.
$$\begin{aligned} \sum_{k=1}^{n}\frac{\eta(t+I_P(z)-I_P(x_k)-I_P(x_{k+1}))}{\eta(t-I_P(x_k))\eta(t-I_P(x_{k+1}))}\frac{E(x_k,x_{k+1})}{E(x_k,x'_{k+1})}\frac{E(z,x'_k)E(z,x'_{k+1})}{E(z,x_k)E(z,x_{k+1})}\\ =\frac{\eta\left(t-\sum_{k=1}^n I_P(x_k)\right)\eta(t+I_P(z))}{\prod_{k=1}^n\eta(t-I_P(x_k))}\prod_{k=1}^n \frac{E(x_k,x_{k+1})}{E(x_k,x'_{k+1})}.\end{aligned}$$ [BL]{} Boutillier, C., de Tilière, B., Raschel, K.: [*The Z-invariant massive Laplacian on isoradial graphs*]{}, Invent. Math. 208 (2017), 109. Broomhead N.: [*Dimer models and Calabi-Yau algebras*]{}, Mem. Amer. Math. Soc. 215 (2012), no. 1011. Colin de Verdière Y.: [*Réseaux électriques planaires. I*]{}. Comment. Math. Helv. 69 (1994), no. 3, 351–374. Curtis E. B., Ingerman D., Morrow J. A.: [*Circular planar graphs and resistor networks*]{}. Linear Algebra Appl. 283 (1998), no. 1-3, 115–150. Carroll G., Speyer D.: [*The cube recurrence*]{}. Electron. J. Combin. 11 (2004), no. 1, Research Paper 73, 31 pp. arXiv:math/0403417. Cook R. J., Thomas A. D.: [*Line bundles and homogeneous matrices*]{}. Quart. J. Math. Oxford (2), 30 (1979), 423–429. Fay J.: [*Theta functions on Riemann surfaces*]{}. Lecture Notes in Mathematics, Vol. 352. Springer-Verlag, Berlin, 1973. Fay J.: [*Schottky relations on $\frac{1}{2}(C-C)$*]{}. Proc. Symp. Pure Math. 49, Part 1, 485–502. Fock V. V.: [*Inverse spectral problem for GK integrable systems*]{}, arXiv:1503.00289 (2015). Fock V., Goncharov A. B.: [*Cluster ensembles, quantization and the dilogarithm*]{}. Ann. Sci. École Norm. Sup. (2009). arXiv:math.AG/0311245. Goncharov A. B., Kenyon R.: [*Dimers and cluster integrable systems*]{}, Ann. Sci. École Norm. Sup. (2013). Kennelly A. E.: [*Equivalence of triangles and stars in conducting networks*]{}, Electrical World and Engineer, 34 (1899), 413–414. Kenyon R.: [*The Laplacian and Dirac operators on critical planar graphs*]{}. Invent. Math. 150 (2002), no. 2, 409–439. Kenyon R.: [*Spanning forests and the vector bundle Laplacian*]{}, Ann. Probab.
39 (2011), no. 5, 1983-2017. arXiv:1001.4028. Kenyon R.: [*Determinantal spanning forests on planar graphs*]{}, arXiv:1702.03802. Kenyon, R., Propp, J., Wilson, D. B.: [*Trees and matchings*]{}. Electron. J. Combin. 7 (2000), Research Paper 25, 34 pp. (electronic). Mikhalkin, G.: [*Real algebraic curves, the moment map and amoebas*]{}, Ann. of Math. (2) 151 (2000), no. 1, 309-326. arXiv:math/0010018. Mumford D.: [*Tata lectures on theta I*]{}. Mumford D.: [*Tata lectures on theta II*]{}. Taimanov I. A.: [*Secants of Abelian varieties, theta functions, and soliton equations*]{}, Uspekhi Mat. Nauk, 52:1(313) (1997), 149-224; Russian Math. Surveys, 52:1 (1997), 147-218. Terrence George, <span style="font-variant:small-caps;">Department of Mathematics, Brown University, Providence, Rhode Island 02912</span> *E-mail address*: `gterrence@math.brown.edu`
--- author: - Moumita Patra - 'Santanu K. Maiti[^1]' title: 'Simultaneous spin-based Boolean logic operations with re-programmable functionality' --- Introduction ============ Logic gates are the essential building blocks of modern computers and digital electronics, as their functionalities rely on the implementation of Boolean functions. These gates are usually composed of various field-effect transistors (FETs) and metal-oxide-semiconductor field-effect transistors (MOSFETs). Finding logical responses in a simple nano-scale device is therefore a subject of intense research aimed at better-performing computable operations. For the proper execution of such operations, wiring between individual logic gates is required, which limits integration densities, gives rise to huge power consumption and restricts processing speeds [@cite1]. Therefore, accommodating Boolean logic gates in a single active element is highly desirable to eliminate wiring amongst transistors. Although a wealth of literature has been developed on designing logic gates based on molecular systems [@cite2; @cite3; @cite4], most of these works involve a single logic operation at a time, and very few works are available so far on operating parallel logic gates in one setup, which is highly desirable from the perspective of efficiency and suitable computable operations. Hod [*et al.*]{} [@hod] have made an effort to design parallel logic gates considering a cyclic molecule where a realistic magnetic field and gate potential are used as the inputs. In their work they have only shown AND and NAND operations. A completely different prescription was given by imposing a novel architecture based on a single parametric (electromechanical) resonator where three logical operations along with multibit logic functions can be performed [@cite1].
This work essentially suggests a suitable prospect of designing a parallel logic processor using a single resonator. There are a few other realizations of parallel logic operations [@pl1; @pl2; @pl3; @pl4] considering different semiconducting materials, molecular systems, protein-like molecules and synthetic gene networks. But these studies do not essentially address the phenomenon of ‘simultaneous Boolean logical operations’, which is precisely the main motivation of the present work. Most of the works available in the literature exploit electronic charge for logical operations, but implementing these functional operations with the spin degree of freedom undoubtedly yields several advantages such as rapid processing, much smaller energy consumption, greater integration densities, etc. [@spin1; @spin2]. In order to design an efficient spintronic device, be it for logic functions or any other operations, we need to take care of two important things: spin injection efficiency and spin coherence length. Metallic systems are much superior to semiconducting materials in the aspect of spin injection, but the former have a much lower spin coherence length [@metal]. Both requirements, viz, efficient spin injection and a sufficient coherence length, can be met if we construct the device from a normal metal while compromising on system size. Hopefully this can be done with suitable designing of the setup, and we explore it in this article. Here we also circumvent the use of molecular systems, as normally employed in describing logic operations, due to the fact that they exhibit much lower transconductance [@cite5]. Considering all these factors, here we propose a new idea of designing ‘simultaneous Boolean logic operations’ using a three-terminal bridge setup (see Fig. \[fig1\]) where the output response is fully spin based.
In the two outgoing leads we get two different logical operations at the same time, which we measure by calculating the spin current $I_s$, and the central mechanism is controlled by the system placed within the three contacting leads. The bridging system consists of a metallic ring which is divided equally to form two sub-rings. Apart from the common portion of the two sub-rings (viz, the dividing line connecting the sites $1$, $9$, $10$ and $5$), the remaining section (i.e., the ring circumference joining the sites $1$, $2$, $\dots$, $7$ and $8$) is subjected to Dresselhaus SO interaction (DSOI) [@dsoi], which is distributed uniformly along the ring. Along with this we also consider Rashba SO interaction (RSOI) [@rsoi], for which two different cases, viz, uniform and non-uniform distribution along the ring circumference, are considered in order to implement specific simultaneous logic operations, as will be clearly observed from our subsequent analysis. Both SO interactions are commonly encountered in solid-state materials, and among them RSOI draws much attention as its strength can be tuned externally [@gate1; @gate2], which yields controlled spin transmission. We use RSOI as one of the input signals of the logic operations, and in some cases we also introduce an equal amount of magnetic flux in the two sub-rings, which is treated as the other input signal. The ‘ON’ and ‘OFF’ states of the output signal are described by the positive and negative signs of $I_s$, respectively, where $I_s=I_{\uparrow}-I_{\downarrow}$ ($I_{\sigma (\sigma=\uparrow,\downarrow)}$ being the spin dependent current). By selectively choosing the physical parameters, viz, RSOI, magnetic flux and Fermi energy, the present setup can be ‘reprogrammed’ to realize all six two-input Boolean logic gates, two operations at a time.
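Since each gate is read off from the sign pattern of $I_s$ over the four input combinations, the decoding step can be sketched in a few lines of Python (a minimal illustration; the numerical current values below are hypothetical placeholders, not results from this work):

```python
# Truth tables for the six two-input gates; outputs ordered by inputs
# (0,0), (0,1), (1,0), (1,1).
GATES = {
    "OR":   [0, 1, 1, 1], "NOR":  [1, 0, 0, 0],
    "AND":  [0, 0, 0, 1], "NAND": [1, 1, 1, 0],
    "XOR":  [0, 1, 1, 0], "XNOR": [1, 0, 0, 1],
}

def identify_gate(currents):
    """Map the sign of I_s to a logic level (positive -> ON) and name
    the gate that a set of four responses realizes, if any."""
    outputs = [1 if i > 0 else 0 for i in currents]
    for name, truth in GATES.items():
        if outputs == truth:
            return name
    return None

# Hypothetical spin currents (arbitrary units) at one outgoing lead:
print(identify_gate([-0.3, 0.8, 0.6, 1.1]))  # -> OR
```

The same decoding applied to the second outgoing lead would then name the complementary gate of the pair.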
Having achieved these parallel logic operations, we can also think about other special-purpose logic operations [@repro] like the full-adder, half-adder, multiplier, switching spin action, etc. The rest of the paper is arranged as follows. With the above brief introduction and motivation, next we illustrate our model quantum system and the theoretical prescription for the calculations. The logical operations are clearly described in a separate section. In this section we also discuss the possibilities of utilizing the setup as a storage mechanism. Logical operations along with a storage function are extremely important for the complete execution of computable operations, and that is hopefully possible since our response is spin based. In usual charge-based devices we need to transfer the information to a memory, as these are usually highly volatile [@repro]. Finally, we end with the conclusion and future perspectives of spintronic applications. Model Hamiltonian and the Method ================================ The full bridge system described in Fig. \[fig1\] is divided into three parts: the central ring conductor, three leads (one incoming and two outgoing), and the conductor-to-lead coupling. We simulate these parts within the tight-binding (TB) framework. Assuming the leads are perfect and semi-infinite, we can write the TB Hamiltonian of the leads as $$H_{\mbox{\tiny leads}} = \sum\limits_p \Big[ \sum\limits_i c_i^\dagger \epsilon_0 \mathds{1} c_i + \sum\limits_i \left( c_{i+1}^\dagger t_0 \mathds{1} c_i + h.c.\right)\Big] \label{eq1}$$ where the summation over $p$ ($p$ runs from $1$ to $3$) is used for the three leads. The parameters $\epsilon_0$ and $t_0$ describe the on-site energy and nearest-neighbor hopping (NNH) integral, respectively. For an ordered lead we can put $\epsilon_0=0$ without loss of any generality. $t_0$ controls the band width ($4t_0$) of the leads.
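As a rough numerical check of the lead description above, a truncated version of Eq. (1) and its band width can be sketched in numpy (a minimal illustration; the function name and the finite truncation are our own assumptions, since the actual leads are semi-infinite):

```python
import numpy as np

def lead_hamiltonian(n_sites, eps0=0.0, t0=2.0):
    """Finite truncation of one perfect lead from Eq. (1): on-site
    energy eps0 and NNH integral t0, each multiplied by a 2x2
    identity in spin space."""
    I2 = np.eye(2)
    H = np.zeros((2 * n_sites, 2 * n_sites))
    for i in range(n_sites):
        H[2 * i:2 * i + 2, 2 * i:2 * i + 2] = eps0 * I2
    for i in range(n_sites - 1):
        H[2 * i:2 * i + 2, 2 * (i + 1):2 * (i + 1) + 2] = t0 * I2
        H[2 * (i + 1):2 * (i + 1) + 2, 2 * i:2 * i + 2] = t0 * I2
    return H

# Dispersion of the infinite lead: E(k) = eps0 + 2 t0 cos(k),
# so the band width is 4 t0 (= 8 eV for t0 = 2 eV).
k = np.linspace(-np.pi, np.pi, 2001)
band = 2 * 2.0 * np.cos(k)
print(band.max() - band.min())  # -> 8.0
```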
We couple the incoming lead at site $1$ of the ring for the entire analysis, whereas the other two leads are connected at two other sites of the ring (say, $k$ and $l$), which are variable. The leads are coupled to the ring through the hopping parameter $\tau_p$. The TB Hamiltonian for the central system looks quite different from Eq. \[eq1\], as the ring system is subjected to Rashba and Dresselhaus SO interactions, and to the magnetic flux as well. The dividing line is free from any kind of SO interaction, and since the two sub-rings are threaded by equal amounts of magnetic flux, no phase factor is introduced in this segment. We write the general Hamiltonian of the central ring (CR) geometry as [@ham1; @ham2; @ham3; @ham4] $$\begin{aligned} H_{\mbox{\tiny CR}} &=& \sum\limits_{n (\mbox{\tiny all sites})} c_n^\dagger \epsilon_n \mathds{1} c_n + \sum\limits_{n (\mbox{\tiny wire})} \left(c_{n+1}^\dagger t \mathds{1} c_n + h.c.\right) \nonumber \\ & + &\sum\limits_{n (\mbox{\tiny ring})}\Big[c_{n+1}^\dagger t_D e^{i\theta}(\mbox{$\sigma_y$}\cos\zeta_{n,n+1} +\mbox{$\sigma_x$}\sin\zeta_{n,n+1})c_n\nonumber \\ &+& h.c. \Big] - i\sum\limits_{n (\mbox{\tiny ring})} \Big[c_{n+1}^\dagger t_R e^{i\theta}(\mbox{$\sigma_x$}\cos \zeta_{n,n+1}+ \nonumber \\ & & \mbox{$\sigma_y$} \sin \zeta_{n,n+1})c_n + h.c.\Big] \label{eq2}\end{aligned}$$ where $c_n$ is a column of operators formed from the fermionic operators $c_{n\uparrow}$ and $c_{n\downarrow}$. $\theta=\pi \Phi/2$ is the phase factor acquired by an electron [@ph] while traversing the periphery of the ring. In this Hamiltonian we do not consider any spin splitting due to Zeeman interaction, as it is too small compared to the other two splitting mechanisms associated with the Rashba and Dresselhaus SO couplings. With this assumption no physical picture will be altered.
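A sketch of how a Hamiltonian of this form can be assembled numerically is given below (our own illustration, not the authors' code: only the outer ring is built, the dividing wire is omitted, and a plain hopping $t$ is folded into the ring bonds to keep the toy ring connected, whereas the printed equation assigns $t$ only to the central wire):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2)

def ring_hamiltonian(N=8, t=1.0, tR=0.3, tD=0.25, phi=0.0, eps=0.0):
    """Outer-ring part of the Hamiltonian: Dresselhaus (tD) and
    Rashba (tR) bond terms with a flux phase theta = pi*phi/2.
    NOTE (assumption): plain hopping t is included on ring bonds."""
    theta = np.pi * phi / 2
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for n in range(N):
        m = (n + 1) % N
        zn = 2 * np.pi * n / N          # site angle zeta_n (0-based n)
        zm = 2 * np.pi * (n + 1) / N    # angle of the next site
        z = (zn + zm) / 2               # bond angle zeta_{n,n+1}
        hop = np.exp(1j * theta) * (t * I2
              + tD * (SY * np.cos(z) + SX * np.sin(z))
              - 1j * tR * (SX * np.cos(z) + SY * np.sin(z)))
        H[2 * m:2 * m + 2, 2 * n:2 * n + 2] = hop
        H[2 * n:2 * n + 2, 2 * m:2 * m + 2] = hop.conj().T
        H[2 * n:2 * n + 2, 2 * n:2 * n + 2] = eps * I2
    return H

H = ring_hamiltonian()
print(np.allclose(H, H.conj().T))  # -> True (Hermitian by construction)
```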
The Rashba and Dresselhaus SO interactions are described by the factors $t_R$ and $t_D$, respectively, and $\zeta_n=2\pi(n-1)/N$ ($N$ being the total number of atomic sites in the ring, which is $8$ for our schematic diagram) defines the factor $\zeta_{n,n+1}=(\zeta_n+\zeta_{n+1})/2$. The other physical parameters $\epsilon_n$ and $t$ represent the on-site energy and NNH integral in the ring as well as in the central wire. The $\sigma_i$’s ($i=x,y,z$) are the usual Pauli spin matrices, where $\sigma_z$ is diagonal. This completes the model and the TB Hamiltonians describing the full system. Now, in order to describe the logical responses in the two outgoing leads, we need to calculate spin currents. At absolute zero temperature, the spin current at the $q$th lead ($q$ can be lead-2 (i.e., output-I) or lead-3 (i.e., output-II)) is computed from the relation [@datta] $$I_s^q(V) = \frac{e}{h} \int\limits_{E_F-\frac{eV}{2}}^{E_F+ \frac{eV}{2}}T_{1q}(E) \, dE \label{eq3}$$ where $T_{1q}(E)$ is the effective two-terminal spin-selective transmission probability, defined as $T_{1q}(E)=(T_{1q}^{\uparrow\uparrow}+T_{1q}^{\downarrow\uparrow})- (T_{1q}^{\downarrow\downarrow}+T_{1q}^{\uparrow\downarrow})$. To find the spin-dependent transmission probabilities $T_{1q}^{\sigma\sigma^{\prime}}$ we use the Green’s function method; in terms of the retarded and advanced Green’s functions ($G^r$, $G^a$) they can be expressed as [@datta; @car; @fl] $T_{1q}^{\sigma\sigma^{\prime}}(E) = \mbox{Tr}\big[\Gamma_1^{\sigma} G^r \Gamma_q^{\sigma^{\prime}} G^a\big]$, where $\Gamma_1^{\sigma}, \Gamma_q^{\sigma^{\prime}}$ are the coupling matrices and $G^r=(G^a)^{\dagger}=(E-H_{eff})^{-1}$. $H_{eff}$ is the effective Hamiltonian of the central ring system, incorporating the effects of the side-attached leads through self-energy corrections. In our prescription, a positive $I_s$ means a high output, while a negative $I_s$ corresponds to a low output.
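The transmission formula $T=\mbox{Tr}\big[\Gamma_1 G^r \Gamma_q G^a\big]$ can be illustrated on a toy spinless chain in the wide-band limit (a minimal sketch under assumptions that differ from the full spin-resolved treatment with exact lead self-energies used here):

```python
import numpy as np

def negf_transmission(H, E, gamma=1.0, site_in=0, site_out=-1):
    """Two-terminal transmission T(E) = Tr[Gamma_1 G^r Gamma_q G^a]
    with wide-band-limit self-energies Sigma = -i*gamma/2 placed on
    the contact sites (a toy spinless stand-in for the real setup)."""
    n = H.shape[0]
    Sigma1 = np.zeros((n, n), dtype=complex)
    Sigma2 = np.zeros((n, n), dtype=complex)
    Sigma1[site_in, site_in] = -0.5j * gamma
    Sigma2[site_out, site_out] = -0.5j * gamma
    Gr = np.linalg.inv(E * np.eye(n) - H - Sigma1 - Sigma2)  # retarded GF
    Ga = Gr.conj().T                                         # advanced GF
    Gam1 = 1j * (Sigma1 - Sigma1.conj().T)                   # coupling matrices
    Gam2 = 1j * (Sigma2 - Sigma2.conj().T)
    return np.trace(Gam1 @ Gr @ Gam2 @ Ga).real

# Toy check: two-site chain with hopping 1 eV, probed at E = 0
H_chain = np.array([[0.0, 1.0], [1.0, 0.0]])
T = negf_transmission(H_chain, E=0.0)
print(round(T, 4))  # -> 0.64
```

In the paper's setup the same trace formula is evaluated with spin-resolved coupling matrices $\Gamma_1^{\sigma}$, $\Gamma_q^{\sigma'}$ rather than the scalar wide-band couplings assumed here.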
Essential Results and Discussion ================================ [*Simultaneous logical operations:*]{} As already stated, by selectively choosing the physical parameters, viz, Rashba SO coupling, magnetic flux $\Phi$ in each sub-ring and the locations of the outgoing leads, we can design all possible Boolean logic gates, two such gates at a time. Here we present three pairs (OR-NOR, AND-NAND, and XOR-XNOR) for specific sets of parameter values, as illustrative examples, but one can get the other pairs quite easily simply by adjusting the required variables; thus, our system is reprogrammable. We carry out numerical calculations at absolute zero temperature, considering a $10$-site system as discussed in Fig. \[fig1\]. Throughout the analysis we set, unless otherwise specified, all site energies to zero, the NNH integral in the contacting leads to $2\,$eV, and all other NNH integrals, including the ring-to-lead coupling, to $1\,$eV. The DSOI is fixed at $0.25\,$eV, and it is uniform along the ring circumference as stated earlier. The other two physical parameters, RSOI and $\Phi$, are not constant, and we mention their specific values during the subsequent analysis. In what follows we present the different functional logical operations one by one. The setup is shown in Fig. \[fig2\](a), where the outgoing leads are coupled to sites $4$ and $6$ of the ring, respectively. Here two different strengths of RSOI are taken into account, which are treated as the low and high states of the inputs, and no magnetic flux is added. These two input states are implemented by changing the Rashba strengths in the green portions of the upper and lower arms of the ring (see Fig. \[fig2\](a)), keeping a constant magnitude of RSOI in the other parts. It looks like a hybrid ring and seems quite easy to fabricate. The responses of this setup at the two outgoing leads under different input conditions are shown in Figs. \[fig2\](b) and (c).
The spin current $I_s$ is computed up to a reasonable bias voltage, and over this entire voltage window we can clearly see that the two outgoing leads exhibit two different logical operations (OR and NOR) simultaneously. The output currents are also sufficiently high ($\sim \mu$A) and are thus easy to detect. The underlying physics relies on the interplay between RSOI and DSOI, which leads to anisotropic spin-dependent transport in the outgoing leads, as discussed clearly by Chang and co-workers [@ham1; @chnew; @chnew1]. In the presence of both SO couplings, an effective periodic potential develops which breaks the rotational symmetry of the ring, resulting in non-trivial spin-dependent transport phenomena [@ham1; @chnew; @chnew1]. To achieve simultaneous logical operations we essentially need to obtain polarized spin currents from an unpolarized beam of electrons in the outgoing leads of a multi-terminal bridge setup. Several propositions along this direction, i.e., how to get polarized spin currents in the presence of SO interactions in the outgoing leads of different shaped geometries, have already been made by several groups, including one of the present authors [@spl1; @spl2; @spl3; @spl4; @spl5]. The main focus of those works was to achieve polarized spin currents under different input conditions, but none of them attempted logical operations, [*especially simultaneous logic functions.*]{} This is precisely what we do in our present work, and the responses we get in the two outgoing leads are essentially the combined effect of SO interactions and the quantum interference of electronic waves passing through different sectors of the geometry. Here it is important to note that all the logical operations are implemented by determining the spin current $I_s$, and more precisely by noting its sign, viz, positive or negative.
Thus, for two logical operations at the two output leads, we need to satisfy all the operations simultaneously (a set of four outputs for each logic gate) associated with the input conditions, and we achieve this goal through the interplay between the RSOI and DSOI and the interference among the electronic waves. If we set any one of the two SO interactions to zero, which brings back the rotational symmetry of the ring [@spl1; @spl2; @spl3; @spl4], it becomes too hard to satisfy all the above-mentioned operations at the two output leads. Particularly, when the DSOI becomes zero (for instance), no spin current is available for the input condition where the RSOI is also zero, which thus fails to realize logic functions. In that case we would have to consider non-zero Rashba couplings for the inputs, but satisfying all the output conditions would not be as simple as in the cases we discuss here with our present setups. Considering the identical ring type (viz, the hybrid ring where RSOI is distributed non-uniformly) as taken in Case I, and slightly modifying the location of one of the two outgoing leads, we get a pair of two other simultaneous logical operations. The setup along with the results is shown in Fig. \[fig3\], where we see that the XOR and XNOR operations are clearly obtained from the two outgoing leads. We simulate these results setting the equilibrium Fermi energy at $0.4\,$eV. Comparing the results given in Figs. \[fig2\] and \[fig3\] we get a clear hint of the robust effect of quantum interference, as in one case a specific set of two logical operations is obtained, while another such set is visible in the other case. Finally, we consider another configuration to implement the other two logic functions, i.e., the AND and NAND operations.
Here the full circumference is subjected to Rashba SO interaction, which acts as one of the two input signals, and for the other input we impose an equal amount of magnetic flux $\Phi$ in each of the two sub-rings, as shown schematically in Fig. \[fig4\](a). Thus, RSOI and $\Phi$ are used as the two inputs of the logic functions, and the responses in the two outgoing leads, associated with the four input conditions, are given in Figs. \[fig4\](b) and (c). The two logical operations (AND and NAND) are clearly visible, and in this case the interplay between the SO couplings, the magnetic flux and quantum interference plays the central role in exhibiting these two logic operations. To put more emphasis on reprogrammability, finally we search for a possible ring-lead configuration where all the two-input logic gates can be achieved. In Fig. \[fig5\] four logical operations (XOR-XNOR and AND-NAND) are presented for a specific ring-to-lead geometry, and interestingly, for this same setup the other two logic functions (OR-NOR) are also implemented, as discussed earlier in Fig. \[fig2\]. Looking carefully into the spectra and comparing the results given in Figs. \[fig3\]-\[fig5\], one can see that the responses obtained in Fig. \[fig5\] are somewhat inferior, as the magnitudes of the spin currents in the outgoing leads are noticeably lower in a few cases compared to those of the individual geometries, i.e., the responses obtained in Figs. \[fig3\] and \[fig4\]. This low-current response can hopefully be tolerated, as we are able to establish all the possible two-input logic gates, two operations at a time, from a single ring-lead configuration. Thus, a possible route to designing reprogrammable logic gates emerges.
This argument, i.e., the reprogrammability, can be strengthened further following the proposition given by Peeters and his group in a work where they have shown that programmable spintronic devices can be designed using a network of quantum rings in which selective spin transmission is obtained by locally tuning the Rashba SO coupling in different rings of the network [@qnano]. Before we end the discussion of simultaneous logical operations, we note that one may ask whether the same functionality persists if we consider a similar kind of geometry with the atomic sites $9$ and $10$ removed, i.e., in the absence of the central horizontal line. The answer is not strictly no, but it is very difficult to execute all six logic functions, especially two logic operations at the two outgoing leads, which we confirm through our detailed numerical calculations. It is true that the polarizing effect in the presence of RSOI and DSOI, on which the logic operations are based, is available even in a single ring geometry with one input and two outgoing leads, but the inclusion of multiple paths to form a network always yields novel spintronic features, as substantiated clearly in Refs. [@qnano; @qnano1]. [*Applicability as a storage device:*]{} Along with the above-mentioned functional logical operations, here we give a brief outline of how such a system can be utilized for storage purposes as well. Utilization of the spin orientation ($\uparrow$, $\downarrow$) for storing information is the most suitable operation [@spin1], as it does not alter its state unless some perturbation is imposed. The idea originates from the mechanism of spin-transfer torque (STT) [@stt1], which suggests that a beam of polarized spin current of sufficient magnitude can rotate the spin orientation of a free magnetic moment, by transferring spin angular momentum, along the spin direction of the incident beam.
A spin current much higher than the cutoff for switching the spin magnetization can easily be achieved [@stt1] in our case, mainly because of the very narrow outgoing channel. Depending on the sign ($+$ve or $-$ve) of the polarized spin current $I_s$, the free magnetic moment aligns along the $+Z$ or $-Z$ direction, and by assigning the logic bits 1 or 0 to these orientations we can eventually store one bit of memory [@stt1; @stt2; @stt3; @stt4]. The free magnetic site can be directly embedded in the outgoing lead wire or placed in its close proximity, and in either of these two cases angular momentum transfer takes place through the exchange mechanism. Thus, as the present setup has two outgoing leads, we can think of two such free magnetic sites and, in principle, can store two bits simultaneously, which significantly enhances the storage capacity. Closing Remarks =============== In this work we make an in-depth analysis of designing simultaneous logic gates based on spin states, which, to the best of our knowledge, has not been discussed so far in the literature. The significance of this proposal is that it relies on a simple tailor-made geometry that can be configured to achieve different functional logical operations. Though the magnitude of the spin current $I_s$ changes slightly with the strengths of the SO fields and the magnetic flux, all the essential results, determined by the sign of $I_s$, remain unchanged for a wide range of parameter values, including the bias voltage, as we confirm through our exhaustive numerical calculations. Along with the logical operations, we also put forward an idea of devising this system for storage purposes utilizing the concept of spin exchange interaction. Since in this three-terminal setup polarized spin currents are obtained at the two outgoing leads, we can in principle store two bits by introducing two free magnetic sites, which yields a higher storage capacity.
Thus, both logic functions and a storage mechanism can be implemented in a single device, circumventing the use of an additional storage device as usually required in charge-based systems, which will no doubt have a significant impact on the present market of nanotechnology and nanoengineering. Finally, we end our discussion by pointing out that this proposal of simultaneous Boolean logic operations can be generalized to more complex parallel logic operations by adding more output leads and re-programming the system via the external factors. The first author (MP) would like to acknowledge the financial support of the University Grants Commission, India (F. $2-10/2012$(SA-I)) for pursuing her doctoral work. [0]{} I. Mahboob, E. Flurin, K. Nishiguchi, A. Fujiwara, and H. Yamaguchi, Nat. Commun. **2**, 198 (2011). A. P. de Silva, H. Q. N. Gunaratne, and C. P. McCoy, Nature **364**, 42 (1993). F. M. Raymo, Adv. Mater. **14**, 401 (2002). A. P. de Silva [*et al.*]{} Chem. Rev. **97**, 1515 (1997). O. Hod, R. Baer, and E. Rabani, J. Am. Chem. Soc. **127 (6)**, 1648 (2005). B. Fresch, M. Cipolloni, T.-M. Yan, E. Collini, R. D. Levine, and F. Remacle, J. Phys. Chem. Lett. **6**, 1714 (2015). Y. Xu, X. Jin, and H. Zhang, Phys. Rev. E **88**, 052721 (2013). A. Dari, B. Kia, A. R. Bulsara, and W. L. Ditto, Europhys. Lett. **93**, 18001 (2011). H. Ando, S. Sinha, R. Storni, and K. Aihara, Europhys. Lett. **93**, 50001 (2011). S. A. Wolf, [*et al.*]{} Science **294**, 1488 (2001). D. E. Nikonov, G. I. Bourianoff, and P. A. Gargini, J. Supercond. Novel Magn. **19**, 497-513 (2006). B. Behin-Aein, D. Datta, S. Salahuddin, and S. Datta, Nature Nanotech. **6**, 266 (2010). C. Joachim, J. K. Gimzewski, and H. Tang, Phys. Rev. B **58**, 16407 (1998). G. Dresselhaus, Phys. Rev. **100**, 580 (1955). Y. A. Bychkov and E. I. Rashba, JETP Lett. **39**, 78 (1984). Z. Scherübl, G. Fülöp, M. H. Madsen, J. Nygad, and S. Csonka, Phys. Rev. B **94**, 035444 (2016). T. W. Chen, C. M. Huang, and G. Y.
Guo, Phys. Rev. B **73**, 235309 (2006). A. Ney, C. Pampuch, R. Koch, and K. H. Ploog, Nature **425**, 485 (2003). J. S. Sheng and K. Chang, Phys. Rev. B **74**, 235315 (2006). C. P. Moca and D. C. Marinescu, J. Phys.: Condens. Matter **18**, 127 (2006). S. K. Maiti, J. Appl. Phys. **110**, 064306 (2011). M. Patra and S. K. Maiti, Eur. Phys. J. B **89**, 88 (2016). S. K. Maiti, S. Saha, and S. N. Karmakar, Eur. Phys. J. B **79**, 209 (2011). S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, Cambridge, 1995). C. Caroli, R. Combescot, P. Nozieres, and D. Saint-James, J. Phys. C: Solid State Phys. **4**, 916 (1971). D. S. Fisher and P. A. Lee, Phys. Rev. B **23**, 6851 (1981). M. Wang and K. Chang, Phys. Rev. B **77**, 125330 (2008). W. Yang and K. Chang, Phys. Rev. B **73**, 045303 (2008). A. A. Kislev and K. W. Kim, J. Appl. Phys. **94**, 4001 (2003). I. A. Shelykh, N. G. Galkin, and N. T. Bagraev, Phys. Rev. B **72**, 235316 (2005). P. Földi, O. Kálmán, M. G. Benedict, and F. M. Peeters, Phys. Rev. B **73**, 155325 (2006). M. Dey, S. K. Maiti, S. Sil, and S. N. Karmakar, J. Appl. Phys. **114**, 164318 (2013). S. K. Maiti, Phys. Lett. A **379**, 361 (2015). J. Chen, [*et al.*]{}, Phys. Rev. Lett. **105**, 176602 (2010). Y.-J. Yu, [*et al.*]{}, Nano Lett. **9**, 3430 (2009). P. Földi, O. Kálmán, M. G. Benedict, and F. M. Peeters, Nano Lett. **8**, 2556 (2008). O. Kálmán, P. Földi, M. G. Benedict, and F. M. Peeters, Physica E **40**, 567 (2008). N. Locatelli, V. Cros, and J. Grollier, Nat. Mater. **13**, 11 (2014). Memory with a spin. Editorial. Nature Nanotech. **10**, 185 (2015). M. Patra and S. K. Maiti, Europhys. Lett. **121**, 38004 (2018). D. C. Ralph and M. D. Stiles, J. Magn. Magn. Mater. **320**, 1190 (2008). [^1]: E-mail:
--- abstract: 'People counting is one of the most active issues in sensing applications. The impulse radio ultra-wideband (IR-UWB) radar has been extensively applied to count people, providing a device-free solution without illumination and privacy concerns. However, the performance of current solutions is limited in congested environments due to the superposition and obstruction of signals. In this letter, a hybrid feature extraction method based on the curvelet transform and distance bins is proposed. 2-D radar matrix features are extracted at multiple scales and multiple angles by applying the curvelet transform. Furthermore, the distance bin is introduced by dividing each row of the matrix into several bins along the propagating distance to select features. A radar signal dataset covering three dense scenarios is constructed, including people randomly walking in a constrained area with densities of 3 and 4 persons per square meter, and queueing with an average spacing of 10 centimeters. The number of people is up to 20 in the dataset. Four classifiers including decision tree, AdaBoost, random forest and neural network are compared to validate the hybrid features, and random forest achieves the highest accuracies, all above 97%, in the three dense scenarios. Moreover, to ensure the reliability of the hybrid features, three other feature sets including cluster features, activity features and CNN features are compared. The experimental results reveal that the proposed hybrid feature extraction method exhibits stable performance with significantly superior effectiveness.' author: - | Xiuzhu Yang, Wenfeng Yin, Lei Li and Lin Zhang\ Beijing University of Posts and Telecommunications\ Email: zhanglin@bupt.edu.cn [^1] title: | Dense People Counting Using IR-UWB Radar\ with a Hybrid Feature Extraction Method --- People counting, IR-UWB radar, hybrid feature extraction, curvelet transform, distance bin, random forest.
Introduction ============ With the developing requirements of Internet of Things (IoT) sensing tasks, estimating the number of people in a monitored area is crucial for sensing applications. Radar systems provide device-free sensing solutions ranging from human detection to activity classification \[1\], \[2\]. They leverage radar signals which are reflected and attenuated by human bodies, and infer the valid information by properly analyzing the received signal. The impulse radio ultra-wideband (IR-UWB) radar transmits and receives a narrow impulse signal that occupies a wide bandwidth in the frequency domain, with fine delay resolution and excellent penetration. It performs outstandingly in applications such as vital sign monitoring \[3\], personnel detection \[4\] and people counting \[5\]-\[7\]. Compared with current research on people counting using vision-based systems \[8\], the IR-UWB radar does not suffer from insufficient illumination or privacy concerns. Moreover, it is a device-free solution that does not rely on any dedicated or personal device, as required in other radio-based systems such as radio frequency identification (RFID), Bluetooth, Zigbee and WiFi \[9\]. Several studies on people counting using IR-UWB radar have been conducted in \[5\]-\[7\]. The algorithm in \[5\] iteratively detects the local maxima of radar signals to count people. In \[6\], theoretical models of UWB signals are developed in simulation. \[7\] proposes an algorithm based on the major clusters, analyzing the distribution of selected amplitudes with the distance and the number of people. These algorithms adequately distinguish multipaths and count people. However, all of them process each signal separately and suffer from the ever-changing signals, and the superposition as well as obstruction of signals limits counting performance in congested environments.
In this letter, a hybrid curvelet transform based features-distance bin based features (CTF-DBF) extraction method for dense people counting is proposed. Firstly, in order to address the challenges of rapid variations between signals and the superposed multipaths of each signal in congested scenarios, several continuously received signals are treated as a 2-D radar matrix. Due to the moving continuity and trajectory consistency of people, the characteristics of moving people are represented as textures with spatial locality information in the radar matrix. The curvelet transform is applied to extract statistical features at multiple scales with different frequencies as well as multiple angles with diverse moving directions. Secondly, to extract detailed information and further analyze the superposed and obstructed signals, the distance bin is defined by dividing each signal into several bins along the propagating distance. The characteristics of each distance bin are extracted in an effective way to supplement the statistical features with detailed features. The radar signal dataset comprising three dense scenarios is constructed for 0-20 people randomly walking in a constrained area with densities of 3 and 4 persons per square meter, and at most 15 people queueing with an average spacing of 10 centimeters. ![image](system_flowchartfinal.pdf){width="1\linewidth"} With these hybrid features extracted from the dataset, four classifiers including decision tree, AdaBoost, random forest and neural network are compared. Random forest achieves the highest accuracies in the three dense scenarios, all above 97%. Furthermore, three other feature sets including the cluster features proposed in \[7\], the activity features in \[10\] and features learnt automatically from the LeNet-5 convolutional neural network (CNN) \[11\] are compared to ensure the reliability of the hybrid features.
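The distance-bin idea described above can be sketched as follows (a minimal illustration; the particular per-bin statistics, viz, peak amplitude, energy and standard deviation, are our own assumptions and not necessarily the feature set used in this letter):

```python
import numpy as np

def distance_bin_features(radar, n_bins=8):
    """Split each received signal (row) of the 2-D radar matrix into
    n_bins segments along the propagating distance and collect simple
    per-bin statistics over the whole sample."""
    bins = np.array_split(radar, n_bins, axis=1)
    feats = []
    for b in bins:
        feats.extend([np.abs(b).max(), (b ** 2).sum(), b.std()])
    return np.array(feats)

# One radar sample: 50 received signals x 1280 fast-time points
rng = np.random.default_rng(0)
sample = rng.standard_normal((50, 1280))
f = distance_bin_features(sample)
print(f.shape)  # -> (24,)
```

The resulting per-bin feature vector would then be concatenated with the curvelet-based statistical features before classification.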
The experimental results demonstrate the effectiveness and robustness of the hybrid feature extraction method in dense scenarios. Fig. 1 shows the workflow of the people counting system, composed of the dataset generation module, the signal preprocessing module, the proposed hybrid feature extraction module and the classification module. The remainder of this paper is organized as follows. Section [slowromancap2@]{} describes the dataset generation. The proposed hybrid feature extraction method is discussed in Section [slowromancap3@]{}. Section [slowromancap4@]{} presents experimental results and analysis. The conclusions are summarized in Section [slowromancap5@]{}. Dataset Generation ================== Radar System ------------ In this letter, the IR-UWB radar data of a varying number of people in a space are acquired by an NVA-R661 radar module transmitting a narrow pulse with a center frequency of 6.8 GHz and a -10 dB bandwidth of 2.3 GHz. The received radar signals are converted to digital signals; the sampling frequency is about 39 GHz, corresponding to a range resolution of 0.0039 meters. The experimental setup is shown in Fig. 2: the radar is installed at a height of 1.8 meters, with a detecting range of 5 meters and a central angle of 90 degrees. To validate the performance of the proposed method, three dense scenarios are considered for radar data collection. Scenarios 1 and 2 involve 0-20 people randomly walking in a constrained area with densities of 3 and 4 persons per square meter respectively. To keep the densities unchanged, the activity range of the testers is limited to a rectangular region whose area increases with the number of people, shown as the red area in Fig. 2(a). Due to the congested environment, the moving speed of people is limited to at most normal walking speed. In scenario 3, at most 15 people stand in a queue with an average spacing of 10 centimeters, as described in Fig.
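As a plausibility check on these numbers, the 0.0039 m range resolution and the 5 m detection range follow directly from the 39 GHz sampling frequency: each sampling point corresponds to $c/(2f_s)$ of two-way range, and 1280 points then span roughly 5 meters. A minimal sketch of this standard two-way radar range relation (illustrative only, not code from the letter):

```python
# Relate the sampling frequency of the radar setup to its range
# resolution and total detection range (two-way propagation).
C = 3.0e8          # speed of light, m/s
FS = 39.0e9        # sampling frequency, Hz (approximate, per the letter)

def range_resolution(fs: float = FS) -> float:
    """Range covered by one sampling point: c / (2 * fs)."""
    return C / (2.0 * fs)

def detection_range(n_points: int, fs: float = FS) -> float:
    """Total range spanned by n_points samples."""
    return n_points * range_resolution(fs)

res = range_resolution()        # ~0.0039 m, as quoted in the letter
span = detection_range(1280)    # ~5 m for 1280 sampling points
```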
2(b), and their positions remain unchanged. To enhance the reliability of the dataset, 44 testers participate in the experiments to acquire diverse data from different people. Five seconds of radar data with 200 received signals are recorded for each measurement, and each radar sample is selected independently from the record as 1.25 seconds with 50 received signals. A 1.25-second radar sample is long enough for curvelet transform based feature extraction, while counting every 1.25 seconds is acceptable for a real-time system. Each signal in a radar sample contains 1280 sampling points representing the 5-meter detection range. 3,360 radar samples are generated in scenarios 1 and 2 respectively, and a total of 2,560 samples in scenario 3. ![Experimental setup (a) in the constrained area and (b) in a queue.](experiment_revise.pdf){width="2in" height="2.9in"} Signal Modeling --------------- For each received radar sample, the direct current (DC) component is first removed, and then a Hamming window is designed as a filter to obtain the bandpass data with frequencies from 5.65 GHz to 7.95 GHz, as shown in Fig. 1. The 2-D bandpass data composed of multiple radar signals is described as follows, $$r(t,x) = p(t,x) + c(t,x) + n(t,x)$$ where *t* is the accumulating receiving time, i.e., the time it takes the radar to receive multiple signals, and *x* is the propagating distance of each signal. *p(t,x)* is the target signal reflected from people, *n(t,x)* is the noise signal, and *c(t,x)* represents the clutter, which contains the direct wave from the transmitter to the receiver and reflections from the background. Hybrid Feature Extraction Method ================================ Curvelet Transform based Features --------------------------------- Fast ever-changing signals and superposed multipaths pose great challenges for dense people counting based on the amplitudes of each signal.
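The preprocessing step can be sketched as DC removal followed by a Hamming-windowed FIR bandpass filter over the 5.65-7.95 GHz band. The tap count and the windowed-sinc design below are our own assumptions (the letter does not specify the filter design), so this is an illustration rather than the authors' implementation:

```python
import math

def hamming(n, M):
    """Hamming window coefficient at index n of an M-th order filter."""
    return 0.54 - 0.46 * math.cos(2 * math.pi * n / M)

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def bandpass_taps(f_lo, f_hi, fs, num_taps=101):
    """Hamming-windowed FIR bandpass: difference of two low-pass sincs."""
    M = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - M / 2.0
        ideal = 2 * (f_hi / fs) * sinc(2 * (f_hi / fs) * k) \
              - 2 * (f_lo / fs) * sinc(2 * (f_lo / fs) * k)
        taps.append(ideal * hamming(n, M))
    return taps

def remove_dc(signal):
    """Subtract the mean so the DC component vanishes."""
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def preprocess(signal, fs=39e9, f_lo=5.65e9, f_hi=7.95e9):
    """DC removal, then FIR bandpass (convolution trimmed to input length)."""
    x = remove_dc(signal)
    h = bandpass_taps(f_lo, f_hi, fs)
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]
```

In practice one would use a library routine (e.g., a firwin-style designer) instead; the sketch only makes the two stages of Fig. 1 concrete.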
Counting from a single received signal is neither stable nor reliable; therefore, the curvelet transform is used to provide statistical features of a radar matrix. Several temporally consecutive received signals are considered as a 2-D radar matrix to avoid the contingency caused by a single signal. Furthermore, considering the moving continuity and trajectory consistency, trajectories of people appear as textures with spatial locality information in the matrix. Superposed multipaths show stronger textures and can be clearly observed with the curvelet transform. In addition, the curvelet transform provides a multi-scale and multi-orientation decomposition of the 2-D radar matrix that adequately represents texture and edge information with curve-like features \[12\], providing information on signal strength and the moving directions of people. The discrete curvelet transform is defined as follows, $$C(j,l,k) = \int \hat{f}(\omega)\hat{U_j}(S^{-1}_{\theta_l}\omega)e^{i<S^{-T}_{\theta_l}b,\omega>}d\omega$$ where $j$, $l$ and $k$ are the parameters of the scale, the direction and the position. $f$ represents the input radar data in the Cartesian coordinate system. $U_j$ is the frequency window for each scale $j$, and $S_{\theta_l}$ is the shear matrix with orientation $\theta_l$, defined as $$S_{\theta_l} := \left( \begin{array}{cc} 1 & 0 \\ -\tan\theta_l & 1\\ \end{array} \right)$$ where the superscript $T$ denotes the matrix transpose. $b$ is defined as $b := (k_1\cdot2^{-j}, k_2\cdot2^{-j/2})$, where the sequence of translation parameters $k=(k_1,k_2)\in Z^2$. When people are queueing in a line or remaining still, their positions are unchanged over a period of time, forming straight lines in the 2-D image. In this case, signals from people with small variances are easily mistaken for clutter reflected from the background, so removing clutter would also eliminate significant information.
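To make the orientation and parabolic-scaling machinery concrete, the shear matrix $S_{\theta_l}$ and the translation grid $b$ from the definition above can be written down directly. This is an illustrative sketch only; an actual discrete curvelet transform would be computed with a dedicated implementation such as CurveLab:

```python
import math

def shear_matrix(theta):
    """S_theta = [[1, 0], [-tan(theta), 1]] from the curvelet definition."""
    return [[1.0, 0.0], [-math.tan(theta), 1.0]]

def apply(m, v):
    """Apply a 2x2 matrix to a 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def translation(j, k1, k2):
    """Parabolic translation grid b = (k1 * 2^-j, k2 * 2^-(j/2))."""
    return (k1 * 2.0 ** (-j), k2 * 2.0 ** (-j / 2.0))

# Shearing leaves the first coordinate unchanged and tilts the second,
# which is how the transform selects the orientation theta_l; the grid b
# is twice as fine along the first axis, reflecting parabolic scaling.
```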
To fully extract statistical features from the radar matrix without losing useful information, the bandpass data matrix, without clutter removal, is decomposed by the curvelet transform, as shown in Fig. 1. Each bandpass data matrix, with 50 continuously received signals and 1280 sampling points per signal, is decomposed into a coarse layer, a detail layer and a fine layer, representing different scales. To characterize the signal matrix at all scales, features from all three layers are extracted in the curvelet domain. The coarse layer, formed by low-frequency coefficients, shows the general characteristics and tendency of the signal matrix; thus the mean value and energy of the coarse layer are extracted to describe the radar data in general. The fine layer contains high-frequency coefficients representing finer edge information, which is usually characterized by maximum values. Therefore, the top five maximum values as well as the energy of the fine layer are extracted. ![Energy distribution of 16 coefficient matrices in the detail layer with corresponding panels in the radar matrix. The red, blue and gray dashed lines parallel to the vertical axis are the energy of the corresponding panels marked in the same colour.](Curveletvsdx_final.pdf){width="3.2in" height="1.6in"} ![The radar matrix for (a) the bandpass data, and the reconstructed data by (b) the $90^{\circ}$ vertical coefficients (c) the $45^{\circ}$ diagonal coefficients (d) the $135^{\circ}$ diagonal coefficients of the detail layer.](big_gray.pdf){width="2.3in" height="1.5in"} The detail layer, with high-frequency coefficients, is divided into 16 directions. Panels of the radar matrix in Fig. 3 are arranged in the clockwise direction, and each angular panel occupies $22.5^{\circ}$. Coefficients in each panel represent signals along the corresponding moving directions in the trajectories of people.
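The per-layer statistics described above reduce to simple operations on the coefficient arrays. A minimal sketch, assuming the curvelet decomposition has already been computed elsewhere and the coefficients of a layer are flattened into a list:

```python
def layer_features(coeffs, top_k=0):
    """Mean, energy, and optionally the top-k maximum values of a layer."""
    mean = sum(coeffs) / len(coeffs)
    energy = sum(c * c for c in coeffs)
    feats = {"mean": mean, "energy": energy}
    if top_k:
        feats["top"] = sorted(coeffs, reverse=True)[:top_k]
    return feats

# Coarse layer: mean and energy; fine layer: top five maxima and energy.
coarse = layer_features([0.5, 1.5, 1.0, 1.0])
fine = layer_features([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0], top_k=5)
```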
To increase the stability and reliability of the extracted features, an energy threshold (the blue dashed line parallel to the horizontal axis in Fig. 3) is applied, and coefficients with too little energy are removed. Fig. 4 shows the signals reconstructed by picking up the coefficients in the corresponding directions, which appear as textures of moving trajectories in the grey-scale maps. Panels 1, 16, 8 and 9, in blue, are selected for the $45^{\circ}$ direction shown in Fig. 4(c), representing people moving further away from the radar within 1.25 seconds, while panels 4, 5, 12 and 13, in gray, are extracted for the $135^{\circ}$ direction in Fig. 4(d), representing people moving closer to the radar during this time. Panels 6, 7, 14 and 15, in red, representing the $90^{\circ}$ direction in Fig. 4(b), occupy most of the energy due to static clutter as well as reflections from people standing on the spot. Considering that superposed multipaths yield stronger textures, the energy for each direction is calculated both in the curvelet domain and in the reconstructed signals to represent the comprehensive information of people in the corresponding direction. Distance Bin based Features --------------------------- To obtain detailed features that complement the curvelet transform based statistical features and to further analyze superposed and obstructed signals in dense people counting, several features are extracted from each signal. Clutter signals are first removed using a running-average-based method \[5\] so that the valid signals reflected from people can be analyzed, and feature extraction is performed on the refined data, shown in red in Fig. 1. Due to the high sampling rate of the received radar signals, the redundant information contained in these samples would cause over-fitting. To select information representative of the number of people and reduce over-fitting, each signal with 1280 sampling points is divided into several bins along the propagating distance, each of length *S$_{d}$*.
![ Refined radar signal for 4 people in a queue.](suojian_final.pdf){width="3in" height="1.2in"} The maximum amplitude in each distance bin is selected as a feature representing a candidate point for the presence of people. However, the number of local maximum amplitudes cannot represent the number of people when they stand close together. As shown in Fig. 5, the first red circle is clearly detected as the presence of 1 person, but there are actually 2 persons standing closely. In this case, it is impossible to infer the number of people from the amplitude alone due to the multipaths from different people. However, the energies of different people superpose and differ significantly within a distance bin; therefore, the energy is calculated as a complementary feature by squaring the sampled signals and integrating them over each bin. Since the transmitted power of UWB radar is limited and relatively high noise accumulates over its wide band, the signal-to-noise ratio (SNR) is low; thus the energy of each bin also takes noise into account and even amplifies it. In dense queue counting, the signal reflected from obstructed people is severely attenuated and comparable with the noise, as Fig. 5 shows. The amplitudes marked by the two orange circles are environmental noise, yet they are comparable with the signals reflected from people, marked by the second and third red circles. Therefore, the noise is removed using the hard-threshold analysis of the curvelet transform \[13\]. The energy and maximum amplitude of the denoised signals are then also extracted as features. The length of each distance bin $S_d$ is of great importance for identifying the detailed features. To discriminate each person from the superposed multipath signals in congested environments, $S_d$ should be smaller than a certain physical parameter, for example the height or the shoulder width of a person \[7\].
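The distance-bin features described above can be sketched as follows: each signal is split into bins of $S_d$ sampling points, and the per-bin maximum amplitude and energy are taken as features. A minimal illustration of these definitions (not the authors' code):

```python
def distance_bin_features(signal, s_d):
    """Split a signal into bins of s_d points; return (max amplitude, energy) per bin."""
    feats = []
    for start in range(0, len(signal), s_d):
        bin_ = signal[start:start + s_d]
        amp = max(abs(v) for v in bin_)     # candidate point for a person's presence
        energy = sum(v * v for v in bin_)   # superposed energy within the bin
        feats.append((amp, energy))
    return feats

# A 1280-point signal with s_d = 128 yields 10 bins, i.e. 20 features per signal.
```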
In order to obtain sufficient features at different scales to better describe the detailed information, the distance bin length is chosen as 125, 250 and 500 millimeters respectively. In this letter, the spatial resolution of the radar system, i.e., the minimum distinguishable distance between two adjacent sampling points, is 3.9 millimeters. Therefore, the number of sampling points in each distance bin is set to 32, 64 and 128 respectively. To maintain the length of the distance bin, a radar system with higher spatial resolution needs more sampling points in each distance bin, while one with lower resolution needs fewer. For a radar matrix with multiple signals, these features are averaged respectively. The distance bin based features are then directly combined with the curvelet transform based features as the hybrid features, which are defined in Table [slowromancap1@]{}.

  Terms                                                   Definition
  ------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------
  Features of coarse layer in the curvelet domain          Mean and energy of curvelet coefficients in the coarse layer
  Features of fine layer in the curvelet domain            Top five maximum values and energy of curvelet coefficients in the fine layer
  Features of detail layer in the curvelet domain          Energy of the $90^{\circ}$ vertical, $45^{\circ}$ diagonal and $135^{\circ}$ diagonal coefficients
  Features of detail layer in the reconstructed signal     Energy of the reconstructed signal with the $90^{\circ}$ vertical, $45^{\circ}$ diagonal and $135^{\circ}$ diagonal coefficients
  Number of sampling points in a bin *$S_d$*               The number of sampling points in a distance bin, with domain {32, 64, 128}
  Maximum amplitude *$A_k$ / $A_d$*                        The maximum amplitude of each distance bin for signals with and without noise, with corresponding *$S_d$*
  Energy *$E_{k}$ / $E_{d}$*                               The energy of each distance bin for signals with and without noise, with corresponding *$S_d$*
  ------------------------------------------------------- ----------------------------------------------------------------------------------------------------------------------

  : Hybrid CTF-DBF features

Experimental Results ==================== Performance on Different Classifiers ------------------------------------ The hybrid CTF-DBF feature samples extracted from the dataset constructed in this letter, each of size 1 $\times$ 300, are used as input for classification. To verify the effectiveness of the hybrid feature extraction method, four classifiers are compared: decision tree, AdaBoost, random forest and neural network. The decision tree is a rooted tree structure that divides the cases into two subtrees at each node. In this letter, the random forest classifier constructs 200 decision trees to increase the classification ability. The AdaBoost classifier consists of 50 base estimators, with the SAMME.R classification algorithm and a linear loss function.
The neural network has three hidden layers with 100, 200 and 100 neurons respectively, uses a ReLU activation function, and is optimized by the Adam algorithm. The feature samples are divided into a training set and a testing set. 80% of the samples, randomly chosen (2688 samples each in scenarios 1 and 2, and 2048 samples in scenario 3), are used as the training set for supervised training of each classifier, and the remaining 20% are used as the testing set to evaluate the classifier and compute the classification error. The calculation for each classifier is repeated 20 times with randomly chosen training data, and the average accuracy, precision, recall and F1 score \[14\] are computed, as shown in Tables [slowromancap2@]{}, [slowromancap3@]{} and [slowromancap4@]{}.

  Classifier          Accuracy    Precision   Recall      F1 score
  ------------------- ----------- ----------- ----------- -----------
  Decision Tree       76.7%       87.6%       87.0%       87.3%
  AdaBoost            80.5%       90.9%       91.9%       91.4%
  **Random Forest**   **97.6%**   **97.5%**   **99.5%**   **98.5%**
  Neural Network      95.1%       97.8%       95.3%       96.5%
  ------------------- ----------- ----------- ----------- -----------

  : Classification performance comparison of different classifiers for 0-20 people randomly walking in the constrained area with 3 persons per square meter.

  Classifier          Accuracy    Precision   Recall      F1 score
  ------------------- ----------- ----------- ----------- -----------
  Decision Tree       76.6%       87.6%       87.0%       87.3%
  AdaBoost            81.2%       90.9%       91.9%       91.4%
  **Random Forest**   **97.5%**   **97.4%**   **99.4%**   **98.4%**
  Neural Network      94.9%       98.4%       95.0%       96.7%
  ------------------- ----------- ----------- ----------- -----------

  : Classification performance comparison of different classifiers for 0-20 people randomly walking in the constrained area with 4 persons per square meter.
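The reported scores follow the standard definitions of accuracy, precision, recall and F1. A minimal sketch for the multi-class case, using macro-averaging over the class labels (the letter cites \[14\] for these metrics; the averaging convention here is our assumption):

```python
def macro_scores(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall and F1 score."""
    labels = sorted(set(y_true) | set(y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs, f1s = [], [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec); recs.append(rec); f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precs) / n, sum(recs) / n, sum(f1s) / n
```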
  Classifier          Accuracy    Precision   Recall       F1 score
  ------------------- ----------- ----------- ------------ -----------
  Decision Tree       83.8%       88.9%       90.0%        89.4%
  AdaBoost            87.3%       93.1%       92.5%        92.8%
  **Random Forest**   **98.7%**   **99.4%**   **100.0%**   **99.7%**
  Neural Network      97.8%       99.5%       99.2%        99.4%
  ------------------- ----------- ----------- ------------ -----------

  : Classification performance comparison of different classifiers for 0-15 people in the queue.

As Tables [slowromancap2@]{}, [slowromancap3@]{} and [slowromancap4@]{} show, the accuracies of random forest and neural network in the three dense scenarios are all above 94%, proving the effectiveness and robustness of the hybrid features in dense people counting. Random forest achieves the highest accuracies of 97.6% and 97.5% in the constrained area with 3 and 4 persons per square meter respectively, and the best mean accuracy (98.7%) in queue counting. The accuracy, precision, recall and F1 score of random forest are all above 97%, demonstrating extremely satisfactory classification performance. The classification accuracies at 3 persons per square meter are similar to those at 4 persons per square meter for all four classifiers, indicating that the hybrid feature extraction method is robust and insensitive to dense environments. Performance Comparison with Other Features ------------------------------------------ To verify the superiority of the proposed hybrid features, three other feature sets are used for comparison: the cluster features proposed in \[7\], the activity features in \[10\] and the features learnt automatically from the LeNet-5 convolutional neural network (CNN) \[11\]. The cluster features are composed of the detected amplitudes and distances of the corresponding cluster, with a size of 1$\times$1280. The activity features consist of the “activity event” and the “activity duration”, extracted with a size of 1$\times$2957. The CNN features are extracted using the LeNet-5 neural network, which is trained on the radar samples for 20 epochs.
Features are extracted from the fully connected layer with a size of 1$\times$500. The comparisons are conducted with random forest in the three dense scenarios, as described in Fig. 6. ![ Comparison of classification accuracies of different features in three dense scenarios.](features_comparison.png){width="3.3in" height="2.2in"} As illustrated, the classification accuracies of the proposed hybrid features are clearly better than those of the three other feature sets, and show the distinct advantage of stable performance across the three dense scenarios, especially in the more complex scene of 4 persons per square meter. The results reveal that the proposed hybrid features are significantly superior to the other features in dense people counting. Conclusion ========== In this letter, a hybrid CTF-DBF feature extraction method for dense people counting with IR-UWB radar is proposed. Features at multiple scales and multiple orientations are extracted from the radar matrix by applying the curvelet transform. Moreover, the distance bin is introduced to divide each row of the matrix into several bins along the propagating distance, selecting features as supplementary information. A dataset for three dense scenarios is constructed, and four classifiers are compared. Counting accuracies are all above 97% with random forest. Moreover, three other feature sets are compared to verify the superiority of the hybrid features. The comparison results prove the effectiveness and robustness of the proposed method in dense scenarios. In future work, more radar samples will be collected in more complex scenarios to validate the robustness of the proposed method.

\[1\] F. Fioranelli, M. Ritchie and H. Griffiths, “Classification of Unarmed/Armed Personnel Using the NetRAD Multistatic Radar for Micro-Doppler and Singular Value Decomposition Features,” *IEEE Geoscience and Remote Sensing Letters*, vol. 12, no. 9, pp. 1933-1937, Sept. 2015.

\[2\] Y. Kim, S. Ha and J. Kwon, “Human Detection Using Doppler Radar Based on Physical Characteristics of Targets,” *IEEE Geoscience and Remote Sensing Letters*, vol. 12, no. 2, pp. 289-293, Feb. 2015.

\[3\] H. Lv et al., “An Adaptive-MSSA-Based Algorithm for Detection of Trapped Victims Using UWB Radar,” *IEEE Geoscience and Remote Sensing Letters*, vol. 12, no. 9, pp. 1808-1812, Sept. 2015.

\[4\] Y. Zhong, Y. Yang, X. Zhu, E. Dutkiewicz, Z. Zhou and T. Jiang, “Device-Free Sensing for Personnel Detection in a Foliage Environment,” *IEEE Geoscience and Remote Sensing Letters*, vol. 14, no. 6, pp. 921-925, June 2017.

\[5\] J. W. Choi, S. S. Nam and S. H. Cho, “Multi-Human Detection Algorithm Based on an Impulse Radio Ultra-Wideband Radar System,” *IEEE Access*, vol. 4, pp. 10300-10309, 2016.

\[6\] S. Bartoletti, A. Conti and M. Z. Win, “Device-Free Counting via Wideband Signals,” *IEEE Journal on Selected Areas in Communications*, vol. 35, no. 5, pp. 1163-1174, May 2017.

\[7\] J. W. Choi, D. H. Yim and S. H. Cho, “People Counting Based on an IR-UWB Radar Sensor,” *IEEE Sensors Journal*, vol. 17, no. 17, pp. 5717-5727, Sept. 2017.

\[8\] H. Idrees, I. Saleemi, C. Seibert and M. Shah, “Multi-source Multi-scale Counting in Extremely Dense Crowd Images,” *2013 IEEE Conference on Computer Vision and Pattern Recognition*, Portland, OR, 2013, pp. 2547-2554.

\[9\] E. Cianca, M. De Sanctis and S. Di Domenico, “Radios as Sensors,” *IEEE Internet of Things Journal*, vol. 4, no. 2, pp. 363-373, April 2017.

\[10\] J. He and A. Arora, “A regression-based radar-mote system for people counting,” *2014 IEEE International Conference on Pervasive Computing and Communications (PerCom)*, Budapest, 2014.

\[11\] L. N. Smith, E. M. Hand and T. Doster, “Gradual DropIn of Layers to Train Very Deep Neural Networks,” *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2016.

\[12\] J. Li, L. Liu, Z. Zeng and F. Liu, “Advanced Signal Processing for Vital Sign Extraction With Applications in UWB Radar Detection of Trapped Victims in Complex Environments,” *IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing*, vol. 7, no. 3, pp. 783-791, March 2014.

\[13\] Z. Zhang, X. Zhang, H. Yu and X. Pan, “Noise suppression based on a fast discrete curvelet transform,” *Journal of Geophysics and Engineering*, vol. 7, no. 1, p. 105, 2010.

\[14\] J. Zheng, Q. Xu, J. Chen and C. Zhang, “The On-Orbit Noncloud-Covered Water Region Extraction for Ship Detection Based on Relative Spectral Reflectance,” *IEEE Geoscience and Remote Sensing Letters*, 2018.

[^1]: *Corresponding author: Lin Zhang*
--- abstract: 'We present a new microscopic ODE-based model for pedestrian dynamics: the Gradient Navigation Model. The model uses a superposition of gradients of distance functions to directly change the direction of the velocity vector. The velocity is then integrated to obtain the location. The approach differs fundamentally from force based models needing only three equations to derive the ODE system, as opposed to four in, e.g., the Social Force Model. Also, as a result, pedestrians are no longer subject to inertia. Several other advantages ensue: Model induced oscillations are avoided completely since no actual forces are present. The derivatives in the equations of motion are smooth and therefore allow the use of fast and accurate high order numerical integrators. At the same time, existence and uniqueness of the solution to the ODE system follow almost directly from the smoothness properties. In addition, we introduce a method to calibrate parameters by theoretical arguments based on empirically validated assumptions rather than by numerical tests. These parameters, combined with the accurate integration, yield simulation results with no collisions of pedestrians. Several empirically observed system phenomena emerge without the need to recalibrate the parameter set for each scenario: obstacle avoidance, lane formation, stop-and-go waves and congestion at bottlenecks. The density evolution in the latter is shown to be quantitatively close to controlled experiments. Likewise, we observe a dependence of the crowd velocity on the local density that compares well with benchmark fundamental diagrams.' author: - Felix Dietrich - Gerta Köster bibliography: - 'Literature.bib' title: Gradient Navigation Model for Pedestrian Dynamics --- Introduction ============ Pedestrian flows are dynamical systems. 
Numerous models exist [@hamacher-2001; @antonini-2006; @chraibi-2011] on both the macroscopic [@hughes-2001; @hoogendoorn-2004] and the microscopic level [@helbing-1995; @chraibi-2010; @seitz-2012]. In the latter, two approaches seem to dominate: ordinary differential equation (ODE) models and cellular automata (CA). ODE models are particularly well suited to describe dynamical systems because they can formally and concisely describe the change of a system over time. The mathematical theory for ODEs is rich, both on the analytic and the numerical side. In CA models, pedestrians are confined to the cells of a grid. They move from cell to cell according to certain rules. This is computationally efficient, but only little theory is available [@boccara-2003]. Many CA models employ a floor field to steer individuals around obstacles [@burstedde-2001; @ezaki-2012]. The use of floor fields for pedestrian navigation in ODE models is only sparingly described in the literature. In [@helbing-1995], pedestrians steer towards the edges of a polygonal path; in [@hoogendoorn-2003], optimal control is applied. In addition, most ODE models are derived from molecular dynamics, where the direction of motion is gradually changed by the application of a force. This leads to various problems, mostly caused by inertia [@chraibi-2011; @koster-2013]. Cellular automata and more recent models in continuous space, like the Optimal Steps Model [@seitz-2012], deviate from this approach and directly modify the direction of motion. This is also true for some ODE models in robotics, where movement is very controlled and precise and thus inertia is negligible [@starke-2002; @starke-2011]. The direct change of the velocity constitutes a strong deviation from molecular dynamics and hence from force-based models. This paper proposes an application of this model type to pedestrian dynamics: the Gradient Navigation Model (GNM).
The GNM is a system of ODEs that describes the movement and navigation of pedestrians on a microscopic level. Similar to CA models, pedestrians steer directly towards the direction of steepest descent on a given navigation function. This function combines a floor field and local information like the presence of obstacles and other pedestrians in the vicinity. The paper is structured as follows: In the model section, three main assumptions about pedestrian dynamics are stated. They lead to a system of differential equations. A brief mathematical analysis of the model is given in the next section, where we use a plausibility argument to reduce the number of free parameters in the ODE system from four to two. We constructed the model functions so that they are smooth. Thus, using standard mathematical arguments, existence and uniqueness of the solution follow directly. In the simulations section, the calibrated model is validated against several scenarios from empirical research: congestion in front of a bottleneck, lane formation, stop-and-go waves and speed-density relations. We also demonstrate computational efficiency using high-order accurate numerical solvers, like MATLAB's *ode45* [@dormand-1980b], that need the smoothness of the solution to perform correctly. We conclude with a discussion of the results and possible next steps. \[sec-GNM\]Model ================ The Gradient Navigation Model (GNM) is composed of a set of ordinary differential equations to determine the position $x_i\in{\mathbb{R}}^2$ of each pedestrian $i$ at any point in time. A navigational vector $\vec{N}_i$ is used to describe an individual’s direction of motion. The model is constructed using three main assumptions. Assumption 1 : Crowd behavior in normal situations is governed by the decisions of the individuals rather than by physical, interpersonal contact forces.
This assumption is based on the observation that even in very dense but otherwise normal situations, people try not to push each other but rather stand and wait. It enables us to neglect physical forces altogether and focus on personal goals. If needed in the future, this assumption could be weakened and additional physics could be added similar to [@hoogendoorn-2003] who split up what they call the *physical model* and the *control model*. Note that this assumption sets the GNM apart from models for situations of very high densities, where pedestrian flow becomes similar to a fluid [@hughes-2001; @helbing-2007]. Assumption 2 : Pedestrians want to reach their respective targets in as little time as possible based on their information about the environment. Most models for pedestrian motion are designed with this assumption. Differences remain regarding the optimality criteria for ‘little time’ as well as the amount of information each pedestrian possesses. [@helbing-1995] uses a polygonal path around obstacles for navigation, [@hoogendoorn-2004b] solve a Hamilton-Jacobi equation, incorporating other pedestrians. In this paper, we use the eikonal equation similar to [@hughes-2001; @hartmann-2010] to find shortest arrival times $\sigma$ of a wave emanating from the target region. This allows us to compute the part of the direction of motion $\vec{N}_{i,T}$ that minimizes the time to the target: $$\label{eq:NT} \vec{N}_{i,T}=-\nabla \sigma$$ Assumption 3 : Pedestrians alter their desired velocity as a reaction to the presence of surrounding pedestrians and obstacles. They do so after a certain reaction time. The relation of speed and local density has been studied numerous times and its existence is well accepted. The actual form of this relation, however, differs between cultures, even between different scenarios [@seyfried-2005; @chattaraj-2009; @jelic-2012]. 
Note that assumption 3 not only claims the existence of such a relation but makes it part of the thought process. In our model, we implement this by modifying the desired direction of motion with a vector $\vec{N}_{i,P}$ so that pedestrians keep a certain distance from each other and from obstacles. In models using velocity obstacles, this issue is addressed further [@fiorini-1998; @shiller-2001; @berg-2011; @curtis-2014]. Attractants like windows of a store or street performers could also be modelled as proposed by [@molnar-1996 p. 49], but are not considered in this paper. $$\label{eq:NP} \vec{N}_{i,P}=-(\underbrace{\sum_{j\neq i} \nabla P_{i,j}}_\text{influence of pedestrians} + \underbrace{\sum_B \nabla P_{i,B}}_\text{influence of obstacles})$$ $\nabla P_{i,j}$ and $\nabla P_{i,B}$ are gradients of functions that are based on the distance to another pedestrian $j$ and obstacle $B$ respectively. Their norm decreases monotonically with increasing distance. To model this, we introduce a smooth exponential function with compact support $R>0$ and maximum $p/e>0$ (see Fig. \[fig:calibrate\_potentials\]): $$\label{eq:h} h(r;R,p)=\begin{cases} p\ \text{exp}{ \left( \frac{1}{(r/R)^2-1} \right) }&|r/R|<1\\ 0&\text{otherwise} \end{cases}$$ ![\[fig:calibrate\_potentials\]The graph of $h$, which depends on the distance between pedestrians $r$ as well as on the maximal value $p/e$ and support $R$ of $h$.](fig1_calibrate_potentials.pdf){width="40.00000%"} To take the viewing angle of pedestrians into account, we scale $\nabla P_{i,j}$ by $$s_{i,j}=\tilde{g}(\text{cos}(\kappa\phi_{i,\sigma}-\kappa\phi_j))$$ The function $\tilde{g}$ is a shifted logistic function (see appendix \[eq:tildeg\]) and $(\phi_{i,\sigma}-\phi_j)$ is the angle between the direction $\vec{N}_{i,T}$ and the vector from $x_i$ to $x_j$ (see Fig. \[fig:viewingdirection\]). $\kappa$ is a positive constant to set the angle of view to $\approx 200\deg$.
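The bump function $h$ of Eq. \[eq:h\] translates one-to-one into code; its maximum $p/e$ is attained at $r=0$ and it vanishes smoothly at the support boundary $|r|=R$. A minimal sketch of the formula above (illustrative, with arbitrary example parameters):

```python
import math

def h(r, R, p):
    """Smooth bump: p * exp(1 / ((r/R)**2 - 1)) for |r/R| < 1, else 0."""
    if abs(r / R) >= 1.0:
        return 0.0
    return p * math.exp(1.0 / ((r / R) ** 2 - 1.0))

# Maximum p/e at r = 0; decays monotonically to 0 at the support boundary R.
```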
![\[fig:viewingdirection\]Isolines of the function $s_{i,j}h(\|x_i-x_j\|;1,1)$ with $x_i=0$ and $x_j\in{\mathbb{R}}^2$. This function represents the field of view of a pedestrian in the origin together with his or her comfort zone. If $x_j$ is close and in front of $x_i$, the function values are maximal, meaning least comfort for $x_i$.](fig2_viewingdirection.pdf){width="40.00000%"} Using $h$ and $s_{i,j}$ (see Fig. \[fig:viewingdirection\]), the gradients in Eq. \[eq:NP\] are now defined by $$\begin{aligned} \label{eq:gradient_deltaP} \nabla P_{i,j}&=&h_\epsilon(\|x_i-x_j\|;p_j,R_j)s_{i,j}\frac{x_j-x_i}{\|x_j-x_i\|}\\\label{eq:gradient_deltaO} \nabla P_{i,B}&=&h_\epsilon(\|x_i-x_B\|;p_B,R_B)\frac{x_B-x_i}{\|x_B-x_i\|}\end{aligned}$$ where $p_j,R_j,p_B,R_B$ are positive constants that represent comfort zones between pedestrian $i$, pedestrian $j$ and obstacle $B$. To avoid (mostly artificially induced) situations where pedestrians stand exactly on top of each other [@koster-2013], we replace $h$ by $h_\epsilon$: $$h_\epsilon(\|x_i-x_j\|;p,R)=h(\|x_i-x_j\|;p,R)-h(\|x_i-x_j\|;p,\epsilon)$$ where $\epsilon>0$ is a small constant. For $\epsilon\to 0$, $h_\epsilon(\cdot;p,R)\to h(\cdot;p,R)$. For $\|x_i-x_j\|=0$, we also define $\nabla P_{i,j}=0$ and $\nabla P_{i,B}=0$. To model the second part of assumption 3, we use the result of [@moussaid-2009b]: pedestrians exhibit a certain reaction time $\tau$ between an event that changes their motivation and their action. The relaxed speed adaptation is modeled by a multiplicative, time-dependent scalar variable $w:{\mathbb{R}}^+_0\to{\mathbb{R}}$, which we call relaxed speed. Its derivative with respect to time, $\dot{w}$, is similar to acceleration in one dimension. Eq.
\[eq:NT\] and \[eq:NP\] enable us to construct a relation between the desired direction $\vec{N}$ of a pedestrian and the underlying floor field as well as other pedestrians: $$\label{eq:naviation_vector} \vec{N}=g(g(\vec{N}_T)+g(\vec{N}_P))$$ The function $g:{\mathbb{R}}^2\to{\mathbb{R}}^2$ scales the length of a given vector to lie in the interval $[0,1]$. For the exact formula see appendix A. Note that with definition (\[eq:naviation\_vector\]), $\vec{N}$ need not always have length one, but can also be shorter. This enables us to scale it with the desired speed of a pedestrian to get the velocity vector: $$\dot{\vec{x}}=\vec{N} w$$ With initial conditions $\vec{x}_0=\vec{x}(0)$ and $w_0=w(0)$, the Gradient Navigation Model is given by the equations of motion for every pedestrian $i$: $$\label{eq:GNMequations} \begin{array}{rcl} \dot{\vec{x}}_i(t)&=& w_i(t)\vec{N}_i(\vec{x}_i,t)\\ \dot{w}_i(t)&=&\frac{1}{\tau}{ \left( v_i(\rho(\vec{x}_i))\|\vec{N}_i(\vec{x}_i,t)\|-w_i(t) \right) }\\ \end{array}$$ The position $\vec{x}_i:{\mathbb{R}}\to{\mathbb{R}}^2$ and the one-dimensional relaxed speed $w_i:{\mathbb{R}}\to{\mathbb{R}}$ are functions of time $t$. $v_i(\rho(\vec{x}_i))$ represents the individual's desired speed, which depends on the local crowd density $\rho(\vec{x}_i)$ (see assumption 3). Since the reason for the relation between velocity and density is still an open question [@seyfried-2005; @jelic-2012], we choose a very simple relation in this paper: we use $v_i(\rho)$ constant and normally distributed with mean $1.34ms^{-1}$ and standard deviation $0.26ms^{-1}$, i.e. $v_i(\rho)=v_i^\text{des}\sim N(1.34,0.26)$. The choice of this distribution is based on a meta-study of several experiments [@weidmann-1993]. With these equations, the direction of pedestrian $i$ changes independently of physical constraints, similar to heuristics in [@moussaid-2011], many CA models and the Optimal Steps Model [@seitz-2012].
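To make the preceding definitions concrete, here is a minimal Python sketch of the repulsion gradient of Eq. \[eq:gradient\_deltaP\] and one explicit-Euler step of Eq. \[eq:GNMequations\]. This is our own illustration, not the paper's simulator (which integrates with a Dormand–Prince scheme); the viewing-angle factor $s_{i,j}$ is passed in as a precomputed scalar:

```python
import math

def h(r, R, p):
    # Bump function of Eq. (eq:h): maximum p/e at r = 0, support [0, R).
    if abs(r / R) >= 1.0:
        return 0.0
    return p * math.exp(1.0 / ((r / R) ** 2 - 1.0))

def grad_P(x_i, x_j, p, R, s_ij=1.0, eps=0.1):
    # Repulsion gradient of Eq. (eq:gradient_deltaP), with h replaced by
    # the regularized h_eps(.) = h(.; R, p) - h(.; eps, p).
    dx, dy = x_j[0] - x_i[0], x_j[1] - x_i[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)               # convention from the text
    scale = (h(dist, R, p) - h(dist, eps, p)) * s_ij / dist
    return (scale * dx, scale * dy)

def gnm_step(x, w, N_func, v_des, tau=0.5, dt=0.01):
    # One explicit-Euler step of Eq. (eq:GNMequations) for one pedestrian;
    # N_func(x) returns the navigation vector N_i at position x.
    Nx, Ny = N_func(x)
    x_new = (x[0] + dt * w * Nx, x[1] + dt * w * Ny)
    w_new = w + dt / tau * (v_des * math.hypot(Nx, Ny) - w)
    return x_new, w_new

# A free pedestrian (no neighbours, N = (1, 0)) relaxes to v_des.
x, w = (0.0, 0.0), 0.0
for _ in range(2000):                   # 20 simulated seconds
    x, w = gnm_step(x, w, lambda x: (1.0, 0.0), v_des=1.34)
```

With $\vec{N}=(1,0)$ and $\tau=0.5$, the relaxed speed $w$ converges to the desired speed, mirroring the one-dimensional relaxation described above.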
The speed in the desired direction is determined by the norm of the navigation function $\vec{N}_i$ and the relaxed speed $w_i$.

\[subsec:staticnavigationfield\]The navigation field
====================================================

Similar to [@hughes-2001] and later [@hoogendoorn-2003; @hartmann-2010], we use the solution $\sigma:{\mathbb{R}}^2\to{\mathbb{R}}$ to the eikonal equation (\[eq:eikonal\_equation\]) to steer pedestrians to their targets. $\sigma$ represents first arrival times (or walking costs) in a given domain $\Omega\subset{\mathbb{R}}^2$: $$\label{eq:eikonal_equation} \begin{array}{rcl} G(x)\|\nabla \sigma(x)\| &=& 1,\ x\in\Omega\\ \sigma(x)&=&0,\ x\in\Gamma\subset\partial\Omega \end{array}$$ $\Gamma\subset\partial\Omega$ is the union of the boundaries of all possible target regions for one pedestrian. Static properties of a geometry (for example rough terrain or an obstacle) can be modelled by modifying the speed function $G:{\mathbb{R}}^2\to (0,+\infty)$. [@hartmann-2014b] include the pedestrian density in $G$. This enables pedestrians to locate congestions and then take a different exit route. [@treuille-2006] used the eikonal equation to steer very large virtual crowds. If $G(x)=1\ \forall x$, $\sigma$ represents the length of the shortest path to the closest target region. This does not take into account that pedestrians cannot get arbitrarily close to obstacles. Therefore, we slow down the wave close to obstacles by reducing $G$ in the immediate vicinity of walls. The influence of walls on $\sigma$ is chosen similar to $\|\nabla P_{i,B}\|$, so that pedestrians incorporate the distance to walls into their route. Being a solution to the eikonal equation (\[eq:eikonal\_equation\]), the floor field $\sigma$ is Lipschitz-continuous [@evans-1997]. In the given ODE setting, however, it is desirable to smooth $\sigma$ to ensure differentiability and thus existence of the gradient at all points in the geometry.
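On a discrete grid, the floor field $\sigma$ can be approximated by a Dijkstra-style wavefront expansion from the target cells. The paper uses the more accurate fast marching method; the sketch below, with constant speed $G$ and 4-connected unit cells, is our simplification to illustrate the idea:

```python
import heapq

def arrival_times(grid, targets, G=1.0):
    """Dijkstra approximation of the eikonal solution sigma: first
    arrival times of a wave of speed G from the target cells.
    grid[y][x] == 1 marks an obstacle cell (toy-scale sketch only)."""
    H, W = len(grid), len(grid[0])
    INF = float("inf")
    sigma = [[INF] * W for _ in range(H)]
    heap = [(0.0, x, y) for (x, y) in targets]
    for (x, y) in targets:
        sigma[y][x] = 0.0
    while heap:
        s, x, y = heapq.heappop(heap)
        if s > sigma[y][x]:
            continue                      # stale heap entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < W and 0 <= ny < H and grid[ny][nx] == 0:
                cand = s + 1.0 / G        # travel time across one cell
                if cand < sigma[ny][nx]:
                    sigma[ny][nx] = cand
                    heapq.heappush(heap, (cand, nx, ny))
    return sigma

# A 1x5 corridor with the target at the left end: sigma grows linearly.
sig = arrival_times([[0, 0, 0, 0, 0]], targets=[(0, 0)])
```

Reducing $G$ near walls, as described in the text, would simply increase the per-cell travel time there.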
We employ mollification theory [@evans-1997] with a mollifier $\eta$ (similar to $h$ in Eq. \[eq:h\]) on compact support $B(x)$ to get a mollified $\nabla\sigma$, which we call $\nabla\tilde{\sigma}$: $$\label{eq:smooth_nablasigma} \nabla\tilde{\sigma}(x)=\nabla(\eta * \sigma)(x)=\int_{B(x)} \nabla\eta(y)\sigma(x-y)dy\in C^\infty({\mathbb{R}}^2,{\mathbb{R}}^2)$$

\[sec-ANA\]Mathematical Analysis and Calibration
================================================

Existence and uniqueness of a solution to Eq. \[eq:GNMequations\] follows from the theorem of Picard and Lindelöf when using the method of vanishing viscosity to solve the eikonal equation [@evans-1997] and mollification theory to smooth $\nabla\sigma$ (see Eq. \[eq:smooth\_nablasigma\]). The system of equations in Eq. \[eq:GNMequations\] contains several parameters. [@moussaid-2009b; @johansson-2007] conducted experiments to find the parameters $\tau$ (relaxation constant, $\approx 0.5$) and $\kappa$ (viewing angle, $\approx 200\deg$, which corresponds to a value of $\kappa\approx 0.6$ here). The following free parameters remain:

- $p_p$ and $R_p$ define maximum and support of the norm of the pedestrian gradient $\|\nabla P_{i,j}\|$

- $p_B$ and $R_B$ define maximum and support of the norm of the obstacle gradient $\|\nabla P_{i,B}\|$

We use an additional assumption to find relations between these four free parameters: Assumption 4 : A pedestrian who is enclosed by four other stationary persons on one side and by a wall on the other side, and who wants to move parallel to the wall, does not move in any direction (see Fig. \[fig:calibrate\_model\]). This scenario is very common in pedestrian simulations and involves many elements that are explicitly modeled: other pedestrians, walls and a target direction. The setup also includes other scenarios: when the wall is replaced by two other pedestrians, the one in the center also does not move if assumption 4 holds.
This is because the vertical movement is canceled out by the symmetry of the scenario. ![\[fig:calibrate\_model\](Color online) Setup used to reduce the number of parameters. The pedestrian in the center (gray) wants to reach a target on the right (black, thick arrow, $-\nabla\sigma$) but is enclosed by four other pedestrians who are not moving and a wall (thick line at the bottom). Together, the pedestrians and the wall act on the gray pedestrian via $-\nabla\delta$ (sum of red, slim arrows). The red cross on the wall marks the position on the wall that is closest to the gray pedestrian.](fig3_calibrate_model.pdf){width="40.00000%"} Using assumption 4, we can simplify the system of equations (\[eq:GNMequations\]) to find dependencies between parameters. First, the direction vectors $\vec{N}_{i,T}$ and $\vec{N}_{i,P}$ are computed based on the given scenario. The gray pedestrian wants to walk parallel to the wall in positive x-direction, that is $$\begin{aligned} \label{eq:calibrate_sigma} \vec{N}_{i,T}(x_i)=-\nabla\sigma(x_i)&=&\left[\begin{array}{c} 1\\0 \end{array}\right]\end{aligned}$$ The remaining function $\vec{N}_{i,P}$ is composed of the repulsive effect of the four enclosing pedestrians and the wall. We simplify equations \[eq:gradient\_deltaP\] and \[eq:gradient\_deltaO\] by taking the limit $\epsilon\to 0$, which is reasonable since the pedestrians do not overlap in the scenario. $$\begin{aligned} \label{eq:calibrate_delta} \vec{N}_{i,P}=\underbrace{\sum_{j=1}^4 h_p(\|x_\text{gray}-x_j\|)s_{\text{gray},j}\frac{x_\text{gray}-x_j}{\|x_\text{gray}-x_j\|}}_{\text{influence of the `white' pedestrians}}\nonumber\\ +\underbrace{h_B(\|x_\text{gray}-x_B\|)\left[\begin{array}{c} 0\\1 \end{array}\right]}_{\text{influence of the wall}}\end{aligned}$$ Using Eq.
(\[eq:calibrate\_sigma\]), (\[eq:calibrate\_delta\]) and assumption 4, the system of differential equations (\[eq:GNMequations\]) for the pedestrian in the center yields $$\begin{aligned} \label{eq:calibration_xdot}\dot{\vec{x}}_i=w \vec{N}_i=w g(g(\vec{N}_{i,T})+g(\vec{N}_{i,P}))&=&0\\ \dot{w}=\frac{1}{\tau}(v(\rho)\|N\|-w)&=&0\end{aligned}$$ The second equality yields $$w=v(\rho)\|\vec{N}\|\implies w\geq 0$$ Since assumption 4 does not imply $w=0$, Eq. (\[eq:calibration\_xdot\]) can only hold in general if $$\label{eq:calibrate_sigmadelta} g(\vec{N}_{i,T})=-g(\vec{N}_{i,P})$$ Since all $\phi_i$, $x_i$, $x_\text{gray}$ and $x_B$ are known in the given scenario, the only free variables in Eq. (\[eq:calibrate\_sigmadelta\]) are the free parameters of the model: the height and width of $h_p$ (named $p_p$ and $R_p$), as well as those of $h_B$ (named $p_B$ and $R_B$). With only two equations for four parameters, system (\[eq:calibrate\_sigmadelta\]) is underdetermined and thus we choose $R_B=0.25$ (according to [@weidmann-1993]) and $R_p=\sqrt{3}r(\rho_\text{max})$, where $r(\rho_\text{max})$ is the distance of pedestrians in a dense lattice with pedestrian density $\rho_\text{max}$. This choice for $R_p$ ensures that pedestrians adjacent to the enclosing ones have no influence on the one in the center. Note that if this condition is weakened in assumption 4, the model behaves differently on a macroscopic scale (see Fig. \[fig:FD2D\_speed\_twoneighbors\]). With two of the four parameters fixed, we use Eq. (\[eq:calibrate\_sigmadelta\]) to fix the remaining two. Table \[tab:calibrate\_parameters\] shows numerical values of all parameters, assuming $\rho_{max}=7 P/m^2$, which leads to $r(\rho_{max})\approx 0.41$.
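The quoted value $r(\rho_\text{max})\approx 0.41$ is reproduced if the "dense lattice" is taken to be a hexagonal packing with density $\rho = 2/(\sqrt{3}\,r^2)$; this reading of the lattice is our assumption, but it matches both numbers quoted in the text:

```python
import math

def lattice_distance(rho_max):
    # Nearest-neighbour distance r in a hexagonal lattice of density
    # rho_max, assuming rho = 2 / (sqrt(3) * r^2) (our packing assumption).
    return math.sqrt(2.0 / (math.sqrt(3.0) * rho_max))

r = lattice_distance(7.0)        # r(7 P/m^2), approx. 0.41 m
R_p = math.sqrt(3.0) * r         # support of h_p, approx. 0.70 m
```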
  Parameter   Value   Description
  ----------- ------- ---------------------
  $\kappa$    0.6     Viewing angle
  $\tau$      0.5     Relaxation constant
  $p_p$       3.59    Height of $h_p$
  $p_B$       9.96    Height of $h_B$
  $R_p$       0.70    Width of $h_p$
  $R_B$       0.25    Width of $h_B$

  : \[tab:calibrate\_parameters\]Numerical values of all parameters of the Gradient Navigation Model using assumption 4 and $\rho_{max}=7P/m^2$. The first two were determined by experiment [@johansson-2007; @moussaid-2009b].

\[sec-VALIDATION\]Simulations
=============================

To solve Eq. (\[eq:GNMequations\]) numerically, we use the step-size controlling Dormand-Prince-45 integration scheme [@dormand-1980b] with $\text{tol}_\text{abs}=10^{-5}$ and $\text{tol}_\text{rel}=10^{-4}$. Employing this scheme is possible because the derivatives are designed to depend smoothly on $x$, $w$ and $t$. Unless otherwise stated, all simulations use the parameters given in Tab. \[tab:calibrate\_parameters\]. The desired speeds $v_i^\text{des}$ are normally distributed with mean $1.34ms^{-1}$ and standard deviation $0.26ms^{-1}$ as observed in experiments [@weidmann-1993]. $v_i^\text{des}$ is cut off at $0.3ms^{-1}$ and $3.0ms^{-1}$ to avoid negative or unreasonably high values. We used the fast marching method [@sethian-1999] to solve the eikonal equation (Eq. \[eq:eikonal\_equation\]). The mollification of $\nabla\sigma$ (Eq. \[eq:smooth\_nablasigma\]) is computed using Gauss-Legendre quadrature with $21\times21$ grid points. All simulations were conducted on a machine with an Intel Xeon(R) X5672 Processor, 3.20 GHz and with the Java-based simulator VADERE. Simulations of scenarios with over 1000 pedestrians were possible in real time under these conditions. We validate the model quantitatively by comparing the flow rates of 180 simulated pedestrians in a bottleneck scenario (see Fig. \[fig:liddle\_cone\]) of different widths with experimentally determined data from [@kretz-2006; @seyfried-2009; @liddle-2011].
The length of the bottleneck is $4m$ in all runs. Fig. \[fig:flow\_liddle\] shows that, regarding flow rates, the simulation is in good quantitative agreement with data from [@kretz-2006; @seyfried-2009; @liddle-2011] for all bottleneck widths. ![\[fig:flow\_liddle\](Color online) Flow rate of the GNM compared to experiments of Kretz [@kretz-2006], Seyfried [@seyfried-2009] and Liddle [@liddle-2011]. We use the parameters from Tab. \[tab:calibrate\_parameters\] and the normal distribution $N(1.34ms^{-1},0.26ms^{-1})$ to find desired velocities as proposed by [@weidmann-1993].](fig4_flow_liddle.pdf){width="35.00000%"} The formation of a crowd in front of a bottleneck also matches observations well (see Fig. \[fig:liddle\_cone\]): the pedestrians form a cone in front of the bottleneck, as observed by [@kretz-2006; @seyfried-2009b; @schadschneider-2011b]. Note that this is different from the behaviour described in [@helbing-2000], which tries to capture the dynamics in stress situations. Our simulations suggest that the desired velocity is the most important parameter for this experiment: when we change its distribution to $N(1.57ms^{-1},0.15ms^{-1})$ as found by [@gerhardt-2011], the flow is $\approx 1s^{-1}$ higher for small widths and $\approx 1s^{-1}$ lower for larger widths. ![\[fig:liddle\_cone\]The pedestrians in the GNM simulation form a cone in front of the bottleneck as observed by [@kretz-2006; @seyfried-2009b; @schadschneider-2011b].](fig5_liddle_cone.pdf){width="40.00000%"} The GNM can be calibrated to match the relation of speed and density in a given fundamental diagram. Fig. \[fig:FD2D\_speed\_oneneighbor\] shows that for the calibration with only one layer of neighbors, pedestrians do not slow down with increasing densities as quickly as suggested in [@weidmann-1993]. When calibrating with one additional layer of pedestrians in the scenario shown in Fig. \[fig:calibrate\_model\], the curves match much better (see Fig. \[fig:FD2D\_speed\_twoneighbors\]).
We use the method introduced by [@liddle-2011] to measure local density. ![\[fig:FD2D\_speed\_oneneighbor\](Color online) Speed-density relation in unidirectional flow compared to experimental data from metastudy (Weidmann, [@weidmann-1993]). The corridor was $40m$ long and $4m$ wide with periodic boundary conditions. Each cross (labeled ‘simulation’) represents a local measurement at the position of a pedestrian. We use the method introduced by [@liddle-2011] to measure local density. The parameter set in these simulations was fixed with the procedure shown in Fig. \[fig:calibrate\_model\] and thus incorporates one layer of four pedestrians.](fig6_FD2D_speed_oneneighbor.pdf){width="40.00000%"} ![\[fig:FD2D\_speed\_twoneighbors\](Color online) Speed-density relation in unidirectional flow compared to experimental data from metastudy (Weidmann, [@weidmann-1993]). The corridor was $40m$ long and $4m$ wide with periodic boundary conditions. Each cross (labeled ‘simulation’) represents a local measurement at the position of a pedestrian. We use the method introduced by [@liddle-2011] to measure local density. The parameter set in these simulations was adjusted with a similar procedure as in Fig. \[fig:calibrate\_model\] to incorporate neighbors of neighbors in the computation of $\nabla\delta$: $R_p=1.0$, $p_p=1.79$, $R_B=0.25$ and $p_B=11.3$.](fig7_FD2D_speed_twoneighbors.pdf){width="40.00000%"} [@gaididei-2013; @marschler-2013] compute the deviation of distances between drivers to analyze stop-and-go waves in car traffic. No deviation implies no stop-and-go waves since all distances are equal. A large deviation hints at the existence of a wave since there must be regions with large and regions with small distances between drivers. For pedestrian dynamics, [@helbing-2007; @portz-2011; @jelic-2012] found stop-and-go waves experimentally. Similar to the wave analysis in traffic, we use the deviation of individual speeds to measure stop-and-go waves. Fig.
\[fig:stopandgo\_mu\_sigma\] and \[fig:stopandgo\_scenario\] show that the GNM also produces stop-and-go waves when a certain global density is reached. ![\[fig:stopandgo\_mu\_sigma\]Normalized standard deviation $\sigma(v)/0.26$ (line) and mean $\mu(v)/1.34$ (dashed line) of individual speeds in a unidirectional walkway with differing global densities $\rho$. Both the data points of the simulations and zero-phase digital filtering curves (width: five data points) are shown. The peak of the standard deviation at $\rho=4Pm^{-2}$ indicates stop-and-go waves: even though the mean speed decreases, the speed differences increase, which means that there are regions with low as well as high speeds present at the same time.](fig8_stopandgo_mu_sigma.pdf){width="40.00000%"} ![\[fig:stopandgo\_scenario\]Snapshot of a unidirectional pathway with global density $\rho=4P/m^2$, dimension $50m\times 4m$, periodic boundary conditions, after $120$ simulated seconds and walking direction to the right. The normal and dashed lines mark slower and faster pedestrians, respectively: a stop-and-go wave.](fig9_stopandgo_scenario.pdf){width="40.00000%"} The model also captures lane formation in bidirectional flow out of uniform initial conditions, as observed experimentally by [@zhang-2012b]. In the simulation, pedestrians walk bidirectionally in a 10m wide and 150m long pathway at a pedestrian density of $0.3Pm^{-2}$. They start on uniformly distributed positions at the left / right side and walk towards a target on the respective other end. Fig. \[fig:lanes\_25m\] shows that several lanes form. Due to the different desired velocities, many of them break up after some time. When simulating with densities higher than $1Pm^{-2}$ in the whole pathway, pedestrians block each other and all movement stops. ![\[fig:lanes\_25m\]Formation of six lanes in bidirectional flow. Filled circles represent pedestrians walking to the left, empty circles represent pedestrians walking to the right.
The walkway is 10m wide and 150m long. The snapshot shows a section of 25m.](fig10_lanes_25m.pdf){width="45.00000%"}

Conclusion
==========

We introduced a new ODE-based microscopic model for pedestrian dynamics, the Gradient Navigation Model. We demonstrated that the model reproduces important crowd phenomena very well, such as bottleneck scenarios, lane formation, stop-and-go waves and the speed-density relation. In the case of bottlenecks and the speed-density relation, good agreement with experimental data was achieved. Calibration of the model parameters was performed using plausible assumptions on the outcome of benchmark scenarios rather than numerical tests. Recalibration for different scenarios was unnecessary. One main goal for the model was to find a concise formulation with as few equations as possible and, at the same time, certain smoothness properties so that existence, uniqueness and smoothness of the solution would follow directly. The GNM only needs three equations, as opposed to four in force-based models, to describe the motion of one pedestrian. In addition, we proposed a floor field to steer pedestrians instead of constructing paths or guiding lines. The floor field was computed by solving the eikonal equation using Sethian’s highly efficient fast marching algorithm [@sethian-1999]. To achieve smoothness, mollification techniques were employed. The smoothness also enabled us to use numerical schemes of high order, making the GNM computationally very efficient. Two of the methods we introduced can easily be carried over to other models: the plausibility arguments that allowed us to calibrate free parameters hold independently of the model. The mollification techniques that led to the smooth functions could also be used by other differential equation based models like the Social Force Model [@helbing-1995; @koster-2013] or the Generalized Centrifugal Force Model [@chraibi-2010].
Some of the most recent enhancements in crowd modeling rely on a floor field to steer pedestrians towards the target. Among them are steering around crowd clusters [@hughes-2001; @hartmann-2014b] and more sophisticated navigation on the tactical and strategic level [@hoogendoorn-2004]. These developments can be employed in the GNM without any change to the equations of motion. Some empirical observations, such as stop-and-go traffic [@helbing-2007; @schadschneider-2011b], are not yet well understood, neither from the experimental nor the theoretical point of view. In a mathematical model, stability issues and bifurcations are often at the root of such phenomena. The concise mathematical formulation of the GNM as an ODE system facilitates stability analysis and the investigation of bifurcations, both tasks that we are currently working on. This work was partially funded by the German Ministry of Research through the project MEPKA (Grant No. 17PNT028). Support from the TopMath Graduate Center of TUM Graduate School at Technische Universität München, Germany, and from the TopMath Program at the Elite Network of Bavaria is gratefully acknowledged. We thank Mohcine Chraibi for his advice on the validation of the model.

Appendix A {#appendix-a .unnumbered}
==========

\[sec:smoothg\]Vector normalizer
--------------------------------

In order to design a function that smoothly scales a given vector to a length in $[0,1]$, a smooth ramp function is needed.
The following chain of definitions is adopted from [@koster-2013]: Let $r:{\mathbb{R}}\to{\mathbb{R}}$ be the ramp function defined by $$r(x)=\begin{cases} 0&\text{for }x\leq 0\\ x&\text{for }x\in(0,1)\\ 1&\text{for }x\geq 1\\ \end{cases}$$ Then, a smooth version $r_\text{moll,p}$ with mollification parameter $p$ is given by $$\begin{aligned} \text{moll}(x,R,p)&=&\begin{cases} e\cdot \exp\left(\frac{1}{(\|x\|/R)^{2p}-1}\right)&\text{for }\|x\|<R\\ 0&\text{for }\|x\|\geq R \end{cases}\\ r_\text{moll,p}(x)&=&\text{moll}(x,1,p)\cdot x +(1-\text{moll}(x,1,p))\end{aligned}$$ where $p>1$. For this paper, we used $p=3$. The following two statements hold: 1. $r_\text{moll,p}\in C^\infty({\mathbb{R}})$ 2. $r_\text{moll,p}(x)=r(x)\ \forall x\in {\mathbb{R}}\setminus (0,1)$ \(i) holds since the standard mollifier is smooth: $\text{moll}(x,R,p)\in C^\infty$ [@evans-1997]. (ii) is trivial from the definitions of $r$ and $r_\text{moll,p}$. The desired scaling function $g$ can now be defined as follows: $$\begin{aligned} g:{\mathbb{R}}^2&\to&{\mathbb{R}}^2\\ x&\mapsto&\begin{cases} (0,0)^T&\text{for }\|x\|=0\\ x/\|x\|\cdot r_\text{moll,p}(\|x\|)&\text{for }\|x\|>0 \end{cases}\end{aligned}$$ For a similar, one-dimensional version, for example for smoothing $\max(0,x)$ with $x\in[-1,1]$, the logistic function can be used: $$\label{eq:tildeg} \tilde{g}(x;x_0,R)=\frac{1}{1 + e^{-(x-x_0) / R}}$$ with $x,x_0,R\in\mathbb{R}$. In this paper, we choose $x_0=0.3$ and $R=0.03$ to smooth the influence of the viewing angle.
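The chain of definitions above translates directly into Python. A minimal sketch follows; note that inside $g$ only $\|x\|\geq 0$ is ever passed to $r_\text{moll,p}$, so the negative branch of the ramp is not needed:

```python
import math

def moll(x, R=1.0, p=3):
    # Standard mollifier scaled so that moll(0, R, p) = 1.
    ax = abs(x)
    if ax >= R:
        return 0.0
    return math.e * math.exp(1.0 / ((ax / R) ** (2 * p) - 1.0))

def r_moll(x, p=3):
    # Smooth ramp: 0 at x = 0, identical to 1 for x >= 1.
    m = moll(x, 1.0, p)
    return m * x + (1.0 - m)

def g(vec, p=3):
    # Smoothly rescale a 2-d vector to length r_moll(||vec||) in [0, 1].
    norm = math.hypot(vec[0], vec[1])
    if norm == 0.0:
        return (0.0, 0.0)
    s = r_moll(norm, p) / norm
    return (s * vec[0], s * vec[1])

def tilde_g(x, x0=0.3, R=0.03):
    # Shifted logistic function of Eq. (tildeg).
    return 1.0 / (1.0 + math.exp(-(x - x0) / R))
```

Vectors of norm at least one are normalized to unit length, while shorter vectors keep (approximately) their length, exactly as required for Eq. \[eq:naviation\_vector\].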
---
address:
- 'Ivan Yu. Mogilnykh Sobolev Institute of Mathematics, pr. ac. Koptyuga 4, 630090, Novosibirsk, Russia'
- 'Faina I. Solov’eva Sobolev Institute of Mathematics, pr. ac. Koptyuga 4, 630090, Novosibirsk, Russia'
author:
- '[I. Yu. Mogilnykh, F. I. Solov’eva]{}'
title: 'On components of a Kerdock code and the dual of the BCH code $C_{1,3}$'
---

[^1] [^2]

> [Abstract. ]{} In the paper we investigate the structure of $i$-components of two classes of codes: Kerdock codes and the duals of the primitive cyclic BCH code with designed distance 5 of length $n=2^m-1$, for odd $m$. We prove that for any admissible length a punctured Kerdock code consists of two $i$-components and the dual of the BCH code is a single $i$-component for any $i$. We give an alternative proof of the fact that the restriction of the Hamming scheme to a doubly shortened Kerdock code is an association scheme [@vanCaen].
>
> [**Keywords:**]{} Kerdock code, shortened Kerdock code, punctured Kerdock code, Reed-Muller code, uniformly packed code, dual code, association scheme, $t$-design

Introduction
============

Let ${{\mathbb{F}}}^n$ be the vector space of dimension $n$ over the Galois field $GF(2)$. Denote by ${\bf 0}^n$ and ${\bf 1}^n$ the all-zero and all-one vectors in ${{\mathbb{F}}}^n$ respectively. The Hamming distance $d(x,y)$ between vectors $x,y \in {{\mathbb{F}}}^n$ is the number of positions at which the corresponding symbols in $x$ and $y$ are different. [*The Hamming weight*]{} $w(x)$ of a vector $x$ is $d(x,{\bf 0}^n)$. A [*code*]{} of length $n$ is a subset of ${{\mathbb{F}}}^n$. Vectors of a code are called [*codewords*]{}. The [*size*]{} of a code is the number of its codewords. The [*code distance*]{} (or [*minimum distance*]{}) of a code is the minimum value of the Hamming distance between two different codewords from the code. The [*kernel*]{} $Ker(C)$ of a code $C$ is $\{x:x+C=C\}$. Obviously, the code $C$ is a union of cosets of $Ker(C)$.
The code obtained from a code $C$ by deleting one coordinate position is called the [*punctured code*]{}. We denote such a code by $C^*$ and a doubly punctured code by $C^{**}$. The [*shortened code*]{} of $C$ is obtained by selecting the subcode of $C$ having zeros at a certain position and deleting this position. We denote such a code by $C^{\prime}$ and a doubly shortened code by $C^{\prime\prime}$. For a code $C$ denote by $I(C)$ the set of distances between its codewords: $I(C)=\{d(x,y):x,y \in C\}$ and by $C_i$ denote the set of its codewords of weight $i$: $C_i=\{x\in C: w(x)=i\}$. All other necessary definitions and notions can be found in [@MWSl]. Given a code $C$ with minimum distance $d$, consider the graph $G_i(C)$ with the set of codewords as the set of vertices and the set of edges $\{(x,y):d(x,y)=d, x_i\neq y_i\}$. A connected component of the graph $G_i(C)$ is called an [*$i$-component*]{} of the code. If the minimum distance $d$ is greater than $2$, then changing the value in the $i$th coordinate position of all vectors of any $i$-component to the opposite one leads to a code with the same parameters: length, size and code distance. Therefore, we can obtain an exponential number (as a function of the number of $i$-components in the code) of different codes with the same parameters. Such an approach was earlier successfully developed for the class of perfect codes. The method of $i$-components allowed the construction of a large class of pairwise nonequivalent perfect codes and was used to study various code properties, see the survey [@Sol]. Punctured Preparata codes, perfect codes with code distance 3 and the primitive cyclic BCH code $C_{1,3}$ with designed distance 5 of length $2^m-1$, odd $m$, are known to be uniformly packed [@SZZ1971], [@BZZ]. Therefore, the fixed weight codewords of the extensions of these codes form 3-designs, which was proved by Semakov, Zinoviev and Zaitsev in [@SZZ1971].
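For small codes, the $i$-components defined above can be computed directly by building the graph $G_i(C)$ and taking connected components. The following sketch is our own illustration (codewords as 0/1 tuples, union-find over codeword indices), not code from the paper:

```python
from itertools import combinations

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def i_components(code, i):
    """Connected components of G_i(C): vertices are the codewords,
    edges join codewords at minimum distance d that differ in
    coordinate i.  Toy-scale sketch; code is a list of 0/1 tuples."""
    d = min(hamming(x, y) for x, y in combinations(code, 2))
    parent = list(range(len(code)))     # union-find over indices

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for (a, x), (b, y) in combinations(enumerate(code), 2):
        if hamming(x, y) == d and x[i] != y[i]:
            parent[find(a)] = find(b)
    comps = {}
    for a in range(len(code)):
        comps.setdefault(find(a), []).append(code[a])
    return list(comps.values())

# The even-weight code of length 3 (minimum distance 2).
evens = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
comps = i_components(evens, 0)
```

For a code with $d>2$, flipping coordinate $i$ in all codewords of one such component yields a code with the same length, size and distance, as described above.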
An analogous property holds for duals of codes from these classes. Let $C^{\perp}$ be a formally dual code to a code $C$ with code distance $d$, i.e. their weight distributions are related by the MacWilliams identities [@MWSl]. In Theorem 9, Ch. 9, [@MWSl] it was shown that the set of codewords of any fixed weight in $C^{\perp}$ is a $(d-\bar{s})$-design, where $\bar{s}$ denotes the number of different nontrivial (not equal to $0$ and $n$) weights of the codewords of $C^{\perp}$. It is well known that a Kerdock code and a Preparata code of the same length are formally dual. Therefore, the fixed weight codewords of a Kerdock code form $3$-designs and those of the code $C^{\perp}_{1,3}$ orthogonal to $C_{1,3}$ of length $2^m-1$, odd $m$, form $2$-designs, respectively. The aforementioned codes are related to association schemes. Let $X$ be a set and let $R_i$, $i\in I$, be $n+1$ relations that partition $X\times X$. The pair $(X,\{R_i\}_{i \in I})$ is called an [*association scheme*]{} if there exist numbers $\delta_{i,j}^k(X)$ such that

- The relation $\{(x,x):x \in X\}$ is $R_j$ for some $j \in I$.

- For any $i$, the relation $R_{i}^{-1}=\{(y,x):(x,y) \in R_i\}$ is $R_j$ for some $j\in I$.

- For any $i,j,k\in I$ and any $x,y \in X$ with $(x,y) \in R_i$, the following holds: $$\delta_{i,j}^k(X)=|\{z:z \in X, (x,z)\in R_j, (y,z)\in R_k \}|.$$

The numbers $\delta_{i,j}^k(X)$, $i,j,k\in I$, are called [*intersection numbers*]{} of the association scheme. Let $C$ be a binary code. Consider the partition of the cartesian square $C\times C$ into distance relations, i.e. two pairs of codewords are in the same relation if and only if the Hamming distances between the pairs coincide. Such a partition is called [*the restriction*]{} of the Hamming scheme to the code $C$, see [@Del]. There are several cases where the restriction gives an association scheme; a code with this property is called distance-regular, see [@SolTok].
Using the linear programming bound, Delsarte [@Del] showed that the restriction of the Hamming scheme to a shortened Kerdock code is an association scheme. An analogous fact for Kerdock codes was proved in [@SolTok] by finding the intersection numbers of the restricted scheme directly. In [@vanCaen], see also [@Abdukhalikov], it is shown that the restriction to a doubly shortened Kerdock code is also an association scheme. The latter fact contributes to a significant part of the current paper concerning components of a Kerdock code; however, we give an alternative combinatorial proof of this fact as we essentially need a convenient way of finding the intersection numbers of the scheme. Delsarte (Theorem 6.10, [@Del]) proved that the restriction of the Hamming scheme to the dual of any linear uniformly packed code (in particular, the code $C^{\perp}_{1,3}$, which is the dual of the BCH code $C_{1,3}$) is an association scheme. In this paper we show that the punctured Kerdock code has two $i$-components for any coordinate position $i$, while the dual of a linear uniformly packed code with parameters of the BCH code $C_{1,3}$ is a single $i$-component for any coordinate position $i$.

Components of Kerdock code
==========================

In this section we fix $n$ to be $2^m$, for even $m$, $m\geq 4$. A [*Kerdock code*]{} $K$ is a binary code of length $n$ and minimum distance $d=(n-\sqrt{n})/2$, consisting of the first order Reed–Muller code RM$(1,m)$ and $2^{m-1}-1$ of its cosets such that the weights of the codewords in a coset are $d$ or $n-d$. These codes were first constructed in [@Kerdock] and further generalizations were obtained in [@Kantor], [@Ham]. The weight distribution of a Kerdock code is well known and is related to the weight distribution of a Preparata code via the MacWilliams identities [@MWSl].
  $i$     The number of codewords of weight $i$
  ------- ---------------------------------------
  $0$     $1$
  $d$     $n(n-2)/2$
  $n/2$   $2n-2$
  $n-d$   $n(n-2)/2$
  $n$     $1$

In order to prove that a Kerdock code consists of two $i$-components, we use the following properties of the code, which come from its definition. Without loss of generality, ${\bf 0}^n$ is in a Kerdock code. (K1) Any Kerdock code $K$ is a union of $n/2$ cosets of RM$(1,m)$. (K2) It is true that $K_{n/2}\bigcup \{ {\bf 0}^n, {\bf 1}^n \}= \mbox{ RM}(1,m)$. (K3) The distance between codewords from different cosets of RM$(1,m)$ in the code $K$ is either $d$ or $n-d$. (K4) Nonzero distances between codewords in any coset are either $n/2$ or $n$. (K5) RM$(1,m) \subseteq Ker(K)$. The property below follows from (K2)-(K5): (K6) If for $x,y \in K$ we have $w(x+y)=n/2$ then $x+y\in K$. \[TMW\][@MWSl]\[Theorem 9, Ch. 9\] Let $C$ be a code of length $n$ and minimum distance $d$, let $C^{\perp}$ be a code which is formally dual to $C$, and let $\bar{s}=|I(C^{\perp})\setminus\{0, n\}|$. Then the set of codewords of any fixed nonzero weight in $C^{\perp}$ is a $(d-\bar{s})$-design. Theorem \[TMW\] applied to Preparata and Kerdock codes implies the following: (K7)[@MWSl] $K_d$, $K_{n/2}$, $K_{n-d}$ are 3-designs. In order to proceed further we need the following lemma. \[magic\] Let $x$ be a vector of weight $i$ and let $D$ be a $1$-$(n,j,\lambda_1)$ design. Let the distance between $x$ and the vectors of $D$ take values $k_1,\ldots, k_s$ with multiplicities $\delta^{k_1},\ldots,\delta^{k_s}$ respectively. Then the following formula holds: $$\label{lemma1*} \sum_{l= 1}^{s}\delta^{k_l}\cdot \frac{i+j-k_l}{2}=i\lambda_1$$ and $\delta^{k_1},\delta^{k_2}$ are uniquely defined by $\delta^{k_3},\ldots,\delta^{k_s}$. Let the distance between the vector $x$ and an arbitrary vector $y$ from $D$ be $k_l$; then there are $$\frac{i+j-k_l}{2}$$ common unit coordinates for $x$ and $y$, $l=1,2,\ldots,s$.
On the other hand, there are exactly $\lambda_1$ vectors of $D$ that have $1$ in any prefixed coordinate position. Double counting of $$\sum_{y\in D}|\{i: x_i=y_i=1\}|$$ gives $\sum_{l= 1}^{s}\delta^{k_l}\cdot \frac{i+j-k_l}{2}=i\lambda_1 $. Finally, $\delta^{k_1},\delta^{k_2}$ are uniquely defined by (\[lemma1\*\]), taking into account that $\sum_{l= 1}^{s}\delta^{k_l}=|D|$, where $|D|=\lambda_1 \frac{n}{j}.$

Note that $I(K^{\prime\prime})=\{0,d,n/2,n-d\}$, as the all-one vector is excluded when passing to $K^{\prime}$.

\[ShKass\] The restriction of the Hamming scheme to a doubly shortened Kerdock code $K^{\prime\prime}$ is an association scheme.

In the proof of the current theorem we use the following convention. By $\delta^k_{i,j}(x)$ we denote the number of codewords of weight $j$ in $K^{\prime\prime}$ at distance $k$ from the weight $i$ codeword $x$ in $K^{\prime\prime}$. Obviously, the restriction of the Hamming scheme to $K^{\prime\prime}$ is an association scheme if the numbers $\delta^k_{i,j}(x)$ for all $i, j, k \in I(K^{\prime\prime})$ are shown to be independent of the choice of a codeword $x$ of weight $i$, regardless of the translation of $K^{\prime\prime}$ by a codeword. The proof below relies only on properties (K1)-(K7) of a Kerdock code $K$, which are independent of the translation of the code.

\[lemma2\] The number $\delta^k_{i,j}(x)$ does not depend on the choice of a codeword $x$ in $K^{\prime\prime}_i$ if $i$ or $j$ equals $n/2$.

The property (K3) implies that the distances between codewords from $K^{\prime\prime}_{n/2}$ and $K^{\prime\prime}_{d}$ or $K^{\prime\prime}_{n-d}$ cannot be $n/2$. Moreover, (K7) implies that the sets of fixed weight codewords of a doubly shortened Kerdock code are 1-designs, so by Lemma \[magic\] the intersection numbers $\delta_{i, j}^d(x)$ and $\delta_{i, j}^{n-d}(x)$ are uniquely determined and do not depend on the choice of $x$ if $i$ and $j$ are not equal to $n/2$ simultaneously.
Finally, $RM(1,m)^{\prime}$ is a linear Hadamard code, so the nonzero codewords of its shortening, which constitute the set $K^{\prime\prime}_{n/2}$, are also pairwise at distance $n/2$; hence $\delta_{n/2,n/2}^k(x)$ is $n/2-2$ if $k=n/2$ and is zero otherwise.

\[lemma3\] Let $n/2\in \{i,j,k\}$. Then the number $\delta^k_{i,j}(x)$ does not depend on the choice of a codeword $x$ of weight $i$.

We show that $\delta_{i, j}^{n/2}(x)=\delta_{i, n/2}^{j}(x)$. Consider the set $\{z\in K^{\prime\prime}_j: d(z,x)=n/2\}$. By definition it is of size $\delta^{n/2}_{i,j}(x)$. Consider the translation of this set by the codeword $x\in K^{\prime\prime}_{i}$. Since $w(x+z)=d(x,z)=n/2$, the property (K6) implies that $x+z$ is a codeword of the doubly shortened Kerdock code $K^{\prime\prime}$. The substitution $z^{\prime}=z+x$ gives the equality $$\{z+x:z\in K^{\prime\prime}_j, d(z,x)=n/2\} =\{z^{\prime}\in K_{n/2}^{\prime\prime}, d(z^{\prime},x)=j\}.$$ The cardinality of the right-hand side is $\delta^{j}_{i,n/2}(x)$, so $\delta^{n/2}_{i,j}(x)=\delta^{j}_{i,n/2}(x)$, and this number is independent of $x$ by Lemma \[lemma2\].

\[lemma4\] The number $\delta^k_{i,j}(x)$ does not depend on the choice of a codeword $x$ of weight $i$ for any $i,j,k \in I(K^{\prime\prime})$.

Since $I(K^{\prime\prime})=\{0,d,n/2,n-d\}$, the nonzero distances between codewords from $K^{\prime\prime}_i$ and $K^{\prime\prime}_j$ take at most three nontrivial values. The property (K7) implies that $K_j^{\prime\prime}$ is a 1-design, and by Lemma \[lemma3\] the number $\delta_{i,j}^{n/2}(x)$ of codewords of $K_j^{\prime\prime}$ at distance $n/2$ from $x$ is independent of the choice of $x$ in $K_i^{\prime\prime}$, so the numbers $\delta^d_{i,j}(x)$ and $\delta^{n-d}_{i,j}(x)$ are independent of $x$ by Lemma \[magic\].
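The double-counting identity of Lemma \[magic\], used repeatedly above, can be illustrated on a toy design (our example, not from the paper): take $D$ to be all weight-$2$ vectors of length $4$, which form a $1-(4,2,3)$-design, and $x=(1,1,0,0)$.

```python
from itertools import combinations

# Toy check (ours) of Lemma [magic]: D = all weight-2 vectors of length 4,
# a 1-(4,2,3) design (every coordinate equals 1 in exactly 3 vectors of D).
n, j, lam1 = 4, 2, 3
D = [tuple(1 if t in c else 0 for t in range(n))
     for c in combinations(range(n), j)]

x = (1, 1, 0, 0)
i = sum(x)                                   # weight of x

# Distance distribution delta^k from x to the vectors of D.
delta = {}
for y in D:
    k = sum(a != b for a, b in zip(x, y))
    delta[k] = delta.get(k, 0) + 1

# The identity: sum_k delta^k * (i + j - k)/2 = i * lambda_1.
lhs = sum(cnt * (i + j - k) // 2 for k, cnt in delta.items())
assert lhs == i * lam1
assert sum(delta.values()) == lam1 * n // j  # |D| = lambda_1 * n / j
```

Here the distance distribution is $\delta^0=1$, $\delta^2=4$, $\delta^4=1$, and both sides of the identity equal $6$.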
The considerations in the beginning of the proof of the theorem and Lemma \[lemma4\] imply that the restriction of the Hamming scheme to $K^{\prime\prime}$ is an association scheme.

In order to find the components of the punctured Kerdock code, we need one more lemma.

\[lemma5\] Let $C$ be a code of length $n'$ such that the restriction of the Hamming scheme to its codewords is an association scheme. Let $I(C)$ be such that $I(C) \cap \{n'-i: i \in I(C)\}=\varnothing$. Then the restriction of the Hamming scheme to the code ${\overline C}=C\bigcup({\bf 1}^{n'}+C)$ is an association scheme.

If $i$ is in $I(C)$, denote by $i'$ the number $n'-i$. If three distances from $I({\overline C})$ are given and the number of those belonging to $I(C)$ is even, then the corresponding intersection number of ${\overline C}$ is zero: $$\delta_{i',j}^k({\overline C})=\delta_{i,j'}^k({\overline C})=\delta_{i,j}^{k'}({\overline C})=\delta_{i',j'}^{k'}({\overline C})=0.$$ Otherwise, the intersection number of ${\overline C}$ coincides with that of $C$: $$\label{eqal}\delta_{i',j'}^k({\overline C})=\delta_{i',j}^{k'}({\overline C})=\delta_{i,j'}^{k'}({\overline C})=\delta_{i,j}^k({\overline C})=\delta_{i,j}^k( C).$$

\[Comp\] Let $K^*$ be a punctured Kerdock code, $i \in \{1,\ldots, n-1\}$. The code $K^*$ consists of two $i$-components, and two codewords are in the same component if and only if their puncturings in the $i$th position have weights of the same parity.

Consider any two coordinates $i, j$ of a Kerdock code of length $n$. Proving that there are just two $i$-components in $K^{*}_j$ is equivalent to showing that the minimum distance graph of the doubly punctured Kerdock code $K^{**}_{ij}$ has two connected components (which are exactly the even and the odd weight codewords). Recall [@A] that the minimum distance graph of a code is the graph whose vertex set is the set of codewords and whose edge set consists of the pairs of codewords at code distance. The minimum distance of the code $K^{**}$ is even and equal to $d-2$.
The even weight codewords of $K^{**}_{ij}$ are obtained from the codewords of $K$ having 0 or 1 simultaneously in the $i$th and $j$th positions by puncturing in these positions, and the odd weight codewords of $K^{**}_{ij}$ are obtained from the codewords of $K$ having different values in the $i$th and $j$th positions by puncturing in these positions. Moreover, the odd weight subcode of $K^{**}_{ij}$ is obtained as a translation of the even weight subcode of $K^{**}_{ij}$. Indeed, let $x$ be in $RM(1,m)$, having 0 in the $i$th position and 1 in the $j$th position (there is such a vector in the code $RM(1,m)$, since the codewords of $RM(1,m)$ of weight $n/2$ form a 3-design). Since $x$ is in $Ker(K)$, adding the codeword $x^{**}$, obtained from $x$ by puncturing in the $i$th and $j$th positions, to the even weight codewords of $K^{**}_{ij}$ gives the odd weight subcode of $K^{**}_{ij}$.

In view of the above, it is enough to show the connectedness of the minimum distance graph of the even weight subcode of $K^{**}$, whose codewords have weights from $\{0,d-2,d,n/2-2,n/2,n-d-2,n-d,n-2\}$. The proof significantly relies on the fact that the restriction of the Hamming scheme to ${\overline {K^{\prime\prime}}}$ is an association scheme, which follows from Theorem \[ShKass\] and Lemma \[lemma5\]. We show that certain intersection numbers of the restriction of the Hamming scheme to ${\overline {K^{\prime\prime}}}$ are nonzero.

The following equalities hold: $$\label{eqL1} \delta_{d-2,n/2}^{d-2}({\overline {K^{\prime\prime}}})=\frac{n^2-6n-2nd+8d}{4(n-2d)}.$$ $$\label{eqL2} \delta_{d-2,n/2}^{n-d-2}({\overline {K^{\prime\prime}}})=\frac{n^2-2nd+2n}{4(n-2d)}.$$

By equality (\[eqal\]), we know that $\delta_{n-d,n/2}^{n-k-2}(K^{\prime\prime})=\delta_{d-2,n/2}^{k}({\overline {K^{\prime\prime}}})$ for $k=d-2,n-d-2$. It is easy to see that the nonzero codewords of the code $RM(1,m)^{\prime\prime}$ form a $1-(n-2,n/2,n/4)$-design, since there are exactly $2n-2$ nonzero codewords of $RM(1,m)$ of weight $n/2$, which form a 3-design.
From (K3) we have that $\delta_{n-d,n/2}^{n-d}(K^{\prime\prime})+\delta_{n-d,n/2}^{d}(K^{\prime\prime})$ is the number of nonzero codewords of $RM(1,m)^{\prime\prime}$, so it is $n/2-1$. Therefore, we obtain the following equality from Lemma \[magic\]: $$\delta_{n-d,n/2}^{n-d}(K^{\prime\prime})\frac{n}{4}+(n/2-1-\delta_{n-d,n/2}^{n-d}(K^{\prime\prime}))(\frac{3n}{4}-d)=\frac{n}{4}(n-d),$$ from which we find that $\delta_{n-d,n/2}^{n-d}(K^{\prime\prime})=\frac{n^2-6n-2nd+8d}{4(n-2d)}$ and $\delta_{n-d,n/2}^{d}(K^{\prime\prime})=\frac{n^2-2nd+2n}{4(n-2d)}.$

From the values given by (\[eqL1\]) and (\[eqL2\]) we see that $\delta_{d-2,n/2}^{d-2}({\overline {K^{\prime\prime}}})$ and $\delta_{d-2,n/2}^{n-d-2}({\overline {K^{\prime\prime}}})$ are nonzero, which is equivalent to $$\label{eq2} \delta_{d-2,n/2}^{d-2}({\overline {K^{\prime\prime}}}) \neq 0, \,\, \delta_{n/2,n-d-2}^{d-2}({\overline {K^{\prime\prime}}})\neq 0.$$

Consider the codewords of ${\overline {K^{\prime\prime}}}_{d-2}$. Obviously, these codewords cannot be pairwise at distance $n/2$, which follows, for example, from the Plotkin bound. Therefore there are codewords of weight $d-2$ at distance $d$ apart and $\delta_{d-2,d-2}^d({\overline {K^{\prime\prime}}})\neq 0$, which is equivalent to $$\label{eq1} \delta_{d-2,d}^{d-2}({\overline {K^{\prime\prime}}})\neq 0.$$

From (\[eq2\]) we see that any codeword of ${\overline {K^{\prime\prime}}}_{n/2}$ is at distance $d-2$ from at least one codeword of ${\overline {K^{\prime\prime}}}_{d-2}$, and any codeword of ${\overline {K^{\prime\prime}}}_{n-d-2}$ is at distance $d-2$ from at least one codeword of ${\overline {K^{\prime\prime}}}_{n/2}$. Therefore, ${\overline {K^{\prime\prime}}}_{d-2}$, ${\overline {K^{\prime\prime}}}_{n/2}$, ${\overline {K^{\prime\prime}}}_{n-d-2}$ are in one connected component of the minimum distance graph of ${\overline {K^{\prime\prime}}}$.
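For the smallest case $m=4$ (so $n=16$ and $d=6$), the linear equation above and the resulting closed forms can be checked by direct arithmetic; the script below is our illustration, not part of the proof.

```python
from fractions import Fraction

# Check (ours) of the intersection-number computation for m = 4: n = 16, d = 6.
n, d = 16, 6

# Solve  delta * (n/4) + (n/2 - 1 - delta) * (3n/4 - d) = (n/4) * (n - d)
# for delta = delta_{n-d,n/2}^{n-d}(K'').
a = Fraction(n, 4)
b = Fraction(3 * n, 4) - d
delta_nd = (Fraction(n, 4) * (n - d) - (Fraction(n, 2) - 1) * b) / (a - b)
delta_d = Fraction(n, 2) - 1 - delta_nd      # the two numbers sum to n/2 - 1

# Compare with the closed forms stated in the text.
assert delta_nd == Fraction(n**2 - 6 * n - 2 * n * d + 8 * d, 4 * (n - 2 * d))
assert delta_d == Fraction(n**2 - 2 * n * d + 2 * n, 4 * (n - 2 * d))
print(delta_nd, delta_d)  # prints: 1 6  (both positive, hence nonzero)
```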
Taking into account the equality (\[eqal\]), this fact is equivalent to the fact that the codewords of ${\overline {K^{\prime\prime}}}_{n-d}$, ${\overline {K^{\prime\prime}}}_{n/2-2}$ and ${\overline {K^{\prime\prime}}}_{d}$ belong to one component. Finally, the inequality (\[eq1\]) implies that ${\overline {K^{\prime\prime}}}_{d-2}$ and ${\overline {K^{\prime\prime}}}_{d}$ are in one component, so the codewords of all weights from $\{0,d-2,d,n/2-2,n/2,n-d-2,n-d,n-2\}$ are in one connected component, which is therefore the whole minimum distance graph of ${\overline {K^{\prime\prime}}}$.

[**Remark 1**]{}. Theorems \[ShKass\] and \[Comp\] hold for some other Kerdock-related codes. In particular, by considerations similar to those in the proof of Theorem \[ShKass\], one can show that a Kerdock code and a shortened Kerdock code produce association schemes, which gives an alternative (combinatorial) proof of the well-known facts from [@Del] and [@SolTok]. Analogously to the proof of Theorem \[Comp\], one can prove that the $i$-components of a Kerdock code coincide with the Kerdock code itself, or, equivalently, that the minimum distance graph of a punctured Kerdock code is connected.

[**Remark 2**]{}. According to Theorem \[Comp\], new Kerdock codes cannot be constructed by means of traditional switchings. For convenience we set $i=n-1$. By the proof of Theorem \[Comp\] we know that two codewords are in one $(n-1)$-component of the punctured Kerdock code $K^{*}_n$ if and only if their puncturings in the $(n-1)$th coordinate position have weights of the same parity. Therefore, the codewords of the Kerdock code $K$ can be represented as $K^{00}$, $K^{11}$, $K^{01}$, $K^{10}$, where $K^{ab}=\{x\in K: x_{n-1}=a, x_n=b\}$, with $K^{00}\cup K^{11}$ corresponding to one $(n-1)$-component of $K^{*}_n$ and $K^{01}\cup K^{10}$ to the other one. Moreover, the “odd weight” component is the translation of the “even weight” one, i.e.
there is a codeword $(x'01)$ of $RM(1,m)$ such that $(K^{01}\cup K^{10})+(x'01)=K^{00}\cup K^{11}$. Now the switching from $K=K^{00}\cup K^{11}\cup ((x'01)+(K^{00}\cup K^{11}))$ to $K'=K^{00}\cup K^{11}\cup ((x'10)+(K^{00}\cup K^{11}))$ gives an equivalent code, which is obtained from $K$ by permuting the $(n-1)$th and $n$th coordinate positions.

Components of codes dual to BCH codes
=====================================

In this section we fix $n=2^m$, $m$ odd. We investigate the $i$-components of the code $C_{1,3}^{\perp}$ of length $n-1=2^m-1$, which is the dual of the primitive cyclic BCH code $C_{1,3}$ with designed distance 5 and zeros $\alpha$ and $\alpha ^3$; here $\alpha $ is a primitive element of the Galois field $GF(2^m)$. The code shares many properties with a Kerdock code. We prove that $C_{1,3}^{\perp}$ is an $i$-component for any coordinate position $i$. Further we use the following properties of the code $C^{\perp}_{1,3}$.

(B1) [@MWSl] The minimum distance of the code $C^{\perp}_{1,3}$ is $d=\frac{n-\sqrt{2n}}{2}$. The code $C^{\perp}_{1,3}$ has the following weight distribution:

  i               The number of codewords of weight $i$
  --------------- -----------------------------------------
  $0$             $1$
  $d$             $(n-1)(\frac{n}{4}+\sqrt{\frac{n}{8}})$
  $\frac{n}{2}$   $(n-1)(\frac{n}{2}+1)$
  $n-d$           $(n-1)(\frac{n}{4}-\sqrt{\frac{n}{8}})$

The fact below follows from Theorem \[TMW\] and (B1).

(B2) The fixed weight codewords of $C^{\perp}_{1,3}$ form a 2-design.

The code $C_{1,3}$ is uniformly packed [@BZZ]. In [@Del], Theorem 6.10, it was shown that any code that is dual to a linear uniformly packed code gives an association scheme.

(B3)[@Del] The restriction of the Hamming scheme to $C^{\perp}_{1,3}$ is an association scheme.

\[BCH1\] Let $C$ be the code obtained from $C^{\perp}_{1,3}$ by puncturing in an arbitrary coordinate position. Then any codeword of $C$ of weight $d$ is at distance $d-1$ from at least one codeword of weight $d-1$.
Let $C_{d-1}$ be the set of codewords of weight $d-1$ of the punctured code of $C^{\perp}_{1,3}$. Suppose that $x$ is a codeword of weight $d$ such that $d(x,C_{d-1})>d-1$. Then $d(x,C_{d-1}) \in \{\frac{n}{2}-1, n-d-1\}$. Since the vectors of $C_{d-1}$ form a 1-design, which follows from property (B2), we can use Lemma \[magic\] to count the number $\delta^{\frac{n}{2}-1}$ of the codewords of $C_{d-1}$ at distance $\frac{n}{2}-1$ from $x$: $$\delta^{\frac{n}{2}-1} (d-\frac{n}{4}) + (|C_{d-1}| - \delta^{\frac{n}{2}-1})\frac{3d-n}{2}=\lambda_1\cdot d,$$ where $|C_{d-1}|=\lambda_1\frac{n-2}{d-1}$. It is easy to see that $$\frac{\delta^{\frac{n}{2}-1}}{ |C_{d-1}|} =\frac{2(n^2-2n+2d^2+4d-3nd)}{(n-2)(n-2d)}>1,$$ a contradiction.

\[BCH2\] The minimum weight codewords of $C^{\perp}_{1,3}$ span the code.

The code $C^{\perp}_{1,3}$ is the direct sum of the Hadamard codes $C^{\perp}_{1}$ and $C^{\perp}_{3}$, both of which consist of $n-1$ nonzero codewords of weight $n/2$. The number of codewords of weight $d$ in $C^{\perp}_{1,3}$ is greater than $n$ (see (B1)). Therefore one can find three codewords in the codes $C^{\perp}_{1}$ and $C^{\perp}_{3}$ with pairwise distances $d$ or $n/2$, e.g. $x,x^{\prime}\in C^{\perp}_{1}$ and $y\in C^{\perp}_{3}$ such that $d(x,x^{\prime})= n/2$ and $d(x,y)=d(x^{\prime},y)=d$. Hence, by property (B3), the intersection number $\delta_{d,n/2}^d(C^{\perp}_{1,3})$ is nonzero, i.e. any codeword of weight $n/2$ is at distance $d$ from at least one codeword of weight $d$ in $C^{\perp}_{1,3}$. The number of codewords of weight $n-d$ is less than the number of codewords of weight $d$, therefore any codeword of weight $n-d$ is at distance $d$ from at least one codeword of weight $n/2$ or $d$. So the codewords of weight $d$ generate the code $C^{\perp}_{1,3}$.
By Lemma \[BCH1\], any codeword of $C^{\perp}_{1,3}$ of weight $d$ with $0$ in the $i$th coordinate position is at distance $d$ from a codeword of weight $d$ with $1$ in the $i$th coordinate position. By Lemma \[BCH2\], this implies that the set of all codewords of weight $d$ having $1$ in the $i$th coordinate position generates the code $C^{\perp}_{1,3}$, i.e. the code $C^{\perp}_{1,3}$ is an $i$-component for any $i\in \{1,2,\ldots,n-1\}.$

Note that the properties (B1)-(B3) and the proof of Theorem \[theodualBCH\] are the same for any code that is dual to a linear uniformly packed code with the same parameters as the BCH code. In particular, the cyclic codes $C_{1,2^j +1}$, $(j,m)=1$, corresponding to the Gold functions, $n-1=2^m-1$, $m$ odd, as well as other linear codes obtained from almost bent functions (AB-functions), are uniformly packed [@CCZ], and therefore each of their duals is an $i$-component for any $i$.

The dual of a linear uniformly packed code with the parameters of the BCH code $C_{1,3}$ of length $n-1=2^m-1$, $m$ odd, is an $i$-component for any coordinate position $i$.

[**Conclusion.**]{} We considered the duals of two well-known classes of uniformly packed codes: Preparata codes and 2-error-correcting BCH codes. The dual codes have large minimum distance and few nonzero weights, and they are related to designs and association schemes. We proved that the $i$-components of these codes are maximal. It would be natural to study the structure of $i$-components of Preparata codes, which are formal duals of Kerdock codes. For $n=15$ these classes meet in the self-dual Nordstrom-Robinson code, which has two $i$-components for any coordinate position $i$. With the help of a computer, we showed that $C_{1,3}^{\perp}$ of length $2^m-1$ is an $i$-component for any $i$ also for even $m$, namely for $m=6, 8, 10$, and that the BCH code $C_{1,3}$ consists of two $i$-components for any coordinate position $i$ for every $m$ with $5\leq m\leq 8$.
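The computer verification mentioned above reduces to computing connected components of minimum distance graphs. The sketch below is our illustration of such a check; the codes used are toy examples rather than actual Kerdock or BCH codes.

```python
from itertools import combinations

# Number of connected components of the minimum distance graph of a code:
# vertices are codewords, edges are pairs of codewords at minimum distance.
def min_distance_graph_components(code):
    code = list(code)
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    dmin = min(dist(a, b) for a, b in combinations(code, 2))
    parent = list(range(len(code)))          # union-find over codewords

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]    # path halving
            v = parent[v]
        return v

    for u, v in combinations(range(len(code)), 2):
        if dist(code[u], code[v]) == dmin:
            parent[find(u)] = find(v)
    return len({find(v) for v in range(len(code))})

# The even weight code of length 3 has a connected minimum distance graph.
even3 = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
assert min_distance_graph_components(even3) == 1
# A toy code whose minimum distance graph splits into two components.
split = [(0, 0, 0, 0), (1, 0, 0, 0), (1, 1, 1, 1), (0, 1, 1, 1)]
assert min_distance_graph_components(split) == 2
```

For real instances one would plug in the codewords of $C_{1,3}$ (or a punctured Kerdock code) in place of these toy lists.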
Another challenging problem is finding the $i$-components of the BCH codes $C_{1,3}$ for any $m$ and of their duals for even $m$.

[8]{}

Avgustinovich S. V., To the Structure of Minimum Distance Graphs of Perfect Binary (n, 3)-Codes, [*Diskretn. Anal. Issled. Oper.*]{}, V. 5, N. 4, 1998, P. 3–5.

MacWilliams F. J., Sloane N. J. A., [*The Theory of Error-Correcting Codes*]{}, North-Holland Publishing Company, 1977, pp. 762.

Kerdock A. M., A class of low-rate non-linear binary codes, [*Inform. Control*]{}, V. 20, N. 2, 1972, P. 182–187.

Kantor W. M., An exponential number of generalized Kerdock codes, [*Inform. Control*]{}, V. 53, N. 1-2, 1982, P. 74–80.

Bassalygo L. A., Zaitsev G. A., Zinoviev V. A., Uniformly packed codes, [*Probl. Inf. Transm.*]{}, V. 10, N. 1, 1974, P. 6–9.

Carlet C., Charpin P., Zinoviev V. A., Codes, bent functions and permutations suitable for DES-like cryptosystems, [*Des. Codes Cryptogr.*]{}, V. 15, 1998, P. 125–156.

Delsarte P., An Algebraic Approach to the Association Schemes of Coding Theory, [*Philips Res. Rep. Suppl.*]{}, V. 10, 1973, P. 1–97.

Hammons A. R., Kumar P. V., Calderbank A. R., Sloane N. J. A., Sole P., The $\mathbb{Z}_4$-linearity of Kerdock, Preparata, Goethals, and related codes, [*IEEE Trans. Inform. Theory*]{}, V. 40, 1994, P. 301–319.

Solov’eva F. I., Survey on perfect codes, [*Mathematical Problems of Cybernetics*]{}, 2013, P. 5–34 (in Russian).

Solov’eva F. I., Tokareva N. N., Distance regularity of Kerdock codes, [*Siberian Mathematical Journal*]{}, V. 49, N. 3, 2008, P. 539–548.

Semakov N. V., Zinov’ev V. A., Zaitsev G. V., Uniformly Packed Codes, [*Probl. Peredachi Inform.*]{}, V. 7, N. 1, 1971, P. 38–50 (in Russian).

De Caen D., van Dam E. R., Association schemes related to Kasami codes and Kerdock sets, [*Des. Codes. Cryptogr.*]{}, V. 18, 1999, P. 89–102.

Abdukhalikov K. S., Bannai E., Suda S., Association schemes related to universally optimal configurations, Kerdock codes and extremal Euclidean line-sets, [*J. Comb. Theory*]{}, Ser. A, V.
116, N. 2, 2009, P. 434–448. [^1]: © 2018 I. Yu. Mogilnykh, F. I. Solov’eva [^2]: This work was funded by the Russian Science Foundation under grant 18-11-00136.
--- abstract: 'Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results. Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT’14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU.[^1]' author: - | Felix Wu[^2]\ Cornell University\ Angela Fan, Alexei Baevski, Yann N. Dauphin, Michael Auli\ Facebook AI Research\ bibliography: - 'master.bib' title: | Pay less attention\ with Lightweight and Dynamic Convolutions --- Introduction ============ Background {#sec:background} ========== Lightweight Convolutions {#sec:sdconv} ======================== Dynamic convolutions {#sec:tvsdconv} ==================== Experimental setup {#sec:setup} ================== Results {#sec:results} ======= Conclusion {#sec:conclusion} ========== Supplementary Material {#supplementary-material .unnumbered} ====================== [^1]: Code and pre-trained models available at <http://github.com/pytorch/fairseq> [^2]: Work done during an internship at Facebook.
--- abstract: 'We identify, discuss, and correct two mistakes in [@Pcoun]. The first one is located in [@Pcoun Remark 3.3] and slightly affects [@Pcoun Lemma 3.6]. The second mistake is in the proofs of [@Pcoun Proposition 5.1] and [@Pcoun Theorem 5.3] (all the assertions of the proposition and the theorem remain true, but the proofs need to be modified). We also clarify a confusion in [@Pcoun Remark 11.3], leading to an improvement of [@Pcoun Theorem 11.2]. No other results of [@Pcoun] are affected.' address: 'Institute of Mathematics, Czech Academy of Sciences, Žitná 25, 115 67 Prague 1, Czech Republic' author: - Leonid Positselski title: | Corrigenda to\ “Flat ring epimorphisms of countable type” ---

Directed Unions of Gabriel Topologies
=====================================

There are two assertions in the first paragraph of [@Pcoun Remark 3.3] concerning directed unions of topologies of right ideals in an associative ring $R$:

- (1) for any nonempty set $\Xi$ of right linear topologies ${\mathbb F}$ on $R$ such that for every ${\mathbb F}_1$, ${\mathbb F}_2\in\Xi$ there exists ${\mathbb F}\in\Xi$ with ${\mathbb F}_1\cup{\mathbb F}_2\subset{\mathbb F}$, the directed union $\bigcup_{{\mathbb F}\in\Xi}{\mathbb F}$ is a right linear topology on $R$;

- (2) in the same context, if ${\mathbb F}$ is a right Gabriel topology for every ${\mathbb F}\in\Xi$, then $\bigcup_{{\mathbb F}\in\Xi}{\mathbb F}$ is also a right Gabriel topology on $R$.

The first assertion (1), concerning right linear topologies, is correct. The second one, (2), concerning Gabriel topologies, is wrong. Nevertheless, the following version of ~~(2)~~ is correct:

- (2$^{\mathrm f}$) in the context of (1), if ${\mathbb F}$ is a right Gabriel topology with a base of finitely generated right ideals for every ${\mathbb F}\in\Xi$, then $\bigcup_{{\mathbb F}\in\Xi}{\mathbb F}$ is also a right Gabriel topology with a base of finitely generated right ideals.
Accordingly, the problem with ~~(2)~~ does not affect the second paragraph of [@Pcoun Remark 3.3], which remains valid as stated: - in the context of (1), if ${\mathbb F}$ is a perfect right Gabriel topology for every ${\mathbb F}\in\Xi$, then $\bigcup_{{\mathbb F}\in\Xi}{\mathbb F}$ is also a perfect right Gabriel topology on $R$. A discussion of the problem with ~~(2)~~ follows below. \[gabriel-counterex\] Let $R$ be a commutative ring and $I\subset R$ be an ideal with a set of generators $s_i\in R$. Denote by ${\mathbb G}_I$ the collection of all ideals $J\subset R$ satisfying the following condition: for every $s\in I$ there exists $m\ge1$ such that $s^m\in J$, or equivalently, for every index $i$ there exists $m\ge1$ such that $s_i^m\in J$. Then ${\mathbb G}_I$ is a Gabriel topology on $R$. In fact, the Gabriel topology ${\mathbb G}_I$ corresponds to the following torsion class ${\mathsf T}_I$ in $R{{\operatorname{\mathsf{--mod}}}}$: an $R$[-]{}module $M$ belongs to ${\mathsf T}_I$ if for every $b\in M$ and $s\in I$ there exists $m\ge1$ such that $s^mb=0$ in $M$. Such $R$[-]{}modules are called “$I$[-]{}torsion” in [@Pcta Sections 6–7]. Now let $R=k[x_1,x_2,\dotsc,y_1,y_2,\dots]$ denote the ring of polynomials in a countably infinite number of variables (separated into two countably infinite sorts) over a field $k$. Let $J_0=(x_1,x_2,\dotsc)\subset R$ denote the ideal generated by the variables $x_i$ in $R$. For every $n\ge1$, let $J_n\subset R$ denote the ideal generated by the sequence of elements $y_1y_2\dotsm y_nx_i$, $i\ge1$. Clearly, one has $J_0\supset J_1\supset J_2\supset\dotsb$, hence ${\mathsf T}_{J_0}\subset{\mathsf T}_{J_1}\subset{\mathsf T}_{J_2}\subset\dotsb\subset R{{\operatorname{\mathsf{--mod}}}}$ and ${\mathbb G}_{J_0}\subset{\mathbb G}_{J_1}\subset{\mathbb G}_{J_2}\subset\dotsb$. 
Let $\Xi$ denote the directed set of Gabriel topologies ${\mathbb G}_{J_n}$, $n\ge0$, on the ring $R$, and let ${\mathbb H}=\bigcup_{n=0}^\infty{\mathbb G}_{J_n}$ be the union of $\Xi$. We observe that ${\mathbb H}$ is *not* a Gabriel topology on $R$. Indeed, let $I\subset R$ be the ideal generated by the sequence of elements $x_iy_i$, $i\ge1$. Then $I\notin{\mathbb H}$, since for every $n\ge0$ there exists $k=n+1$ such that for every $m\ge1$ the element $(y_1y_2\dotsm y_n x_k)^m$ does not belong to $I$. Still, we have $J_0\in{\mathbb G}_{J_0}\subset{\mathbb H}$, and for every element $s\in J_0$ the colon ideal $(I:s)$ belongs to ${\mathbb H}$. To check the latter assertion, pick an integer $n\ge1$ such that $s$ belongs to the ideal $(x_1,\dotsc,x_n)\subset R$. Then $y_1y_2\dotsm y_n\in(I:s)$ and $(y_1y_2\dotsm y_n)\supset J_n \in{\mathbb G}_{J_n}\subset{\mathbb H}$. So the filter of ideals ${\mathbb H}$ in $R$ does not satisfy (T4). The following lemma is to be compared with [@Pcoun Lemma 3.1]. \[T4-generators\] Let ${\mathbb F}$ be a right linear topology on an associative ring $R$, let $I$ and $J\subset R$ be two right ideals, and let $s_j\in R$ be a set of generators of the ideal $J$. Then one has $(I:s)\in{\mathbb F}$ for all $s\in J$ if and only if $(I:s_j)\in{\mathbb F}$ for every generator $s_j$. Suppose $s=s_1r_1+\dotsb+s_mr_m$, where $r_i\in R$ and $s_1$, …, $s_m$ are some of the chosen generators of $J$. Set $K_i=(I:s_i)$ and $H=(K_1:r_1)\cap\dotsb\cap(K_m:r_m)\subset R$. Then $sH\subset s_1K_1+\dotsb+s_mK_m\subset I$. Assume that $K_i\in{\mathbb F}$ for every $i=1$, …, $m$; then $(K_i:r_i)\in{\mathbb F}$ by (T3), hence $H\in{\mathbb F}$ by (T2). Since $H\subset(I:s)$, it follows that $(I:s)\in{\mathbb F}$ by (T1). Let $\lambda$ be an infinite cardinal. A poset $\Xi$ is said to be *$\lambda$[-]{}directed* if for every subset $\Upsilon\subset\Xi$ of cardinality less than $\lambda$ there exists an element $\xi\in\Xi$ such that $\xi\ge\upsilon$ for all $\upsilon\in\Upsilon$.
\[lambda-directed-lambda-generated\] Let $R$ be an associative ring, $\lambda$ be an infinite cardinal, and $\Xi$ be a $\lambda$[-]{}directed (by inclusion) set of right Gabriel topologies on $R$. Assume that every ${\mathbb G}\in\Xi$ has a base consisting of right ideals with less than $\lambda$ generators. Then ${\mathbb H}=\bigcup_{{\mathbb G}\in\Xi}{\mathbb G}$ is a right Gabriel topology on $R$ (with a base consisting of right ideals with less than $\lambda$ generators). To check that ${\mathbb H}$ satisfies (T4), consider a right ideal $I\subset R$ and a right ideal $J\in{\mathbb H}$. Then there exist ${\mathbb G}_0\in\Xi$ such that $J\in{\mathbb G}_0$, and $J'\subset J$ such that $J'$ has less than $\lambda$ generators $s_j$ and $J'\in{\mathbb G}_0$. Assume that $(I:s)\in{\mathbb H}$ for every $s\in J$. Then there exist ${\mathbb G}_j\in\Xi$ such that $(I:s_j)\in{\mathbb G}_j$ for every $j$. Since $\Xi$ is $\lambda$[-]{}directed, there is ${\mathbb G}\in\Xi$ such that ${\mathbb G}_0\subset{\mathbb G}$ and ${\mathbb G}_j\subset{\mathbb G}$ for every $j$. Hence $J'\in{\mathbb G}$ and $(I:s_j)\in{\mathbb G}$ for every $j$. By Lemma \[T4-generators\], it follows that $(I:s)\in{\mathbb G}$ for every $s\in J'$. Since ${\mathbb G}$ is a Gabriel topology, we can conclude that $I\in{\mathbb G}\subset{\mathbb H}$. Specializing to the case of the countable cardinal $\lambda=\omega$, we see from Corollary \[lambda-directed-lambda-generated\] that the assertion (2$^{\mathrm f}$) is correct. The problem with ~~(2)~~ (demonstrated in Counterexample \[gabriel-counterex\]) slightly affects [@Pcoun Lemma 3.6], which should be restated as follows. Let $R$ be an associative ring, $\Xi$ be a directed set of right Gabriel topologies on $R$, and ${\mathbb G}=\bigcup_{{\mathbb H}\in\Xi}{\mathbb H}$ be their union. 
Assume that ${\mathbb G}$ is a tight Gabriel topology on $R$ (e.g., this always holds when every Gabriel topology ${\mathbb H}\in\Xi$ has a base of finitely generated right ideals, or more generally when $\Xi$ is $\lambda$[-]{}directed and every ${\mathbb H}\in\Xi$ has a base consisting of right ideals with less than $\lambda$ generators). Let $N$ be a right $R$[-]{}module such that $t_{\mathbb H}(N)=t_{\mathbb G}(N)$ for all ${\mathbb H}\in\Xi$. Then there is a natural isomorphism of right $R$[-]{}modules $N_{\mathbb G}\simeq\varinjlim_{{\mathbb H}\in\Xi}N_{\mathbb H}$. Accordingly, the assumption that ${\mathbb G}=\bigcup_{{\mathbb H}\in\Xi}{\mathbb H}$ is a Gabriel topology is also needed in [@Pcoun Remark 3.7]. In the proof of [@Pcoun Proposition 3.9], we sometimes refrained from mentioning that the right Gabriel topologies we were dealing with had to have a base of finitely generated right ideals. In view of the problem with ~~(2)~~, this needs to be mentioned throughout the proof of [@Pcoun Proposition 3.9]. With this correction in mind, the proof is valid, and the assertion of [@Pcoun Proposition 3.9] remains unaffected. All the other results of [@Pcoun Section 3] are likewise unaffected. Additive Kan Extensions ======================= Let $R$ be an associative ring and ${\mathbb F}$ be a right linear topology on $R$. Let ${\mathsf Q}_{\mathbb F}$ be the full subcategory of cyclic discrete right modules $R/I$, $I\in{\mathbb F}$, in the category of discrete right $R$[-]{}modules ${{\operatorname{\mathsf{discr--}}}}R$. Then 1. any left exact additive functor $M{\colon}{\mathsf Q}_{\mathbb F}^{{\mathsf{op}}}{\longrightarrow}{\mathsf{Ab}}$ (in the sense of [@Pcoun Section 5]) can be extended to a functor $G_M{\colon}({{\operatorname{\mathsf{discr--}}}}R)^{{\mathsf{op}}}{\longrightarrow}{\mathsf{Ab}}$ taking colimits in ${{\operatorname{\mathsf{discr--}}}}R$ to limits in ${\mathsf{Ab}}$; 2. 
any right exact additive functor $C{\colon}{\mathsf Q}_{\mathbb F}{\longrightarrow}{\mathsf{Ab}}$ (in the sense of [@Pcoun Section 5]) can be extended to a functor $F_C{\colon}{{\operatorname{\mathsf{discr--}}}}R{\longrightarrow}{\mathsf{Ab}}$ preserving colimits. The assertion (1) is used in the proof (or rather, in one of the proofs) of [@Pcoun Proposition 5.1]. The assertion (2) is used in one of the proofs of [@Pcoun Theorem 5.3]. Both the assertions (1) and (2) are correct. However, the constructions of the functors $G_M$ and $F_C$ given in [@Pcoun proofs of Proposition 5.1 and Theorem 5.3] are wrong. They need to be modified as explained below. This problem is closely related to [@AR Example 1.24(4)]. For any category ${\mathsf A}$, a small full subcategory ${\mathsf Q}\subset{\mathsf A}$, a cocomplete category ${\mathsf B}$, and a functor $F{\colon}{\mathsf Q}{\longrightarrow}{\mathsf B}$, the functor $F$ can be extended to a functor $\widetilde F{\colon}{\mathsf A}{\longrightarrow}{\mathsf B}$ using the construction of the Kan extension [@McL Section X]. By the definition, for any object $N\in{\mathsf A}$, we put $$\widetilde F(N)=\varinjlim\nolimits_{Q\to N}F(Q)$$ where the colimit is taken over the diagram whose vertices are all the morphisms $Q{\longrightarrow}N$ in ${\mathsf A}$ with $Q\in{\mathsf Q}$ and arrows are the commutative triangles $Q'{\longrightarrow}Q''{\longrightarrow}N$ in ${\mathsf A}$ with $Q'$, $Q''\in{\mathsf Q}$. Now suppose that ${\mathsf A}$ and ${\mathsf B}$ are additive categories (so ${\mathsf Q}$ is a preadditive category), and $F$ is an additive functor. Then the functor $\widetilde F$ does not need to be additive, as the following example demonstrates. Let $R=k$ be a field endowed with the discrete topology ${\mathbb F}=\{(0),(1)\}$. 
Let ${\mathsf A}={{\operatorname{\mathsf{discr--}}}}R$ be the category of $k$[-]{}vector spaces, ${\mathsf B}={\mathsf{Ab}}$ be the category of abelian groups, and ${\mathsf Q}={\mathsf Q}_{\mathbb F}\subset{\mathsf A}$ be the full subcategory of vector spaces of dimension $\le1$ (cf. [@AR Example 1.24(4)]). Let $F{\colon}{\mathsf Q}{\longrightarrow}{\mathsf B}$ be the forgetful functor assigning to a vector space its underlying abelian group. Then the functor $\widetilde F{\colon}{\mathsf A}{\longrightarrow}{\mathsf B}$ assigns to a vector space $V$ the underlying abelian group of the $k$[-]{}vector space with a basis indexed by the set of all one-dimensional vector subspaces in $V$. So the functor $\widetilde F$ is *not* additive (and *not* isomorphic to the forgetful functor ${\mathsf A}{\longrightarrow}{\mathsf B}$). The above example shows that the construction of the functor $F_C$ in [@Pcoun proof of Theorem 5.3] is wrong (in that it does not produce a coproduct-preserving or right exact functor, contrary to what is claimed). The construction of the functor $G_M$ in [@Pcoun proof of Proposition 5.1] is wrong for the same reason. The correct constructions are explained below. Let ${\mathsf A}$ and ${\mathsf B}$ be additive categories and ${\mathsf Q}\subset{\mathsf A}$ be a full subcategory. Assume for simplicity that $0\in{\mathsf Q}$, and denote by ${\mathsf Q}^+\subset{\mathsf A}$ the full subcategory consisting of all the objects $Q_1\oplus Q_2$, where $Q_1$, $Q_2\in{\mathsf Q}$. Let $F{\colon}{\mathsf Q}{\longrightarrow}{\mathsf B}$ be an additive functor. Then there exists a unique additive functor $F^+{\colon}{\mathsf Q}^+{\longrightarrow}{\mathsf B}$ such that $F^+|_{\mathsf Q}=F$, defined by the rule $F^+(Q_1\oplus Q_2)= F(Q_1)\oplus F(Q_2)$. 
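The non-additivity in the example above can be checked by direct counting. A minimal sketch, assuming we specialize to the finite field $k={\mathbb F}_2$ (purely so that the set of one-dimensional subspaces is finite; the $k$ of the example is an arbitrary field): the rank of $\widetilde F(V)$ equals the number of lines in $V$, and this count fails to be additive.

```python
from itertools import product

def lines_F2(n):
    """The 1-dimensional subspaces of F_2^n: over F_2 each line is
    {0, v} for a unique nonzero vector v, so it suffices to collect
    the nonzero vectors of the space."""
    return [v for v in product((0, 1), repeat=n) if any(v)]

# F~(V) is the free abelian group on the set of lines in V, so its rank
# is the number of lines.  Additivity would force
#   rank F~(k^2) = 2 * rank F~(k),
# but the line counts are 1, 3, 7, ... = 2^n - 1:
print(len(lines_F2(1)), len(lines_F2(2)))  # 1 3
```

So $\widetilde F(k\oplus k)$ has rank $3$ rather than $2$, confirming that the plain Kan extension of an additive functor need not be additive.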
\[additive-Kan\] Let ${\mathsf A}$ be an additive category, $0\in{\mathsf Q}\subset{\mathsf A}$ be a small full subcategory, ${\mathsf B}$ be a cocomplete additive category, and $F{\colon}{\mathsf Q}{\longrightarrow}{\mathsf B}$ be an additive functor. Let $\widetilde{F^+}:{\mathsf A}{\longrightarrow}{\mathsf B}$ be the Kan extension of the functor $F^+{\colon}{\mathsf Q}^+{\longrightarrow}{\mathsf B}$. Then the functor $\widetilde{F^+}$ is additive. Let $M$ and $N\in{\mathsf A}$ be two objects, and let $f$, $g{\colon}M \rightrightarrows N$ be a pair of parallel morphisms. We have to check that $\widetilde{F^+}(f+g)=\widetilde{F^+}(f) +\widetilde{F^+}(g)$. The object $\widetilde{F^+}(M)\in{\mathsf B}$ is the colimit of the objects $F(Q_1)\oplus F(Q_2)$ taken over the diagram whose vertices are all the morphisms $Q_1\oplus Q_2{\longrightarrow}M$ with $Q_1$, $Q_2\in{\mathsf Q}$ and whose arrows are all the commutative triangles $Q_1'\oplus Q_2' {\longrightarrow}Q_1''\oplus Q_2''{\longrightarrow}M$, where $Q_1'\oplus Q_2' {\longrightarrow}Q_1''\oplus Q_2''$ is a $(2\times2)$[-]{}matrix of morphisms between objects of ${\mathsf Q}$. Therefore, it suffices to check that for every morphism $h{\colon}Q_1\oplus Q_2{\longrightarrow}M$ with $Q_1$, $Q_2\in{\mathsf Q}$ one has $\widetilde{F^+}((f+g)\circ h)=\widetilde{F^+}(f\circ h)+ \widetilde{F^+}(g\circ h)$. In turn, it suffices to check the latter condition in the case of a pair of objects $(Q_1,Q_2)=(Q,0)$, i. e., for a morphism $h{\colon}Q{\longrightarrow}M$ with $Q\in{\mathsf Q}$. Now we consider the object $(Q,Q)\in{\mathsf Q}^+$ and the morphism $(f\circ h,\>g\circ h){\colon}(Q,Q){\longrightarrow}N$ in ${\mathsf A}$. There are three natural morphisms $(1,0)$, $(0,1)$, and $(1,1){\colon}Q{\longrightarrow}(Q,Q)$ in ${\mathsf Q}^+$. One has $f\circ h=(f\circ h,\>g\circ h)\circ(1,0)$, and similarly $g\circ h=(f\circ h,\>g\circ h)\circ(0,1)$ and $(f+g)\circ h=(f\circ h,\>g\circ h)\circ(1,1)$. 
By construction of the object $\widetilde{F^+}(N)$, it follows that $\widetilde{F^+}(f\circ h)=\widetilde{F^+}(f\circ h,\>g\circ h) \circ F^+(1,0)$, and similarly $\widetilde{F^+}(g\circ h)= \widetilde{F^+}(f\circ h,\>g\circ h)\circ F^+(0,1)$ and $\widetilde{F^+}((f+g)\circ h)=\widetilde{F^+}(f\circ h,\>g\circ h) \circ F^+(1,1)$. It remains to observe that $F^+(1,1)=F^+(1,0)+F^+(0,1)$, since the functor $F^+$ is additive. The following lemma does not depend on any additivity assumptions. \[Kan-preserves-colimits\] Let ${\mathsf A}$ be a category, ${\mathsf Q}\subset{\mathsf A}$ be a small full subcategory, ${\mathsf B}$ be a cocomplete category, $F{\colon}{\mathsf Q}{\longrightarrow}{\mathsf B}$ be a functor, ${\mathsf X}$ be a small category, and $D{\colon}{\mathsf X}{\longrightarrow}{\mathsf A}$ be a diagram which has a colimit in ${\mathsf A}$. Assume that for every object $Q\in{\mathsf Q}$ the functor $\operatorname{Hom}_{\mathsf A}(Q,{-}){\colon}{\mathsf A}{\longrightarrow}{\mathsf{Sets}}$ preserves the colimit of the diagram $D$. Then the functor $\widetilde F{\colon}{\mathsf A}{\longrightarrow}{\mathsf B}$ also preserves the colimit of the diagram $D$. Let $B\in{\mathsf B}$ be an object. Then, for every object $A\in{\mathsf A}$, the set of all morphisms $\widetilde F(A){\longrightarrow}B$ in ${\mathsf B}$ is naturally bijective to the set of all rules assigning to every morphism $Q{\longrightarrow}A$ in ${\mathsf A}$ with an object $Q\in{\mathsf Q}$ a morphism $F(Q){\longrightarrow}B$ in ${\mathsf B}$ in a way compatible with all the morphisms in ${\mathsf Q}$. 
It follows that, under the assumptions of the lemma, both the sets of morphisms $\operatorname{Hom}_{\mathsf B}(\varinjlim_{x\in{\mathsf X}} F(D(x)),\>B)$ and $\operatorname{Hom}_{\mathsf B}(F(\varinjlim_{x\in{\mathsf X}}D(x)),B)$ are naturally bijective to the set of all rules assigning to every morphism $Q{\longrightarrow}D(x)$ in ${\mathsf A}$ with objects $Q\in{\mathsf Q}$ and $x\in{\mathsf X}$ a morphism $F(Q){\longrightarrow}B$ in ${\mathsf B}$ in a way compatible with all the morphisms in ${\mathsf Q}$ and ${\mathsf X}$. Alternatively, one can argue as follows. Consider the category ${\mathsf C}={\mathsf{Sets}}^{{\mathsf Q}^{{\mathsf{op}}}}$ of presheaves of sets on the small category ${\mathsf Q}$. By the Yoneda lemma, ${\mathsf Q}$ is naturally a full subcategory in ${\mathsf C}$. There is a natural functor $H{\colon}{\mathsf A}{\longrightarrow}{\mathsf C}$ assigning to an object $A\in{\mathsf A}$ the presheaf $H(A)=\operatorname{Hom}_{\mathsf A}({-},A)|_{\mathsf Q}$. The functor $H$ forms a commutative triangle diagram with the fully faithful functors ${\mathsf Q}{\longrightarrow}{\mathsf A}$ and ${\mathsf Q}{\longrightarrow}{\mathsf C}$. By assumption, the functor $H$ preserves the colimit of the diagram $D$. Let $G{\colon}{\mathsf C}{\longrightarrow}{\mathsf B}$ be the Kan extension of the functor $F{\colon}{\mathsf Q}{\longrightarrow}{\mathsf B}$ with respect to the Yoneda embedding ${\mathsf Q}{\longrightarrow}{\mathsf C}$. The functor $G$ has a right adjoint functor $R$ assigning to an object $B\in{\mathsf B}$ the presheaf $R(B)=\operatorname{Hom}_{\mathsf B}(F({-}),B)$. It follows that the functor $G$ preserves all colimits. Finally, it remains to observe that $\widetilde F=G\circ H$. Now we can return to the situation at hand. Let ${\mathbb F}$ be a right linear topology on an associative ring $R$. 
Set ${\mathsf A}={{\operatorname{\mathsf{discr--}}}}R$,  ${\mathsf Q}={\mathsf Q}_{\mathbb F}\subset{\mathsf A}$, and ${\mathsf B}={\mathsf{Ab}}$ or ${\mathsf{Ab}}^{{\mathsf{op}}}$. Given an additive functor $M{\colon}{\mathsf Q}_{\mathbb F}{\longrightarrow}{\mathsf{Ab}}^{{\mathsf{op}}}$, we set $$G_M({\mathcal N})=\varprojlim\nolimits_{R/I_1\oplus R/I_2\to{\mathcal N}} M(R/I_1)\oplus M(R/I_2) \qquad \text{for every ${\mathcal N}\in{{\operatorname{\mathsf{discr--}}}}R$},$$ where the projective limit is taken over the diagram formed by all the morphisms of discrete right $R$[-]{}modules $R/I_1\oplus R/I_2 {\longrightarrow}{\mathcal N}$ (indexing the vertices of the diagram) and all the commutative triangles $R/I_1\oplus R/I_2{\longrightarrow}R/J_1\oplus R/J_2{\longrightarrow}{\mathcal N}$ (indexing the arrows), where $I_1$, $I_2$, $J_1$, $J_2\in{\mathbb F}$ and $R/I_1\oplus R/I_2{\longrightarrow}R/J_1\oplus R/J_2$ ranges over all the $(2\times2)$[-]{}matrices of morphisms in ${\mathsf Q}_{\mathbb F}$. By Proposition \[additive-Kan\], the functor $G_M=\widetilde{M^+}{\colon}{{\operatorname{\mathsf{discr--}}}}R{\longrightarrow}{\mathsf{Ab}}^{{\mathsf{op}}}$ is additive; hence it preserves finite (co)products. Applying Lemma \[Kan-preserves-colimits\] for the diagram representing a coproduct in ${{\operatorname{\mathsf{discr--}}}}R$ as the filtered colimit of its finite subcoproducts and recalling that the right $R$[-]{}modules $R/I_1\oplus R/I_2$ are finitely generated, we conclude that the functor $G_M$ takes coproducts in ${{\operatorname{\mathsf{discr--}}}}R$ to products in ${\mathsf{Ab}}$. Checking that the functor $G_M$ takes cokernels in ${{\operatorname{\mathsf{discr--}}}}R$ to kernels in ${\mathsf{Ab}}$ if and only if the functor $M$ is left exact in the sense of [@Pcoun Section 5] is a straightforward diagram-chasing exercise. This proves the assertion (1). 
Given an additive functor $C{\colon}{\mathsf Q}_{\mathbb F}{\longrightarrow}{\mathsf{Ab}}$, we set $$F_C({\mathcal N})=\varinjlim\nolimits_{R/I_1\oplus R/I_2\to{\mathcal N}} C(R/I_1)\oplus C(R/I_2) \qquad\text{for every ${\mathcal N}\in{{\operatorname{\mathsf{discr--}}}}R$},$$ where the inductive limit is taken over the same diagram as in the previous construction. By Proposition \[additive-Kan\], the functor $F_C=\widetilde{C^+}{\colon}{{\operatorname{\mathsf{discr--}}}}R{\longrightarrow}{\mathsf{Ab}}$ is additive; hence it preserves finite coproducts. The same argument based on Lemma \[Kan-preserves-colimits\] as above shows that the functor $F_C$ preserves infinite coproducts, too. Finally, a diagram-chasing exercise shows that the functor $F_C$ preserves cokernels if and only if the functor $C$ is right exact in the sense of [@Pcoun Section 5]. This proves the assertion (2). Injective Flat Ring Epimorphisms of Projective Dimension 1 ========================================================== \[injective-flat-epi-secn\] The exposition in [@Pcoun Remark 11.3] is correct, but confused. Particularly confused is the fourth and last paragraph of that remark (while the first two paragraphs are OK). Let us discuss the situation anew and with updated references. Let $u{\colon}R{\longrightarrow}U$ be an injective ring epimorphism. Following [@Pcoun first paragraph of Remark 11.3], we consider the $R$[-]{}$R$[-]{}bimodule $K=U/R$, and denote by ${\mathfrak S}=\operatorname{Hom}_R(K,K)^{{\mathrm{op}}}$ the opposite ring to the ring of endomorphisms of the left $R$[-]{}module $K$. So the ring ${\mathfrak S}$ acts in $K$ on the right, making $K$ an $R$[-]{}${\mathfrak S}$[-]{}bimodule; and the right action of $R$ in $K$ induces a ring homomorphism $R{\longrightarrow}{\mathfrak S}$.
We endow the ring ${\mathfrak S}$ with the right linear topology ${{\boldsymbol{\mathfrak F}}}$ with a base ${\boldsymbol{\mathfrak B}}$ formed by the annihilators of finitely generated left $R$[-]{}submodules in $K$. Then ${\mathfrak S}$ is a complete, separated topological ring [@PS1 Theorem 7.1] and $K$ is a discrete right ${\mathfrak S}$[-]{}module [@PS1 Proposition 7.3] (see also [@Pproperf Section 1.13]). The topological ring ${\mathfrak S}$ is discussed at length in [@BP2 Section 4] and [@BP3 Sections 14[-]{}-15] (where it is denoted by ${\mathfrak R}$). Assume that $U$ is a flat left $R$[-]{}module. Following [@Pcoun second paragraph of Remark 11.3], let ${\mathbb G}$ be the perfect Gabriel topology of all right ideals $I\subset R$ such that $R/I{\otimes}_RU=0$ (or equivalently, $U=IU$). Let ${\mathfrak R}$ denote the completion of $R$ with respect to ${\mathbb G}$, viewed as a complete, separated topological ring in its projective limit topology ${{\boldsymbol{\mathfrak G}}}$; and let $\rho{\colon}R{\longrightarrow}{\mathfrak R}$ be the completion map. Then $K=U/R$ is a discrete right $R$[-]{}module (with respect to the topology ${\mathbb G}$ on $R$), since $U/R{\otimes}_RU=0$. Consequently, the right action of $R$ in $K$ extends uniquely to a discrete right action of ${\mathfrak R}$. Hence the ring homomorphism $R{\longrightarrow}{\mathfrak S}$ factorizes as $R{\longrightarrow}{\mathfrak R}{\longrightarrow}{\mathfrak S}$. Since the annihilator of any finite set of elements in $K$ is an open right ideal in ${\mathfrak R}$, the ring homomorphism $\sigma{\colon}{\mathfrak R}{\longrightarrow}{\mathfrak S}$ is continuous. \[topology-comparison\] Any open right ideal $I\in{\mathbb G}$ of the ring $R$ contains the preimage $(\sigma\rho)^{-1}({\mathfrak J})\subset R$ of some open right ideal ${\mathfrak J}\in{{\boldsymbol{\mathfrak F}}}$ of the ring ${\mathfrak S}$. 
We know that $U=IU$, that is, there exist some elements $s_1$, …, $s_n\in I$ and $v_1$, …, $v_n\in U$ such that $\sum_{k=1}^n s_kv_k=1$ in the ring $U$. Denote by ${\mathfrak J}\subset{\mathfrak S}$ the annihilator of the finite collection of cosets $v_1+R$, …, $v_n+R\in U/R=K$. Then ${\mathfrak J}$ is an open right ideal in ${\mathfrak S}$ by the definition of the topology ${{\boldsymbol{\mathfrak F}}}$ on ${\mathfrak S}$. The ring homomorphism $\sigma\rho{\colon}R{\longrightarrow}{\mathfrak S}$ is induced by the right action of $R$ in $U/R$. Let $r\in R$ be an element such that $\sigma\rho(r)\in{\mathfrak J}$. This means that the cosets $v_1+R$, …, $v_n+R$ are annihilated by the right action of $r$, that is, the elements $v_1r$, …, $v_nr$ belong to $R\subset U$. Hence we have $r=\sum_{k=1}^n s_kv_kr\in\sum_{k=1}^n s_kR \subset I$, as desired. \[injective-induced-topology\] For any injective left flat ring epimorphism $u{\colon}R{\longrightarrow}U$, the ring homomorphism $\sigma{\colon}{\mathfrak R}{\longrightarrow}{\mathfrak S}$ is injective, and the topology ${{\boldsymbol{\mathfrak G}}}$ on ${\mathfrak R}$ coincides with the topology induced on the ring ${\mathfrak R}$ as a subring in ${\mathfrak S}$ via $\sigma$. The preimages of open right ideals ${\mathfrak J}\in{{\boldsymbol{\mathfrak F}}}$ of the ring ${\mathfrak S}$ under the map $\sigma\rho{\colon}R{\longrightarrow}{\mathfrak S}$ are open right ideals in $R$, because the ring homomorphism $\sigma\rho$ is continuous. The open right ideals $(\sigma\rho)^{-1}({\mathfrak J})\subset R$ also form a base of the topology ${\mathbb G}$ on the ring $R$ by Lemma \[topology-comparison\]. Passing to the projective limit of the directed diagram of injective maps $R/(\sigma\rho)^{-1}({\mathfrak J}){\longrightarrow}{\mathfrak S}/{\mathfrak J}$ indexed by ${\mathfrak J}\in{{\boldsymbol{\mathfrak F}}}$, we obtain the map $\sigma{\colon}{\mathfrak R}{\longrightarrow}{\mathfrak S}$. Thus the map $\sigma$ is injective. 
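The computation in the proof of the lemma can be traced in a concrete case. As an illustration of our own (not taken from the text), consider the classical injective flat ring epimorphism $R={\mathbb Z}\to U={\mathbb Z}[1/2]$, for which ${\mathbb G}$ consists of the ideals $2^k{\mathbb Z}$ and $K={\mathbb Z}[1/2]/{\mathbb Z}$; we take $I=2{\mathbb Z}$ with witnesses $s_1=2$ and $v_1=1/2$.

```python
from fractions import Fraction

# Hypothetical worked example: R = Z, U = Z[1/2], I = 2Z.
# U = IU is witnessed by s1 = 2 and v1 = 1/2 with s1 * v1 = 1.
s1, v1 = 2, Fraction(1, 2)
assert s1 * v1 == 1

# J is the annihilator in S of the coset v1 + Z in K = U/Z.  An integer
# r lies in the preimage of J iff v1 * r falls back into Z, i.e. iff r
# is even; the lemma then gives (preimage of J) <= I = 2Z.
preimage_of_J = [r for r in range(-10, 11) if (v1 * r).denominator == 1]
print(preimage_of_J == list(range(-10, 11, 2)))  # True: exactly 2Z in range
```

In this example the preimage of ${\mathfrak J}$ coincides with $I$ itself; in general the lemma asserts only the inclusion.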
The topology ${{\boldsymbol{\mathfrak G}}}$ on ${\mathfrak R}$ is the topology of projective limit of discrete abelian groups $\varprojlim_{{\mathfrak J}\in{{\boldsymbol{\mathfrak F}}}}R/(\sigma\rho)^{-1}({\mathfrak J})$, while the topology ${{\boldsymbol{\mathfrak F}}}$ on ${\mathfrak S}$ is the topology of projective limit of discrete abelian groups $\varprojlim_{{\mathfrak J}\in{{\boldsymbol{\mathfrak F}}}}{\mathfrak S}/{\mathfrak J}$. The map $\sigma$ is the projective limit of an injective morphism from the former diagram of abelian groups to the latter one. By the definition of the topology of projective limit, it follows that the topology on ${\mathfrak R}$ is induced from its embedding into ${\mathfrak S}$ via $\sigma$. The following theorem is a clarified version of [@Pcoun Remark 11.3], and it also includes an improved version of [@Pcoun Theorem 11.2]. Let $u{\colon}R{\longrightarrow}U$ be an injective ring epimorphism such that $U$ is a flat left $R$[-]{}module of projective dimension not exceeding $1$. Then the map $\sigma{\colon}{\mathfrak R}{\longrightarrow}{\mathfrak S}$ is an isomorphism of topological rings. Furthermore, the left $R$[-]{}module morphism $$\beta_{u,X}{\colon}\Delta_u(R[X]){\mskip.5\thinmuskip\relbar\joinrel\relbar\joinrel \rightarrow\mskip.5\thinmuskip\relax}{\mathfrak R}[[X]]$$ is an isomorphism for any set $X$. The forgetful functor ${\mathfrak R}{{\operatorname{\mathsf{--contra}}}}{\longrightarrow}R{{\operatorname{\mathsf{--mod}}}}$ is fully faithful, and its essential image coincides with the full subcategory $R{{\operatorname{\mathsf{--mod}}}}_{u{{\operatorname{\mathsf{-ctra}}}}}\subset R{{\operatorname{\mathsf{--mod}}}}$; so there is an equivalence of abelian categories ${\mathfrak R}{{\operatorname{\mathsf{--contra}}}}\simeq R{{\operatorname{\mathsf{--mod}}}}_{u{{\operatorname{\mathsf{-ctra}}}}}$. We follow [@Pcoun second and third paragraphs of Remark 11.3] taking into account the discussion above. 
The injective topological ring homomorphism $\sigma{\colon}{\mathfrak R}{\longrightarrow}{\mathfrak S}$ (see Corollary \[injective-induced-topology\]) induces, for every set $X$, an injective map of sets $\sigma[[X]]{\colon}{\mathfrak R}[[X]]{\longrightarrow}{\mathfrak S}[[X]]$. In fact, we have a commutative triangle diagram of ring homomorphisms $R{\longrightarrow}{\mathfrak R}{\longrightarrow}{\mathfrak S}$, so $\sigma[[X]]$ is an $R$[-]{}module morphism. Every left ${\mathfrak S}$[-]{}contramodule has an underlying left ${\mathfrak R}$[-]{}contramodule structure. By [@Pcoun Proposition 10.2], the underlying left $R$[-]{}module of any left ${\mathfrak R}$[-]{}contramodule is a $u$[-]{}contramodule. In particular, ${\mathfrak S}[[X]]$ is a $u$[-]{}contramodule left $R$[-]{}module. Hence, by [@Pcoun Lemma 10.1], there exists a unique $R$[-]{}module morphism $\Delta_u(R[X]){\longrightarrow}{\mathfrak S}[[X]]$ forming a commutative triangle diagram with the adjunction map $\delta_{u,R[X]}{\colon}R[X]{\longrightarrow}\Delta_u(R[X])$ and the map $R[X]{\longrightarrow}{\mathfrak S}[[X]]$ induced by the ring homomorphism $\sigma\rho{\colon}R{\longrightarrow}{\mathfrak S}$. The composition of the two maps $\sigma[[X]]\circ\beta_{u,X}{\colon}\Delta_u(R[X]){\longrightarrow}{\mathfrak R}[[X]]{\longrightarrow}{\mathfrak S}[[X]]$ has this diagram commutativity property. So does the isomorphism $\Delta_u(R[X])\simeq{\mathfrak S}[[X]]$ constructed in [@BP3 direct proof of Theorem 14.2] (we remind the reader that the topological ring denoted by ${\mathfrak S}$ here is denoted by ${\mathfrak R}$ in [@BP3]). Thus the composition $\sigma[[X]]\circ\beta_{u,X}$ is an isomorphism. Since the map $\sigma[[X]]$ is injective, it follows that both the maps $\beta_{u,X}$ and $\sigma[[X]]$ are isomorphisms. In particular, the map $\sigma$ is an isomorphism of rings, and in view of Corollary \[injective-induced-topology\] we can conclude that $\sigma$ is an isomorphism of topological rings.
We have proved the first two assertions of the theorem. The remaining assertions follow in view of [@Pcoun Lemma 11.1] or [@BP3 Theorem 14.2]. **Acknowledgement.** The author is grateful to J. Št’ovíček for suggesting the above alternative proof of Lemma \[Kan-preserves-colimits\]. Section \[injective-flat-epi-secn\] owes its existence to S. Bazzoni, who kindly invited me to visit her in Padova in November–December 2019 and clarified my confusion by suggesting Lemma \[topology-comparison\] and its proof. [9]{} L. Positselski. Flat ring epimorphisms of countable type. *Glasgow Math. Journ.* **62**, \#2, p. 383–439, 2020. `arXiv:1808.00937 [math.RA]` J. Adámek, J. Rosický. Locally presentable and accessible categories. London Math. Society Lecture Note Series 189, Cambridge University Press, 1994. S. Bazzoni, L. Positselski. Matlis category equivalences for a ring epimorphism. *Journ. of Pure and Appl. Algebra*, published online at `https://doi.org/10.1016/j.jpaa.2020.106398` in April 2020. `arXiv:1907.04973 [math.RA]` S. Bazzoni, L. Positselski. Covers and direct limits: A contramodule-based approach. Electronic preprint `arXiv:1907.05537v2 [math.CT]`. S. MacLane. Categories for the working mathematician. Graduate Texts in Mathematics, 5. Springer-Verlag, New York–Berlin, 1971–1998. L. Positselski. Contraadjusted modules, contramodules, and reduced cotorsion modules. *Moscow Math. Journ.* **17**, \#3, p. 385–455, 2017. `arXiv:1605.03934 [math.CT]` L. Positselski. Contramodules over pro-perfect topological rings. Electronic preprint `arXiv:1807.10671v3 [math.CT]`. L. Positselski, J. Št’ovíček. The tilting-cotilting correspondence. *Internat. Math. Research Notices*, published online at `https://doi.org/10.1093/imrn/rnz116` in July 2019. `arXiv:1710.02230 [math.CT]`
--- author: - 'Eugene Vorontsov[^1]' - Pavlo Molchanov - Wonmin Byeon - Shalini De Mello - Varun Jampani - 'Ming-Yu Liu' - Samuel Kadoury - Jan Kautz bibliography: - 'main.bib' title: 'Boosting segmentation with weak supervision from image-to-image translation' --- @affilsepx[, ]{} [^1]: Work done while author was interning at NVIDIA.
--- abstract: 'A tunable beam splitter (TBS) is a fundamental component that has been widely used in optical experiments. We realize a polarization-independent orbital-angular-momentum-preserving TBS based on the combination of modified polarization beam splitters and half-wave plates. An extinction ratio of tunability greater than 30 dB, a polarization dependence lower than $6\%$, and an extinction ratio of OAM preservation of more than 20 dB show the relatively good performance of the TBS. In addition, the TBS saves about 3/4 of the optical elements compared with the existing scheme implementing the same function[@yang2016experimental], which gives it great advantages in scalable applications. Using this TBS, we experimentally built a Sagnac interferometer with a mean visibility of more than $99\%$, which demonstrates its potential applications in quantum information processing, such as quantum cryptography.' author: - 'Ya-Ping Li' - 'Fang-Xiang Wang' - Wei Chen - 'Guo-Wei Zhang' - 'Zhen-Qiang Yin' - 'De-Yong He' - Shuang Wang - 'Guang-Can Guo' - 'Zheng-Fu Han' title: 'A resource-saving realization of the polarization-independent orbital-angular-momentum-preserving tunable beam splitter' --- Orbital angular momentum (OAM) has recently attracted growing interest as a high-dimensional resource for quantum information. Beams of OAM-carrying photons have an azimuthal phase dependence of the form $e^{il\phi}$, where the topological charge $l$ can take any integer value[@franke2008advances][@yao2011orbital][@willner2015optical]. Due to its unique properties, OAM light can be applied in many fields, such as quantum entanglement[@weihs2001entanglement][@leach2010quantum], quantum simulation[@cardano2015quantum][@luo2015quantum], and quantum communication[@molina2004triggered][@vallone2014free]. However, dedicated techniques are necessary for manipulating and transmitting the OAM of photons.
Up to now, researchers have designed many optical elements for the translation and manipulation of OAM light, such as the OAM fiber[@ramachandran2009generation][@bozinovic2013terabit] and the Q-plate[@marrucci2006optical][@d2012deterministic]. Meanwhile, tailored optical devices that achieve fundamental optical functions are necessary as well. Among these devices, the tunable beam splitter (TBS) is an essential element for composing complex optical structures[@higgins2007entanglement][@ma2011quantum]. There are three major methods to implement a TBS. The first and most common realization of a TBS uses the combination of a polarization beam splitter (PBS) and a half-wave plate (HWP), which has a high extinction ratio but is polarization-dependent[@marcikic2003long]. Another type of polarization-independent TBS employs Mach-Zehnder interferometers (MZIs) with high-speed modulators[@ma2011high]. Although this method can realize high-speed modulation, it is sensitive to external environmental disturbances acting on the different light paths, such as vibrations and temperature variations. Moreover, it has a relatively low extinction ratio in some specific high-precision applications[@ma2011high]. Recently, Yang et al. realized a polarization-independent TBS using an MZI composed of beam displacers (BDs) and HWPs, which has relatively high polarization independence and high interference visibility, but it is sensitive to the phase difference between the paths and has a complicated construction[@yang2016experimental]. Here, we propose a polarization-independent OAM-preserving TBS based on HWPs and modified PBSs. With its relatively low polarization dependence and high extinction ratio, the TBS saves about 3/4 of the optical elements compared with the work in [@yang2016experimental]. The realization of a Sagnac interferometer with an interference visibility above $99\%$ based on the TBS demonstrates its relatively good performance. ![Schematic diagrams of PBS and TBS.
HWP, half wave plate; PBS, polarization beam splitter[]{data-label="fig:PBS_TBS"}](PBS_TBS.eps) The structure of the TBS is shown in Fig. \[fig:PBS\_TBS\] (a). The TBS is composed of two modified PBSs and three HWPs, in which $HWP_{\uppercase\expandafter{\romannumeral2}}$ is used for adjusting the splitting ratio while $HWP_{\uppercase\expandafter{\romannumeral1}}$ and $HWP_{\uppercase\expandafter{\romannumeral3}}$ are used to eliminate the polarization dependence of the TBS. A common cubic beam splitter (BS) or PBS changes the sign of the topological charge $l$ of OAM light after a reflection. Therefore, the transmission matrix of a BS or PBS is no longer a unitary matrix for OAM-carrying light, which may introduce inconvenience in some experiments[@mafu2013higher][@zhang2016engineering]. The modified PBS used in the TBS is composed of two rhombic prisms (Sunlight Technology Co., H-K9L material), as shown in Fig. \[fig:PBS\_TBS\] (b). When a horizontal (vertical) polarization state enters the modified PBS from $Port_1$, it exits from $Port_4$ ($Port_3$) after two reflections and one transmission (only two reflections). Therefore, when photons carrying an OAM of $l\hbar$ enter the PBS, the topological charge $l$ is preserved. Meanwhile, an HWP does not change the topological charge. Thus, the TBS is OAM-preserving. The polarization independence of the TBS will be discussed in detail below. In short, when an OAM photon with a quantum number $l$ and an arbitrary polarization state enters the TBS from $Port_1$ or $Port_2$, it exits from $Port_5$ and $Port_6$ with the original polarization state carrying OAM preserved, as shown in Fig. \[fig:PBS\_TBS\] (a). The transmission matrix of the TBS can be specified using Dirac notation. Photons entering $PBS_{\uppercase\expandafter{\romannumeral1}}$ from $Port_1$ exit from $Port_3$ and $Port_4$, as shown in Fig. \[fig:PBS\_TBS\] (b).
The operator of the $PBS_{\uppercase\expandafter{\romannumeral1}}$ can be described as: $$M_{PBS_{\uppercase\expandafter{\romannumeral1}}} = \Ket{h\otimes l,4}\Bra{h\otimes l,1}+\Ket{v\otimes l,3}\Bra{v\otimes l,1}$$ where 1 (3 and 4) means the $Port_1$ ($Port_3$ and $Port_4$) of the PBS. $h\otimes l$ ($v\otimes l$) denotes the incident horizontal (vertical) polarization state with OAM of $l\hbar$. Then the light enters into $HWP_{\uppercase\expandafter{\romannumeral2}}$ from $Port_3$ ($Port_4$) of $PBS_{\uppercase\expandafter{\romannumeral1}}$, whose operator can be described as: $$\begin{aligned} M_{HWP_{\uppercase\expandafter{\romannumeral2}}-3} &= cos2\theta\Ket{h\otimes l,3}\Bra{h\otimes l,3}+sin2\theta\Ket{h\otimes l,3}\Bra{v\otimes l,3} \\ &+sin2\theta\Ket{v\otimes l,3}\Bra{h\otimes l,3}-cos2\theta\Ket{v\otimes l,3}\Bra{v\otimes l,3} \end{aligned}$$ $$\begin{aligned} M_{HWP_{\uppercase\expandafter{\romannumeral2}}-4} &= cos2\theta\Ket{h\otimes l,4}\Bra{h\otimes l,4}+sin2\theta\Ket{h\otimes l,4}\Bra{v\otimes l,4} \\ &+sin2\theta\Ket{v\otimes l,4}\Bra{h\otimes l,4}-cos2\theta\Ket{v\otimes l,4}\Bra{v\otimes l,4} \end{aligned}$$ where $\theta$ is the angle between the fast axis of HWP and the horizontal axis. $HWP_{\uppercase\expandafter{\romannumeral2}-3}$ ($HWP_{\uppercase\expandafter{\romannumeral2}-4}$) means the light enters into $HWP_{\uppercase\expandafter{\romannumeral2}}$ from $Port_3$ ($Port_4$). 
When the light enters $PBS_{\uppercase\expandafter{\romannumeral2}}$ from $Port_3$ and $Port_4$, the operator of $PBS_{\uppercase\expandafter{\romannumeral2}}$ can be described as: $$\begin{aligned} M_{PBS_{\uppercase\expandafter{\romannumeral2}}} = \Ket{h\otimes l,6}\Bra{h\otimes l,3}-\Ket{v\otimes l,5}\Bra{v\otimes l,3} +\Ket{h\otimes l,5}\Bra{h\otimes l,4}+\Ket{v\otimes l,6}\Bra{v\otimes l,4} \end{aligned}$$ where the minus sign of the second term is due to the film coated on the left prism (here we assume that light reflected by the coated film acquires a phase of $\pi$). Note that the film is coated on the right prism in $PBS_{\uppercase\expandafter{\romannumeral1}}$. The operator of $HWP_{\uppercase\expandafter{\romannumeral3}}$ with $\theta$ equal to $45^\circ$ is $$M_{HWP_{\uppercase\expandafter{\romannumeral3}}} = \Ket{h\otimes l,6}\Bra{v\otimes l,6} + \Ket{v\otimes l,6}\Bra{h\otimes l,6}$$ Thus, we can deduce the operator of the TBS: $$\begin{aligned} M_{TBS}&=(1+M_{HWP_{\uppercase\expandafter{\romannumeral3}}})M_{PBS_{\uppercase\expandafter{\romannumeral2}}}(M_{HWP_{\uppercase\expandafter{\romannumeral2}-3}}+M_{HWP_{\uppercase\expandafter{\romannumeral2}-4}})M_{PBS_{\uppercase\expandafter{\romannumeral1}}} \\ &=cos2\theta\Ket{h\otimes l,5}\Bra{h\otimes l,1}+sin2\theta\Ket{h\otimes l,6}\Bra{h\otimes l,1}\\ &-cos2\theta\Ket{v\otimes l,5}\Bra{v\otimes l,1}+sin2\theta\Ket{v\otimes l,6}\Bra{v\otimes l,1} \end{aligned}$$ When an arbitrary polarization state carrying OAM of $l\hbar$ enters the TBS from $Port_1$, the incident state can be described as: $$\begin{aligned} \Ket{In}=\alpha\Ket{h\otimes l,1}+\beta\Ket{v\otimes l,1}\end{aligned}$$ where $\alpha$ and $\beta$ are complex constants with $\|{\alpha}\|^2+\|{\beta}\|^2=1.$ Thus, the output state is: $$\begin{aligned} \Ket{Out} & =M_{TBS}\Ket{In} =cos2\theta(\alpha\Ket{h\otimes l,5}+\beta\Ket{v\otimes l,5}) +sin2\theta(\alpha\Ket{h\otimes l,6}+\beta\Ket{v\otimes l,6})
\label{out}\end{aligned}$$ According to Eq. (\[out\]), the output states from $Port_5$ and $Port_6$ remain in their original polarization and OAM states. The splitting ratio is determined by the angle $\theta$ of $HWP_{\uppercase\expandafter{\romannumeral2}}$. The same conclusion can be obtained by a similar derivation when light enters the TBS from $Port_2$. ![(a) The experimental setup to measure the splitting ratio and the polarization independence. (b) 4F optical system for process tomography (figure not to scale). (c) The structure of the Sagnac interferometer. LD, laser diode; D, optical power meter; M, mirror; SLM, spatial light modulator; Lens, plano-convex lens[]{data-label="fig:Exp_Setup0321"}](experiment_setup_20170928.eps){width="\linewidth"} To verify the performance of the TBS, three experiments were conducted, as shown in Fig. \[fig:Exp\_Setup0321\]. The experimental setup to test the split-ratio tunability of the TBS is shown in Fig. \[fig:Exp\_Setup0321\] (a). A horizontally polarized light beam emitted from a continuous-wave laser with a wavelength of 780 nm enters $HWP_{0}$, which is driven by a DC servo motor (PR50CC, Newport Co.) and can generate arbitrary linear polarization states. Here, we test the split-ratio tunability when $\Ket{H}$ and $\Ket{V}$ enter the TBS, respectively. For a fixed incident polarization state, we detect the light intensity with optical power meters $D_1$ and $D_2$ at a sample rate of 100 Hz while rotating $HWP_{\uppercase\expandafter{\romannumeral2}}$ in the range of zero to $180^\circ$ with a precision of $0.1^\circ$. The output light intensities as a function of the angle of $HWP_{\uppercase\expandafter{\romannumeral2}}$ are shown in Fig. \[fig:Spli\_Ratio\]. When the angle of $HWP_{\uppercase\expandafter{\romannumeral2}}$ is $0^\circ$ in Fig. \[fig:Spli\_Ratio\] (a), the intensities detected by $D_1$ and $D_2$ are maximal and minimal, respectively.
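The polarization independence of the splitting ratio can be read off from Eq. (\[out\]): the output intensities at $Port_5$ and $Port_6$ are $\cos^2 2\theta(\|\alpha\|^2+\|\beta\|^2)=\cos^2 2\theta$ and $\sin^2 2\theta$, regardless of the input polarization $(\alpha,\beta)$. A minimal numerical sketch of this prediction, assuming ideal lossless elements (it is not a model of the real apparatus):

```python
import numpy as np

def tbs_intensities(theta, alpha, beta):
    """Output intensities at Port_5 / Port_6 predicted by Eq. (out) for
    the input state alpha|h> + beta|v>, with ideal lossless elements."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    I5 = abs(c * alpha) ** 2 + abs(c * beta) ** 2
    I6 = abs(s * alpha) ** 2 + abs(s * beta) ** 2
    return I5, I6

# A balanced 50:50 split at theta = 22.5 deg, for any polarization:
for alpha, beta in [(1, 0), (0, 1), (0.6, 0.8j)]:
    I5, I6 = tbs_intensities(np.deg2rad(22.5), alpha, beta)
    print(round(I5, 6), round(I6, 6))  # 0.5 0.5 in every case
```

At $\theta=0^\circ$ the same function returns $(1,0)$, matching the maximal/minimal intensities observed at $D_1$ and $D_2$.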
The TBS acts as a balanced beam splitter when the angle of $HWP_{\uppercase\expandafter{\romannumeral2}}$ is $22.5^\circ$, as shown by the intersection of the two curves. The criterion for evaluating the splitting-ratio tunability of the TBS is the extinction ratio (ER), which is defined as: $$ER=\frac{I_{max}}{I_{min}} \label{ER}$$ where $I_{max}$ and $I_{min}$ refer to the maximum and minimum intensities of the output ports, respectively. According to Eq. (\[ER\]), the mean, minimum, and maximum ERs of the TBS tunability are 34 dB, 30.7 dB, and 40.1 dB, respectively. ![The tunable capacity of the TBS when the states (a) $\Ket{H}$ and (b) $\Ket{V}$ are prepared in $LD_1$, respectively. The states (c) $\Ket{H}$ and (d) $\Ket{V}$ correspond to the states prepared in $LD_2$.[]{data-label="fig:Spli_Ratio"}](Spli_Ratio.eps){width="\linewidth"} To verify the polarization independence of the TBS, we test the variation of the splitting ratio with the incident polarization states, as shown in Fig. \[fig:Exp\_Setup0321\] (a). In the tests, various linear polarization states are prepared by rotating $HWP_{0}$ from zero to $180^\circ$ with an accuracy of $0.1^\circ$, and the light intensities are detected by optical power meters at a sample rate of 100 Hz. The results are shown in Fig. \[fig:DOPD\]. To evaluate the polarization independence, we define the splitting ratio (SR) and polarization dependence (PD) as follows: $$SR=\frac{R}{R+T}$$ $$PD= \frac{|SR_{exp}-SR_{th}|}{SR_{th}} \label{DOPD}$$ where R (T) denotes the intensity of the reflected (transmitted) light, $SR_{exp}$ is the experimental splitting ratio with the maximum deviation from the theoretical value (the worst case), and $SR_{th}$ is the theoretical splitting ratio. In the experiment, we set the reflected beam to be the weak one when the TBS performs unbalanced splitting. Therefore, the range of $SR_{th}$ is 0.1 to 0.5 in our tests. 
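The figures of merit just introduced are straightforward to compute from detected intensities. A minimal sketch (the intensity values below are illustrative, not measured data from this work):

```python
import math

def er_db(i_max, i_min):
    """Extinction ratio of Eq. (ER), expressed in decibels."""
    return 10 * math.log10(i_max / i_min)

def splitting_ratio(r, t):
    """SR = R / (R + T): the reflected fraction of the total output."""
    return r / (r + t)

def polarization_dependence(sr_exp, sr_th):
    """PD of Eq. (DOPD): relative deviation of the worst-case measured SR."""
    return abs(sr_exp - sr_th) / sr_th

er = er_db(2511.0, 1.0)  # ~34 dB, the order of the mean ER quoted above
pd = polarization_dependence(splitting_ratio(10.5, 89.5), 0.1)
```

With these made-up numbers, a measured 10.5 : 89.5 split against a nominal 10 : 90 setting gives a PD of 5%, comparable to the worst cases reported below.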
The smaller the value of PD, the better the polarization independence. ![The polarization dependence of the TBS. The splitting ratios change with $HWP_0$ when light from (a) $LD_1$ and (b) $LD_2$.[]{data-label="fig:DOPD"}](DOPD.eps){width="\linewidth"} According to Eq. (\[DOPD\]), the measured polarization dependence at different splitting ratios is shown in Tab. \[splitting ratio\]. The worst case of the polarization dependence is lower than $6\%$ with a triple standard deviation below $0.2\%$. According to the experimental results, the PD gradually rises as the SR decreases. This is due to the stronger influence of intensity fluctuations on the weak beam. $SR$ $50: 50$ $60: 40$ $70: 30$ $80: 20$ $90: 10$ -------- -------------------- -------------------- -------------------- -------------------- -------------------- $PD_1$ $1.73\%\pm 0.06\%$ $2\%\pm 0.10\%$ $2.63\%\pm 0.10\%$ $3.68\%\pm 0.19\%$ $5.75\%\pm 0.15\%$ $PD_2$ $2\%\pm 0.07\%$ $2.42\%\pm 0.14\%$ $2.88\%\pm 0.10\%$ $3.69\%\pm 0.21\%$ $5.35\%\pm 0.14\%$ : **Polarization dependence at different splitting ratios** \[splitting ratio\] To verify the OAM preservation performance of the TBS, process tomography based on a 4F optical system is performed, as shown in Fig. \[fig:Exp\_Setup0321\] (b). A 780 nm continuous-wave diode laser illuminates a spatial light modulator ($SLM_1$) to generate various OAM modes. A 4F optical system, consisting of two plano-convex lenses with a focal length of 750 mm and a spatial filter with an aperture of 8 mm, is employed to isolate the first diffraction order of the beam from $SLM_1$. After transmitting through the PBS and TBS, the OAM light is demodulated by $SLM_2$ and coupled into a single-mode fiber (SMF) to be detected. If the forked hologram loaded on $SLM_2$ is identical to that on $SLM_1$ ($l=l_1-l_2=0$), the OAM light is, in theory, fully converted to a Gaussian beam[@mair2001entanglement]. 
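The crosstalk measurement reduces to comparing the intensity detected with the matched hologram against the summed intensities from all mismatched ones; the corresponding extinction-ratio criterion is defined formally in the following paragraph. A sketch with made-up intensities:

```python
import math

def er_oam_db(intensities, i):
    """OAM-preservation extinction ratio in dB: intensity detected in
    the matched mode i over the summed crosstalk into all other modes."""
    crosstalk = sum(v for k, v in enumerate(intensities) if k != i)
    return 10 * math.log10(intensities[i] / crosstalk)

# hypothetical detected intensities for demodulation holograms l2 = -2..2,
# with the prepared mode matched at index 2 (l = l1 - l2 = 0)
er_oam = er_oam_db([0.002, 0.003, 1.0, 0.001, 0.004], i=2)
```

Here the matched mode carries 100 times the total crosstalk, i.e. an extinction ratio of 20 dB, the level reported for the measured devices.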
However, imperfect devices lead to crosstalk from other OAM modes, which can serve as a criterion to evaluate the OAM preservation performance of optical elements. In the process tomography, we prepare the OAM light with $l=0$, $\pm1$, $\pm2$, $\pm3$, $\pm4$ respectively, and then measure it with the same set of holograms. The experimental results for the PBS and TBS are shown in Fig. \[fig:OAM\]. In order to assess the OAM preservation performance, we define the ER of OAM preservation as follows: $$ER_{OAM}=\frac{I_i}{\sum_{k \ne i} I_k} \label{EXT_OAM}$$ where $I_i$ and $I_k$ refer to the detected light intensities when the hologram loaded on $SLM_2$ is of the identical order and of other orders of OAM light, respectively. According to Eq. (\[EXT\_OAM\]), ERs of more than 20 dB are obtained, which are close to the values measured without the PBS or TBS. These results demonstrate that the PBS and TBS preserve the OAM states well. ![The process tomography of (a) the PBS and (b) the TBS in the 4F optical system, respectively.[]{data-label="fig:OAM"}](OAM.eps){width="\linewidth"} The interferometer is an important element in optical systems. Therefore, we built a Sagnac interferometer using this TBS to evaluate its performance in practical applications, as shown in Fig. \[fig:Exp\_Setup0321\] (c). In the Sagnac interferometer, the light emitted from $Port_{5}$ ($Port_{6}$) is reflected by the mirrors $M_1$ and $M_2$ and returns to the TBS through $Port_{6}$ ($Port_{5}$), so that the two components traversing the same path form a closed loop. We test the visibility of the Sagnac interferometer over 30 minutes with different incident polarization states. The interference visibility is defined as: $$V=\frac{I_{max}-I_{min}}{I_{max}+I_{min}}$$ where $I_{max}$ and $I_{min}$ are the maximum and minimum intensities of the output ports, respectively. As shown in Fig. 
\[fig:Interference5050\] (a), an interference visibility above $99\%$ with triple standard deviations below $0.2\%$ is obtained, which proves good stability when the light enters $Port_1$ and $Port_2$ individually. Then we test the interference visibility for different incident polarization states by rotating $HWP_0$ from $0^\circ$ to $90^\circ$ with a step of $0.1^\circ$. The mean interference visibilities for the two input ports are both greater than $99\%$, as shown in Fig. \[fig:Interference5050\] (b). The visibilities for the incident polarization states $\Ket{H}$ and $\Ket{V}$ are close to each other, while the interference visibility reaches its maximum and minimum values when the incident polarization states are $\Ket{H+V}$ and $\Ket{H-V}$, respectively, as shown in Fig. \[fig:Interference5050\] (b). This is due to the different insertion losses of the polarization states in different ports and the finite extinction ratio of the PBS. The curve for incident light from $Port_2$ mirrors the curve from $Port_1$, which is reasonable since the state $\Ket{H+V}$ from $Port_1$ corresponds to the state $\Ket{H-V}$ from $Port_2$. ![(a) The test of stability of the Sagnac interferometer. (b) The test of polarization independence of the Sagnac interferometer.[]{data-label="fig:Interference5050"}](Interference5050.eps){width="\linewidth"} In conclusion, we realize a polarization-independent OAM-preserving TBS which requires far fewer optical devices than the existing schemes. We experimentally evaluated the key parameters of the scheme, and demonstrated that the tunability ERs are greater than 30 dB, the polarization dependence is lower than $6\%$, and the OAM-preservation ERs are more than 20 dB. Using this TBS, we built a Sagnac interferometer with a mean visibility above $99\%$, which makes it a promising component for various quantum information processing applications. 
This resource-saving structure has the potential to simplify optical systems and to be applied in scalable applications[@wang2016scalable][@krenn2017entanglement]. It is noted that the scheme can be implemented more compactly with emerging techniques such as integrated optics. This work has been supported by the National Natural Science Foundation of China (Grant Nos. 61675189, 61627820, 61622506, 61475148, 61575183), the National Key Research and Development Program of China (Grant Nos. 2016YFA0302600, 2016YFA0301702), and the “Strategic Priority Research Program (B)” of the Chinese Academy of Sciences (Grant No. XDB01030100). [10]{} R. Yang, J. Li, X.-B. Song, T. Gao, Y.-R. Li, Y.-J. Zhang, X.-X. Chen, and Y.-X. Gong, Experimental realization of a 2×2 polarization-independent split-ratio-tunable optical beam splitter, Optics Express **24**, 28519 (2016). S. Franke-Arnold, L. Allen, and M. Padgett, Advances in optical angular momentum, Laser & Photonics Reviews **2**, 299 (2008). A. M. Yao and M. J. Padgett, Orbital angular momentum: origins, behavior and applications, Advances in Optics and Photonics **3**, 161 (2011). A. E. Willner, H. Huang, Y. Yan, Y. Ren, N. Ahmed, G. Xie, C. Bao, L. Li, Y. Cao, Z. Zhao *et al.*, Optical communications using orbital angular momentum beams, Advances in Optics and Photonics **7**, 66 (2015). A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, Entanglement of the orbital angular momentum states of photons, Nature **412**, 313 (2001). J. Leach, B. Jack, J. Romero, A. K. Jha, A. M. Yao, S. Franke-Arnold, D. G. Ireland, R. W. Boyd, S. M. Barnett, and M. J. Padgett, Quantum correlations in optical angle–orbital angular momentum variables, Science **329**, 662 (2010). F. Cardano, F. Massa, H. Qassim, E. Karimi, S. Slussarenko, D. Paparo, C. de Lisio, F. Sciarrino, E. Santamato, R. W. Boyd *et al.*, Quantum walks and wavepacket dynamics on a lattice with twisted photons, Science advances **1**, e1500087 (2015). X.-W. Luo, X. Zhou, C.-F. Li, J.-S. 
Xu, G.-C. Guo, and Z.-W. Zhou, Quantum simulation of 2D topological physics in a 1D array of optical cavities, Nature communications **6** (2015). G. Molina-Terriza, A. Vaziri, J. [Ř]{}eh[á]{}[č]{}ek, Z. Hradil, and A. Zeilinger, Triggered qutrits for quantum communication protocols, Physical review letters **92**, 167903 (2004). G. Vallone, V. D’Ambrosio, A. Sponselli, S. Slussarenko, L. Marrucci, F. Sciarrino, and P. Villoresi, Free-space quantum key distribution by rotation-invariant twisted photons, Physical review letters **113**, 060503 (2014). S. Ramachandran, P. Kristensen, and M. F. Yan, Generation and propagation of radially polarized beams in optical fibers, Optics letters **34**, 2525 (2009). N. Bozinovic, Y. Yue, Y. Ren, M. Tur, P. Kristensen, H. Huang, A. E. Willner, and S. Ramachandran, Terabit-scale orbital angular momentum mode division multiplexing in fibers, Science **340**, 1545 (2013). L. Marrucci, C. Manzo, and D. Paparo, Optical spin-to-orbital angular momentum conversion in inhomogeneous anisotropic media, Physical review letters **96**, 163905 (2006). V. D’Ambrosio, E. Nagali, C. H. Monken, S. Slussarenko, L. Marrucci, and F. Sciarrino, Deterministic qubit transfer between orbital and spin angular momentum of single photons, Optics letters **37**, 172 (2012). B. L. Higgins, D. W. Berry, S. D. Bartlett, H. M. Wiseman, and G. J. Pryde, Entanglement-free Heisenberg-limited phase estimation, Nature **450**, 393 (2007). X.-s. Ma, B. Dakic, W. Naylor, A. Zeilinger, and P. Walther, Quantum simulation of the wavefunction to probe frustrated Heisenberg spin systems, Nature Physics **7**, 399 (2011). I. Marcikic, H. de Riedmatten, H. Zbinden, N. Gisin *et al.*, Long-distance teleportation of qubits at telecommunication wavelengths, Nature **421**, 509 (2003). X.-s. Ma, S. Zotter, N. Tetik, A. Qarry, T. Jennewein, and A. 
Zeilinger, A high-speed tunable beam splitter for feed-forward photonic quantum information processing, Optics Express **19**, 22723 (2011). M. Mafu, A. Dudley, S. Goyal, D. Giovannini, M. McLaren, M. J. Padgett, T. Konrad, F. Petruccione, N. L[ü]{}tkenhaus, and A. Forbes, Higher-dimensional orbital-angular-momentum-based quantum key distribution with mutually unbiased bases, Physical Review A **88**, 032305 (2013). Y. Zhang, F. S. Roux, T. Konrad, M. Agnew, J. Leach, and A. Forbes, Engineering two-photon high-dimensional states through quantum interference, Science advances **2**, e1501165 (2016). A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, Entanglement of the orbital angular momentum states of photons, Nature **412**, 313 (2001). F.-X. Wang, W. Chen, Z.-Q. Yin, S. Wang, G.-C. Guo, and Z.-F. Han, Scalable orbital-angular-momentum sorting without destroying photon states, Physical Review A **94**, 033847 (2016). M. Krenn, A. Hochrainer, M. Lahiri, and A. Zeilinger, Entanglement by Path Identity, Physical review letters **118**, 080401 (2017).
--- abstract: | Gas detectors are one of the pillars of research in fundamental physics. For several years, a new concept of detectors, called Micro Pattern Gas Detectors (MPGD), has made it possible to overcome many of the problems of other commonly used detector types, such as drift chambers and microstrip detectors, by reducing the discharge rate and increasing the radiation tolerance.\ Among these, one of the most commonly used is the Gas Electron Multiplier (GEM). GEMs have become an important reality for fundamental physics detectors. Commonly deployed as fast timing detectors and triggers, due to their fast response, high rate capability and high radiation hardness, they can also be used as trackers.\ The readout scheme is one of the most important features in tracking technology. The center of gravity technique makes it possible to overcome the limit of digital pads, whose spatial resolution is constrained by the pitch dimension. The presence of a high external magnetic field can distort the electron cloud and degrade the spatial resolution. The micro-TPC ($\mu-$TPC) reconstruction method makes it possible to reconstruct the three-dimensional particle position, as in a traditional Time Projection Chamber, but within a drift gap of a few millimeters. This method opens a new perspective for the spatial resolution of these detectors in strong magnetic fields.\ In this report, the basis of this new technique is presented and compared to the traditional center of gravity. The results of a series of beam tests performed with $10 \times 10$ cm$^2$ planar prototypes in magnetic field are also presented.\ This is one of the first implementations of this technique for GEM detectors in magnetic field and makes it possible to reach unprecedented performance for gas detectors, down to $120$ $\mu$m at $1$ T, one of the world’s best results for MPGDs in strong magnetic field. 
The $\mu-$TPC reconstruction has been recently tested at very high rates in a test beam at the MAMI facility; preliminary results of the test will be presented. author: - | L. Lavezzi$^{*,a,f}$, M. Alexeev$^f$, A. Amoroso$^{f,l}$, R. Baldini Ferroli$^{a,c}$, M. Bertani$^c$, D. Bettoni$^b$, F. Bianchi$^{f,l}$, A. Calcaterra$^c$, N. Canale$^b$, M. Capodiferro$^{c,e}$, V. Carassiti$^b$, S. Cerioni$^c$, JY. Chai$^{a,f,h}$, S. Chiozzi$^b$, G. Cibinetto$^b$, F. Cossio$^{f,h}$, A. Cotta Ramusino$^b$, F. De Mori$^{f,l}$, M. Destefanis$^{f,l}$, J. Dong$^c$, F. Evangelisti$^b$, R. Farinelli$^{b,i}$, L. Fava$^f$, G. Felici$^c$, E. Fioravanti$^b$, I. Garzia$^{b,i}$, M. Gatta$^c$, M. Greco$^{f,l}$, CY. Leng$^{a,f,h}$, H. Li$^{a,f}$, M. Maggiora$^{f,l}$, R. Malaguti$^b$, S. Marcello$^{f,l}$, M. Melchiorri$^b$, G. Mezzadri$^{b,i}$, M. Mignone$^f$, G. Morello$^c$, S. Pacetti$^{d,k}$, P. Patteri$^c$, J. Pellegrino$^{f,l}$, A. Pelosi$^{c,e}$, A. Rivetti$^f$, M. D. Rolo$^f$, M. Savrié$^{b,i}$, M. Scodeggio$^{b,i}$, E. Soldani$^c$, S. Sosio$^{f,l}$, S. Spataro$^{f,l}$, E. Tskhadadze$^{c,g}$, S. Verma$^i$, R. Wheadon$^f$, L. Yan$^f$\ [^1] [^2] [^3] [^4] [^5] [^6] [^7] [^8] [^9] [^10] [^11] [^12] for the CGEM-IT Group title: 'Performance of the micro-TPC Reconstruction for GEM Detectors at High Rate' --- GEM, gas detector, $\mu-$TPC, high rate. Introduction ============ High energy physics research requires a constant improvement in machine performance. For example, the increasing accelerator luminosity, which makes it possible to acquire large data samples, forces the detectors to keep up with ever increasing particle rates. This translates into the need both to choose detectors capable of withstanding strong radiation doses without significant aging and to develop new reconstruction methods, able to cope with the [*crowded*]{} environment.\ In $1997$ F. 
Sauli [@sauli] invented the Gas Electron Multiplier (GEM) to allow gas-based trackers to work under higher particle rates [@pdg].\ In a standard gas-based tracker, the charged particle ionizes the gas, producing electrons and positive ions. The electrons follow the electric drift field lines to a region where the electric field becomes so intense that they undergo avalanche multiplication. The resulting number of electrons is sufficient to induce a signal on the readout. In standard trackers the high electric field is generated by wires, but this already creates discharge problems at $10^3$ Hz mm$^{-2}$.\ A more robust way to obtain electron multiplication is the GEM: it consists of a thin ($\sim 50$ $\mu$m) polymeric foil, covered on both sides by two thinner ($\sim 3$ $\mu$m) layers of copper. The foil is pierced with thousands of double-conical holes, with an inner diameter of $50$ $\mu$m (see fig.\[fig:gem\]). ![Detail of a GEM foil [@sauli].[]{data-label="fig:gem"}](gem.png){width="0.8\columnwidth"} A voltage of a few hundred volts is applied between the copper layers and, owing to the tiny dimensions of the holes, it creates an electric field of several tens of kV/cm inside them. When the electrons resulting from the ionization of the gas move along the drift field lines and enter the holes, they meet an electric field intense enough to produce avalanche multiplication with a gain of some $10^4$. This makes GEM-based detectors more rate tolerant than wire-based ones. The effect is even stronger when several GEM foils are placed in series instead of just one [@bachman]. The reconstruction methods ========================== Since these are tracking detectors, position reconstruction is their primary goal. Two algorithms are currently available to reconstruct the particle position: the center of gravity method, commonly called [*charge centroid*]{} (CC), and the micro-TPC method ($\mu-$TPC). 
The choice between them depends strongly on the shape of the charge distribution on the readout plane.\ The standard layout of a triple-GEM detector is shown in fig.\[fig:triple\_gem\]. ![The different sections of a triple-GEM.[]{data-label="fig:triple_gem"}](triple_gem.png){width="0.7\columnwidth"} It consists of: - a cathode, at negative potential; - a drift gap, where the ionization happens; - a set of three GEM foils, with (transfer) gas gaps between them, to produce the multiplication; - an induction gap, where the induction of the signal on the final electrode, the anode, begins; - an anode, at ground, where the strips of the readout are placed and the signal is collected. Usually two views are available, for 2D reconstruction on the plane. If the anode plane is read out digitally, then the [*on/off*]{} information is the only one available from the strips and the pitch dominates the position resolution. If an analog readout is applied, the charge deposited on each strip is also available, allowing a more refined position determination. Moreover, if the time of arrival of the signal is recorded as well, a new degree of freedom opens up in the position reconstruction, as will be shown in the following.\ The shape of the charge distribution on the anode is determined by the physical effects which come into play along the electron path from the primary ionization points to the readout plane. The most important effects are the diffusion and the possible presence of the Lorentz force.\ The former is due to the motion inside the gas: multiple scattering enlarges the electron cloud and spreads it over more than one strip on the readout plane.\ The latter is present if a magnetic field is applied, usually orthogonal to the electric drift field. The electron trajectories are bent, and the charge distribution at the anode is not only enlarged but also departs strongly from a Gaussian shape and is no longer easily parametrizable. 
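For orientation, the diffusion broadening follows the usual square-root scaling with drift length. A minimal sketch, where the transverse diffusion coefficient is an illustrative figure typical of Ar-based mixtures, our assumption and not a number taken from this text:

```python
import math

def diffusion_spread_um(drift_len_cm, d_t=250.0):
    """Transverse cloud spread sigma = D_T * sqrt(L),
    with D_T given in um / sqrt(cm) and L in cm."""
    return d_t * math.sqrt(drift_len_cm)

# electrons created at the cathode drift across the full 5 mm gap
sigma = diffusion_spread_um(0.5)
```

With these numbers the cloud spreads by roughly 180 μm, enough to share charge among neighboring strips, which is what makes the charge-weighted reconstruction of the next section meaningful.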
Charge centroid --------------- The charge centroid is the weighted average of the firing strip positions, the weights being the charge measured on each strip (eq.\[eq:cc\]). $$x = \frac{\sum\limits_i x_i q_i}{\sum\limits_i q_i} \label{eq:cc}$$ It is the simplest reconstruction possible, requiring only the strip positions and the charge values, and it performs well when the charge distribution is Gaussian. For non-Gaussian shapes it no longer provides a good position resolution. Inclined incident tracks at large angles and/or the presence of a strong magnetic field may create a situation where the CC method cannot guarantee a good spatial resolution. In these cases, when the CC fails, another method must be adopted: the $\mu-$TPC. $\mu-$TPC mode -------------- This method was first introduced in ATLAS, for the Micromegas detector [@utpc]. As the name suggests, the idea behind it is to use the GEM drift gap as a [*micro Time Projection Chamber*]{}. By measuring the time of arrival of the signal on each strip and knowing the drift velocity in the specific gas under the working conditions, it is possible to calculate the position of the primary ionization point.\ In fig.\[fig:utpc\] ![Sketch of the track reconstruction inside the drift gap with the $\mu-$TPC.[]{data-label="fig:utpc"}](microtpc.png){width="0.8\columnwidth"} the $\mu-$TPC concept is sketched. Once a cluster is found, from the $x$ position of the strip on the anode and the $z$ position of the primary ionization in the drift gap, pairs of $(x, z)$ coordinates and $(dx,dz)$ errors are assigned to each strip and a fit with a straight line is performed. The $dx$ errors account for the uncertainty of the hit within the finite strip pitch plus a weight depending on the charge on the strip; the error $dz$ results from the propagation of the time measurement error. 
The best position measurement (eq.\[eq:utpc\]) corresponds to the track fit at half-gap, where the interpolated position estimate minimizes the error. $$\label{eq:utpc} x = \frac{\frac{gap}{2} - b}{a}$$ The whole procedure is possible only if the time resolution of the detector is good enough to resolve the arrival times of the electron avalanches on different strips, and if the readout plane is highly segmented.\ The $\mu-$TPC clustering method was initially tested with inclined tracks and magnetic fields up to $1$ T. Data samples with chambers at $10^\circ$, $20^\circ$, $30^\circ$ and $45^\circ$ w.r.t. the beam direction have been collected. A data-driven correction procedure has been implemented, based on the identification of strip signals due to induced charge (using the time information and charge ratios) and the subsequent weighting or suppression of the first and/or last strips in the cluster. Fig.\[fig:resol\] ![Spatial resolution of the CC and $\mu-$TPC cluster reconstruction [*vs*]{} the incident angle of the track for Ar$:$iC$_4$H$_{10}$ ($90:10$) gas mixture at B $= 1$ T. Results obtained with a drift gap of $5$ mm, a drift field of $1.5$ kV/cm and a gain of $9000$ [@ric_ieee16].[]{data-label="fig:resol"}](res_ariso_B1T_angles.png){width="0.9\columnwidth"} shows the resolution for a $5$ mm gap prototype with Ar$:$iC$_4$H$_{10}$ ($90:10$) gas mixture as a function of the incident angle, together with the CC performance: a spatial resolution of about $130$ $\mu$m is achievable at all impact angles by combining the CC and $\mu-$TPC results.\ This is the first implementation of the $\mu-$TPC algorithm for a GEM detector in a strong magnetic field. A more detailed description of these results can be found in [@ric_ieee16]. 
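The two estimators can be contrasted on a toy cluster. The sketch below (strip pitch, drift velocity, angle, and charges are illustrative values, not the test-beam settings) implements eq. (\[eq:cc\]) and eq. (\[eq:utpc\]) with the per-strip depth taken as $z_i = v_{drift}\, t_i$:

```python
import numpy as np

def charge_centroid(x, q):
    """Eq. (cc): charge-weighted average of the firing strip positions."""
    x, q = np.asarray(x, float), np.asarray(q, float)
    return np.sum(x * q) / np.sum(q)

def mu_tpc_position(x, t, v_drift, gap):
    """Eq. (utpc): fit z = a*x + b to the ionization depths
    z_i = v_drift * t_i, then evaluate the track at half gap."""
    z = v_drift * np.asarray(t, float)
    a, b = np.polyfit(np.asarray(x, float), z, 1)
    return (gap / 2.0 - b) / a

# toy 30-degree track in a 5 mm gap, 0.65 mm strip pitch, v_drift in mm/ns
x = np.arange(8) * 0.65                      # strip positions, mm
t = x * np.tan(np.deg2rad(30.0)) / 0.04      # ideal arrival times, ns
cc = charge_centroid([0.0, 0.65, 1.30], [30.0, 80.0, 30.0])
tpc = mu_tpc_position(x, t, v_drift=0.04, gap=5.0)
```

In this noiseless toy both estimators are exact; the point of the $\mu-$TPC is that for inclined tracks or strong magnetic fields the real charge distribution is non-Gaussian, and there the straight-line fit degrades far less than the centroid.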
High rate test beam =================== As already anticipated, the behavior of the detector changes at high particle rates.\ In every gas detector, when an ionization occurs, the electron and the positive ion drift along the electric field lines; the electron drift velocity is high and produces a fast signal, while the ion mobility is lower, and at high rates positive charge may accumulate in some areas of the detector. This space charge may distort the electric field lines, with a consequent reduction of the gain. This can lead to a degradation of the spatial resolution and eventually to aging. The limit for wire detectors is known to be around $10^3$ Hz mm$^{-2}$, but GEMs can withstand much higher rates [@pdg]. The environment and the setup ----------------------------- The effect of high rate on the $\mu-$TPC performance has been recently studied in a test beam at the MAinz MIcrotron, MAMI [@mami] facility, in Mainz, Germany. This test was necessary since the detector must not only sustain high rates without damage, but also keep its performance unaltered in the challenging environment.\ The setup (shown in fig.\[fig:mamisetup\]) consisted of four triple-GEM planar chambers, $10 \times 10$ cm$^2$ with a $5$ mm drift gap. The tested gas mixtures were Ar$:$iC$_4$H$_{10}$ ($90:10$) and Ar$:$CO$_2$ ($70:30$), without magnetic field. The chambers could rotate, and the $\mu-$TPC studies were performed at an angle of $30^\circ$ w.r.t. the beam direction: the angle is necessary since the $\mu-$TPC is not applicable at $0^\circ$. The electron beam had a size of a few mm and could reach high rates.\ ![Setup installed at the MAMI facility.[]{data-label="fig:mamisetup"}](mami.png){width="0.8\columnwidth"} The results ----------- A key factor for GEM reliability is a stable gain value. Moreover, the main parameters the $\mu-$TPC depends on are the drift velocity and the time resolution. 
For these reasons, the variation of these variables with increasing particle rate has been studied.\ \ Fig.\[fig:charge\] shows that the cluster mean charge is constant up to $10^6$ Hz cm$^{-2}$, then it increases up to $10^7$ Hz cm$^{-2}$ and eventually it drops. This behavior closely resembles the one shown for the gain in Sauli’s recent review on GEMs [@sauli2016], which we reproduce here for convenience (fig.\[fig:gain\]). When we [*scale*]{} the charge [*vs*]{} rate plot to a gain [*vs*]{} rate one, the resulting behavior is compatible with the one shown in fig.\[fig:gain\], even though a direct comparison of the values is not possible due to the different electrical settings. The gain is stable up to $10^6$ Hz cm$^{-2}$, increases up to $10^7$ Hz cm$^{-2}$ and drops afterwards.\ ![Charge [*vs*]{} rate.[]{data-label="fig:charge"}](charge.png){width="0.9\columnwidth"} ![Gain [*vs*]{} rate from [@sauli2016].[]{data-label="fig:gain"}](gain_sauli.png){width="0.8\columnwidth"} An explanation for this peculiar behavior is given in [@thuiner]: the space charge due to the positive ions modifies the electric field and increases the transparency of the GEM. The transparency is defined as the collection efficiency multiplied by the extraction efficiency, i.e., the probability that an electron drifts inside the GEM hole multiplied by the fraction of electrons which, after the avalanche, exit from it. Usually the electric fields in the various gas gaps, especially the transfer gaps between the GEMs, must be carefully optimized to find a compromise between the extraction efficiency of the previous GEM and the collection efficiency of the following one. The positive charge accumulation due to the high rates modifies the electric field in these regions in such a way that both these efficiencies are enhanced and the full triple-GEM becomes [*more transparent*]{}. 
This increases the effective gain.\ This situation is transitory, and when the space charge is too high the gain suddenly falls to a lower value.\ \ The time resolution can be evaluated by histogramming the time difference measured by two adjacent strips for the same event. This $\Delta t$ contains not only the resolution of the detector, but also the effect of the intrinsic time resolution of the electronics. In this test beam, the data were collected through the APV-25 ASIC [@apv], which samples the charge every $25$ ns. By deconvolving the APV-25 contribution from the $\Delta t$ distribution, the obtained resolution is $8.4$ ns for the Ar$:$iC$_4$H$_{10}$ mixture.\ The $\Delta t$ [*vs*]{} rate is shown in fig.\[fig:time\]: it starts worsening only above $10^7$ Hz cm$^{-2}$.\ ![$\Delta t$ [*vs*]{} rate.[]{data-label="fig:time"}](time.png){width="0.9\columnwidth"} \ The last evaluated parameter was the drift velocity. It can be extracted from the data by plotting all the measured times and computing the difference between the leading and trailing edges of the obtained distribution. Its behavior as a function of the rate (fig.\[fig:velocity\]) shows that, again, there is a relevant change only above $10^7$ Hz cm$^{-2}$. At higher rates the electrons are slower, and this is expected to affect the $\mu-$TPC performance.\ ![Drift velocity [*vs*]{} rate.[]{data-label="fig:velocity"}](velo_vs_rate.png){width="0.9\columnwidth"} Conclusion ========== The test at MAMI confirms that no relevant changes occur in the parameters that influence the $\mu-$TPC reconstruction resolution up to $10^7$ Hz cm$^{-2}$, and this gives good expectations for the $\mu-$TPC behavior up to this particle rate. More studies and the actual reconstruction of the collected data with the $\mu-$TPC mode are necessary to certify its applicability at these rate levels. 
This, however, was the first test of the limits of the $\mu-$TPC at high rates.\ As an additional result, we observed that the gain showed the peculiar behavior seen in previous tests. Acknowledgment {#acknowledgment .unnumbered} ============== The authors wish to thank Werner Lauth (MAMI) for his help in the test beam realization, as well as Giovanni Bencivenni (LNF) and Eraldo Oliveri (CERN) for the fruitful discussion on the results.\ The research leading to these results has been performed within the BESIIICGEM Project, funded by the European Commission in the call H2020-MSCA-RISE-2014. [99]{} F. Sauli, *GEM: A new concept for electron amplification in gas detectors*, *Nucl. Instr. and Meth. A* [**386**]{} (1997) 531-534 Particle Data Group 2016, http://pdg.lbl.gov/2016/reviews/rpp2016-rev-particle-detectors-accel.pdf S. Bachmann et al., *Discharge studies and prevention in the gas electron multiplier (GEM)*, *Nucl. Instr. and Meth. A* [**479**]{} (2002) 294 T. Alexopoulos et al., *Development of large size Micromegas detector for the upgrade of the ATLAS Muon system*, *Nucl. Instr. and Meth. A* [**617**]{} (2010) 161 presented by R. Farinelli at 2016 IEEE NSS/MIC\ proc. M. Alexeev et al., *Development and test of a $\mu$TPC cluster reconstruction for a triple GEM detector in strong magnetic field*, Nuclear Science Symposium, Medical Imaging Conference and Room-Temperature Semiconductor Detector Workshop (NSS/MIC/RTSD), 2016 MAMI facility, http://www.kph.uni-mainz.de/eng/108.php F. Sauli, *The gas electron multiplier (GEM): Operating principles and applications*, *Nucl. Instr. and Meth. A* [**805**]{} (2016) fig.43 P. Thuiner, CERN-THESIS-2016-199 M. 
Raymond et al., [The APV25 $0.25$ $\mu$m CMOS readout chip for the CMS tracker]{}, IEEE NSS Conference Record, (2000) 113-118 [^1]: $*$ speaker, e-mail: lia.lavezzi@to.infn.it [^2]: $^a$ Institute of High Energy Physics, Beijing, China [^3]: $^b$ INFN, Sezione di Ferrara, Italy [^4]: $^c$ INFN, Laboratori Nazionali di Frascati, Italy [^5]: $^d$ INFN, Sezione di Perugia, Italy [^6]: $^e$ INFN, Sezione di Roma, Italy [^7]: $^f$ INFN, Sezione di Torino, Italy [^8]: $^g$ Joint Institute for Nuclear Research (JINR), Dubna, Russia [^9]: $^h$ Politecnico di Torino, Italy [^10]: $^i$ Università di Ferrara, Italy [^11]: $^k$ Università di Perugia, Italy [^12]: $^l$ Università di Torino, Italy
---
abstract: 'This paper is primarily concerned with generalized reduced Verma modules over $\mathbb{Z}$-graded modular Lie superalgebras. Some properties of the generalized reduced Verma modules and the coinduced modules are obtained. Moreover, the invariant forms on the generalized reduced Verma modules are considered. In particular, we prove that the generalized reduced Verma module is isomorphic to the mixed product for modules of $\mathbb{Z}$-graded modular Lie superalgebras of Cartan type.'
author:
- |
    <span style="font-variant:small-caps;">Keli Zheng$^{1,2}$</span> <span style="font-variant:small-caps;">Yongzheng Zhang$^{2}$</span>[^1]\
    \
    *$^1$Department of Mathematics, Northeast Forestry University*\
    *Harbin 150040, P.R. China*\
    *$^2$School of Mathematics and Statistics*, *Northeast Normal University*\
    *Changchun 130024, P.R. China.*
date:
---

**Keywords:** Modular Lie superalgebra, generalized reduced Verma module, coinduced module, invariant form, mixed product\
**2000 Mathematics Subject Classification:** 17B50, 17B10, 17B70 [^2]

Introduction
============

As is well known, representation theory plays an important role in the study of Lie algebras and Lie superalgebras (see [@ross; @H; @1; @M] for examples). The question about the structure of submodules of a Verma module arose in the original paper of Verma [@Verma]. As a natural generalization of Verma modules, generalized Verma modules are modules induced, starting from arbitrary simple modules (not necessarily finite-dimensional), from a parabolic subalgebra of a complex semisimple Lie algebra (see [@V; @V1; @xin; @su2]). One of the main questions about generalized Verma modules is their structure, i.e., reducibility, submodules, equivalence, etc. The theory of generalized Verma modules is rather similar to that of Verma modules.
Some results on Verma modules (see [@BGG; @D]) were extended to a certain class of generalized Verma modules in [@R; @FM; @MS1; @KM1; @MO] (see also references therein). But only rather particular classes of generalized Verma modules were covered, and the problem of what can be said in the general case remains open. Generalized reduced Verma modules over modular Lie algebras were constructed in [@Farnsteiner]. Some properties of generalized reduced Verma modules over modular Lie algebras were obtained (see [@Farnsteiner; @Farnsteiner1; @qiusen]). Since generalized reduced Verma modules are closely related to mixed products of modules, the structure of mixed products seems to be important and interesting. In [@shen1; @shen2; @shen3], Shen classified the $\mathbb{Z}$-graded irreducible representations of the $\mathbb{Z}$-graded Lie algebras of Cartan type. His approach rests on the notion of the mixed product. In [@qiusen] the graded modules of graded Cartan type Lie algebras which possess a nondegenerate invariant form were determined by Chiu. In the case of modular Lie superalgebras of Cartan type, $\mathbb{Z}$-graded modules of the $\mathbb{Z}$-graded Lie superalgebras $W(n)$, $S(n)$ and $H(n)$, mixed products of modules of infinite-dimensional Lie superalgebras and $\mathbb{Z}$-graded modules of finite-dimensional Hamiltonian Lie superalgebras were obtained in [@zhang1; @zhang2; @zhang3; @zhang4], respectively. The aim of this paper is to partially generalize some beautiful results about generalized reduced Verma modules over modular Lie algebras in [@Farnsteiner; @Farnsteiner1; @qiusen]. In Section 2, we review some necessary notions. In Section 3, some relations between generalized reduced Verma modules and coinduced modules are given. In Section 4, the invariant forms on generalized reduced Verma modules are considered.
In Section 5, we prove that generalized reduced Verma modules are isomorphic to mixed products for modules of $\mathbb{Z}$-graded modular Lie superalgebras of Cartan type. All Lie superalgebras and modules treated in the present paper are assumed to be finite dimensional. In [@M; @zhang] the reader can find all notations and notions of Lie superalgebras and modular representations which are not precisely defined in this paper.

Preliminaries
=============

Throughout this paper we will assume that $\mathbb{F}$ is a field of prime characteristic and $\mathbb{Z}_{2}=\{\bar{0},\bar{1}\}$ is the residue class ring mod $2$. Let $L=L_{\bar{0}}\oplus L_{\bar{1}}$ be a Lie superalgebra over $\mathbb{F}$. Then $\mathbb{F}$ has a trivial structure of a $\mathbb{Z}_{2}$-graded $L$-module: $\mathbb{F}_{\bar{0}}=\mathbb{F}$, $\mathbb{F}_{\bar{1}}=0$. Furthermore, we always assume that the representation of $L$ in $\mathbb{F}$ is equal to zero. In addition to the standard notation $\mathbb{Z}$, we write $\mathbb{N}$ and $\mathbb{N}_{0}$ for the set of positive integers and the set of nonnegative integers, respectively. Denote by $\mathbb{N}_{0}^{k}$ the set of $k$-tuples with nonnegative integers as entries. For any Lie superalgebra $L$ over $\mathbb{F}$, let $U(L)$ denote the universal enveloping algebra of $L$. If $L=\oplus_{i\in \mathbb{Z}}L_{i}$ is a $\mathbb{Z}$-graded Lie superalgebra over $\mathbb{F}$, we customarily put $L^{+}=\oplus_{i> 0}L_{i}$ and $L^{-}=\oplus_{i< 0}L_{i}$. Then $L=L^{+}\oplus L_{0}\oplus L^{-}$ and $U(L)=U(L^{+})U(L_{0})U(L^{-})$. Without being mentioned explicitly, if $d(x)$ ($zd(x)$) occurs in some expression in this paper, we always regard $x$ as a $\mathbb{Z}_{2}$-homogeneous ($\mathbb{Z}$-homogeneous) element and $d(x)$ ($zd(x)$) as the $\mathbb{Z}_{2}$-degree ($\mathbb{Z}$-degree) of $x$. Let $V$ and $W$ be $L$-modules and suppose that $f$ is a $\mathbb{Z}_{2}$-homogeneous element of $\mathrm{Hom}_{\mathbb{F}}(V,W)$.
The mapping $f$ is called a *homomorphism* of $L$-modules if $(x\cdot f)(v)=(-1)^{d(x)d(f)}f(x\cdot v)$ for all $x\in L$ and $v\in V$. The mapping $f$ is said to be an *isomorphism* of $L$-modules if $f$ is a homomorphism and if, furthermore, $f$ is a bijection. Let $V$ be an $L$-module. The vector space $V^{*}:=\mathrm{Hom}_{\mathbb{F}}(V,\mathbb{F})$ obtains the structure of an $L$-module by means of $(x\cdot f)(v)=-(-1)^{d(x)d(f)}f(x\cdot v)$, where $x\in L$, $v\in V$, $f\in V^{*}$. Clearly, $d(x\cdot f)=d(x)+d(f)$. We consider the subalgebra $K:=L_{0}\oplus L^{+}$ of a $\mathbb{Z}$-graded Lie superalgebra $L=\oplus_{i\in \mathbb{Z}}L_{i}$. Let $\{e_{1},\ldots,e_{k}\}$ be a basis of $L^{-}\cap L_{\bar{0}}$ and $\{\xi_{1},\ldots,\xi_{l}\}$ be a basis of $L^{-}\cap L_{\bar{1}}$. As $L^{-}\cap L_{\bar{0}}$ operates on $L$ by nilpotent transformations, there exist $m_{i}\in \mathbb{N}_{0}$, $1\leq i\leq k$ such that $$z_{i}:=e_{i}^{p^{m_{i}}}\in U(L^{-})\cap Z(U(L)), \quad 1\leq i\leq k,$$ where $Z(U(L))$ is the center of $U(L)$. In particular, $\{z_{i}\mid 1\leq i\leq k \}$ are homogeneous elements relative to the $\mathbb{Z}$-gradation inherited by $U(L_{\bar{0}})$. An application of the P-B-W theorem (see [@ross]) reveals that the subalgebra $\theta(L,K)$ of $U(L)$, which is generated by $K$ and $\{z_{1},\ldots,z_{k}\}$, is isomorphic to $\mathbb{F}[z_{1},\ldots,z_{k}]\otimes_{\mathbb{F}}U(K)$, where $\mathbb{F}[z_{1},\ldots,z_{k}]$ is a polynomial ring in $k$ indeterminates. Then an easy computation shows that $\theta(L,K)$ is a $\mathbb{Z}$-graded subalgebra of $U(L)$. Given $\alpha=(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{N}_0^{k}$, we put $|\alpha|:=\sum_{i=1}^{k}\alpha_{i}$, $e^{\alpha}:=e_{1}^{\alpha_{1}}e_{2}^{\alpha_{2}}\cdots e_{k}^{\alpha_{k}}$ and $\pi:=(\pi_{1},\ldots,\pi_{k})=(p^{m_{1}}-1,\ldots,p^{m_{k}}-1)$.
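The centrality of the $z_{i}$ rests on a standard characteristic-$p$ identity; the following sketch (ours, not part of the original argument) records why the choice of the $m_{i}$ works.

```latex
% In an associative algebra over a field of characteristic p one has
% (ad a)^p = ad(a^p); iterating this for the even element e_i gives
\mathrm{ad}_{U(L)}\bigl(e_{i}^{\,p^{m_{i}}}\bigr)
  \;=\;\bigl(\mathrm{ad}\,e_{i}\bigr)^{p^{m_{i}}}.
% Since ad(e_i) is nilpotent on L, one may choose m_i with
% (ad e_i)^{p^{m_i}}(L) = 0; then ad(z_i) vanishes on the generators
% of U(L), hence on all of U(L), so z_i lies in Z(U(L)).
```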
Set $$\mathbb{B}_{s}:=\left\{\langle i_{1},i_{2},\ldots,i_{s}\rangle\mid 1\leq i_{1}< i_{2}<\cdots< i_{s}\leq l\right\}$$ and $\mathbb{B}:=\bigcup_{s=0}^{l}\mathbb{B}_{s}$, where $\mathbb{B}_{0}:=\emptyset$ and $l\in \mathbb{N}$. For $u=\langle i_{1},i_{2},\ldots,i_{s}\rangle\in \mathbb{B}_{s}$, set $|u|:=s$, $|\emptyset|:=0$, $\xi^{\emptyset}:=1$, $\xi^{u}:=\xi_{i_{1}}\xi_{i_{2}}\cdots \xi_{i_{s}}$ and $\xi^{E}:=\xi_{1}\xi_{2}\cdots \xi_{l}$. Moreover, $u$ is also used to stand for the index set $\{ i_{1},i_{2},\ldots,i_{s}\}$. It is easy to show that $U(L)$ is a $\mathbb{Z}$-graded $\theta(L,K)$-module with the basis $$\{e^{\alpha}\xi^{u}\mid 0\leq \alpha\leq \pi, u\in \mathbb{B}\}.$$ Any $K$-module $V$ obtains the structure of a $\theta(L,K)$-module by letting $\mathbb{F}[z_{1},\ldots,z_{k}]$ act via its canonical augmentation, which sends $z_{i}$ to $0$. Henceforth, $K$-modules will be regarded as $\theta(L,K)$-modules in this fashion. Let $\rho$ be the natural representation of $K$ in $L/K$. Then there exists a unique homomorphism $\sigma:U(K)\rightarrow \mathbb{F}$ of $\mathbb{F}$-superalgebras such that $\sigma(x)=\mathrm{str}(\rho(x))$, where $x$ is an arbitrary element of $K$ and $\mathrm{str}(\rho(x))$ is the supertrace of $\rho(x)$ (see [@1; @M]).
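For orientation, recall how the supertrace is computed; the block form below is a generic illustration (ours), not a formula from the paper.

```latex
% An even endomorphism of a superspace V = V_{\bar0} \oplus V_{\bar1}
% has block-diagonal form, and its supertrace is
\rho(x)=\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix},
\qquad
\mathrm{str}(\rho(x))=\mathrm{tr}(A)-\mathrm{tr}(B).
% The supertrace vanishes on supercommutators, which is why the linear
% functional x \mapsto str(\rho(x)) extends to an algebra homomorphism
% \sigma : U(K) \to \mathbb{F}.
```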
We introduce a twisted action on a $K$-module $V$ by setting $$x\circ v=x\cdot v+\sigma(x)v, \quad x\in K, \quad v\in V.$$ Note that $\sigma(x)=0$ for $x\in K_{\bar{1}}$; then $$\begin{aligned} [x,y]\circ v&=&[x,y]\cdot v+\sigma([x,y])v\\ &=&x\cdot (y\cdot v)-(-1)^{d(x)d(y)}y\cdot (x\cdot v)+\sigma(x)\sigma(y)v-(-1)^{d(x)d(y)}\sigma(y)\sigma(x)v\\ &=&x\cdot (y\cdot v)+\sigma(y)x\cdot v+\sigma(x)y\cdot v+\sigma(x)\sigma(y)v\\ & &-(-1)^{d(x)d(y)}y\cdot (x\cdot v)-(-1)^{d(x)d(y)}\sigma(y)x\cdot v\\ & &-(-1)^{d(x)d(y)}\sigma(x)y\cdot v-(-1)^{d(x)d(y)}\sigma(y)\sigma(x)v\\ &=&x\cdot(y\circ v)+\sigma(x)(y\circ v)-(-1)^{d(x)d(y)}y\cdot(x\circ v)-(-1)^{d(x)d(y)}\sigma(y)(x\circ v)\\ &=&x\circ (y\circ v)-(-1)^{d(x)d(y)}y\circ (x\circ v),\end{aligned}$$ i.e., $V$ becomes a new $K$-module under the twisted action. The new $K$-module will be denoted by $V_{\sigma}$. If $V$ is an $L_{0}$-module, then we can extend the operations on $V$ to $K$ by letting $L^{+}$ act trivially and regard it as a $K$-module.

Generalized reduced Verma modules and coinduced modules
=======================================================

Let $L$ be a $\mathbb{Z}$-graded Lie superalgebra over $\mathbb{F}$ and $V$ be a $K$-module. Following [@Farnsteiner], we give the following definition. The induced module $\mathrm{Ind}_{K}(V):=U(L)\otimes_{\theta(L,K)}V$ is called a *generalized reduced Verma module*. The coinduced module $\mathrm{Hom}_{\theta(L,K)}(U(L),V)$ will be denoted by $\mathrm{Coind}_{K}(V)$. It is clear from the above construction that the modules $\mathrm{Ind}_{K}(V)$ and $\mathrm{Coind}_{K}(V)$ are annihilated by $z_{i}$.
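The last assertion can be checked in one line for the induced module; here is a verification (our sketch), using only that $z_{i}$ is central in $U(L)$, lies in $\theta(L,K)$, and acts as zero on $V$:

```latex
% For y \in U(L) and v \in V:
z_{i}\cdot(y\otimes v)\;=\;(z_{i}y)\otimes v\;=\;(yz_{i})\otimes v
  \;=\;y\otimes(z_{i}\cdot v)\;=\;0.
% The middle equalities use z_i \in Z(U(L)) and the \theta(L,K)-balanced
% tensor product; the last uses that z_i acts on V by 0.
```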
Consider $\mathrm{Coind}_{K}(V)$ with $U(L)$-action given via $$(y\cdot f)(x):=(-1)^{d(y)(d(f)+d(x))}f(xy), \quad x,y\in U(L).$$ For $v\in V$, $0\leq\beta \leq \pi$ and $u,t\in \mathbb{B}$, let $\chi_{v}^{(\beta,t)}$ be the element of $\mathrm{Coind}_{K}(V)$ which sends $e^{\alpha}\xi^{u}$ onto $(-1)^{d(\chi_{v}^{(\beta,t)})d(\xi^{u})}\delta(\alpha,\beta)\delta(u,t)v$, where $\delta(i,j)$ is the Kronecker delta, defined by $\delta(i,j)=1$ if $i=j$ and $\delta(i,j)=0$ otherwise. It is easily verified that $$\chi_{v}^{(\beta,t)}(e^{\beta}\xi^{t}\vartheta)=(-1)^{d(\vartheta)( d(\chi_{v}^{(\beta,t)})+d(\xi^{t}))+d(\chi_{v}^{(\beta,t)})d(\xi^{t})}\vartheta\circ v\label{eq1}$$ and $d(\chi_{v}^{(\beta,t)})=d(\xi^{t})+d(v)$, for all $\vartheta\in \theta(L,K)$ and all $v\in V_{\sigma}$. \[lm1\] There is a natural isomorphism of functors $$\Phi:\mathrm{Ind}_{K}(V_{\sigma})\rightarrow \mathrm{Coind}_{K}(V)$$ such that $\Phi(y\otimes v)=(-1)^{d(y)d(\Phi)}y\cdot \chi_{v}^{(\pi,E)}$, where $y\in U(L)$ and $v\in V_{\sigma}$. Suppose that the bilinear mapping $\psi: U(L)\times V_{\sigma}\rightarrow \mathrm{Hom}_{\mathbb{F}}(U(L),V)$ is defined by $\psi(y,v)=(-1)^{d(y)d(\psi)}y\cdot \chi_{v}^{(\pi,E)}$. Let $\vartheta\in \theta(L,K)$ and $u'\in U(L)$.
Then the equation (\[eq1\]) and $d(\chi_{v}^{(\pi,E)})=d(\psi)+d(v)$ imply that $$\begin{aligned} \psi(y\vartheta,v)(u')&=&(-1)^{(d(y)+d(\vartheta))d(\psi)}y\vartheta\cdot \chi_{v}^{(\pi,E)}(u')\\ &=&(-1)^{(d(y)+d(\vartheta))(d(v)+d(u'))}\chi_{v}^{(\pi,E)}(u'y\vartheta)\\ &=&(-1)^{d(y)(d(v)+d(\vartheta)+d(u'))+d(\vartheta)d(\psi)+(d(\psi)+d(v))(d(u')+d(y))}\vartheta\circ v\\ &=&(-1)^{d(y)(d(v)+d(\vartheta)+d(u'))+(d(\vartheta)+d(\psi)+d(v))(d(u')+d(y))}\vartheta\circ v\\ &=&(-1)^{d(y)(d(v)+d(\vartheta)+d(u'))}\chi_{\vartheta\circ v}^{(\pi,E)}(u'y)\\ &=&(-1)^{d(y)d(\psi)}y\cdot \chi_{\vartheta\circ v}^{(\pi,E)}(u')\\ &=&\psi(y,\vartheta\circ v)(u').\end{aligned}$$ Consequently, $\psi$ is $\theta(L,K)$-balanced, and induces a mapping $$\Phi:U(L)\otimes_{\theta(L,K)} V_{\sigma}\rightarrow \mathrm{Hom}_{\mathbb{F}}(U(L),V).$$ The verification of the inclusion $\mathrm{im}\psi\subseteq \mathrm{Hom}_{\theta(L,K)}(U(L),V)$ is routine. For any $x,y\in U(L)$ and $v\in V_{\sigma}$, we have $$(x\cdot \Phi)(y\otimes v)=(-1)^{d(y)d(\Phi)}((xy)\cdot \chi_{v}^{(\pi,E)})=(-1)^{d(x)d(\Phi)}\Phi(x\cdot(y\otimes v)).$$ Hence $\Phi$ is a homomorphism of $U(L)$-modules. For any $f\in \mathrm{Coind}_{K}(V)$, we have $$f=\sum_{\alpha,u}(-1)^{d(f)d(\xi^{u})}\chi_{f(e^{\alpha}\xi^{u})}^{(\alpha,u)},$$ where $0\leq \alpha\leq \pi$ and $u\in \mathbb{B}$. Then $\Phi(\sum_{\alpha,u}(-1)^{d(f)d(\xi^{u})}e^{\alpha}\xi^{u}\otimes f(e^{\alpha}\xi^{u}))=f$, i.e., $\Phi$ is a surjection. Suppose that $0=y\cdot \chi_{v}^{(\pi,E)}\in \mathrm{Coind}_{K}(V)$ with $y=e^{\alpha}\xi^{u}\in U(L)$. Then there exists $u'=e^{\beta}\xi^{t}\in U(L)$ such that $\alpha+\beta=\pi$ and $u+t=E$. It follows that $$0=y\cdot \chi_{v}^{(\pi,E)}(u')=(-1)^{d(y)(d(u')+d(\chi_{v}^{(\pi,E)}))+d(\chi_{v}^{(\pi,E)})(d(u')+d(y))}v.$$ Therefore, $y\otimes v=0$, i.e., $\Phi$ is an injection. Now we show that $\Phi$ is a natural homomorphism.
Suppose that $W$ is a $K$-module and $\varphi:V\rightarrow W$ is a homomorphism of $K$-modules. Clearly, $\varphi$ is also a homomorphism between $V_{\sigma}$ and $W_{\sigma}$. We claim that the following diagram is commutative. $$\begin{CD} \mathrm{Ind}_{K}(V_{\sigma}) @>\Phi>> \mathrm{Coind}_{K}(V) \\ @V\mathrm{id}\otimes \varphi VV @VV\varphi^{*}V \\ \mathrm{Ind}_{K}(W_{\sigma}) @>\Phi'>> \mathrm{Coind}_{K}(W) \end{CD}$$ Since $\varphi^{*}$ and $\mathrm{id}\otimes \varphi$ are homomorphisms of $U(L)$-modules, the assertion follows from the ensuing calculation: $$\varphi^{*}\circ \Phi(1\otimes v)(u')=\chi_{\varphi(v)}^{(\pi,E)}(u')=(\Phi'\circ(\mathrm{id}\otimes \varphi))(1\otimes v)(u'),\quad u'\in U(L).$$ In conclusion, the proof is completed. 1. If the above result is applied to the module $V_{-\sigma}$, then we obtain a natural isomorphism $\mathrm{Ind}_{K}(V)\cong\mathrm{Coind}_{K}(V_{-\sigma})$. 2. Suppose that $K$ acts nilpotently on $L/K$ or $(\rho(K))^{(1)}=\rho(K)$. Then $\sigma=0$ and every $K$-module $V$ gives an isomorphism $\mathrm{Ind}_{K}(V)\cong\mathrm{Coind}_{K}(V)$. Following [@shen2], we refer to a $\mathbb{Z}$-graded $L$-module $V$ as positively graded if $V=\bigoplus\limits_{i\geq 0}V_{i}$ and $L_{j}\cdot V_{i}\subseteq V_{i+j}$. A positively graded module $V$ is said to be transitive if $V_{0}=\{v\in V\mid x\cdot v=0$, for all $x\in L^{-}\}$. \[p1\] Let $P=\mathrm{Coind}_{K}(V)$ be an $L$-module and $$P_{i}:=\{f\in P\mid f(U(L)_{j})=0, j\neq -i\}.$$ Then 1. $P$ is a positively graded $L$-module. 2. $P_{0}$ is isomorphic to $V$ as an $L_{0}$-module. 3. $P$ is transitively graded. <!-- --> 1. Let $f$ be an element of $P_{i}$ and suppose that $y\in U(L)_{q}$, where $i,q\in \mathbb{Z}$. If $x\in U(L)_{j}$ for $j\neq -i-q$, then $xy\in U(L)_{j+q}$, where $j\in \mathbb{Z}$. It follows that $$(y\cdot f)(x)=(-1)^{d(y)(d(f)+d(x))}f(xy)=0.$$ Consequently, $(y\cdot f)$ belongs to $P_{i+q}$.
Let $\{x_{1},\ldots,x_{n}\}$ be the basis of $U(L)$ over $\theta(L,K)$ induced by $\{e_{1},\ldots,e_{k}\}$ and $\{\xi_{1},\ldots,\xi_{l}\}$. In accordance with the basis of $U(L)$, we may assume that $x_{r}=e^{\alpha}\xi^{u}\in U(L)_{i(r)}$, where $i(r)\leq 0$ and $1\leq r\leq n$. Any element of $U(L)_{q}$ is a sum of elements $x=\sum_{r=1}^{n}h_{r}x_{r}$, $h_{r}\in \theta(L,K)_{q-i(r)}$. Given $r\in \{1,2,\ldots,n\}$, we have $\chi_{v}^{(\alpha,u)}(x)=(-1)^{(d(x)+d(v))d(x)}h_{r}v$. If $q\neq i(r)$, then $\chi_{v}^{(\alpha,u)}(x)=0$. It follows that $\chi_{v}^{(\alpha,u)}$ is an element of $P_{-i(r)}$. For every $f\in P$, we have $f=\sum_{\alpha,u}(-1)^{d(f)d(\xi^{u})}\chi_{f(e^{\alpha}\xi^{u})}^{(\alpha,u)}$. Consequently, $P=\oplus_{r=1}^{n} P_{-i(r)}$ and $P$ is a positively graded module. 2. We proceed by showing that $\mu:P_{0}\rightarrow V$; $\mu(f)=f(1)$ is an isomorphism of $L_{0}$-modules. If $x\in L_{0}$, then $$\mu(x\cdot f)=(x\cdot f)(1)=(-1)^{d(x)d(f)}f(x)=x\cdot f(1)=x\cdot\mu(f),$$ i.e., $\mu$ is a homomorphism of $L_{0}$-modules. Since $1=e^{\alpha}\xi^{u}\in U(L)_{0}$ (with $\alpha=0$ and $u=\emptyset$) is contained in $\{x_{1},\ldots,x_{n}\}$, $(-1)^{(d(\xi^{u})+d(v))d(\xi^{u})}\chi_{v}^{(\alpha,u)}$ is a pre-image of $v\in V$ under $\mu$. Suppose that $f\in \mathrm{ker}\mu$. Owing to the P-B-W theorem, for every element $x\in U(L)_{0}$, we may assume that $x=\sum_{i+j=0}a_{i}b_{j}$, where $a_{i}\in U(K)_{i}$ and $b_{j}\in U(L^{-})_{j}$. Since $a_{i}=0$ for $i<0$ and $a_{i}\in U(L_{0})U(L^{+})$ for $i>0$, we obtain $$\begin{aligned} f(x)&=&\sum_{i+j=0}(-1)^{d(a_{i})d(f)}a_{i}f(b_{j})=(-1)^{d(a_{0})d(f)}a_{0}f(b_{0})\\ &=&(-1)^{(d(a_{0})+d(b_{0}))d(f)}a_{0}b_{0}f(1)=0.\end{aligned}$$ As a result, $f=0$ on $U(L)_{0}$ and thereby on all of $U(L)$. Therefore, $\mu$ is an isomorphism of $L_{0}$-modules. 3. Suppose that $f$ is an element of $P$ such that $x\cdot f=0$ for every $x\in L^{-}$. Then each $\mathbb{Z}$-homogeneous constituent of $f$ enjoys the same property.
Thus we may assume $f\in P_{q}$, where $q\in \mathbb{Z}$. Suppose that $q>0$ and $y$ is an element of $U(L)_{-q}$. Without loss of generality we may assume that $y=\sum_{i+j=-q}a_{i}b_{j}$, where $a_{i}\in U(K)_{i}$ and $b_{j}\in U(L^{-})_{j}$. As $a_{i}\cdot V=0$ for $i>0$, we have $$f(y)=\sum_{i+j=-q}(-1)^{d(a_{i})d(f)}a_{i}f(b_{j})=(-1)^{d(a_{0})d(f)}a_{0}f(b_{-q}).$$ Then it follows that $f(y)=(-1)^{(d(a_{0})+d(b_{-q}))d(f)}a_{0}b_{-q}f(1)$. Since $b_{-q}$ belongs to $U(L^{-})$, we obtain $b_{-q}\cdot f=0$. Thus $f(y)=0$. Similarly, if $q<0$, then $f(y)$ also equals zero. Therefore, $f\in P_{0}$. Conversely, if $f\in P_{0}$, then $f(U(L)_{i})=0$ for $i\neq 0$. For any $x\in L^{-}$, we have $$(x\cdot f)(y)=(-1)^{d(x)(d(f)+d(y))}f(yx)=(-1)^{d(x)(d(y))}y\cdot f(x)=0, \quad y\in U(L)^{+}$$ and $$(x\cdot f)(y)=(-1)^{d(x)(d(f)+d(y))}f(yx)=0, \quad y\in U(L)^{-}\oplus U(L)_{0}.$$ Therefore, $x\cdot f=0$ for all $x\in L^{-}$. For $x_{1},\ldots,x_{n}\in L$, set $$(x_{1}\cdots x_{n})^{T}:=(-1)^{n+\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}d(x_{i})d(x_{j})}x_{n}\cdots x_{1}.$$ It is easy to verify that $x_{i}^{T}=-x_{i}$ and $d(x_{i}^{T})=d(x_{i})$ for $i\in \{1,\ldots,n\}$. Then the principal anti-automorphism of $U(L)$ is defined by $x\mapsto x^{T}$, for all $x\in U(L)$. In the following proposition, the property of the adjoint isomorphism will be investigated. \[p2\] There is a natural isomorphism: $$\Psi:(\mathrm{Ind}_{K}(V))^{*}\rightarrow \mathrm{Coind}_{K}(V^{*}),$$ namely, for $\varphi\in (\mathrm{Ind}_{K}(V))^{*}$, $x\in U(L)$ and $v\in V$, $$\Psi:\varphi\mapsto\Psi(\varphi), \mbox{ where } \Psi(\varphi)(x):v\mapsto \varphi(x^{T}\otimes v).$$ Firstly, we prove that $\Psi$ is a homomorphism of $U(L)$-modules. Let $\varphi_{1}$ and $\varphi_{2}$ be elements of $(\mathrm{Ind}_{K}(V))^{*}$.
Then the definition of $\varphi_{1}+\varphi_{2}$ shows that $$\begin{aligned} \Psi(\varphi_{1}+\varphi_{2})(x)(v)&=&(\varphi_{1}+\varphi_{2})(x^{T}\otimes v)\\ &=&(\varphi_{1})(x^{T}\otimes v)+(\varphi_{2})(x^{T}\otimes v)\\ &=&\Psi(\varphi_{1})(x)(v)+\Psi(\varphi_{2})(x)(v)\\ &=&(\Psi(\varphi_{1})+\Psi(\varphi_{2}))(x)(v),\end{aligned}$$ where $x\in U(L)$ and $v\in V$. Therefore, $\Psi(\varphi_{1}+\varphi_{2})=\Psi(\varphi_{1})+\Psi(\varphi_{2})$. For any $x,y\in U(L)$, $v\in V$ and $\varphi\in (\mathrm{Ind}_{K}(V))^{*}$, we have $$\begin{aligned} y\cdot \Psi(\varphi)(x)(v)&=&(-1)^{d(y)(d(\Psi)+d(\varphi)+d(x))}\Psi(\varphi)(xy)(v)\\ &=&(-1)^{d(y)(d(\Psi)+d(\varphi)+d(x))}\varphi((xy)^{T}\otimes v)\\ &=&(-1)^{d(y)(d(\Psi)+d(\varphi))}\varphi(yx\otimes v)\\ &=&(-1)^{d(y)d(\Psi)}y\cdot\varphi(x^{T}\otimes v)\\ &=&(-1)^{d(y)d(\Psi)}\Psi(y\cdot\varphi)(x)(v).\end{aligned}$$ Therefore, $y\cdot \Psi(\varphi)=(-1)^{d(y)d(\Psi)}\Psi(y\cdot\varphi)$. Next $\Psi$ is injective. In fact, if $\Psi(\varphi)(x)(v)=0$, then $0=\Psi(\varphi)(x)(v)=\varphi(x^{T}\otimes v)$ for all $x\in U(L)$ and $v\in V$. Thus $\varphi=0$ because it vanishes on every generator of $\mathrm{Ind}_{K}(V)$. Now we show that $\Psi$ is surjective. Let $f\in \mathrm{Coind}_{K}(V^{*})$. Define $\varphi(x\otimes v):=f(x^{T})(v)$ for $x\in U(L)$ and $v\in V$. Then $\Psi(\varphi)=f$. It is easy to check that $\Psi$ is a natural homomorphism. In conclusion, the proof is completed. Proposition \[p2\] is called adjoint isomorphism in homological algebra (see [@rotman]). \[th1\] $\mathrm{Ind}_{K}(V_{\sigma})\cong(\mathrm{Ind}_{K}(V_{\sigma}))^{*}$ if and only if $V\cong (V_{\sigma})^{*}$. If $\mathrm{Ind}_{K}(V_{\sigma})\cong(\mathrm{Ind}_{K}(V_{\sigma}))^{*}$, by Lemma \[lm1\] and Proposition \[p2\], then $$\begin{aligned} \mathrm{Coind}_{K}(V)\cong \mathrm{Coind}_{K}((V_{\sigma})^{*}).\end{aligned}$$ It follows from Proposition \[p1\] that $V\cong (V_{\sigma})^{*}$. The sufficiency is obvious. 
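The chain of identifications used in the proof of Theorem \[th1\] can be summarized in one line (our restatement of the argument above):

```latex
% Combining Lemma [lm1] (\Phi) and Proposition [p2] (\Psi):
\mathrm{Coind}_{K}(V)\;\cong\;\mathrm{Ind}_{K}(V_{\sigma})
  \;\cong\;(\mathrm{Ind}_{K}(V_{\sigma}))^{*}
  \;\cong\;\mathrm{Coind}_{K}((V_{\sigma})^{*}),
% and comparing the degree-zero components via Proposition [p1]
% yields V \cong (V_\sigma)^* as L_0-modules.
```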
Invariant forms on generalized reduced Verma modules ==================================================== The results of this section generalize Chiu’s results in [@qiusen] and determine generalized reduced Verma modules over modular Lie superalgebras which possess a nondegenerate super-symmetric or skew super-symmetric invariant bilinear form. Let $L$ be a Lie superalgebra over $\mathbb{F}$ and $V$ be an $L$-module. A bilinear form $\lambda:V\times V\rightarrow \mathbb{F}$ is called super-symmetric (skew super-symmetric) if $\lambda(v,w)=(-1)^{d(v)d(w)}\lambda(w,v)$ ($\lambda(v,w)=-(-1)^{d(v)d(w)}\lambda(w,v)$), for all $v,w\in V$. A super-symmetric (or skew super-symmetric) bilinear form $\lambda:V\times V\rightarrow \mathbb{F}$ is called invariant on $L$ if $\lambda(x\cdot v,w)=-(-1)^{d(v)d(x)}\lambda(v,x\cdot w)$, for all $x\in L$ and $v,w\in V$. The subspace $\mathrm{rad}(\lambda):=\{v\in V\mid \lambda(v,w)=0$, for all $w\in V\}$ is called the radical of $\lambda$. The form $\lambda$ is nondegenerate if $\mathrm{rad}(\lambda)=0$. \[p3\] There exists a nondegenerate super-symmetric (skew super-symmetric) invariant bilinear form $\lambda$ on $V$ if and only if there exists an isomorphism of $L$-modules $\phi:V\rightarrow V^{*}$ such that $\phi(v)(w)=(-1)^{d(v)d(w)}\phi(w)(v)$ ($\phi(v)(w)=-(-1)^{d(v)d(w)}\phi(w)(v)$), for all $v,w\in V$. Let $\lambda$ be a nondegenerate super-symmetric (skew super-symmetric) invariant bilinear form on $V$. Define $\phi:V\rightarrow V^{*}$ such that $\phi(v)(w):=\lambda(v,w)$, for all $v,w\in V$. Obviously, $\phi$ is a linear mapping such that $\mathrm{ker}\phi=\mathrm{rad}(\lambda)=0$ and $\phi(v)(w)=(-1)^{d(v)d(w)}\phi(w)(v)$ ($\phi(v)(w)=-(-1)^{d(v)d(w)}\phi(w)(v)$). Hence $\phi$ is injective. Since $\mathrm{dim}V=\mathrm{dim}V^{*}$, $\phi$ is bijective. 
For $x\in L$ and $v,w\in V$, we have $$\begin{aligned} \phi(x\cdot v)(w)&=&\lambda(x\cdot v,w)=-(-1)^{d(x)d(v)}\lambda(v,x\cdot w)\\ &=&-(-1)^{d(x)d(v)}\phi(v)(x\cdot w)=(-1)^{d(x)d(v)}(x\cdot \phi(v))(w).\end{aligned}$$ Thus $\phi$ is the desired isomorphism of $L$-modules. Conversely, let $\phi$ be an isomorphism of $L$-modules such that $\phi(v)(w)=(-1)^{d(v)d(w)}\phi(w)(v)$ ($\phi(v)(w)=-(-1)^{d(v)d(w)}\phi(w)(v)$), for all $v,w\in V$. Put $\lambda(v,w):=\phi(v)(w)$. Then $\lambda$ is a super-symmetric (skew super-symmetric) bilinear form on $V$. Furthermore, $$\begin{aligned} \lambda(x\cdot v,w)&=&\phi(x\cdot v)(w)=(-1)^{d(x)d(\phi)}(x\cdot \phi(v))(w)\\ &=&-(-1)^{d(x)d(v)}\phi(v)(x\cdot w)=-(-1)^{d(x)d(v)}\lambda(v,x\cdot w),\end{aligned}$$ for all $x\in L$ and $v,w\in V$. Hence $\lambda$ is invariant. As $\mathrm{rad}(\lambda)=\mathrm{ker}\phi=0$, $\lambda$ is nondegenerate. \[p4\] Let $V$ be an irreducible $L$-module. If $V$ is isomorphic to $V^{*}$ as an $L$-module, then there exists a nondegenerate invariant bilinear form $\lambda$ on $V$ which is either super-symmetric or skew super-symmetric. By the proof of Proposition \[p3\], there exists a nondegenerate invariant bilinear form $\beta$ on $V$. Let $$\lambda(v,w)=\beta(v,w)+(-1)^{d(v)d(w)}\beta(w,v), \quad v,w\in V.$$ Clearly, $\lambda$ is a super-symmetric bilinear form on $V$. Since $V$ is an irreducible $L$-module and $\mathrm{rad}(\lambda)$ is a submodule of $V$, either $\mathrm{rad}(\lambda)=0$ or $\mathrm{rad}(\lambda)=V$. Therefore, $\lambda$ is either nondegenerate or $0$. It follows that either $\beta$ or $\lambda$ is the desired form. \[th2\] Let $L$ be a $\mathbb{Z}$-graded Lie superalgebra over $\mathbb{F}$ and $V$ be an $L_{0}$-module. Then the following statements are equivalent. 1. There exists a nondegenerate super-symmetric or skew super-symmetric invariant bilinear form on $\mathrm{Ind}_{K}(V_{\sigma})$. 2.
There exists an isomorphism of $L_{0}$-modules $\zeta:V\rightarrow (V_{\sigma})^{*}$ such that $\zeta(v)(w)=(-1)^{d(v)d(w)}\zeta(w)(v)$ or $\zeta(v)(w)=-(-1)^{d(v)d(w)}\zeta(w)(v)$, $v,w\in V$. Suppose that there exists a nondegenerate super-symmetric or skew super-symmetric invariant bilinear form on $\mathrm{Ind}_{K}(V_{\sigma})$. By Proposition \[p3\], there exists an isomorphism of $L$-modules $\phi:\mathrm{Ind}_{K}(V_{\sigma})\rightarrow (\mathrm{Ind}_{K}(V_{\sigma}))^{*}$ such that $$\begin{aligned} \phi(x_{1}\otimes v_{1})(x_{2}\otimes v_{2}) &=&(-1)^{(d(x_{1})+d(v_{1}))(d(x_{2})+d(v_{2}))}\phi(x_{2}\otimes v_{2})(x_{1}\otimes v_{1})\label{1}\end{aligned}$$ or $$\begin{aligned} \phi(x_{1}\otimes v_{1})(x_{2}\otimes v_{2}) &=&-(-1)^{(d(x_{1})+d(v_{1}))(d(x_{2})+d(v_{2}))}\phi(x_{2}\otimes v_{2})(x_{1}\otimes v_{1}),\label{2}\end{aligned}$$ where $x_{1},x_{2}\in U(L)$ and $v_{1},v_{2}\in V$. Theorem \[th1\] shows that there exists an isomorphism of $L_{0}$-modules $\zeta:V\rightarrow (V_{\sigma})^{*}$. Let $x_{1}=e^{\alpha}\xi^{u}\in U(L^{-})$ and $x_{2}=e^{\beta}\xi^{t}\in U(L^{-})$, where $0\leq\alpha\leq \pi$, $0\leq\beta\leq \pi$ and $u,t\in \mathbb{B}$. By the proof of Lemma \[lm1\] and Proposition \[p2\], we have $$\begin{aligned} & &\phi(x_{1}\otimes v_{1})(x_{2}\otimes v_{2}) =(-1)^{d(x_{1})d(x_{2})+d(x_{1})d(v_{1})}\chi_{\zeta(v_{1})}^{(\pi,E)}(x_{2}^{T}x_{1})(v_{2})\nonumber\\ &=&(-1)^{d(x_{1})d(x_{2})+d(x_{1})d(v_{1})+(d(\zeta)+d(v_{1})+d(\xi^{E}))(d(x_{1})+d(x_{2}))}\delta(\pi,\alpha+\beta)\delta(E,u+t)\zeta(v_{1})(v_{2})\nonumber\\ &=&(-1)^{d(x_{1})d(x_{2})+d(x_{2})d(v_{1})+(d(\zeta)+d(\xi^{E}))(d(x_{1})+d(x_{2}))}\zeta(v_{1})(v_{2}).\label{3}\end{aligned}$$ According to (\[1\]), (\[2\]) and (\[3\]), $$\zeta(v_{1})(v_{2})=(-1)^{d(v_{1})d(v_{2})}\zeta(v_{2})(v_{1}) \mbox{ or } \zeta(v_{1})(v_{2})=-(-1)^{d(v_{1})d(v_{2})}\zeta(v_{2})(v_{1}),$$ for all $v_{1},v_{2}\in V$. 
The converse follows similarly from Lemma \[lm1\], Proposition \[p2\], Theorem \[th1\] and Proposition \[p3\]. Following the notations in the proof of Theorem \[th2\], we have the following results: 1. If $d(x_{1})$ and $d(x_{2})$ are not both $\bar{1}$, then there exists a nondegenerate super-symmetric (skew super-symmetric) invariant bilinear form on $\mathrm{Ind}_{K}(V_{\sigma})$ if and only if there exists an isomorphism of $L_{0}$-modules $\zeta:V\rightarrow (V_{\sigma})^{*}$ such that $$\zeta(v_{1})(v_{2})=(-1)^{d(v_{1})d(v_{2})}\zeta(v_{2})(v_{1})\quad (\zeta(v_{1})(v_{2})=-(-1)^{d(v_{1})d(v_{2})}\zeta(v_{2})(v_{1})),$$ for all $v_{1},v_{2}\in V$. 2. If $d(x_{1})=d(x_{2})=\bar{1}$, then there exists a nondegenerate super-symmetric (skew super-symmetric) invariant bilinear form on $\mathrm{Ind}_{K}(V_{\sigma})$ if and only if there exists an isomorphism of $L_{0}$-modules $\zeta:V\rightarrow (V_{\sigma})^{*}$ such that $\zeta(v_{1})(v_{2})=-(-1)^{d(v_{1})d(v_{2})}\zeta(v_{2})(v_{1})$ ($\zeta(v_{1})(v_{2})=(-1)^{d(v_{1})d(v_{2})}\zeta(v_{2})(v_{1})$), for all $v_{1},v_{2}\in V$.

Generalized reduced Verma modules and mixed products of modules
===============================================================

In this section, the relation between generalized reduced Verma modules and mixed products of modules over $\mathbb{Z}$-graded modular Lie superalgebras of Cartan type will be discussed. \[th3\] Let $L$ be a $\mathbb{Z}$-graded Lie superalgebra over $\mathbb{F}$ and $V=\oplus_{i\geq 0}V_{i}$ be a positively and transitively graded $L$-module such that $z_{i}\cdot V=0$, $1\leq i\leq k$. Then the linear mapping $\psi:V\rightarrow \mathrm{Coind}_{K}(V_{0})$ defined by $\psi(v)(x)=(-1)^{d(x)d(v)}\mathrm{pr_{0}}(x\cdot v)$, for all $x\in U(L)$ and $v\in V$, is an injective homomorphism of $L$-modules, where $\mathrm{pr_{0}}:V\rightarrow V_{0}$ denotes the canonical projection. In particular, $\psi(V_{0})=\mathrm{Coind}_{K}(V_{0})_{0}$ and $zd(\psi)=0$.
Note that $\mathrm{pr_{0}}$ is a homomorphism of $\theta(L,K)$-modules. In fact, for any $h_{j}\in \theta(L,K)_{j}$ and $v_{i}\in V_{i}$, we have $\mathrm{pr_{0}}(h_{j}\cdot v_{i})=(-1)^{d(h_{j})d(\mathrm{pr_{0}})}h_{j}\cdot\mathrm{pr_{0}}(v_{i})$, where $i,j\in \mathbb{N}_{0}$. Since the mapping $U(L)\rightarrow V$ defined by $x\mapsto (-1)^{d(x)d(v)}x\cdot v$ also satisfies this property, $\psi$ is well-defined. Moreover, for an arbitrary element $l\in L$, we obtain $$\begin{aligned} \psi(l\cdot v)(x)&=&(-1)^{d(x)(d(l)+d(v))}\mathrm{pr_{0}}(x\cdot(l\cdot v))\\ &=&(-1)^{d(l)(d(x)+d(v))}\psi(v)(x\cdot l)=(-1)^{d(l)d(\psi)}(l\cdot \psi(v))(x).\end{aligned}$$ Therefore, $\psi$ is a homomorphism of $L$-modules. To prove that $\psi$ is injective, we assume that $\mathrm{ker}\psi\neq 0$. Evidently, $zd(\psi)=0$ and thereby $\mathrm{ker}\psi$ is a $\mathbb{Z}$-homogeneous subspace of $V$. Then $\mathrm{ker}\psi\neq 0$ leads to the existence of a minimal $i\geq 0$ such that $\mathrm{ker}\psi\cap V_{i}\neq 0$. Let $v_{i}\in \mathrm{ker}\psi\cap V_{i}$ and $l\in L_{-j}$ ($j>0$). It follows that $x\cdot v_{i}=\mathrm{pr_{0}}(x\cdot v_{i})=(-1)^{d(x)d(v_{i})}\psi(v_{i})(x)=0$ for every $x\in U(L)_{-i}$. If $q\neq j-i$, then $$\psi(l\cdot v_{i})(x)=(-1)^{d(x)(d(l)+d(v_{i}))}\mathrm{pr_{0}}(x\cdot(l\cdot v_{i}))=0,$$ where $x\in U(L)_{q}$. If $q=j-i$, then $xl\in U(L)_{-i}$ and $(xl)\cdot v_{i}=0$. Consequently, $l\cdot v_{i}$ belongs to the trivial subspace $\mathrm{ker}\psi\cap V_{i-j}$. Since $V$ is transitive, $v_{i}\in V_{0}$ and $i=0$. As a result, $x\cdot v_{0}=0$ for all $x\in U(L)_{0}$. It follows from $1\in U(L)_{0}$ that $v_{0}=0$. This conclusion contradicts the assumption $\mathrm{ker}\psi\neq 0$ and thereby $\psi$ is an injective homomorphism of $L$-modules. Define $\mu:\mathrm{Coind}_{K}(V_{0})_{0}\rightarrow V_{0}$ by $\mu(f)=f(1)$. Let $x$ be an element of $U(L)_{j}$. If $j\neq 0$, then $\mathrm{pr_{0}}(x\cdot f(1))=0$ and $f(x)=0$.
In the case of $j=0$, the P-B-W theorem provides a presentation $x=\sum_{j=1}^{n}\sum_{i\geq 0}a_{ij}b_{ij}$, where $a_{ij}\in U(K)_{i}$ and $b_{ij}\in U(L^{-})_{-i}$. Clearly, $$\begin{aligned} & &f(x)-(-1)^{d(x)d(f)}\mathrm{pr_{0}}(x\cdot f(1))\\ &=&\sum_{j=1}^{n}\sum_{i\geq 0}((-1)^{d(a_{ij})d(f)}a_{ij}f(b_{ij}) -(-1)^{d(x)d(f)}a_{ij}\mathrm{pr_{0}}(b_{ij}f(1)))\\ &=&\sum_{j=1}^{n}((-1)^{d(a_{0j})d(f)}a_{0j}f(b_{0j})-(-1)^{d(x)d(f)}a_{0j}\mathrm{pr_{0}}(b_{0j}f(1)))\\ &=&\sum_{j=1}^{n}(-1)^{d(x)d(f)}(a_{0j}b_{0j}f(1)-a_{0j}b_{0j}f(1))=0.\end{aligned}$$ For an arbitrary element $x\in U(L)$, $f(x)=(-1)^{d(x)d(f)}\mathrm{pr_{0}}(x\cdot f(1))$. Consequently, $\psi\circ\mu=\mathrm{id}_{\mathrm{Coind}_{K}(V_{0})_{0}}$ and $\psi(V_{0})=\mathrm{Coind}_{K}(V_{0})_{0}$. For $\alpha=(\alpha_1,\ldots,\alpha_k)\in\mathbb{N}_0^k$, we put $|\alpha|:=\sum_{i=1}^{k}\alpha_{i}$. Let $\mathcal {O}(k,\underline{m})$ denote the divided power algebra over $\mathbb{F}$ with an $\mathbb{F}$-basis $\{x^{(\alpha)}\mid\alpha\in\mathbb{A}(k,\underline{m})\}$, where $$\mathbb{A}(k,\underline{m}):=\left\{\alpha:=(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{N}_0^{k}\mid 0\leqslant\alpha_{i}\leqslant p^{m_{i}}-1, i=1,2,\ldots,k\right\}.$$ Let $\Lambda(l)$ be the exterior superalgebra over $\mathbb{F}$ in $l$ variables $\xi_{1},\xi_{2}$, $\ldots,\xi_{l}$. Denote by $\mathcal{O}(k,l,\underline{m})$ the tensor product $\mathcal{O}(k,\underline{m})\otimes_{\mathbb{F}}\Lambda(l)$. Put $\mathrm{Y}_{0}:=\{1,2,\ldots,k\}$ and $\mathrm{Y}_{1}:=\{1,2,\ldots,l\}$. If $u\in\mathbb{B}_{s}$, $j\in \{u\}$, then we suppose that $u-\langle j\rangle\in \mathbb{B}_{s-1}$ such that $\{u-\langle j\rangle\}=\{u\}\setminus\{j\}$. Let $u(j)=|\{l\in\{u\}\mid l<j\}|$. If $j\in \mathrm{Y}_{1}\setminus\{u\}$, then we put $u(j)=0$ and $\xi^{u-\langle j\rangle}=0$.
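For the reader's convenience we recall how basis monomials of the divided power algebra multiply; this standard rule (quoted by us, not stated in the paper) explains the name:

```latex
% Multiplication of divided power monomials:
x^{(\alpha)}\,x^{(\beta)}
  \;=\;\binom{\alpha+\beta}{\alpha}\,x^{(\alpha+\beta)},
\qquad
\binom{\alpha+\beta}{\alpha}:=\prod_{i=1}^{k}\binom{\alpha_{i}+\beta_{i}}{\alpha_{i}},
% where the binomial coefficients are reduced modulo p, and one uses the
% convention x^{(\gamma)} = 0 whenever \gamma \notin \mathbb{A}(k,\underline{m}).
```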
Clearly, $\left\{x^{(\alpha)}\xi^{u}\mid \alpha\in\mathbb{A}(k,\underline{m}), u\in \mathbb{B}\right\}$ constitutes an $\mathbb{F}$-basis of $\mathcal{O}(k,l,\underline{m})$ and $zd(x^{(\alpha)}\xi^{u})=|\alpha|+|u|\geq0$. Let $D_{1},\ldots,D_{k},d_{1},\ldots,d_{l}$ be the linear transformations of $\mathcal{O}(k,l,\underline{m})$ and $\varepsilon_i:=(\delta_{i1}$, $\ldots,\delta_{ik})$ such that $$\begin{aligned} D_i(x^{(\alpha)}\xi^{u})=x^{(\alpha-\varepsilon_{i})}\xi^{u},\quad i\in\mathrm{Y}_{0},\quad d_j(x^{(\alpha)}\xi^{u})=(-1)^{u(j)}x^{(\alpha)}\xi^{u-\langle j\rangle}, \quad j\in\mathrm{Y}_{1},\end{aligned}$$ where $\delta_{ij}$ is the Kronecker delta, defined by $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise. Modular Lie superalgebras of Cartan type $L(k,l,\underline{m})$ ($L=W, S, H, K$) are subalgebras of the derivation superalgebras of $\mathcal{O}(k,l,\underline{m})$. For the precise definitions please refer to [@zhang]. If $L=W, S, H$, then $\{D_{1},\ldots,D_{k}\}$ is the canonical basis of $L(k,l,\underline{m})^{-}\cap L(k,l,\underline{m})_{\bar{0}}$ and $\{d_{1},\ldots,d_{l}\}$ is the canonical basis of $L(k,l,\underline{m})^{-}\cap L(k,l,\underline{m})_{\bar{1}}$. The definition of the product in $L(k,l,\underline{m})$ (see [@zhang]) entails the vanishing of $\mathrm{ad}D_{i}^{p^{m_{i}}}$ on $L(k,l,\underline{m})$, and we therefore define $z_{i}:=D_{i}^{p^{m_{i}}}$, $1\leq i\leq k$. \[c\] Let $L(k,l,\underline{m})$ ($L=W, S, H$) denote a $\mathbb{Z}$-graded Lie superalgebra of Cartan type. If $V$ is an $L(k,l,\underline{m})_{0}$-module, then $\mathrm{Ind}_{K}(V_{\sigma})$ is isomorphic to the mixed product $\mathcal{O}(k,l,\underline{m})\otimes V$. Since $(\mathcal{O}(k,l,\underline{m})\otimes V)_{k}:=\langle a\otimes v\mid a\in \mathcal{O}(k,l,\underline{m})_{k}, v\in V\rangle$, the mixed product is a positively graded module.
According to the definition of the mixed product (see [@zhang1]), we have $$\begin{aligned} D_{i}(x^{(\alpha)}\xi^{u}\otimes v)&=&x^{(\alpha-\varepsilon_{i})}\xi^{u}\otimes v,\quad i\in\mathrm{Y}_{0},\\ d_{j}(x^{(\alpha)}\xi^{u}\otimes v)&=&(-1)^{u(j)}x^{(\alpha)}\xi^{u-\langle j\rangle}\otimes v,\quad j\in\mathrm{Y}_{1}, \end{aligned}$$ where $\alpha\in\mathbb{A}(k,\underline{m})$, $u\in \mathbb{B}$ and $v\in V$. The first equality shows that $z_{i}(\mathcal{O}(k,l,\underline{m})\otimes V)=0$, $1\leq i\leq k$. The above equalities also ensure the transitivity of $\mathcal{O}(k,l,\underline{m})\otimes V$. Proposition \[th3\] furnishes an embedding from $\mathcal{O}(k,l,\underline{m})\otimes V$ into $\mathrm{Coind}_{K}(V)$. Since $$\mathrm{dim}(\mathrm{Coind}_{K}(V))=\mathrm{dim}(\mathcal{O}(k,l,\underline{m})\otimes V)=2^{l}p^{m_{1}+\cdots+m_{k}}\mathrm{dim}V,$$ the mapping is bijective. Then Lemma \[lm1\] gives an isomorphism between $\mathrm{Ind}_{K}(V_{\sigma})$ and $\mathcal{O}(k,l,\underline{m})\otimes V$. Let the notation be as in Theorems \[th2\] and \[c\]. Then the following statements are equivalent. 1. There exists a nondegenerate super-symmetric or skew super-symmetric invariant bilinear form on the mixed product $\mathcal{O}(k,l,\underline{m})\otimes V$. 2. There exists an isomorphism of $L(k,l,\underline{m})_{0}$-modules $\zeta:V\rightarrow (V_{\sigma})^{*}$ such that $\zeta(v)(w)=(-1)^{d(v)d(w)}\zeta(w)(v)$ or $\zeta(v)(w)=-(-1)^{d(v)d(w)}\zeta(w)(v)$, for all $v,w\in V$. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the NNSF of China (Grant No. 11171055), the Natural Science Foundation of Jilin Province (No. 20130101068) and the Fundamental Research Funds for the Central Universities (No. 12SSXT139). The authors thank Professors Liangyun Chen, Baolin Guan, and Li Ren for their helpful comments and suggestions. [00]{} L. E. Ross, Representations of graded Lie algebras, Trans. Amer. Math. Soc. 120 (1965) 17-23. H.
Strade, R. Farnsteiner, Modular Lie algebras and their representations, Monogr. Textbooks Pure Appl. Math. Vol. 116, Dekker, Inc., 1988. V. G. Kac, Lie superalgebras, Adv. Math. 26 (1977) 8-96. M. Scheunert, Theory of Lie superalgebras, Springer-Verlag, Berlin, Heidelberg and New York, 1979. D. N. Verma, Structure of representations of complex semisimple Lie algebras, Bull. Amer. Math. Soc. 74 (1968), 160-166. O. Khomenko, V. Mazorchuk, Generalized Verma modules induced from ${\rm sl}(2,\Bbb C)$ and associated Verma modules, J. Algebra. 242(2) (2001), 561-576. V. Mazorchuk, On the structure of an $\alpha$-stratified generalized Verma module over Lie algebra ${\rm sl}(n,\bold C)$, Manuscripta Math. 88(1) (1995), 59-72. B. Xin, Y. Z. Wu, Generalized Verma modules over Lie algebras of Weyl type, Algebra Colloq. 16(1) (2009), 131-142. Y. S. Cheng, Y. C. Su, Generalized Verma modules over some Block algebras, Front. Math. China. 3(1) (2008), 37-47. I. N. Bernstein, I. M. Gelfand, S. I. Gelfand, The structure of representations generated by vectors of highest weight, Funkt. Anal. i Prilozhen. 5 (1971), 1-9. J. Dixmier, Algèbres Enveloppantes, Gauthier-Villars, Paris, 1974. A. Rocha-Caridi, Splitting criteria for $g$-modules induced from a parabolic and a Bernstein-Gelfand-Gelfand resolution of a finite-dimensional irreducible $g$-module, Trans. Amer. Math. Soc. 262 (1980), 335-366. V. Futorny, V. Mazorchuk, Structure of $\alpha$-stratified modules for finite-dimensional Lie algebras, J. Algebra. 183 (1996), 456-482. D. Milicić, W. Soergel, The composition series of modules induced from Whittaker modules, Comment. Math. Helv. 72 (1997), 503-520. A. Khomenko, V. Mazorchuk, On the determinant of Shapovalov form for generalized Verma modules, J. Algebra. 215 (1999), 318-329. V. Mazorchuk, S. Ovsienko, Submodule structure of generalized Verma modules induced from generic Gelfand-Zetlin modules, Alg. Repr. Theory. 1 (1998), 3-26. R.
Farnsteiner, Extension functors of modular Lie algebras, Math. Ann. 288(4) (1990), 713-730. R. Farnsteiner, H. Strade, Shapiro’s lemma and its consequences in the cohomology theory of modular Lie algebras, Math. Z. 206(1) (1991), 153-168. S. Chiu, The invariant forms on the graded modules of the graded Cartan type Lie algebras, Chin. Ann. Math. Ser. B. 13(1) (1992), 16-24. G. Y. Shen, Graded modules of graded Lie algebras of Cartan type I: Mixed products of modules, Sci. Sinica Ser. A. 29(6) (1986), 570-581. G. Y. Shen, Graded modules of graded Lie algebras of Cartan type II: Positive and negative graded modules, Sci. Sinica Ser. A. 29(10) (1986), 1009-1019. G. Y. Shen, Graded modules of graded Lie algebras of Cartan type III: Irreducible modules, Chin. Ann. Math. Ser. B. 9(4) (1988), 404-417. Y. Z. Zhang, Graded modules of the Cartan-type $\mathbb{Z}$-graded Lie superalgebras $W(n)$ and $S(n)$, Chin. Sci. Bull. 40(20) (1995), 1829-1832. (in Chinese). Y. Z. Zhang, $\mathbb{Z}$-graded module of Lie superalgebra $H(n)$ of Cartan type, Chin. Sci. Bull. 41(10) (1996), 813-817. Y. Z. Zhang, Mixed products of modules of infinite-dimensional Lie superalgebras of Cartan type, Chin. Ann. Math. Ser. A 18(6) (1997), 725-732. (in Chinese). Y. Z. Zhang, H. C. Fu, Finite-dimensional Hamiltonian Lie superalgebra, Comm. Algebra. 30(6) (2002), 2651-2673. Y. Wang, Y. Z. Zhang, Derivation algebra $\mathrm{Der}(H)$ and central extensions of Lie superalgebras, Comm. Algebra. 32 (2004), 4117-4131. Y. Z. Zhang, Finite-dimensional Lie superalgebras of Cartan-type over fields of prime characteristic, Chin. Sci. Bull. 42 (1997) 720-724. J. Rotman, An introduction to homological algebra, Academic Press, New York, 1979. [^1]: Corresponding author. [^2]: E-mail addresses: zhengkl561@nenu.edu.cn (K. Zheng), zhyz@nenu.edu.cn (Y. Zhang).
--- abstract: 'The cross section of the process $e^+e^-\to \pi^+\pi^-\pi^0$ was measured in the Spherical Neutral Detector experiment at the VEPP-2M collider in the energy region $\sqrt[]{s} = 980$ – $1380$ MeV. The measured cross section, together with the $e^+e^-\to \pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ cross sections obtained in other experiments, was analyzed in the framework of the generalized vector meson dominance model. It was found that the experimental data can be described by a sum of contributions from the $\omega$ and $\phi$ mesons and two resonances, $\omega^\prime$ and $\omega^{\prime\prime}$, with masses $m_{\omega^\prime}\sim 1490$, $m_{\omega^{\prime\prime}}\sim 1790$ MeV and widths $\Gamma_{\omega^\prime}\sim 1210$, $\Gamma_{\omega^{\prime\prime}}\sim 560$ MeV. The analysis of the $\pi^+\pi^-$ invariant mass spectra in the energy region $\sqrt[]{s}$ from 1100 to 1380 MeV has shown that for their description one should also take into account the $e^+e^-\to\omega\pi^0\to\pi^+\pi^-\pi^0$ mechanism. The phase between the amplitudes corresponding to the $e^+e^-\to\omega\pi$ and $e^+e^-\to\rho\pi$ intermediate states was measured for the first time. The value of the phase is close to zero and depends on energy.'
address: | Budker Institute of Nuclear Physics,\ Siberian Branch of the Russian Academy of Sciences\ and Novosibirsk State University,\ 11 Lavrentyev, Novosibirsk,\ 630090, Russia author: - 'M.N.Achasov[^1], V.M.Aulchenko, K.I.Beloborodov, A.V.Berdyugin, A.G.Bogdanchikov, A.V.Bozhenok, A.D.Bukin, D.A.Bukin, S.V.Burdin, T.V.Dimova, V.P.Druzhinin, V.B.Golubev, V.N.Ivanchenko, A.A.Korol, S.V.Koshuba, I.N.Nesterenko, E.V.Pakhtusova, A.A.Polunin, A.A.Salnikov, S.I.Serednyakov, V.V.Shary, Yu.M.Shatunov, V.A.Sidorov, Z.K.Silagadze, A.N.Skrinsky, A.G.Skripkin, Yu.V.Usov, A.V.Vasiljev' title: '**Study of the process $e^+e^- \to \pi^+\pi^-\pi^0$ in the energy region $\sqrt[]{s}$ from 0.98 to 1.38 GeV.**' --- Introduction ============ The cross section of hadron production in $e^+e^-$ annihilation in the energy region $\sqrt[]{s} < 1.03$ GeV can be described within the vector meson dominance model (VDM) framework and is determined by the transitions of light vector mesons ($\rho,\omega,\phi$) into the final states. The light vector mesons have been studied rather well. They are quark-antiquark $q\overline{q}$ ($q=u,d,s$) bound states, and their masses, widths and main decays have been measured with high accuracy [@pdg]. The cross section for hadron production above the $\phi(1020)$ resonance ($\sqrt[]{s} \simeq 1.03$–$2$ GeV) cannot be described in the conventional VDM framework (taking into account $\rho,\omega$ and $\phi$ mesons only), indicating the existence of states with vector meson quantum numbers $I^G(J^{PC})=1^+(1^{--}),0^-(1^{--})$ and with masses of about 1450, 1650 MeV. Parameters of these states are not well established due to inaccurate and conflicting experimental data. The nature of these states is not clear either. In some reviews of experimental data they are considered to be a mixture of $q\overline{q}$ with 4-quark $qq\overline{qq}$ and hybrid $q\overline{q}g$ states [@don1; @don2; @don3; @don4].
On the other hand, the experimental data do not contradict the hypothesis that these states have $q\overline{q}$ structure and are radial and orbital excitations of the light vector mesons [@ak1; @ak2; @ak3]. In this context the main experimental task is the improvement of the accuracy of the cross section measurements. As was already mentioned, in the VDM framework the cross section of the process $e^+e^- \to \pi^+\pi^-\pi^0$ is determined by the amplitudes of vector meson $V$ ($V=\omega,\phi,\omega^\prime,{\ldots}$) transitions into the final state: $V \to\pi^+\pi^-\pi^0$. The $\rho\pi$ intermediate state dominates in these transitions \[Fig.\[diag\](a)\]. Another mechanism of the $V\to\pi^+\pi^-\pi^0$ transition is possible via $\rho-\omega$ mixing: $V\to\omega\pi^0\to\rho^0\pi^0$ ($V=\rho,\rho^\prime,\rho^{\prime\prime}$) \[Fig.\[diag\](b)\]. This effect was predicted in Ref.[@thrhoom] and was observed in the SND (Spherical Neutral Detector) experiment in the energy range $\sqrt[]{s}=1200$–$1400$ MeV [@sndro]. The studies of the $e^+e^- \to \pi^+\pi^-\pi^0$ cross section and $\pi\pi$ invariant mass spectra above the $\phi$-meson production region provide information about excited states of vector mesons and their interference. The $e^+e^- \to \pi^+\pi^-\pi^0$ cross section in the energy region above the $\phi$ meson and up to 2200 MeV has been studied in several experiments [@m3n; @mea; @gg2; @dm1; @nd; @dm2], but none of them have covered the whole region. The SND study of this cross section in the range $\sqrt[]{s} = 1040$ – 1380 MeV based on part of the collected data was already reported in Ref.[@sndmhad]. Here we present the results obtained by using the total data sample. The present work includes both the total cross section and the dipion mass spectra studies. Experiment ========== The SND detector [@sndnim] ran from 1995 to 2000 at the VEPP-2M [@vepp2] collider in the energy range $\sqrt[]{s}$ from 360 to 1400 MeV. The detector contains several subsystems.
The tracking system includes two cylindrical drift chambers. The three-layer spherical electromagnetic calorimeter is based on NaI(Tl) crystals [@calor99]. The muon/veto system consists of plastic scintillation counters and two layers of streamer tubes. The calorimeter energy and angular resolution depends on the photon energy as $\sigma_E/E = {4.2\% / \sqrt[4]{E(\mbox{GeV})}}$ and $\sigma_{\phi,\theta} = {0.82^\circ / \sqrt[]{E(\mathrm{GeV})}} \oplus 0.63^\circ$. The tracking system angular resolution is about $0.5^\circ$ and $2^\circ$ for azimuthal and polar angles respectively. The energy loss resolution $dE/dx$ in the drift chamber is about 30%. SND was described in detail in Ref.[@sndnim]. In 1997 and 1999 the SND collected data in the energy region $\sqrt[]{s}$ from 1040 to 1380 MeV with integrated luminosity about $9.0~\mbox{pb}^{-1}$; in addition about $130~\mbox{nb}^{-1}$ was collected at $\sqrt[]{s}=980$ MeV. The beam energy was calculated from the magnetic field value in the bending magnets and the revolution frequency of the collider. The center of mass energy determination accuracy is about 0.1 MeV and the spread of the beam energy is from 0.2 to 0.4 MeV. For the luminosity measurements, the processes $e^+e^- \to e^+e^-$ and $e^+e^- \to \gamma\gamma$ were used. In this work the luminosity measured by $e^+e^- \to e^+e^-$ was used for normalization. The systematic error of the integrated luminosity determination is estimated to be 2%. Since luminosity measurements by $e^+e^- \to e^+e^-$ and $e^+e^- \to \gamma\gamma$ reveal a systematic spread of about 1%, this was added to the statistical error of the luminosity determination in each energy point. The statistical accuracy was better than 1%. Data analysis ============= Selection of $e^+e^- \to \pi^+\pi^-\pi^0$ events ------------------------------------------------ The data analysis and selection criteria used in this work are similar to those described in Ref.[@phi98; @dplphi98].
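As a quick numerical illustration (not the paper's analysis code), the quoted calorimeter resolution parameterizations can be evaluated directly, reading $\oplus$ as addition in quadrature; the function names are ours:

```python
import math

def energy_resolution(e_gev):
    """Fractional calorimeter resolution sigma_E/E = 4.2% / E^(1/4), E in GeV."""
    return 0.042 / e_gev ** 0.25

def angular_resolution_deg(e_gev):
    """Photon angular resolution: 0.82 deg / sqrt(E) combined in quadrature with 0.63 deg."""
    return math.hypot(0.82 / math.sqrt(e_gev), 0.63)
```

At $E = 1$ GeV this gives $\sigma_E/E = 4.2\%$ and an angular resolution of about $1.03^\circ$, degrading at lower photon energies.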
During the experimental runs, the first-level trigger [@sndnim] selects events with an energy deposition in the calorimeter of more than 180 MeV and with two or more charged particles. During processing of the experimental data the event reconstruction is performed [@sndnim; @phi98]. For further analysis, events containing two or more photons and two charged particles with $|z| < 10$ cm and $r < 1$ cm were selected. Here $z$ is the coordinate of the charged particle production point along the beam axis (the longitudinal size of the interaction region depends on beam energy and varies from 2 to 2.5 cm); $r$ is the distance between the charged particle track and the beam axis in the $r-\phi$ plane. Extra photons in $e^+e^- \to \pi^+\pi^-\pi^0$ events can appear because of the overlap with the beam background or nuclear interactions of the charged pions in the calorimeter. Under these selection conditions, the background sources are the $e^+e^- \to \pi^+\pi^-\pi^0\pi^0$, $e^+e^-\gamma\gamma$, $\pi^+\pi^-\gamma$, $K^+K^-$, $K_SK_L$ processes and the beam background. We note that in the energy region above the $\phi$-meson the process $e^+e^- \to \pi^+\pi^-\pi^0$ does not dominate. Moreover, its cross section is several times lower than the cross section of the main background process $e^+e^- \to \pi^+\pi^-\pi^0\pi^0$. To suppress the beam background, the following cuts on the angle $\psi$ between two charged particle tracks and energy deposition of the neutral particles $E_{neu}$ were applied: $\psi > 40^{\circ}$, $E_{neu}>100$ MeV. To reject the background from the $e^+e^- \to K^+K^-$ process, the following cuts were imposed: $(dE/dx) < 5 \cdot (dE/dx)_{min} $ for each charged particle, $(dE/dx) < 3 \cdot (dE/dx)_{min} $ at least for one of them, and $\Delta\phi > 10^\circ$. Here $\Delta\phi$ is an acollinearity angle in the azimuthal plane and $(dE/dx)_{min}$ is an average energy loss of a minimum ionizing particle.
The last cut $|\Delta\phi| > 10^\circ$ also suppresses the $e^+e^- \to \pi^+\pi^-\gamma$ events. To suppress the $e^+e^- \to e^+e^-\gamma\gamma$ events, the energy deposition of the charged particles in the calorimeter $E_{cha}$ was required to be small enough: $E_{cha} < 0.5 \cdot \sqrt[]{s}$. For events left after these cuts, a kinematic fit was performed under the following constraints: the charged particles are assumed to be pions, the system has zero total momentum, the total energy is $\sqrt[]{s}$, and the photons originate from the $\pi^0 \to \gamma\gamma$ decays. The value of the $\chi^2$ function $\chi^2_{3\pi}$ (Fig.\[xi2u\]) is calculated during the fit. In events with more than two photons, extra photons are considered as spurious ones and rejected. To do this, all possible subsets of two photons were inspected and the one corresponding to the maximum likelihood was selected. After the kinematic fit the following additional cuts were applied: $N_{\gamma}=2$ ($N_{\gamma}$ is the number of detected photons), $\chi^2_{3\pi} < 5$, and the polar angle $\theta_\gamma$ of at least one of the photons should satisfy the following criterion: $36^{\circ} < \theta_\gamma < 144^{\circ}$. The angular distributions of particles for the selected events are shown in Fig.\[tetn\],\[teu12\],\[teu3\] and \[teu45\] while Fig.\[epu4\] and Fig.\[epu45\] demonstrate the photon energy distributions for the same events. The experimental and simulated distributions are in agreement.
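The chain of cuts above can be summarized in a small event filter. The event representation (a dict of the quoted reconstructed quantities) and all names are hypothetical; this is a sketch of the logic, not the actual SND selection code:

```python
def passes_selection(ev, sqrt_s):
    """Apply the quoted cuts; angles in degrees, energies in MeV.
    ev is a hypothetical dict of reconstructed event quantities."""
    if ev["psi"] <= 40 or ev["e_neu"] <= 100:      # beam-background suppression
        return False
    dedx = ev["dedx_ratio"]                        # (dE/dx)/(dE/dx)_min per track
    if max(dedx) >= 5 or min(dedx) >= 3 or abs(ev["dphi"]) <= 10:
        return False                               # K+K- and pi+pi-gamma rejection
    if ev["e_cha"] >= 0.5 * sqrt_s:                # e+e-gamma gamma rejection
        return False
    if ev["n_gamma"] != 2 or ev["chi2_3pi"] >= 5:  # kinematic-fit quality
        return False
    # at least one photon well inside the calorimeter acceptance
    return any(36 < th < 144 for th in ev["theta_gamma"])
```

For example, an event with two minimum-ionizing tracks passes, while a kaon-like event with both tracks above $3\cdot(dE/dx)_{min}$ is rejected.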
Background subtraction ---------------------- The number of background events was estimated from the following formula: $$\begin{aligned} \label{bg} N_{bkg}({s}) = \sum_i \sigma_{Ri}({s}) \epsilon_i({s}) IL({s}),\end{aligned}$$ where $i$ is a process number, $\sigma_{Ri}({s})$ is the cross section of the background process taking into account the radiative corrections, $IL({s})$ is the integrated luminosity, $\epsilon_i({s})$ is the detection probability for the background process obtained from simulation under the selection described above. The $e^+e^-\to\pi^+\pi^-\gamma$ cross section was calculated for the case when the photon has an energy above 10 MeV and is radiated at an angle $\theta$ greater than $10^\circ$. As was mentioned above, the main source of background is the $e^+e^-\to\pi^+\pi^-\pi^0\pi^0$ process. Two mechanisms contribute to the total cross section of this process: $e^+e^-\to\omega\pi$ and $e^+e^-\to\rho\pi\pi$. It was shown in Ref.[@cmd4p; @cleo4p] that the $e^+e^-\to\rho\pi\pi$ process dynamics can be described with the $a_1\pi$ intermediate state. The SND studies of the $e^+e^-\to\pi^+\pi^-\pi^0\pi^0$ process [@snd4p] agree with this conclusion. For background estimation the $e^+e^-\to\omega\pi$ and $e^+e^-\to\rho\pi\pi$ cross sections measured in SND experiments were used [@snd4p; @ppg]. To obtain the detection probability of the $e^+e^-\to\rho\pi\pi$ events, the simulation with the $a_1\pi$ intermediate state was used. The numbers of $e^+e^-\to \pi^+\pi^-\pi^0(\gamma)$ events (after background subtraction) and background event numbers are shown in Table \[tab1\]. Here $\gamma$ is a photon emitted by initial particles. To estimate the accuracy of the background event number determination, the $\chi^2_{3\pi}$ distribution (Fig.\[xi2u\]) was studied. The experimental $\chi^2_{3\pi}$ distribution in the range $0<\chi^2_{3\pi}<20$ was fitted by a sum of background and signal.
The distribution for background events was taken from the simulation and that for $e^+e^-\to\pi^+\pi^-\pi^0$ events was obtained by using data collected in the vicinity of the $\phi$ meson peak [@phi98; @dplphi98] (the $\chi^2_{3\pi}$ distribution actually does not change in the interval $\sqrt[]{s}= 1$ – 1.4 GeV). As a result, the ratio between the number of background events obtained from the fit and the number calculated according to (\[bg\]) was found to be $1.4 \pm 0.2$. Using this ratio, the accuracy of the determination of the number of background events can be estimated to be about 40%. Detection efficiency -------------------- The detection efficiency of the $e^+e^-\to\pi^+\pi^-\pi^0(\gamma)$ process was obtained from simulation. The detection efficiency for events without $\gamma$-quantum radiation depends on the center of mass energy and varies from 0.15 to 0.16 in the energy range $\sqrt[]{s}=980$ – 1380 MeV. This dependence can be approximated by a linear function. The detection efficiency dependence on the radiated photon energy is shown in Fig.\[efrad\]. Inaccuracies in the simulation of the $\chi^2_{3\pi}$, $dE/dx$, and $N_\gamma$ distributions lead to an error in the average detection efficiency determination. To take into account these uncertainties, the detection efficiency was multiplied by correction coefficients, which were obtained in the following way [@phi98]. The experimental events were selected without any conditions on the parameter under study, using the selection parameters uncorrelated with the studied one. The same selection criteria were applied to simulated events.
Then the cut was applied to the parameter and the correction coefficient was calculated: $$\begin{aligned} \delta = { {n/N} \over {m/M} },\end{aligned}$$ where $N$ and $M$ are the number of events in experiment and simulation respectively selected without any cuts on the parameter under study; $n$ and $m$ are the number of events in experiment and simulation when the cut on the parameter was applied. As a rule, the error in the coefficient $\delta$ determination is connected with the uncertainty of background subtraction. This systematic error was estimated by varying other selection criteria. The correction coefficient $\delta_{\chi^2_{3\pi}}=0.91\pm0.03$, due to the uncertainty in the $\chi^2_{3\pi}$ distribution simulation, was obtained using data collected in the vicinity of the $\phi$ resonance [@phi98; @dplphi98]. The correction which takes into account the inaccuracy of simulation of extra photons is $\delta_{N_\gamma} = 0.87 \pm 0.02$, and the correction for the inaccuracy of the $dE/dx$ energy loss simulation is $\delta_{dE/dx} = 0.98 \pm 0.01$. The overlap of the beam background with the events containing charged particles can result in track reconstruction failure and a decrease of detection efficiency. To take into account this effect, background events (experimental events collected when the detector was triggered with an external generator) were mixed with the simulated events. It was found that the detection efficiency decreased by about 3% and therefore the correction coefficient $\delta_{over} = 0.97 \pm 0.03$ was used. The total correction used in this work is equal to: $$\delta_{tot}=\delta_{\chi^2_{3\pi}}\times\delta_{dE/dx}\times\delta_{N_\gamma} \times\delta_{over} = 0.75 \pm 0.04.$$ The systematic error of detection efficiency determination is 5%. The detection efficiency after the applied corrections is shown in Table \[tab1\].
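As a sanity check of the arithmetic, the data-to-simulation correction and the product of the quoted coefficients can be sketched as follows (function and variable names are ours):

```python
def correction(n, N, m, M):
    """delta = (n/N) / (m/M): ratio of the cut efficiency in data to that in simulation."""
    return (n / N) / (m / M)

# Coefficients quoted in the text
delta_chi2, delta_ngamma, delta_dedx, delta_over = 0.91, 0.87, 0.98, 0.97
delta_tot = delta_chi2 * delta_dedx * delta_ngamma * delta_over   # about 0.75
```

Multiplying the four coefficients indeed reproduces the quoted total correction $\delta_{tot} \simeq 0.75$.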
Theoretical framework ===================== In the VDM framework the cross section of the $e^+e^-\to\pi^+\pi^-\pi^0$ process is $$\begin{aligned} \label{ds} {d\sigma \over dm_0 dm_+} = { {4\pi\alpha} \over {s^{3/2}} } {{|\vec{p}_+ \times \vec{p}_-|^2} \over {12\pi^2\mbox{~}\sqrt[]{s}}} m_0m_+ \cdot |F|^2,\end{aligned}$$ where $\vec{p}_+$ and $\vec{p}_-$ are the $\pi^+$ and $\pi^-$ momenta, $m_0$ and $m_+$ are $\pi^+\pi^-$ and $\pi^+\pi^0$ invariant masses. The formfactor $F$ of the $\gamma^\star \to \pi^+\pi^-\pi^0$ transition has the form $$\begin{aligned} \label{formfac} |F|^2 = \Biggl| A_{\rho\pi}(s) \sum_{i=+,0,-} { g_{\rho^i\pi\pi} \over D_\rho(m_i)} + A_{\omega\pi}(s) {\Pi_{\rho\omega}g_{\rho^0\pi\pi}\over D_\rho(m_0) D_\omega(m_0)} \Biggr|^2 .\end{aligned}$$ Here $$D_\rho(m_i) = m_{\rho^i}^2 - m_i^2 -im_i\Gamma_{\rho^i}(m_i),$$ $$\Gamma_{\rho^i}(m_i) = \Biggl({m_{\rho^i} \over m_i}\Biggr)^2 \cdot \Gamma_{\rho^i} \cdot \Biggl({q_i(m_i) \over q_i(m_{\rho^i})}\Biggr)^3$$ $$q_0(m) = {1 \over 2}(m^2-4m_\pi^2)^{1/2},$$ $$q_\pm(m) = {1 \over 2m} \bigl[(m^2-(m_{\pi^0}+m_\pi)^2)(m^2-(m_{\pi^0}-m_\pi)^2)\bigr]^{1/2}$$ $$m_-=\sqrt[]{s+m_{\pi^0}^2+2m_{\pi}^2-m_0^2-m_+^2},$$ where $m_-$ is the $\pi^-\pi^0$ invariant mass, $m_{\pi^0}$ and $m_\pi$ are the neutral and charged pion masses, $i$ denotes the sign of a $\rho$-meson ($\pi\pi$ pair) charge. The $\rho^0 \to \pi^+\pi^-$ and $\rho^\pm\to\pi^\pm\pi^0$ transition coupling constants could be determined in the following way: $$g_{\rho^0\pi\pi}^2 = {6\pi m_{\rho^0}^2\Gamma_{\rho^0} \over q_0(m_{\rho^0})^3},$$ $$g_{\rho^\pm\pi\pi}^2 = {6\pi m_{\rho^\pm}^2\Gamma_{\rho^\pm} \over q_\pm(m_{\rho^\pm})^3}$$ Experimental data [@dplphi98] do not contradict the equality of the coupling constants $g_{\rho^0\pi\pi}^2 = g_{\rho^\pm\pi\pi}^2$. 
In this case the $\rho^0$ and $\rho^\pm$ meson widths are related as follows: $$\begin{aligned} \label{shir} \Gamma_{\rho^\pm} = \Gamma_{\rho^0}{m_{\rho^0}^2 \over m_{\rho^\pm}^2} { q_\pm(m_{\rho^\pm})^3 \over q_0(m_{\rho^0})^3}.\end{aligned}$$ In the subsequent analysis we assume that $g_{\rho^0\pi\pi}^2 = g_{\rho^\pm\pi\pi}^2$, and the width values were taken from SND measurements [@dplphi98] $\Gamma_{\rho^0} = 149.8$ MeV, $\Gamma_{\rho^\pm} = 150.9$ MeV. The neutral and charged $\rho$ mesons masses were assumed to be equal and were also taken from the SND measurements [@dplphi98] $m_\rho=775.0$ MeV. The second term in (\[formfac\]) takes into account the $\rho-\omega$ mixing [@thrhoom]. The polarization operator of this mixing $\Pi_{\rho\omega}$ satisfies $\mbox{Im}(\Pi_{\rho\omega}) \ll \mbox{Re}(\Pi_{\rho\omega})$ [@akfaz; @akozi], where $$\begin{aligned} \mbox{Re}(\Pi_{\rho\omega}) = \sqrt[]{{\Gamma_\omega \over \Gamma_{\rho^0}(m_\omega)} B(\omega\to\pi^+\pi^-)} \cdot \biggl| (m_\omega^2-m_\rho^2) - im_\omega(\Gamma_\omega - \Gamma_{\rho^0}(m_\omega))\biggr|,\end{aligned}$$ so we assumed $\mbox{Im}(\Pi_{\rho\omega}) = 0$ in the subsequent analysis. 
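The width relation (\[shir\]) can be checked numerically. The pion masses below are assumed PDG values (they are not quoted in the text), and the function names are ours:

```python
import math

M_PI, M_PI0 = 139.57, 134.98   # charged/neutral pion masses in MeV (assumed PDG values)

def q0(m):
    """Pion momentum in the rho0 -> pi+ pi- decay."""
    return 0.5 * math.sqrt(m * m - 4.0 * M_PI * M_PI)

def qpm(m):
    """Pion momentum in the rho+- -> pi+- pi0 decay."""
    s = m * m
    return math.sqrt((s - (M_PI0 + M_PI) ** 2) * (s - (M_PI0 - M_PI) ** 2)) / (2.0 * m)

def gamma_rho_pm(gamma_rho0=149.8, m_rho=775.0):
    """Charged-rho width from Eq. (shir), with equal rho0 and rho+- masses."""
    return gamma_rho0 * (qpm(m_rho) / q0(m_rho)) ** 3
```

With the SND values $\Gamma_{\rho^0}=149.8$ MeV and $m_\rho=775.0$ MeV this reproduces $\Gamma_{\rho^\pm}\approx 150.9$ MeV, the value used in the analysis.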
The $e^+e^-\to\pi^+\pi^-\pi^0$ process cross section can be written in the following way: $$\begin{aligned} \label{sech3p} \sigma_{3\pi} = \sigma_{\rho\pi\to3\pi} + \sigma_{\omega\pi\to3\pi} + \sigma_{int},\end{aligned}$$ where $$\begin{aligned} \label{sech1} \sigma_{\rho\pi\to3\pi} = {{4\pi\alpha} \over {s^{3/2}}} W_{\rho\pi}(s)\biggl| A_{\rho\pi}(s) \biggr|^2,\end{aligned}$$ $$\begin{aligned} \label{sech2} \sigma_{\omega\pi\to3\pi} = {{4\pi\alpha} \over {s^{3/2}}} W_{\omega\pi}(s)\biggl| A_{\omega\pi}(s) \biggr|^2,\end{aligned}$$ $$\begin{aligned} \label{sech3} \sigma_{int} = {{4\pi\alpha} \over {s^{3/2}}} \biggl\{ A_{\rho\pi}(s) A_{\omega\pi}^\star (s) W_{int}(s) + A_{\rho\pi}^\star (s) A_{\omega\pi}(s) W_{int}^\star (s) \biggr\}.\end{aligned}$$ The phase space factors $W_{\rho\pi}(s)$, $W_{\omega\pi}(s)$ and $W_{int}(s)$ were calculated as follows: $$\begin{aligned} W_{\rho\pi}(s) = {1 \over 12 \pi^2 \mbox{~}\sqrt[]{s}} \int\limits^{\sqrt[]{s}-m_{\pi^0}}_{2m_\pi} m_0 dm_0 \int\limits^{m_+^{max}(m_0)}_{m_+^{min}(m_0)} m_+ dm_+ |\vec{p}_+ \times \vec{p}_-|^2 \cdot \biggl|\sum_{i=+,0,-} { g_{\rho^i\pi\pi} \over D_\rho(m_i)}\biggr|^2,\end{aligned}$$ $$\begin{aligned} W_{\omega\pi}(s) = {1 \over 12 \pi^2 \mbox{~}\sqrt[]{s}} \int\limits^{\sqrt[]{s}-m_{\pi^0}}_{2m_\pi} m_0 dm_0 \int\limits^{m_+^{max}(m_0)}_{m_+^{min}(m_0)} m_+ dm_+ |\vec{p}_+ \times \vec{p}_-|^2 \cdot \biggl| {\Pi_{\rho\omega}g_{\rho^0\pi\pi}\over D_\rho(m_0) D_\omega(m_0)} \biggr|^2,\end{aligned}$$ $$\begin{aligned} W_{int}(s) = {1 \over 12 \pi^2 \mbox{~}\sqrt[]{s}} \int\limits^{\sqrt[]{s}-m_{\pi^0}}_{2m_\pi} m_0 dm_0 \int\limits^{m_+^{max}(m_0)}_{m_+^{min}(m_0)} m_+ dm_+ |\vec{p}_+ \times \vec{p}_-|^2 \cdot \biggl( \biggl[ {\Pi_{\rho\omega}g_{\rho^0\pi\pi}\over D_\rho(m_0) D_\omega(m_0)}\biggr]^\star \cdot \sum_{i=+,0,-}{ g_{\rho^i\pi\pi}\over D_\rho(m_i)} \biggr),\end{aligned}$$ Amplitudes of the $\gamma^\star \to \rho\pi$ and $\gamma^\star \to \omega\pi^0$ transitions have the form 
$$\begin{aligned} \label{aropi} A_{\rho\pi}(s) = \sum_{V=\omega,\phi,\omega^\prime,{\ldots} } {g_{\gamma V}g_{V\rho\pi} \over D_V(s)}e^{i\phi_{\omega V}},\end{aligned}$$ $$\begin{aligned} A_{\omega\pi}(s) = \sum_{V=\rho,\rho^\prime,{\ldots} } {g_{\gamma V}g_{V\omega\pi^0} \over D_V(s)}e^{i\phi_{\rho V}},\end{aligned}$$ where $$D_V(s)=m_V^2-s-i\mbox{~}\sqrt[]{s}\Gamma_V(s), \mbox{~~~} \Gamma_V(s)=\sum_{f}\Gamma(V\to f,s).$$ Here $f$ denotes the final state of the vector meson $V$ decay. $\phi_{\omega V}$ ($\phi_{\rho V}$) are relative interference phases between vector mesons $V$ and $\omega$ ($\rho$), so $\phi_{\omega\omega}=0$ and $\phi_{\rho\rho}=0$. The coupling constants are determined through the decay branching ratios in the following way: $$\begin{aligned} \label{g} |g_{V\gamma}| = \Biggl[ {{3m_V^3\Gamma_VB(V \to e^+e^-)} \over {4\pi\alpha}} \Biggr]^{1/2}\end{aligned}$$ $$\begin{aligned} |g_{V\rho\pi}| = \Biggl[{{4\pi\Gamma_VB(V \to \rho\pi)} \over {W_{\rho\pi}(m_V)}} \Biggr]^{1/2}, \mbox{~~}\end{aligned}$$ $$\begin{aligned} |g_{V\omega\pi}| = \Biggl[{{12\pi\Gamma_VB(V \to \omega\pi)} \over {q_{\omega\pi}^3(m_V)}} \Biggr]^{1/2},\end{aligned}$$ where $q_{\omega\pi}(s)$ is the $\omega$-meson momentum. 
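A minimal numerical sketch of such a coherent sum of vector-meson amplitudes with the propagator denominator $D_V(s)$ defined above (constant widths and placeholder couplings; this is illustrative, not the fit code):

```python
import cmath
import math

def d_v(sqrt_s, m, gamma):
    """Propagator denominator D_V(s) = m_V^2 - s - i sqrt(s) Gamma_V (constant width)."""
    return m * m - sqrt_s * sqrt_s - 1j * sqrt_s * gamma

def amplitude_sq(sqrt_s, resonances):
    """|sum_V g_V e^{i phi_V} / D_V(s)|^2 for (g, phi_deg, m, gamma) tuples;
    g stands in for the product g_{gamma V} g_{V rho pi}."""
    a = sum(g * cmath.exp(1j * math.radians(phi)) / d_v(sqrt_s, m, w)
            for g, phi, m, w in resonances)
    return abs(a) ** 2
```

On the peak of a single resonance this reduces to $g^2/(m_V^2\Gamma_V^2)$, and two equal amplitudes with a $180^\circ$ relative phase cancel exactly, which is why the choice of the phases $\phi_{\omega V}$ matters for the fitted cross section.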
Cross section measurement ========================= From the data in Table \[tab1\] the cross section of the process $e^+e^-\to\pi^+\pi^-\pi^0$ can be calculated as follows: $$\begin{aligned} \label{aprox} \sigma(s) = {{N_{3\pi}(s)} \over {IL(s)\xi(s)}},\end{aligned}$$ where $N_{3\pi}(s)$ is the number of selected $e^+e^-\to\pi^+\pi^-\pi^0(\gamma)$ events, $IL(s)$ is the integrated luminosity, $\xi(s)$ is the function which takes into account the detection efficiency and radiative corrections for initial state radiation: $$\begin{aligned} \label{xifu} \xi(s) = {\int\limits^{E^{max}_\gamma}_0 \sigma_{3\pi}(s,E_\gamma)F(s,E_\gamma) \epsilon(s,E_\gamma) \mathrm{d}E_\gamma \over {\sigma_{3\pi}(s)}}.\end{aligned}$$ Here $E_\gamma$ is the emitted photon energy, $F(s,E_\gamma)$ is the electron “radiator” function [@fadin], $\epsilon(s,E_\gamma)$ is the detection efficiency of the process $e^+e^-\to\pi^+\pi^-\pi^0(\gamma_{rad})$ as a function of the emitted photon energy and the energy in the $e^+e^-$ center of mass system, $\sigma_{3\pi}(s)$ is the theoretical energy dependence of the cross section given by equation (\[sech3p\]). To obtain the values of $\xi(s)$ at each energy point, the visible cross section of the process $e^+e^-\to\pi^+\pi^-\pi^0(\gamma_{rad})$ $$\sigma^{vis}(s) = {N_{3\pi}(s) \over IL(s)}$$ was fitted by the theoretical energy dependence $$\sigma^{th}(s) = \sigma_{3\pi}(s)\xi(s).$$ The following $\chi^2$ function was minimized: $$\chi^2=\sum_{i} {{(\sigma^{vis}_i-\sigma^{th}_i)^2}\over{\sigma^2_i}},$$ where $i$ is the energy point number, $\sigma_i$ is the error of the visible cross section $\sigma^{vis}$. In a good approximation the contributions $\sigma_{\omega\pi\to3\pi}$ and $\sigma_{int}$ in expression (\[sech3p\]) can be omitted, as they are rather small ($\sim$ 5 – 10 %) and practically do not modify the shape of the $\sigma_{3\pi}(s)$ energy dependence. So we assumed that $\sigma_{3\pi}(s)=\sigma_{\rho\pi\to3\pi}(s)$.
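The extraction in Eq. (\[aprox\]) and the correction factor $\xi(s)$ of Eq. (\[xifu\]) can be sketched numerically. The `integrand` callable stands for the product $\sigma_{3\pi}(s,E_\gamma)F(s,E_\gamma)\epsilon(s,E_\gamma)$ and is purely illustrative:

```python
def xi_factor(integrand, sigma_born, e_max, n=1000):
    """Trapezoidal estimate of xi(s) = (1/sigma_3pi(s)) * int_0^{Emax} integrand(E) dE.
    integrand(E) is a hypothetical callable for sigma_3pi(s,E)*F(s,E)*eps(s,E)."""
    h = e_max / n
    total = 0.5 * (integrand(0.0) + integrand(e_max))
    total += sum(integrand(i * h) for i in range(1, n))
    return h * total / sigma_born

def born_cross_section(n_3pi, int_lum, xi):
    """sigma(s) = N_3pi / (IL * xi), Eq. (aprox)."""
    return n_3pi / (int_lum * xi)
```

For instance, 150 selected events with $IL = 1000~\mathrm{nb}^{-1}$ and $\xi = 0.15$ give a cross section of 1 nb.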
The amplitude of the $\gamma^\star \to \rho\pi$ transition (\[aropi\]) was written as $$\begin{aligned} A_{\rho\pi}(s) = {{1}\over\sqrt[]{4\pi\alpha}} \sum_{V=\omega,\phi,\omega^\prime,\omega^{\prime\prime}} {{\Gamma_V m_V^2 \mbox{~} \sqrt[]{m_V\sigma(V\to 3\pi)}}\over{D_V(s)}} {{e^{i\phi_{\omega V}}}\over{\sqrt[]{W_{\rho\pi}(m_V)}}},\end{aligned}$$ where $$\sigma(V\to X) = {{12\pi B(V\to e^+e^-)B(V\to X) } \over {m_V^2}}.$$ The following form of the energy dependence of the $\omega^\prime$ and $\omega^{\prime\prime}$ total widths was used: $$\Gamma_V(s)=\Gamma_V{W_{\rho\pi}(s)\over W_{\rho\pi}(m_V)}.$$ In the fit the $\omega$ meson parameters (mass, width, branching ratios of the main decays) were fixed at their PDG values [@pdg], and the $\phi$ meson mass and width were fixed at the values measured by SND [@phi98]. It was shown [@phi98] that the $\sigma(\phi\to3\pi)$ parameter and the cross section value at $\sqrt[]{s} > 1027$ MeV have a rather large model error, due to the uncertainty in the choice of the phase $\phi_{\omega\phi}$ and in the contributions to the transition amplitude from states other than the $\phi$ and $\omega$ resonances. Therefore we have taken $\sigma(\phi\to3\pi)$ as a free parameter in the fit and the visible cross section presented in this work was fitted together with the visible cross section from Ref.[@phi98]. The masses and widths of the $\omega^\prime$, $\omega^{\prime\prime}$ resonances were free parameters of the fit. Phases $\phi_{\omega V}$ can deviate from $180^\circ$ or $0^\circ$ and their values can depend on energy due to mixing between vector mesons. For example, the phase $\phi_{\omega\phi}$ was found to be close to $180^\circ$ [@phi98] and agrees with the prediction [@faza] $\phi_{\omega\phi}=\Phi(s)$ $(\Phi(m_\phi)\simeq 163^\circ)$, where the function $\Phi(s)$ is defined in Ref.[@faza].
There are no theoretical predictions for the values of $\phi_{\omega\omega^\prime}$ and $\phi_{\omega\omega^{\prime\prime}}$ or their energy dependences, so we considered $\sqrt[]{\sigma(\omega^\prime\to 3\pi)}$ and $\sqrt[]{\sigma(\omega^{\prime\prime}\to3\pi)}$ as free parameters, i.e. $\phi_{\omega\omega^\prime}$ and $\phi_{\omega\omega^{\prime\prime}}$ can be equal to $0^\circ$ or $180^\circ$. The $\xi(s)$ values were obtained by approximating the experimental data in several models:

1. $\phi_{\omega\phi}=180^\circ$;
2. $\phi_{\omega\phi}=\Phi(s)$;
3. $\phi_{\omega\phi}$ is a free parameter;
4. $\sigma(\omega^{\prime\prime}\to3\pi)=0$, $\phi_{\omega\phi}=180^\circ$;
5. $\sigma(\omega^{\prime\prime}\to3\pi)=0$, $\phi_{\omega\phi}=\Phi(s)$;
6. $\sigma(\omega^{\prime\prime}\to3\pi)=0$, $\phi_{\omega\phi}$ is a free parameter.

The values of $\xi(s)$ depend significantly on the applied model in the energy range $\sqrt[]{s} \simeq$ 1040 – 1090 MeV, and at $\sqrt[]{s} = 1040$ MeV the $\xi(s)$ values differ by a factor of 10 for different models. Above 1090 MeV the $\xi(s)$ model dependence is negligible. Using the obtained $\xi(s)$ values, the cross section of the $e^+e^-\to\pi^+\pi^-\pi^0$ process was calculated (Table \[tab2\]). The cross section in the energy region $\sqrt[]{s}=1027$ – $1060$ MeV has changed in comparison with the values reported in Ref.[@phi98]. In Ref.[@phi98] contributions from the $\omega$ excitations were taken into account as a constant amplitude. In the present analysis a more realistic model was used, which caused the change in the cross section. The systematic error of the cross section determination at each energy point $\sqrt[]{s}$ is equal to $$\sigma_{sys} = \sigma_{eff} \oplus \sigma_{IL} \oplus \sigma_{mod}(s) \oplus \sigma_{bkg}(s).$$ Here $\sigma_{eff}=5\%$ and $\sigma_{IL}=2\%$ are systematic uncertainties in the detection efficiency and integrated luminosity, which are common to all energy points.
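The $\oplus$ symbol in the systematic-error formula denotes addition in quadrature of independent uncertainties, which can be sketched as:

```python
from math import sqrt

def quad_sum(*errs):
    """'⊕' in the text: combine independent (relative or absolute)
    uncertainties in quadrature."""
    return sqrt(sum(e * e for e in errs))

# e.g. 5% efficiency ⊕ 2% luminosity, model and background terms set to zero
sigma_sys = quad_sum(0.05, 0.02, 0.0, 0.0)
```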
The model uncertainty $\sigma_{mod}(s)$ is significant in the region $\sqrt[]{s} =1027$ – $1080$ MeV and was obtained from the spread of the $\xi(s)$ values determined for the six models mentioned above. The error $\sigma_{bkg}(s)$ takes into account the inaccuracy ($\sim 40\%$) of the background subtraction and depends on the beam energy. The obtained cross section differs by about $30\pm15$% from the previous SND result [@sndmhad] (Fig.\[sis\]), which claimed a systematic error of about 12%. This difference is attributed to the fact that in the new analysis we implemented corrections to the detection efficiency (described in III.C) which were not used in the previous one. The comparison of the measured cross section with the other experimental results is presented in Fig.\[cs\].

Approximation of the $\pi^+\pi^-$ mass spectra
==============================================

The contribution of the $e^+e^-\to\omega\pi^0\to\rho^0\pi^0\to\pi^+\pi^-\pi^0$ mechanism to the process $e^+e^-\to\pi^+\pi^-\pi^0$ manifests itself as an interference pattern in the $\pi^+\pi^-$ invariant mass spectra. To analyze the dipion mass spectra, the formfactor $F$ (expression (\[formfac\])) was written in the following form: $$\begin{aligned} \label{formfac2} |F|^2 = \Biggl| A_{\rho\pi}(s) \Biggr|^2 \times \Biggl| \sum_{i=+,0,-} { g_{\rho^i\pi\pi} \over D_\rho(m_i)} + R(s)e^{i\psi(s)} {\mbox{Re}(\Pi_{\rho\omega}) g_{\rho^0\pi\pi} \over D_\rho(m_0) D_\omega(m_0)} \Biggr|^2,\end{aligned}$$ where $R(s)$ is the absolute value and $\psi(s)$ the phase of the ratio $A_{\omega\pi}(s)/A_{\rho\pi}(s)$. The $\psi(s)$ energy dependence can be obtained from the approximation of the experimental $\pi^+\pi^-$ invariant mass spectra as described below.
The $R(s)$ value was calculated from the equation $$\begin{aligned} R^2\cdot \biggl(W_{\omega\pi}(s)-{q^3_{\omega\pi}(s)\over 3} {\sigma_{3\pi}(s)\over \sigma_{\omega\pi}(s)}\biggr) + R\cdot \biggl(e^{-i\psi}W_{int}(s)+e^{i\psi}W_{int}^\star (s) \biggr)+ W_{\rho\pi}(s) = 0,\end{aligned}$$ which follows from expressions (\[sech1\]-\[sech3\]). The $e^+e^-\to\omega\pi^0$ cross section was obtained from SND measurements of the $e^+e^-\to\omega\pi^0\to\pi^0\pi^0\gamma$ cross section [@ppg]: $\sigma_{\omega\pi^0}=\sigma_{\omega\pi^0\to\pi^0\pi^0\gamma}/ B(\omega\to\pi^0\gamma)$, and $\sigma_{3\pi}(s)$ is the $e^+e^-\to\pi^+\pi^-\pi^0$ cross section measured here (Table \[tab2\]). The real part of the polarization operator $\Pi_{\rho\omega}$ is proportional to $\sqrt[]{B(\omega\to\pi^+\pi^-)}$. The world average value for this branching ratio is $B(\omega\to\pi^+\pi^-)=2.21 \pm 0.30 \%$ [@pdg]. The results of $B(\omega\to\pi^+\pi^-)$ measurements in different experiments deviate from each other by a factor of more than 1.5. For example, the OLYA detector reported the value $B(\omega\to\pi^+\pi^-)=2.3\pm0.5 \%$ [@olya], while the CMD-2 experiment reported $B(\omega\to\pi^+\pi^-)=1.33\pm0.25 \%$ [@cmdpp]. Therefore $B(\omega\to\pi^+\pi^-)$ was treated as a free parameter of the fit. For the mass spectra analysis the events selected in the energy region $\sqrt[]{s} \ge 1100$ MeV were used. For each energy point the $\pi^+\pi^-$ mass spectra were formed and arranged in histograms with a dipion mass range from 280 to 1240 MeV and a bin width of 40 MeV. The invariant mass values were calculated after the kinematic reconstruction. The expected background was subtracted bin by bin while forming the histograms. The analysis of the dipion mass spectra was performed in a way similar to that described in Ref.[@dplphi98]. The experimental spectra were fitted with theoretical distributions.
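The equation for $R(s)$ above is an ordinary quadratic with real coefficients: the linear term is $2\,\mathrm{Re}\,(e^{-i\psi}W_{int})$. A minimal sketch of extracting the physical (positive) root, with generic placeholder coefficients:

```python
from math import sqrt
import cmath

def ratio_R(a, w_int, psi, c):
    """Solve a*R^2 + b*R + c = 0 for the amplitude ratio R, where
    b = exp(-i psi)*W_int + c.c. = 2*Re(exp(-i psi)*W_int) is real.
    Returns the larger real root (the physical, positive one when it exists)."""
    b = 2.0 * (cmath.exp(-1j * psi) * w_int).real
    disc = b * b - 4.0 * a * c
    if disc < 0:
        raise ValueError("no real solution for R")
    return max((-b + sqrt(disc)) / (2 * a), (-b - sqrt(disc)) / (2 * a))
```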
Using the $e^+e^-\to\pi^+\pi^-\pi^0$ cross section (\[ds\]) and formfactor (\[formfac2\]), the theoretical spectra were calculated: $$\begin{aligned} S^{(0)}_j(s) = {1 \over C_S(s)} \cdot\int\limits^{m_{j}+\Delta}_{m_{j}-\Delta} m_0 dm_0 \int\limits^{m_+^{max}(m_0)}_{m_+^{min}(m_0)} m_+ dm_+ |\vec{p}_+ \times \vec{p}_-|^2 \cdot |F|^2,\end{aligned}$$ where $j$ is the bin number, $\Delta=20$ MeV is half the bin width, $m_j$ is the central value of the invariant mass in the $j$th bin, and $C_S(s)$ is a normalizing coefficient. These spectra were corrected taking into account the detection efficiency $\epsilon^{(0)}_j$ for the $j$th bin and the probability $a^{(0)}_{ij}$ that an event belonging to the $j$th bin migrates to the $i$th bin due to the finite detector resolution: $$\begin{aligned} G^{(0)}_i(s) = {1 \over C_G(s)} \Biggl(\sum_j a^{(0)}_{ij} S^{(0)}_j(s) \epsilon^{(0)}_j \Biggr) \cdot (1+\delta^{(0)}_i(s)).\end{aligned}$$ Here $\delta^{(0)}_i(s)$ is a radiative correction and $C_G(s)$ is a normalizing coefficient. The values of $a^{(0)}_{ij}$, $\epsilon^{(0)}_{j}$ and $\delta^{(0)}_i(s)$ were obtained from simulation. The function to be minimized was $$\begin{aligned} \chi^2=\sum_{s} \chi^2_0(s) = \sum_{s} \sum_{i} \Biggl( { {H_i^{(0)}-G_i^{(0)}} \over {\sigma_i^{(0)}} }\Biggr)^2.\end{aligned}$$ Here $H^{(0)}$ is the normalized experimental $\pi^+\pi^-$ mass distribution (histogram); $\sigma_i^{(0)}=\Delta H^{(0)}_i \oplus \Delta G^{(0)}_i$ includes the uncertainties $\Delta H^{(0)}_i$ and $\Delta G^{(0)}_i$ of the experimental and theoretical distributions ($\Delta H^{(0)}_i \gg \Delta G^{(0)}_i$). In the fit, the phase $\psi(s)$ at each energy point and $B(\omega\to\pi^+\pi^-)$ were free parameters. Values of the phase $\psi(s)$ were allowed to vary from $-180^\circ$ to $180^\circ$. The obtained $\psi(s)$ values are presented in Table \[tab3\].
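The folding step above, from a theoretical spectrum $S_j$ to the expected histogram $G_i$, can be sketched as a plain matrix fold (a toy illustration, not the simulation-derived matrices of the analysis):

```python
def fold_spectrum(S, a, eps, delta):
    """G_i ∝ (sum_j a[i][j] * S[j] * eps[j]) * (1 + delta[i]),
    normalized to unit sum: migration matrix a, per-bin efficiency eps,
    per-bin radiative correction delta."""
    G = [sum(a[i][j] * S[j] * eps[j] for j in range(len(S))) * (1 + delta[i])
         for i in range(len(a))]
    norm = sum(G)
    return [g / norm for g in G]

# Trivial check: identity migration, flat efficiency, no radiative correction
folded = fold_spectrum([2.0, 2.0], [[1, 0], [0, 1]], [0.5, 0.5], [0.0, 0.0])
```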
The systematic inaccuracy of $\psi(s)$ is about $7^\circ$ and is connected with a systematic error in the $R(s)$ determination, which in its turn is about $4\%$ due to uncertainties of the $\sigma_{\omega\pi}$ and $\sigma_{3\pi}$ measurements. The $\omega\to\pi^+\pi^-$ decay probability was found to be equal to $2.38\pm^{1.77}_{0.90}\pm0.18 \%$, where the systematic error is also related to the uncertainty of the $R(s)$ determination. In Figs.\[neu\] and \[cha\] the experimental $\pi\pi$ mass spectra are shown together with the theoretical distributions obtained from the fit and the spectra expected from the model with only the $\rho\pi$ intermediate state. In the $\pi^+\pi^-$ mass spectra the peak in the $\omega$ meson region is clearly seen. The distribution of the invariant mass of the $\pi^\pm\pi^0$ pairs does not contradict the $\rho\pi$ intermediate state model within our statistical accuracy. These figures demonstrate that together with the $\rho\pi$ intermediate state the $\omega\pi^0$ intermediate state also contributes to the process $e^+e^-\to\pi^+\pi^-\pi^0$.

The total cross section analysis
================================

The analysis of the $e^+e^-\to \pi^+\pi^-\pi^0$ cross section energy dependence obtained here (Table \[tab2\]) met the following difficulties:

1. The cross section was measured in a limited $\sqrt[]{s}$ energy region, so the results of other experiments must also be used; because of different systematic effects, the problem of matching the cross sections of the various measurements arises.
2. Ideally, to obtain the vector meson parameters, a combined fit of all $e^+e^-\to hadrons$ cross sections is necessary.

The cross section measured in this work was analyzed together with the DM2 measurements of the $e^+e^-\to \pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ cross sections [@dm2]. The $e^+e^-\to \pi^+\pi^-\pi^0$ cross section was fitted by the expression (\[sech3p\]).
The $A_{\rho\pi}$ amplitude was written in the following way: $$\begin{aligned} A_{\rho\pi}(s) = {{1}\over\sqrt[]{4\pi\alpha}} \Biggl( { {\Gamma_\omega m_\omega^2 \mbox{~} \sqrt[]{m_\omega\sigma(\omega\to 3\pi)}} \over {D_\omega(s)}} { 1 \over {\sqrt[]{W_{\rho\pi}(m_\omega)}}} + { {\Gamma_\phi m_\phi^2 \mbox{~} \sqrt[]{m_\phi\sigma(\phi\to 3\pi)}} \over {D_\phi(s)}} { {e^{i\Phi(s)}} \over {\sqrt[]{W_{\rho\pi}(m_\phi)}}} + \nonumber \\ + \sum_{i=1}^3 {{\Gamma_{\omega^i} m_{\omega^i}^2 \mbox{~} \sqrt[]{m_{\omega^i}\sigma(\omega^i\to 3\pi)}}\over{D_{\omega^i}(s)}} {{e^{i\phi_{\omega\omega^i}}}\over{\sqrt[]{W_{\rho\pi}(m_{\omega^i})}}} \Biggr),\end{aligned}$$ where $i$ is the resonance number. The following form of the energy dependence of the $\omega^i$ total widths was used: $$\begin{aligned} \Gamma_{\omega^1}(s)=\Gamma_{\omega^1}{W_{\rho\pi}(s)\over W_{\rho\pi}(m_{\omega^1})},\end{aligned}$$ $$\begin{aligned} \Gamma_{\omega^i}(s)=\Gamma_{\omega^i}\biggl(B(\omega^i\to 3\pi) {W_{\rho\pi}(s) \over W_{\rho\pi}(m_{\omega^i})} + B(\omega^i \to \omega\pi\pi) {W_{\omega\pi\pi}(s) \over W_{\omega\pi\pi}(m_{\omega^i})}\biggr), \mbox{~~~} i=2,3.\end{aligned}$$ Here $W_{\omega\pi\pi}(s)$ is the phase space factor of the $\omega\pi\pi$ final state [@ak2]. The probabilities of the $\omega^i$ decays into $\pi^+\pi^-\pi^0$ and $\omega\pi\pi$ were calculated in the following way: $$\begin{aligned} B(\omega^i\to f) = {\sigma(\omega^i\to f) \over \sum_f \sigma(\omega^i\to f)}. \end{aligned}$$ Here $\sigma(\omega^i\to\omega\pi\pi) = 1.5 \cdot \sigma(\omega^i\to\omega\pi^+\pi^-)$. In the total width energy dependence the contributions from the following final states were neglected: $K_SK^\pm\pi^\mp$, $K^{\star0}K^-\pi^+$, $\overline{K}^{\star0}K^+\pi^-$, $K\overline{K}$. The $\omega$ meson parameters were fixed according to the PDG table values [@pdg]. 
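The branching ratios and the two-channel energy-dependent width defined above are straightforward to compute once the $\sigma(\omega^i\to f)$ parameters and phase-space factors are given. A hedged sketch with generic placeholder phase-space functions (the true $W_{\rho\pi}$, $W_{\omega\pi\pi}$ are not reproduced here):

```python
def branching_ratios(sigmas):
    """B(V->f) = sigma(V->f) / sum_f sigma(V->f)."""
    tot = sum(sigmas.values())
    return {f: s / tot for f, s in sigmas.items()}

def total_width(s, gamma0, br, phase_space, m):
    """Gamma(s) = Gamma0 * sum_f B_f * W_f(s) / W_f(m^2):
    each channel's phase-space factor is normalized at the resonance mass."""
    return gamma0 * sum(br[f] * phase_space[f](s) / phase_space[f](m * m)
                        for f in br)

br = branching_ratios({"3pi": 3.0, "omega_pipi": 1.0})
```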
The $m_\phi$, $\Gamma_{\phi}$ and parameters of the $\phi\to K\overline{K}$ and $\eta\gamma$ decays were fixed at the values obtained by SND [@phi98], while $\sigma(\phi\to 3\pi)$ was a free parameter of the fit. As mentioned above, the phases $\phi_{\omega\omega^i}$ can differ from 0 or 180 degrees and be energy dependent. Here we consider only $\sqrt[]{\sigma(\omega^i\to 3\pi)}$ as a free parameter, i.e. $\phi_{\omega\omega^i}=0^\circ$ or $180^\circ$. For the $A_{\omega\pi}$ amplitude two models with different energy behavior of the phase were used. Their parameters were obtained by fitting the $e^+e^-\to\omega\pi^0\to\pi^0\pi^0\gamma$ cross section measured by SND [@ppg] and the CLEO2 data on the $\tau\to 3\pi\pi^0$ decay [@cleo4p] (Fig.\[omp\]). The first model was suggested in Ref. [@ppg]. It assumes that only the $\rho$ and $\rho^{\prime\prime}$ resonances contribute to the $e^+e^-\to\omega\pi$ cross section (i.e. $A_{\omega\pi}=A_{\rho\to\omega\pi}+ A_{\rho^{\prime\prime}\to\omega\pi}$), with the following parameters: the coupling constant $g_{\rho\omega\pi}\sim 15.2$ GeV$^{-1}$, $\rho^{\prime\prime}$ mass $m_{\rho^{\prime\prime}}\sim 1700$ MeV, width $\Gamma_{\rho^{\prime\prime}}\sim 1$ GeV, phase $\phi_{\rho\rho^{\prime\prime}}=180^\circ$ and $\sigma(\rho^{\prime\prime}\to\omega\pi) \sim 9$ nb. The $\rho^{\prime\prime}$ total width energy dependence is taken to be the following: $$\begin{aligned} \Gamma_{\rho^{\prime\prime}}(s)=\Gamma_{\rho^{\prime\prime}} \biggl( 0.1{m^2_{\rho^{\prime\prime}} \over s} {q^3_{\pi\pi}(s) \over q^3_{\pi\pi}(m_{\rho^{\prime\prime}})} + 0.9{ q^3_{\omega\pi}(s) \over q^3_{\omega\pi}(m_{\rho^{\prime\prime}})} \biggr), \end{aligned}$$ where $q_{\pi\pi}(s)$ is the pion momentum. The second model assumes that three resonances, $\rho$, $\rho^\prime$ and $\rho^{\prime\prime}$, contribute to the $e^+e^-\to\omega\pi$ cross section (i.e.
$A_{\omega\pi}=A_{\rho\to\omega\pi}+ A_{\rho^\prime\to\omega\pi}+ A_{\rho^{\prime\prime}\to\omega\pi}$). In this case the parameters of the model are: $g_{\rho\omega\pi}\sim 16.8$ GeV$^{-1}$, $m_{\rho^{\prime}}\sim 1480$ MeV, $\Gamma_{\rho^{\prime}}\sim 790$ MeV, $\phi_{\rho\rho^\prime}=180^\circ$, $\sigma(\rho^\prime\to\omega\pi) \sim 86$ nb, and $m_{\rho^{\prime\prime}}\sim 1640$ MeV, $\Gamma_{\rho^{\prime\prime}}\sim 1290$ MeV, $\phi_{\rho\rho^{\prime\prime}}=0^\circ$, $\sigma(\rho^{\prime\prime}\to\omega\pi) \sim 48$ nb. The $\rho^\prime$ and $\rho^{\prime\prime}$ total width energy dependence is taken in the form $$\begin{aligned} \Gamma_{\rho^{\prime(\prime\prime)}}(s)=\Gamma_{\rho^{\prime(\prime\prime)}} { q^3_{\omega\pi}(s) \over q^3_{\omega\pi}(m_{\rho^{\prime(\prime\prime)}})}\end{aligned}$$ In both models the $\rho$ meson energy dependent width has the form: $$\begin{aligned} \Gamma_{\rho}(s)=\Gamma_{\rho^0} {{m_{\rho^0}^2}\over{s}} {{q^3_{\pi\pi}(s)}\over{q^3_{\pi\pi}(m_{\rho^0})}} + {{g_{\rho\omega\pi}^2}\over{12\pi}}q^3_{\omega\pi}(s)\end{aligned}$$ The $e^+e^-\to\omega\pi^+\pi^-$ process cross section was written in the following way: $$\begin{aligned} \sigma_{\omega\pi\pi} = {1 \over s^{3/2}} \Biggl| \sum_{i=2}^3 {{\Gamma_{\omega^i}m_{\omega^i}^2\mbox{~}\sqrt[]{\sigma(\omega^i\to\omega\pi^+\pi^-) m_{\omega^i}}} \over{D_{\omega^i}(s)}} \mbox{~} \sqrt[]{{W_{\omega\pi\pi}(s)}\over{W_{\omega\pi\pi}(m_{\omega^i})}} \Biggr|^2.\end{aligned}$$ The cross sections of the $e^+e^-\to \pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ processes measured by SND and DM2 were fitted together. 
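The energy-dependent $\rho$ width above combines the p-wave $\pi\pi$ term, scaled by $m_\rho^2/s$, with the $\omega\pi$ channel opened through the $g_{\rho\omega\pi}$ coupling. A minimal numerical sketch with placeholder momentum functions (the real $q_{\pi\pi}$, $q_{\omega\pi}$ kinematics are not reproduced):

```python
from math import pi

def gamma_rho(s, gamma0, m_rho, q_pipi, q_omegapi, g_rho_omega_pi):
    """Gamma_rho(s) = Gamma0*(m^2/s)*(q_pipi(s)/q_pipi(m^2))^3
                      + g^2/(12*pi)*q_omegapi(s)^3.
    q_pipi, q_omegapi are callables of s (the squared c.m. energy)."""
    return (gamma0 * (m_rho ** 2 / s) * (q_pipi(s) / q_pipi(m_rho ** 2)) ** 3
            + g_rho_omega_pi ** 2 / (12 * pi) * q_omegapi(s) ** 3)
```

At $s=m_\rho^2$ and below the $\omega\pi$ threshold the expression reduces to the nominal width $\Gamma_{\rho^0}$, as it should.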
The function to be minimized was $$\chi^2=\chi^2_{3\pi(SND)}+\chi^2_{3\pi(DM2)}+\chi^2_{\omega\pi\pi(DM2)},$$ where $$\chi^2_{3\pi(SND)} = \sum_{s}\Biggl( { {\sigma_{3\pi}^{(SND)}(s)-\sigma_{3\pi}(s)} \over {\Delta_{3\pi}^{(SND)}(s)} } \Biggr)^2$$ $$\chi^2_{3\pi(DM2)} = \sum_{s}\Biggl( { C_{3\pi}\cdot{\sigma_{3\pi}^{(DM2)}(s)-\sigma_{3\pi}(s)} \over {\Delta_{3\pi}^{(DM2)}(s)} } \Biggr)^2$$ $$\chi^2_{\omega\pi\pi(DM2)} = \sum_{s}\Biggl( { C_{\omega\pi\pi}\cdot{\sigma_{\omega\pi\pi}^{(DM2)}(s)- \sigma_{\omega\pi\pi}(s)} \over {\Delta_{\omega\pi\pi}^{(DM2)}(s)} } \Biggr)^2$$ Here $\sigma_{3\pi(\omega\pi\pi)}^{(SND(DM2))}(s)$ are the experimental cross sections, $\Delta$ are their uncertainties, and $C_{3\pi}$ and $C_{\omega\pi\pi}$ are coefficients which take into account the relative systematic bias between the SND and DM2 data. The $e^+e^-\to\pi^+\pi^-\pi^0$ cross section measured by SND (Table \[tab2\]) was fitted in the energy region $\sqrt[]{s}$ from 980 to 1380 MeV. The errors $\Delta_{3\pi(SND)}$ include the statistical $\sigma_{stat}$ and the following systematic errors: $\sigma_{bkg}$ due to the inaccuracy of the background subtraction and $\sigma_{mod}$ due to model dependence. Thus $\Delta_{3\pi(SND)}=\sigma_{stat}\oplus\sigma_{mod}\oplus\sigma_{bkg}$. The fitting was performed with $m_{\omega^i}$, $\Gamma_{\omega^i}$, $\sqrt[]{\sigma(\omega^i\to 3\pi)}$, $\sqrt[]{\sigma(\omega^i\to\omega\pi^+\pi^-)}$ and $\sigma(\phi\to 3\pi)$ as free parameters. To estimate the possible relative bias between the SND and DM2 data, $C_{3\pi}$ was considered as a free parameter as well. It was found that $C_{3\pi}=1.72\pm 0.24$. To estimate the possible biases independently, the cross sections of the $e^+e^-\to\omega\pi^0$ process (Fig.\[omp\]) measured by SND [@ppg] and DM2 [@dm2omp], and the cross section calculated, using the CVC hypothesis, from the CLEO2 result on the $\tau\to 3\pi\pi^0$ decay [@cleo4p], were also studied.
The $e^+e^-\to\omega\pi^0$ cross section was measured by DM2 using the $\pi^+\pi^-2\pi^0$ final state, i.e., as in the case of the $\pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ final states, events containing both tracks and photons were detected. This gives us hope that all these DM2 measurements have similar systematic errors. The SND and CLEO2 data agree rather well. The DM2 and CLEO2 data points strongly overlap. The average ratio of the CLEO2 and DM2 cross sections is 1.54, which agrees with $C_{3\pi}=1.72\pm 0.24$. In the further analysis we assumed $C_{3\pi}=C_{\omega\pi\pi}$ and fixed these coefficients at 1 or 1.54. It is generally accepted that two $\omega$-like resonances $\omega^\prime$ and $\omega^{\prime\prime}$ exist [@pdg; @dm2]. The first fit was done assuming three $\omega^i$ resonances and without taking into account the $\omega\pi\to\pi^+\pi^-\pi^0$ mechanism (i.e., $\sigma_{\omega\pi\to 3\pi}=0$ and $\sigma_{int}=0$ were assumed). The obtained parameters of the $\omega^i$ resonances are shown in Table \[tab44\]. The $\sigma(\omega^1\to 3\pi)$ differs from zero by about one standard deviation. If in this approximation one takes into account the contribution from the $\omega\pi\to\pi^+\pi^-\pi^0$ mechanism, then $\sigma(\omega^1\to 3\pi)= 0.07 \pm^{0.32}_{0.07}$ nb, and the parameters of the $\omega^2$, $\omega^3$ resonances deviate from their previous values within their statistical errors. In the further analysis, therefore, the parameter $\sigma(\omega^1\to 3\pi)$ was fixed to zero, and for the $\omega^2$, $\omega^3$ resonances the more usual notation $\omega^\prime$, $\omega^{\prime\prime}$ was used. The further fits were performed under the following assumptions:

1. the contribution from the $\omega\pi\to\pi^+\pi^-\pi^0$ mechanism was not taken into account, i.e. $\sigma_{\omega\pi\to 3\pi}=0$, $\sigma_{int}=0$;
2. the first model for the amplitude $A_{\omega\pi}$ was used;
3. the second model for the amplitude $A_{\omega\pi}$ was used.

The results of the fits are shown in Tables \[tab4\], \[tab5\] and Figs.\[cs3\], \[cs4\]. When no relative shift between the SND and DM2 experiments is assumed, the value of $\chi^2_{3\pi(DM2)}$ is rather large. The obtained parameters depend weakly on the applied model.

Discussion
==========

The fit results revealed that the $e^+e^-\to\pi^+\pi^-\pi^0$ and $e^+e^-\to\omega\pi^+\pi^-$ cross sections can be described by a sum of contributions of the $\omega$ and $\phi$ mesons and two additional $\omega^\prime$, $\omega^{\prime\prime}$ resonances. The following $\omega^\prime$ parameters were obtained (Table \[tab4\]): $$m_{\omega^\prime} = 1490 \pm 50 \pm 25 \mbox{~~MeV},$$ $$\Gamma_{\omega^\prime} = 1210 \pm^{300}_{200} \pm 170 \mbox{~~MeV},$$ $$\sigma(\omega^\prime\to 3\pi) = 3.5 \pm 0.5 \pm 0.2 \mbox{~~nb},$$ $$\sigma(\omega^\prime\to\omega\pi^+\pi^-) = 0.03 \pm^{0.1}_{0.03} \pm 0.01 \mbox{~~nb},$$ $$\phi_{\omega\omega^\prime} \sim 180^\circ$$ The $\omega^{\prime}$ decays mostly into $\pi^+\pi^-\pi^0$: $B(\omega^\prime\to 3\pi) \simeq 99 \%$ and its electronic width is $\Gamma(\omega^\prime\to e^+e^-) \simeq 650$ eV. The $\omega^{\prime\prime}$ parameters were found to be: $$m_{\omega^{\prime\prime}} = 1790 \pm 40 \pm 10 \mbox{~~MeV},$$ $$\Gamma_{\omega^{\prime\prime}} = 560 \pm^{150}_{100} \pm 20 \mbox{~~MeV},$$ $$\sigma(\omega^{\prime\prime}\to 3\pi) = 2.0 \pm 0.40 \pm 0.8 \mbox{~~nb},$$ $$\sigma(\omega^{\prime\prime}\to\omega\pi^+\pi^-) = 1.9 \pm 0.4 \pm 0.8 \mbox{~~nb},$$ $$\phi_{\omega\omega^{\prime\prime}} \sim 0^\circ$$ The $\omega^{\prime\prime}$ resonance decays with approximately equal probabilities into $\pi^+\pi^-\pi^0$ and $\omega\pi\pi$: $B(\omega^{\prime\prime}\to 3\pi)\simeq 0.4$, $B(\omega^{\prime\prime}\to \omega\pi\pi) \simeq 0.6$, and it has the electronic width $\Gamma(\omega^{\prime\prime}\to e^+e^-) \simeq 600$ eV.
The second errors shown are due to the uncertainty of the $A_{\omega\pi}$ amplitude choice and the possible relative bias between different experiments. The rather large electronic widths obtained for the $\omega^{\prime}$ and $\omega^{\prime\prime}$ resonances may represent some challenge for theory. In the framework of the nonrelativistic quark model one can obtain the following ratios: $$\biggl| {{\Psi_{\omega^{\prime}}^S(0)}\over{\Psi_\omega^S(0)}}\biggr|^2 = \biggl({{m_{\omega^\prime}}\over{m_\omega}} \biggr)^2 \cdot {{\Gamma(\omega^\prime\to e^+e^-)}\over{\Gamma(\omega\to e^+e^-)}} \sim 4,$$ $$\biggl| {{\Psi_{\omega^{\prime\prime}}^S(0)}\over{\Psi_\omega^S(0)}}\biggr|^2 = \biggl({{m_{\omega^{\prime\prime}}}\over{m_\omega}} \biggr)^2 \cdot {{\Gamma(\omega^{\prime\prime}\to e^+e^-)}\over{\Gamma(\omega\to e^+e^-)}} \sim 5,$$ where $\Psi_V^S(0)$ is the radial wave function of the $q\overline{q}$ bound state at the origin. For the quark-antiquark potentials used to describe heavy quarkonia, such ratios are always less than unity [@zk1]. This is also confirmed experimentally. For example, the analogous ratios for $c\overline{c}$ and $b\overline{b}$ states are: $|\Psi_{\psi(2S)}^S(0)/\Psi_{J/\psi}^S(0)|^2\simeq 0.57$, $|\Psi_{\Upsilon(2S)}^S(0)/\Psi_{\Upsilon(1S)}^S(0)|^2\simeq 0.44$, $|\Psi_{\Upsilon(3S)}^S(0)/\Psi_{\Upsilon(1S)}^S(0)|^2\simeq 0.43$. Of course, the nonrelativistic quark model is unreliable for light-quark $\omega$-states. But, surprisingly, it gives a quite reasonable description of the ground state $\rho$, $\omega$ and $\phi$ meson leptonic widths, which do not change radically in the framework of the “relativized” quark model [@zk2]. For comparison, the nonrelativistic quark model predictions for the two-photon widths of the light pseudoscalar mesons are dramatically wrong, and only the “relativized” model gives a reasonable result [@zk3]. More precise data and a deeper analysis are required to draw firm conclusions.
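The wave-function ratio quoted above is a one-line computation; a hedged numerical check with the rounded inputs of this section ($\Gamma(\omega\to e^+e^-)\approx 0.6$ keV is an assumed round value, not a parameter of the fit):

```python
def wavefn_ratio_sq(m_excited, m_ground, gee_excited, gee_ground):
    """Nonrelativistic quark model: |Psi'(0)/Psi(0)|^2
    = (m'/m)^2 * Gamma'(e+e-) / Gamma(e+e-).  Units cancel, so masses and
    widths only need to share units pairwise."""
    return (m_excited / m_ground) ** 2 * (gee_excited / gee_ground)

# m_omega' ~ 1490 MeV, m_omega ~ 782.65 MeV, widths in eV (illustrative inputs)
ratio = wavefn_ratio_sq(1490.0, 782.65, 650.0, 600.0)
```

With these inputs the ratio comes out near 4, reproducing the estimate in the text.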
The $\omega^\prime$, $\omega^{\prime\prime}$ widths obtained from the fit are rather large in comparison with their masses (this result agrees with the experimental data analysis reported in [@ak1; @ak2; @ak3]). In this context the question whether a sum of Breit-Wigner amplitudes is an adequate description of the cross sections in the energy region $m_\phi<\sqrt[]{s}<2000$ MeV becomes pertinent. The presented analysis of the $\omega$-like excited states is somewhat speculative, since we had to assume a rather large systematic bias between the SND and DM2 measurements. The $\sigma(\phi\to 3\pi)$ was found to be $$\sigma(\phi\to 3\pi) = 646 \pm 4 \pm 37 \mbox{~nb}.$$ This agrees with the results of SND studies of the $e^+e^-\to\pi^+\pi^-\pi^0$ cross section in the vicinity of the $\phi$ resonance, $\sigma(\phi\to 3\pi)=659\pm35$ nb [@phi98]. The slight deviations in the central value and the error can be related to the difference in the descriptions of the $\omega^\prime$, $\omega^{\prime\prime}$ contributions used in these works. The fit was performed by assuming $\phi_{\omega\phi}=\Phi(s)$ [@faza]. If $\phi_{\omega\phi}$ is considered to be a free parameter of the fit, then its value is: $$\phi_{\omega\phi}=164^\circ \pm 3^\circ,$$ which agrees with $\Phi(m_\phi)=163^\circ$ [@faza]. The relative phase $\psi(s)$ between the $A_{\rho\pi}$ and $A_{\omega\pi}$ amplitudes, together with $B(\omega\to\pi^+\pi^-)$, was obtained from the $\pi^+\pi^-$ invariant mass spectra analysis in the $\sqrt[]{s}$ energy region from 1100 to 1380 MeV (Table \[tab3\], Fig.\[faz2\]). The phase $\psi(s)$ can also be calculated from the total cross section fit results (Table \[tab4\]). Figure \[faz2\] demonstrates that the phase $\psi(s)$ energy dependence cannot be described if the model with $A_{\omega\pi}=A_{\rho\to\omega\pi}+ A_{\rho^{\prime\prime}\to\omega\pi}$ is used.
On the other hand, the model in which $A_{\omega\pi}=A_{\rho\to\omega\pi}+ A_{\rho^\prime\to\omega\pi}+A_{\rho^{\prime\prime}\to\omega\pi}$ gives a satisfactory description of the data. The $\omega\to\pi^+\pi^-$ decay probability was found to be: $$B(\omega\to\pi^+\pi^-)=2.38\pm^{1.77}_{0.90}\pm0.18 \%$$ This result is consistent with the OLYA measurement [@olya] and the world average value [@pdg], as well as with the CMD-2 result [@cmdpp]. Using the results of the total $e^+e^-\to\pi^+\pi^-\pi^0$ cross section and $\pi^+\pi^-$ invariant mass spectra analysis, the contribution of the $e^+e^-\to\rho\pi\to \pi^+\pi^-\pi^0$ mechanism to the total cross section was estimated to be $\sim 90\%$ in the energy range $\sqrt[]{s}=1100$ – 1380 MeV. For the data analysis the model which takes into account only the $e^+e^-\to\rho\pi\to\pi^+\pi^-\pi^0$ and $\omega\pi^0\to\pi^+\pi^-\pi^0$ mechanisms was used. The $e^+e^-\to\rho^{\prime(\prime\prime)}\pi\to\pi^+\pi^-\pi^0$ intermediate state, as well as the $\rho$ and $\pi$ meson interaction in the final state [@akfaz], are also possible. Taking these contributions into account in the fit can change the $\psi(s)$ values, but the statistics collected in the SND experiments are not sufficient to study such contributions. In addition, the parameters of the $\rho^{\prime(\prime\prime)}$ resonances are poorly established. In the energy dependence of the total width the contributions from the following decays were not taken into account: $\omega^{\prime(\prime\prime)}\to K_SK^\pm\pi^\mp, K^{\star0}K^-\pi^+(\overline{K}^{\star0}K^+\pi^-), K\overline{K}$, $\rho^{\prime(\prime\prime)}\to\rho\pi\pi, \eta\pi^+\pi^-, K\overline{K}, K_SK^\pm\pi^\mp, K^{\star0}K^-\pi^+(\overline{K}^{\star0}K^+\pi^-)$. The mixing between vector meson excitations was neglected. It is possible that a more detailed model for the $A_{\omega\pi}$ and $A_{\rho\pi}$ amplitudes can change the calculated energy dependence of the phase $\psi(s)$ presented in Fig.\[faz2\].
At present in BINP (Novosibirsk) the VEPP-2000 collider, with an energy range from 0.36 to 2 GeV and a luminosity up to $10^{32}$ cm$^{-2}$s$^{-1}$ (at $\sqrt[]{s} \sim 2$ GeV), is under construction [@vepp2000]. The two detectors SND [@sndupgrade] and CMD-2M [@cmd2m] are being upgraded for experiments at this new facility. In these experiments an increase in the accuracy of the determination of the $e^+e^- \to hadrons$ cross sections is expected in the energy range $m_\phi<\sqrt[]{s}<2000$ MeV. We hope that the new data will improve the understanding of the nature of the $\rho^{\prime(\prime\prime)}$, $\omega^{\prime(\prime\prime)}$ and $\phi^{\prime(\prime\prime)}$ resonances, as well as their decay mechanisms and the theoretical methods of their description.

Conclusion
==========

The cross section of the process $e^+e^-\to \pi^+\pi^-\pi^0$ was measured in the SND experiment at the VEPP-2M collider in the energy region $\sqrt[]{s} = 980$ – 1380 MeV. Due to the increased luminosity and improved corrections for analysis losses and initial state radiation, the cross section measurements reported here (Table \[tab2\]) supersede those in Ref.[@sndmhad] and Ref.[@phi98]. The measured cross section was analyzed in the framework of the generalized vector meson dominance model together with the $e^+e^-\to \pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ cross sections obtained by DM2. It was found that the experimental data can be described by a sum of contributions of the $\omega$, $\phi$ mesons and two $\omega^\prime$ and $\omega^{\prime\prime}$ resonances with masses $m_{\omega^\prime}\sim 1490$, $m_{\omega^{\prime\prime}}\sim 1790$ MeV and widths $\Gamma_{\omega^\prime}\sim 1210$, $\Gamma_{\omega^{\prime\prime}}\sim 560$ MeV. The analysis of the dipion mass spectra in the energy region $\sqrt[]{s}$ from 1100 to 1380 MeV has shown that their description requires the mechanism $e^+e^-\to\omega\pi^0\to\pi^+\pi^-\pi^0$.
The phase between $e^+e^-\to\omega\pi$ and $e^+e^-\to\rho\pi$ processes amplitudes was measured for the first time. Its value is close to zero and depends on energy. acknowledgments {#acknowledgments .unnumbered} =============== The authors are grateful to N.N.Achasov, S.I.Eidelman and A.A.Kozhevnikov for useful discussions. The present work was supported in part by grant no. 78 1999 of Russian Academy of Science for young scientists and grant STP “Integration” A0100. [99]{} Particle Data Group, D.E. Groom, et al., Eur.Phys.J. [**C 15**]{}, 1 (2000) A. Donnachie, Yu.S. Kalashnikova, Z.Phys. [**C 59**]{}, 621 (1993) A. Donnachie, Yu.S. Kalashnikova, Z.Phys. [**C 60**]{}, 187 (1993) A.B. Clegg, A. Donnachie Z.Phys. [**C 62**]{}, 455 (1994) A. Donnachie, Yu.S. Kalashnikova, Phys. Rev. [**D 60**]{}, 114011 (1999) N.N. Achasov, A.A. Kozhevnikov, Phys. Rev. [**D 55**]{}, 2663 (1997);\ Yad. Fiz. [**60**]{}, 1131 (1997) \[Phys. At. Nucl. [**60**]{}, 1011 (1997)\] N.N. Achasov, A.A. Kozhevnikov, Phys. Rev. [**D 57**]{}, 4334 (1998)\ Yad. Fiz. [**60**]{}, 2212 (1997) \[Phys. At. Nucl. [**60**]{}, 2029 (1997)\] N.N. Achasov, A.A. Kozhevnikov, Phys. Rev. [**D 62**]{}, 117503 (2000)\ Yad. Fiz. 65, 158 (2002) \[Phys. At. Nucl. 65, 155 (2002)\]. N.N. Achasov, A.A. Kozhevnikov, and G.N. Shestakov, Phys. Lett. [**50B**]{}, 448 (1974) .\ N.N. Achasov, N.M. Budnev, A.A. Kozhevnikov, and G.N. Shestakov, Yad. Fiz. 23, 610 (1976) \[Sov. J. Nucl. Phys. 23, 320 (1976)\];\ N.N. Achasov and G.N. Shestakov, Fiz. Elem. Chastits. At. Yadra 9, 48 (1978) M.N. Achasov et al., Preprint Budker INP 98-65 Novosibirsk, 1998 G. Cosme et al., Nuc. Phys. [**B 152**]{}, 215 (1979) B. Esposito et al., Lett. Nuovo Cim. 28, 195 (1980) C. Bacci et al., Nuc. Phys. [**B 184**]{}, 31 (1981) B. Delcourt et al., Phys. Lett [**113B**]{}, 93 (1982) S.I. Dolinsky et al., Phys. Rep. [**202**]{}, 99 (1991) A. Antonelli et al., Z. Phys., [**C 56**]{}, 15 (1992) M.N. Achasov et al., Phys. Lett. 
[**B 462**]{}, 365 (1999) M.N. Achasov et al., Nucl. Instr. and Meth. [**A 449**]{}, 125 (2000) A.N. Skrinsky, in Proc. of Workshop on physics and detectors for DA$\Phi$NE, Frascati, Italy, April 4-7, 1995, p.3 M.N. Achasov et al., CALORIMETRY IN HIGH ENERGY PHYSICS: Proceedings. Edited by Gaspar Barreira, Bernardo Tome, Agostinho Gomes, Amelia Maio, Maria J. Varanda. World Scientific, 2000. 863p. M.N. Achasov et al., Phys. Rev. [**D 63**]{}, 072002 (2001) M.N. Achasov et al., hep-ex/0106048, accepted for publication in Phys. Rev. D R.R. Akhmetshin et al., Phys. Lett. [**B 466**]{}, 392 (1999) K.W. Edwards et al., Phys. Rev. [**D 61**]{}, 072003 (2000) M.N. Achasov et al., Preprint, Budker INP 2001-34, Novosibirsk, 2001 (in Russian) M.N.Achasov et al., Phys. Lett. [**B 486**]{}, 29 (2000) N.N. Achasov and A.A. Kozhevnikov, Phys. Rev. D 49, 5773 (1994)\ Yad. Fiz. 56, 191 (1993) \[ Phys. Atom. Nucl. 56, 1261 (1993)\]\ Int. J. Mod. Phys. A 9, 527 (1994) N.N. Achasov and A.A. Kozhevnikov, Yad. Fiz. 55, 809 (1992) \[Sov. J. Nucl. Phys. 55, 449 (1992)\];\ Int. J. Mod. Phys. A 7, 4825 (1992). E.A. Kuraev, V.S. Fadin, Yad. Fiz. [**41**]{}, 733 (1985) \[Sov. J. Nucl. Phys. [**41**]{}, 466 (1985)\] N.N. Achasov, A.A. Kozhevnikov, Phys. Rev. [**D 61**]{} 054005 (2000)\ Yad. Fiz. 63, 2029 (2000) \[ Phys. Atom. Nucl. 63, 1936 (2000)\]\ L.M. Barkov et al., Nuc. Phys. [**B 256**]{} 365 (1985) R.R. Akhmetshin et al., hep-ex/0112031 D.Bisello et al., Nucl. Phys. Proc. Suppl. 21 111 (1991) C. Quigg and J.L. Rosner, Phys. Rep. [**56**]{}, 167 (1979) C.R. Munz, J. Resag, B.C. Metsch, H.R. Petry, Nucl. Phys. [**A578**]{} 418 (1994) B.C. Metsch, H.R. Petry, Acta. Phys. Polon. [**B271**]{} 3307 (1996) Yu.M.Shatunov et al, Project of a new electron-positron collider VEPP-2000, in Proc. of the 2000 European Particle Acc. 
Conf., Vienna (2000), p.439 G.N.Abramov, et al., SND Upgrade, Invited talk at “e+e- Physics at Intermediate Energies Workshop”, SLAC, Stanford, California, April 30 - May 2, 2001; e-print hep-ex/0105093 V.M. Aulchenko et al., Preprint Budker INP 2001-45 Novosibirsk, 2001 (in Russian) $\sqrt[]{s}$ (MeV) $IL$ (nb$^{-1}$) $\epsilon(s,E_\gamma=0)$ $\delta_{rad}$ $N_{3\pi}$ $N_{bkg}$ -------------------- ------------------ -------------------------- ------------------ ------------ ----------- 980 129 0.150 0.858 259$\pm$18 3$\pm$1 1040 69 0.153 11.706 – 131.646 90$\pm$10 4$\pm$1 1050 84 0.149 3.762 – 5.281 75$\pm$10 4$\pm$1 1060 279 0.150 1.808 – 2.018 196$\pm$16 8$\pm$2 1070 98 0.150 1.269 – 1.327 61$\pm$ 9 2$\pm$1 1080 578 0.150 1.060 – 1.102 325$\pm$23 22$\pm$6 1090 95 0.150 0.985 – 1.002 54$\pm$ 8 3$\pm$1 1100 445 0.152 0.928 255$\pm$18 14$\pm$3 1110 90 0.151 0.915 70$\pm$11 2$\pm$1 1120 306 0.150 0.889 213$\pm$17 11$\pm$3 1130 113 0.151 0.889 76$\pm$10 4$\pm$1 1140 289 0.151 0.901 177$\pm$16 9$\pm$2 1150 69 0.152 0.873 59$\pm$ 9 2$\pm$1 1160 320 0.152 0.877 217$\pm$17 11$\pm$2 1180 423 0.152 0.884 302$\pm$21 12$\pm$3 1190 172 0.152 0.872 125$\pm$12 4$\pm$1 1200 439 0.153 0.883 290$\pm$19 13$\pm$2 1210 151 0.153 0.871 129$\pm$12 4$\pm$1 1220 343 0.153 0.947 282$\pm$19 9$\pm$2 1230 141 0.153 0.871 103$\pm$11 4$\pm$1 1240 378 0.153 0.871 250$\pm$17 6$\pm$1 1250 209 0.154 0.871 165$\pm$14 6$\pm$1 1260 163 0.154 0.867 129$\pm$13 5$\pm$1 1270 241 0.154 0.868 175$\pm$15 8$\pm$2 1280 229 0.154 0.872 169$\pm$13 8$\pm$2 1290 272 0.155 0.866 199$\pm$15 9$\pm$2 1300 272 0.155 0.867 188$\pm$14 6$\pm$2 1310 202 0.155 0.874 153$\pm$14 5$\pm$1 1320 236 0.155 0.873 174$\pm$14 7$\pm$2 1330 293 0.156 0.876 206$\pm$15 8$\pm$2 1340 439 0.156 0.874 281$\pm$20 12$\pm$2 1350 257 0.156 0.876 169$\pm$14 6$\pm$2 1360 625 0.156 0.872 399$\pm$22 19$\pm$3 1370 256 0.156 0.879 179$\pm$15 7$\pm$2 1380 480 0.157 0.880 278$\pm$18 16$\pm$4 : Event numbers $N_{3\pi}$ of the 
$e^+e^-\to\pi^+\pi^-\pi^0(\gamma)$ process (after background subtraction) and $N_{bkg}$ of background processes, integrated luminosity $IL$ and detection efficiency $\epsilon(s,E_\gamma=0)$ (without $\gamma$-quantum radiation). $\delta_{rad}$ is radiative correction ($\delta_{rad}=\xi(s)/\epsilon(s,E_\gamma=0)$, $\xi(s)$ is defined through the expression (\[xifu\])).[]{data-label="tab1"} $\sqrt[]{s}$(MeV) $\sigma$(nb) $\sigma_{mod}$(nb) $\sigma_{bkg}$(nb) $\sigma_{eff}\oplus\sigma_{IL}$(nb) $\sigma_{sys}$(nb) ------------------- ------------------ -------------------- -------------------- ------------------------------------- -------------------- 980.00 15.58 $\pm$ 1.07 0.00 0.00 0.84 0.84 984.02$^\star$ 17.30 $\pm$ 0.80 0.00 0.00 0.86 0.86 984.21$^\star$ 18.10 $\pm$ 0.90 0.00 0.00 0.91 0.91 1003.71$^\star$ 37.60 $\pm$ 1.40 0.00 0.00 1.88 1.88 1003.91$^\star$ 36.20 $\pm$ 1.30 0.00 0.00 1.81 1.81 1010.17$^\star$ 68.50 $\pm$ 2.40 0.00 0.00 3.42 3.42 1010.34$^\star$ 69.50 $\pm$ 2.50 0.00 0.00 3.48 3.48 1015.43$^\star$ 220.00$\pm$ 6.50 0.00 0.00 11.00 11.00 1015.75$^\star$ 243.10$\pm$ 7.50 0.00 0.00 12.16 12.16 1016.68$^\star$ 358.90$\pm$10.60 0.00 0.00 17.94 17.94 1016.78$^\star$ 353.60$\pm$11.10 0.00 0.00 17.68 17.68 1017.59$^\star$ 493.60$\pm$14.90 0.00 0.00 24.68 24.68 1017.72$^\star$ 515.00$\pm$15.30 0.00 0.00 25.75 25.75 1018.62$^\star$ 664.20$\pm$13.10 0.00 0.00 33.21 33.21 1018.78$^\star$ 658.60$\pm$11.60 0.00 0.00 32.93 32.93 1019.51$^\star$ 667.00$\pm$11.80 0.00 0.00 33.35 33.35 1019.79$^\star$ 595.50$\pm$14.10 0.00 0.00 29.77 29.77 1020.43$^\star$ 471.20$\pm$15.50 0.00 0.00 23.56 23.56 1020.65$^\star$ 399.80$\pm$14.50 0.00 0.00 19.99 19.99 1021.41$^\star$ 270.10$\pm$ 9.90 0.00 0.00 13.51 13.51 1021.68$^\star$ 217.40$\pm$ 8.50 0.00 0.00 10.87 10.87 1022.32$^\star$ 142.90$\pm$ 6.10 0.00 0.00 7.14 7.14 1023.27$^\star$ 92.20 $\pm$ 3.40 0.00 0.00 4.61 4.61 1027.52$^\star$ 15.33 $\pm$ 0.73 0.57 0.00 0.77 0.96 1028.23$^\star$ 10.81 $\pm$ 0.62 0.52 0.00 0.54 0.75 
1033.58$^\star$ 1.75 $\pm$ 0.11 0.47 0.00 0.09 0.48 1033.84$^\star$ 1.43 $\pm$ 0.12 0.41 0.00 0.07 0.42 1039.59$^\star$ 0.37 $\pm$ 0.04 0.31 0.00 0.02 0.31 1039.64$^\star$ 0.37 $\pm$ 0.03 0.31 0.00 0.02 0.31 1040.00 0.40 $\pm$ 0.04 0.33 0.00 0.02 0.33 1049.60$^\star$ 1.12 $\pm$ 0.12 0.20 0.00 0.06 0.21 1049.81$^\star$ 1.14 $\pm$ 0.15 0.20 0.00 0.06 0.21 1050.00 1.37 $\pm$ 0.18 0.23 0.00 0.07 0.24 1059.52$^\star$ 1.75 $\pm$ 0.21 0.09 0.00 0.09 0.13 1059.66$^\star$ 1.84 $\pm$ 0.28 0.09 0.00 0.09 0.13 1060.00 2.46 $\pm$ 0.20 0.14 0.00 0.13 0.19 1070.00 3.21 $\pm$ 0.47 0.07 0.04 0.17 0.19 1080.00 3.46 $\pm$ 0.24 0.07 0.09 0.19 0.22 1090.00 3.84 $\pm$ 0.57 0.03 0.07 0.21 0.22 1100.00 4.07 $\pm$ 0.29 0.00 0.09 0.22 0.24 1110.00 5.66 $\pm$ 0.89 0.00 0.05 0.31 0.31 1120.00 5.19 $\pm$ 0.42 0.00 0.11 0.28 0.30 1130.00 5.04 $\pm$ 0.67 0.00 0.10 0.27 0.29 1140.00 4.50 $\pm$ 0.40 0.00 0.09 0.24 0.26 1150.00 6.40 $\pm$ 0.98 0.00 0.10 0.35 0.36 1160.00 5.12 $\pm$ 0.39 0.00 0.10 0.28 0.29 1180.00 5.30 $\pm$ 0.37 0.00 0.09 0.29 0.30 1190.00 5.44 $\pm$ 0.53 0.00 0.08 0.29 0.30 1200.00 4.89 $\pm$ 0.32 0.00 0.09 0.26 0.28 1210.00 6.39 $\pm$ 0.60 0.00 0.08 0.34 0.35 1220.00 5.68 $\pm$ 0.41 0.00 0.07 0.31 0.32 1230.00 5.48 $\pm$ 0.59 0.00 0.09 0.30 0.31 1240.00 4.96 $\pm$ 0.34 0.00 0.04 0.27 0.27 1250.00 5.91 $\pm$ 0.51 0.00 0.08 0.32 0.33 1260.00 5.92 $\pm$ 0.60 0.00 0.10 0.32 0.33 1270.00 5.41 $\pm$ 0.47 0.00 0.10 0.29 0.31 1280.00 5.50 $\pm$ 0.43 0.00 0.10 0.30 0.31 1290.00 5.46 $\pm$ 0.42 0.00 0.10 0.29 0.31 1300.00 5.13 $\pm$ 0.40 0.00 0.07 0.28 0.29 1310.00 5.59 $\pm$ 0.52 0.00 0.07 0.30 0.31 1320.00 5.44 $\pm$ 0.44 0.00 0.09 0.29 0.31 1330.00 5.17 $\pm$ 0.38 0.00 0.08 0.28 0.29 1340.00 4.70 $\pm$ 0.34 0.00 0.08 0.25 0.27 1350.00 4.82 $\pm$ 0.41 0.00 0.07 0.26 0.27 1360.00 4.68 $\pm$ 0.27 0.00 0.09 0.25 0.27 1370.00 5.09 $\pm$ 0.43 0.00 0.08 0.27 0.29 1380.00 4.21 $\pm$ 0.28 0.00 0.09 0.23 0.25 : The $e^+e^-\to\pi^+\pi^-\pi^0$ cross section. 
$\star$ denotes the points in which the cross section was calculated using data from Ref. [@phi98] (the cross section has changed only for energies $\sqrt[]{s}>1027$ MeV). $\sigma_{mod}$ is model uncertainty, $\sigma_{bkg}$ is the error due to background subtraction, $\sigma_{eff}\oplus\sigma_{IL}$ - error due to uncertainty in detection efficiency and integrated luminosity determination (5% at the energies marked by $\star$ and 5.4% for other energy points), $\sigma_{sys}=\sigma_{eff} \oplus \sigma_{IL} \oplus \sigma_{mod}(s) \oplus \sigma_{bkg}(s)$ is the total systematic error. []{data-label="tab2"} $\sqrt[]{s}$(MeV) $\psi(s)$(degree) $P(\chi^2_0)$ ------------------- -------------------- --------------- 1100 -57$\pm_{56}^{57}$ 0.59 1110 -66$\pm_{74}^{66}$ 0.57 1120 -1$\pm_{55}^{43}$ 0.23 1130 37$\pm$ 42 0.33 1140 130$\pm_{35}^{33}$ 0.32 1150 60$\pm$ 180 0.86 1160 -10$\pm_{39}^{35}$ 0.03 1180 25$\pm_{28}^{30}$ 0.98 1190 -20$\pm_{60}^{53}$ 0.28 1200 23$\pm_{33}^{32}$ 0.79 1210 131$\pm_{47}^{45}$ 0.48 1220 -16$\pm$51 0.44 1230 -102$\pm$37 0.28 1240 -21$\pm_{68}^{45}$ 0.45 1250 26$\pm_{40}^{39}$ 0.46 1260 -14$\pm_{61}^{48}$ 0.18 1270 -26$\pm_{67}^{45}$ 0.12 1280 1$\pm_{36}^{31}$ 0.79 1290 23$\pm_{49}^{46}$ 0.67 1300 -17$\pm_{36}^{33}$ 0.33 1310 32$\pm_{34}^{33}$ 0.42 1320 -34$\pm_{67}^{54}$ 0.25 1330 41$\pm_{28}^{28}$ 0.32 1340 30$\pm_{26}^{25}$ 0.52 1350 49$\pm_{39}^{37}$ 0.82 1360 35$\pm_{24}^{23}$ 0.02 1370 19$\pm_{51}^{43}$ 0.17 1380 23$\pm_{37}^{33}$ 0.88 : The relative phase $\psi(s)$ of the amplitudes $A_{\omega\pi}$ and $A_{\rho\pi}$. 
[]{data-label="tab3"} $i$ $m_{\omega^i}$, MeV $\Gamma_{i}$, MeV $\sigma(\omega^i\to 3\pi)$, nb $\sigma(\omega^i\to \omega\pi^+\pi^-)$, nb $\phi_{\omega\omega^i}$ ----- ---------------------- ----------------------- -------------------------------- -------------------------------------------- ------------------------- 1 $1249 \pm^{42}_{87}$ $404 \pm^{88}_{81}$ $0.22 \pm^{0.23}_{0.17}$ – $180^\circ$ 2 $1428 \pm^{64}_{52}$ $765 \pm^{395}_{272}$ $2.02 \pm^{0.50}_{0.58}$ $0.05 \pm^{0.06}_{0.04}$ $180^\circ$ 3 $1773 \pm^{30}_{26}$ $483 \pm^{93}_{73}$ $2.43 \pm^{0.56}_{0.47}$ $2.50 \pm^{0.33}_{0.31}$ $0^\circ$. : The results of the fit, taking into account three $\omega^i$ resonances.[]{data-label="tab44"} $N$ 1 2 3 --------------------------------------------------------- ------------------------- ------------------------- ------------------------- $\sigma(\phi\to 3\pi)$, nb 647$\pm$4 646$\pm$4 647$\pm$4 $m_{\omega^\prime}$, MeV 1506$\pm^{40}_{32}$ 1465$\pm^{33}_{38}$ 1481$\pm^{35}_{30}$ $\Gamma_{\omega^\prime}$, MeV 1322$\pm^{274}_{202}$ 1037$\pm^{202}_{153}$ 1079$\pm^{202}_{160}$ $\sigma(\omega^\prime\to 3\pi)$, nb 3.31$\pm 0.49$ 3.44$\pm^{0.46}_{0.47}$ 3.56$\pm^{0.43}_{0.44}$ $\sigma(\omega^\prime\to \omega\pi^+\pi^-)$, nb 0.03$\pm^{0.08}_{0.03}$ 0.03$\pm^{0.07}_{0.03}$ 0.03$\pm^{0.09}_{0.03}$ $\phi_{\omega\omega^\prime}$ $180^\circ$ $180^\circ$ $180^\circ$ $m_{\omega^{\prime\prime}}$, MeV 1798$\pm^{43}_{34}$ 1801$\pm^{43}_{33}$ 1793$\pm^{41}_{33}$ $\Gamma_{\omega^{\prime\prime}}$, MeV 581$\pm^{176}_{119}$ 580$\pm^{172}_{117}$ 560$\pm^{162}_{120}$ $\sigma(\omega^{\prime\prime}\to 3\pi)$, nb 1.72$\pm^{0.45}_{0.40}$ 1.27$\pm^{0.33}_{0.32}$ 1.54$\pm^{0.40}_{0.35}$ $\sigma(\omega^{\prime\prime}\to \omega\pi^+\pi^-)$, nb 1.51$\pm^{0.34}_{0.30}$ 1.48$\pm^{0.33}_{0.30}$ 1.53$\pm^{0.34}_{0.31}$ $\phi_{\omega\omega^{\prime\prime}}$ $0^\circ$ $0^\circ$ $0^\circ$ $\chi^2_{3\pi(SND)}/N_{3\pi}^{(SND)}$ 55.3/67 52.4/67 52.7/67 $\chi^2_{3\pi(DM2)}/N_{3\pi}^{(DM2)}$ 40.2/18 42.8/18 
39.5/18 $\chi^2_{\omega\pi\pi(DM2)}/N_{\omega\pi\pi}^{(DM2)}$ 9.3/18 9.8/18 9.3/18 : Fit results for the $e^+e^-\to\pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ cross sections. The column number $N$ corresponds to the different models for $A_{\omega\pi}$ amplitude. $N_{3\pi}^{(SND)}$, $N_{3\pi}^{(DM2)}$ and $N_{\omega\pi\pi}^{(DM2)}$ is the number of fitted points of the processes $e^+e^-\to\pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ obtained in SND and DM2 experiments. The DM2 data was used in the fit as published in Ref.[@dm2].[]{data-label="tab4"} $N$ 1 2 3 --------------------------------------------------------- ------------------------- ------------------------- ------------------------- $\sigma(\phi\to 3\pi)$, nb 646$\pm$4 646$\pm$4 646$\pm$4 $m_{\omega^\prime}$, MeV 1513$\pm^{45}_{37}$ 1472$\pm^{40}_{32}$ 1491$\pm^{44}_{37}$ $\Gamma_{\omega^\prime}$, MeV 1383$\pm^{300}_{229}$ 1095$\pm^{240}_{174}$ 1156$\pm^{257}_{189}$ $\sigma(\omega^\prime\to 3\pi)$, nb 3.45$\pm 0.50$ 3.57$\pm^{0.47}_{0.51}$ 3.65$\pm^{0.47}_{45}$ $\sigma(\omega^\prime\to \omega\pi^+\pi^-)$, nb 0.03$\pm^{0.10}_{0.03}$ 0.03$\pm^{0.11}_{0.03}$ 0.04$\pm^{0.12}_{0.04}$ $\phi_{\omega\omega^\prime}$ $180^\circ$ $180^\circ$ $180^\circ$ $m_{\omega^{\prime\prime}}$, MeV 1784$\pm^{38}_{31}$ 1784$\pm^{38}_{31}$ 1780$\pm^{38}_{31}$ $\Gamma_{\omega^{\prime\prime}}$, MeV 563$\pm^{156}_{110}$ 550$\pm^{147}_{104}$ 544$\pm^{146}_{104}$ $\sigma(\omega^{\prime\prime}\to 3\pi)$, nb 2.80$\pm^{0.67}_{0.58}$ 2.29$\pm^{0.54}_{0.49}$ 2.59$\pm^{0.61}_{0.52}$ $\sigma(\omega^{\prime\prime}\to \omega\pi^+\pi^-)$, nb 2.35$\pm^{0.48}_{0.44}$ 2.34$\pm^{0.48}_{0.41}$ 2.40$\pm^{0.49}_{0.44}$ $\phi_{\omega\omega^{\prime\prime}}$ $0^\circ$ $0^\circ$ $0^\circ$ $\chi^2_{3\pi(SND)}/N_{3\pi}^{(SND)}$ 51.8/67 49.2/67 49.6/67 $\chi^2_{3\pi(DM2)}/N_{3\pi}^{(DM2)}$ 22.1/18 22.7/18 22.1/18 $\chi^2_{\omega\pi\pi(DM2)}/N_{\omega\pi\pi}^{(DM2)}$ 9.3/18 9.4/18 9.3 : Fit results for the $e^+e^-\to\pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ cross 
sections. The column number $N$ corresponds to the different models for the $A_{\omega\pi}$ amplitude. $N_{3\pi}^{(SND)}$, $N_{3\pi}^{(DM2)}$ and $N_{\omega\pi\pi}^{(DM2)}$ are the numbers of fitted points of the processes $e^+e^-\to\pi^+\pi^-\pi^0$ and $\omega\pi^+\pi^-$ obtained in the SND and DM2 experiments. The DM2 data were increased by a factor of 1.54.[]{data-label="tab5"} [^1]: e-mail: achasov@inp.nsk.su, FAX: +7(383-2)34-21-63
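The relations quoted in the captions of Tables \[tab1\] and \[tab2\] can be illustrated numerically. The sketch below, in Python, assumes the standard normalization $\sigma = N_{3\pi}/(IL\cdot\epsilon\cdot\delta_{rad})$ for the Born cross section and reads the $\oplus$ symbol as addition in quadrature; the helper names are ours, not the collaboration's. Plugging in the $\sqrt{s}=980$ MeV row of Table \[tab1\] and the $\sqrt{s}=1027.52$ MeV row of Table \[tab2\] reproduces the tabulated values to within rounding:

```python
import math

# Helper names are ours; the relations are the ones quoted in the table captions.
def born_cross_section(n_events, lumi_nb, eff, delta_rad):
    # sigma = N / (IL * eps * delta_rad): observed events corrected for the
    # detection efficiency eps and the radiative correction delta_rad = xi(s)/eps(s, 0)
    return n_events / (lumi_nb * eff * delta_rad)

def total_systematic(*errors):
    # The circled-plus in the caption of Table 2 is read as addition in quadrature
    return math.sqrt(sum(e * e for e in errors))

# sqrt(s) = 980 MeV row of Table 1: N = 259, IL = 129 nb^-1, eps = 0.150, delta_rad = 0.858
sigma = born_cross_section(259, 129, 0.150, 0.858)
print(round(sigma, 1))                                # 15.6 nb vs 15.58 +- 1.07 nb in Table 2

# sqrt(s) = 1027.52 MeV row of Table 2: sigma_mod = 0.57 nb, sigma_eff (+) sigma_IL = 0.77 nb
print(round(total_systematic(0.57, 0.00, 0.77), 2))   # 0.96 nb = sigma_sys in the table
```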
--- abstract: 'In the standard approach to cosmological modeling in the framework of general relativity, the energy conditions play an important role in the understanding of several properties of the Universe, including singularity theorems, the current accelerating expansion phase, and the possible existence of the so-called phantom fields. Recently, $f(T)$ gravity has been invoked as an alternative approach for explaining the observed accelerating expansion of the Universe. If gravity is described by an $f(T)$ theory instead of general relativity, there are a number of issues that ought to be reexamined in the framework of $f(T)$ theories. In this work, to proceed further with the current investigation of the limits and potentialities of the $f(T)$ gravity theories, we derive and discuss the bounds imposed by the energy conditions on a general $f(T)$ functional form. The null and strong energy conditions in the framework of $f(T)$ gravity are derived from first principles, namely the purely geometric Raychaudhuri equation along with the requirement that gravity is attractive. The weak and dominant energy conditions are then obtained in a direct approach via an effective energy-momentum tensor for $f(T)$ gravity. Although similar, the energy condition inequalities are different from those of general relativity, but in the limit $f(T)=T$, the standard forms for the energy conditions in general relativity are recovered. As a concrete application of the derived energy conditions to locally homogeneous and isotropic $f(T)$ cosmology, we use recent estimated values of the Hubble and the deceleration parameters to set bounds from the weak energy condition on the parameters of two specific families of $f(T)$ gravity theories.' author: - Di Liu - 'M.J.
Rebouças' title: 'Energy conditions bounds on $f(T)$ gravity' --- Introduction {#sect1} ============ A diverse set of cosmological observations coming from different sources, including the type Ia supernovae (SNe Ia) [@sne], the cosmic microwave background radiation (CMBR) [@cmbr], and the large-scale structure (LSS) [@lss], clearly indicates that the Universe is currently expanding at an accelerating rate. A number of alternative models and different frameworks have been proposed to account for this observed late-time accelerated expansion of the Universe. These approaches can be classified into two broad groups. In the first, the framework of general relativity is kept unchanged and an unknown form of matter sources, the so-called dark energy, is invoked. In this regard, the simplest way to describe the accelerated expanding Universe is by introducing a cosmological constant into the general relativity field equations. Although this is entirely consistent with the available observational data, it faces difficulties, including those concerning the microphysical origin and the order of magnitude of the cosmological constant. In the second group, modifications of Einstein’s gravitation theory are assumed as an alternative for describing the accelerated expansion.[^1] Examples of the latter group include generalized theories of gravity based upon modifications of the Einstein-Hilbert action by taking nonlinear functions $f(R)$ of the Ricci scalar $R$ or other curvature invariants (for reviews see Ref. [@fr]). An alternative modification of general relativity, known as $f(T)$ gravity, has been examined recently as a possible way of describing the current acceleration of the Universe [@Bengochea09; @Linder10; @Myrzakulov11]. The origin of $f(T)$ gravity theory goes back to 1928 with Einstein’s attempt to unify gravity and electromagnetism through the introduction of a tetrad (vierbein) field along with the concept of absolute parallelism or teleparallelism [@Einstein].
In teleparallel gravity (TG) theories, the dynamical object is not the metric $g_{\mu \nu}$ but a set of tetrad fields $\mathbf{e_{a}}(x^\mu)$, and rather than the familiar torsionless Levi-Civita connection of general relativity, a Weitzenböck connection (which has no curvature but only torsion) is used to define the covariant derivative. The gravitational field equation of TG is then described in terms of the torsion instead of the curvature [@FNGtn1; @FNGtn2; @FNGtn3]. In formal analogy with $f(R)$ gravity, the $f(T)$ gravity theory was suggested by extending the Lagrangian of teleparallel gravity to a function $f(T)$ of a torsion scalar $T$ [@Bengochea09; @Linder10]. In comparison with $f(R)$ gravity in the metric formalism, whose field equations are of the fourth order, $f(T)$ gravity has the advantage that the dynamics are governed by second-order field equations. The fact that $f(T)$ theories can potentially be used to explain the observed accelerating expansion along with the relative simplicity of their field equations has given birth to a number of papers on these gravity theories, in which several features of $f(T)$ gravity have been discussed, including observational cosmological constraints [@Bengochea-2011; @Wei-Ma-Qi-2011; @Wu-Yu-2010], solar system constraints [@Iorio-Saridakis-2012], cosmological perturbations [@Dent-Duta-Saridakis-2011; @Zheng-Huang-2011; @Chen-dent-Dutta-2011], dynamical behavior [@Wu-Yu-b-2010], spherically symmetric solutions [@Wang-2011], the existence of relativistic stars [@Stars-in-f(T)], the possibility of phantom divide crossing [@Wu-Yu-2011], cosmographic constraints [@Cosmography-2011], and the lack of local Lorentz invariance [@Li-Sotiriou-Barrow-2011; @Miao-Li-Miao-2011; @Li-Miao-Miao-2011] which may give rise to undesirable outcomes from $f(T)$ gravity [@Zheng2011; @Lilarg], although suitable tetrad fields can be chosen [@Tamanini-Bohmer-2012].
For some further references on several aspects of $f(T)$ gravity theories we refer the readers to Ref. [@FT]. In the framework of general relativity the so-called *energy conditions* have been used to derive remarkable results in a number of contexts. For example, the famous Hawking-Penrose singularity theorems invoke the strong energy condition (SEC) [@Hawking-Ellis], whose violation allows for the observed accelerating expansion, and the proof of the second law of black hole thermodynamics requires null energy conditions (NEC) [@Visser; @Wald]. On macroscopic scales relevant for cosmology, the confrontation of the energy conditions predictions with observational data is another important issue that has been considered in a number of recent articles. In this regard, since the pioneering works by Visser [@M_Visser1997], a number of articles have been published concerning this confrontation by using model-independent energy-conditions bounds on the cosmological observable quantities, such as the distance modulus, lookback time, and deceleration and curvature parameters [@EC-1; @EC-2; @EC-3; @EC-4; @EC-5; @EC-6; @EC-7; @EC-8]. Owing to their role in several important issues in general relativity and cosmology, the energy conditions have also been investigated in several frameworks of modified gravity theories, including $f(R)$ gravity [@EC-fR-grav; @Kung], gravity with nonminimal coupling between curvature and matter [@Bertolami-Sequeira09], Gauss-Bonnet gravity [@EC-Gauss-Bonnet], modified $f(G)$ gravity [@EC_fG-grav], and Brans-Dicke theories [@EC-Brans-Dicke] (see also the related Refs. [@EC_fG-NM-copling; @EC-f(RT)]). In this article, to proceed further with these investigations on the potentialities, difficulties, and limitations of $f(T)$ gravity theories, we derive the energy conditions for the general functional form of $f(T)$ and discuss some concrete examples of these bounds by using observational constraints on the Hubble and the deceleration parameters. 
The null and strong energy conditions (NEC and SEC) are derived in the framework of $f(T)$ from first principles, i.e., from the purely geometric Raychaudhuri equation along with the requirement that gravity is attractive. We find that the NEC and the SEC in general $f(T)$ gravity, although similar, are different from those of Einstein’s gravity and $f(R)$ gravity, but in the limiting case $f(T)= T$, the standard general relativity forms for these energy conditions are recovered. The resulting inequalities for the SEC and NEC in the $f(T)$ gravity framework are then compared with what would be obtained by translating these energy conditions in terms of an effective energy-momentum tensor for $f(T)$ gravity. There emerges from this comparison a natural formulation for the weak and dominant energy conditions (WEC and DEC) in the context of $f(T)$ gravity, which also reduce to the standard GR forms for these conditions in the limit $f(T) = T$. As a concrete application of the energy conditions for spatially homogeneous and isotropic $f(T)$ cosmology, we use recent estimated values of the Hubble and the deceleration parameters to set bounds from the WEC on the parameters of two specific families of $f(T)$ gravity theories. Our paper is organized as follows. In Sec. \[sec2\], we give a brief review of the $f(T)$ theories and derive the field equations. In Sec. \[sec3\], using the purely geometric Raychaudhuri equations for timelike and null congruences of curves, we derive the SEC and NEC from first principles, and the WEC and DEC through an effective energy-momentum tensor. In Sec. \[sec4\] we use the constraints on present-day values of cosmographic parameters to set constraints on exponential as well as on the Born-Infeld $f(T)$ gravity from the WEC. Finally, conclusions and final remarks are presented in Sec. \[sec5\].
$\mathbf{f(T)}$ gravity theory {#sec2} ============================== In this section, we briefly introduce teleparallel gravity and its generalization known as $f(T)$ gravity. We begin by recalling that the dynamical variables in teleparallel gravity are the vierbein or tetrad fields, $\mathbf{e_a}(x^\mu)$, which are a set of four ($a= 0,\cdots,3$) vectors defining a local orthonormal frame at every point $x^\mu$ of the spacetime manifold. The tetrad fields $\mathbf{e_a}(x^\mu)$ are vectors in the tangent space and can be expressed in terms of a coordinate basis as $\mathbf{e_a}(x^\mu)=\,e^\mu_a\partial_\mu$. The spacetime metric tensor and the tetrads are related by[^2] $$\label{g-metric} g_{\mu \nu }=e_{\mu }^{a}\,e_{\nu }^{b}\,\eta _{ab}\;,$$ where $\eta _{ab}=\text{diag}\,(1,-1,-1,-1)$ is the Minkowski metric of the tangent space at $x^\mu$. It follows that the relations between frame components, $e_{a}^{\mu }$, and coframe components, $e_{\mu }^{a}$, are given by $$\label{tetradralation} e_{a}^{\mu }\,e_{\nu}^{a}=\delta _{\nu }^{\mu } \qquad \text{and} \qquad e_{a}^{\mu }\,e_{\mu }^{b}=\delta _{a}^{b}\;.$$ In general relativity one uses the Levi-Civita connection $$\overset{\circ }{\Gamma }{}_{\;\;\mu \nu }^{\rho } = \frac{1}{2}g^{\rho \sigma }\left( \partial _{\nu} g_{\sigma \mu}+\partial _{\mu}g_{\sigma \nu}-\partial _{\sigma}g_{\mu \nu}\right)\;,$$ which leads to nonzero spacetime curvature but zero torsion. In teleparallel gravity, instead of the Levi-Civita connection, one uses the Weitzenböck connection, which is given by $$\label{connection} \widetilde{{\Gamma }}_{\;\mu \nu }^{\lambda }=e_{a}^{\lambda }\,\partial _{\nu }\, e_{\mu}^{a} = -e_{\mu }^{a}\,\partial _{\nu }\,e_{a}^{\lambda }\;.$$ An immediate consequence of this definition is that the covariant derivative, $D_{\mu }$, of the tetrad fields, $$D_{\mu }e_{\nu }^{a} \equiv \partial _{\mu }e_{\nu }^{a}- \widetilde{\Gamma} _{\;\nu \mu}^{\lambda }e_{\lambda }^{a}=0\;,$$ vanishes identically.
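The tetrad relations above can be illustrated with a short symbolic computation. The sketch below (using sympy, and taking as an example the diagonal tetrad of a spatially flat FLRW spacetime, the case considered later in this paper) checks Eq. (\[g-metric\]) and the duality relations (\[tetradralation\]):

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a', positive=True)(t)

eta = sp.diag(1, -1, -1, -1)   # Minkowski metric of the tangent space
e = sp.diag(1, a, a, a)        # coframe components e^a_mu (row a, column mu)
einv = e.inv()                 # frame components e_a^mu

# Eq. (g-metric): g_{mu nu} = e^a_mu e^b_nu eta_{ab}
g = sp.simplify(e.T * eta * e)
print(g)                       # diag(1, -a^2, -a^2, -a^2): the flat FLRW metric

# Eq. (tetradralation): frame and coframe components are mutually inverse
print(sp.simplify(einv * e))   # the 4x4 identity matrix
```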
This equation leads to a zero curvature but nonzero torsion. To clarify the interrelations between the Weitzenböck and Levi-Civita connections, one needs to introduce the torsion and contorsion tensors, which are given, respectively, by $$\label{T} T^{\rho }_{\;\;\mu \nu } \equiv \widetilde{\Gamma }_{\;\nu \mu }^{\rho } -\widetilde{\Gamma }_{\;\mu \nu}^{\rho } = e_{a}^{\rho }(\partial _{\mu }e_{\nu }^{a}-\partial _{\nu }e_{\mu}^{a})\;,$$ $$\label{K} K_{\;\;\mu \nu }^{\rho } \equiv \widetilde{\Gamma} _{\;\mu \nu }^{\rho } -\overset{\circ}{\Gamma }{}_{\;\mu \nu }^{\rho}=\frac{1}{2}(T_{\mu }{}^{\rho }{}_{\nu } + T_{\nu}{}^{\rho }{}_{\mu }-T_{\;\;\mu \nu }^{\rho })\;,$$ where $\overset{\circ }{\Gamma }{}_{\;\;\mu \nu }^{\rho }$ is the Levi-Civita connection. Now, if one further defines the so-called superpotential $$\label{S} S_{\sigma }^{\;\;\mu \nu }\equiv K_{\;\;\;\;\sigma }^{\mu \nu }+\delta _{\sigma }^{\mu }T_{\;\;\;\;\;\xi }^{\xi \nu }-\delta _{\sigma }^{\nu }T_{\;\;\;\;\;\xi }^{\xi \mu }\;,$$ one obtains the torsion scalar $$\label{scalarT} T\equiv \frac{1}{2}S_{\sigma }^{\;\;\mu \nu }T_{\;\;\mu \nu }^{\sigma }= \frac{1}{4}T^{\xi \mu \nu }T_{\xi \mu \nu }+\frac{1}{2}T^{\xi \mu \nu }T_{\nu \mu \xi }-T_{\xi \mu }^{\;\;\;\;\xi } T_{\;\;\;\;\;\nu }^{\nu \mu}\,,$$ which is used as the Lagrangian density in the formulation of teleparallel gravity: $$\mathcal{L}_{T} = \frac{e\,T}{2\,\kappa^{2}}\;,$$ where $e=\det (e_\mu^{a})=\sqrt{-g}$, $\kappa^2=8\pi G$, and $G$ is the gravitational constant. Now, by taking an arbitrary function $f$ of the torsion scalar $T$, one obtains the Lagrangian density of $f(T)$ gravity theory, that is, $$\label{f1} \mathcal{L}_{T} \,\longrightarrow \,\mathcal{L}_{f(T)} = \frac{e\,f(T)}{2\,\kappa^{2}}.$$ Now, by adding a matter Lagrangian density $\mathcal{L}_M$ to Eq.
(\[f1\]) and varying the resultant action with respect to the vierbein, one obtains the following field equation for $f(T)$ gravity: $$\begin{aligned} \label{lagran1} &&\partial_\xi(ee^\rho_a S^{\;\;\sigma\xi}_\rho f_T)-ee^\lambda_a S^{\rho\xi\sigma} T_{\rho\xi\lambda}f_T+\frac{1}{2}ee^\sigma_af(T) \nonumber \\ &&=[\partial_\xi(ee^\rho_a S^{\;\;\sigma\xi}_\rho)-ee^\lambda_a S^{\rho\xi\sigma} T_{\rho\xi\lambda}]f_T +ee^\rho_a(\partial_\xi T)S^{\;\;\sigma\xi}_\rho f_{TT} \nonumber \\ &&+\frac{1}{2}ee^\sigma_af(T) = e\,\Theta^\sigma_{\;\;a}\;,\end{aligned}$$ where $f_T= df(T)/dT$, $f_{TT}= d^2f(T)/dT^2$, and $\Theta^\sigma_{\;\;a}$ is the energy-momentum tensor of the matter fields. Here and in what follows we have chosen units such that $\kappa^2=c=1$. We now bring the field equations (\[lagran1\]) to a form suitable for our purpose in the next section. To this end, we first note that if one multiplies both sides of (\[lagran1\]) by $e^{-1}g_{\mu\sigma}e^a_\nu$, the coefficient of the term $f_{T}$ in the resultant equation takes the form $$\begin{aligned} \label{nablaS} &&e^a_\nu e^{-1}\partial_\xi(ee^\rho_a S^{\;\;\sigma\xi}_\rho)-S^{\rho\xi\sigma}T_{\rho\xi\nu} \nonumber \\ &&=\partial_\xi S^{\;\;\sigma\xi}_{\nu}-\widetilde{\Gamma }_{\;\nu \xi }^{\rho }S^{\;\;\sigma\xi}_{\rho} +\overset{\circ}{\Gamma }{}_{\;\tau\xi}^{\tau}S^{\;\;\sigma\xi}_{\nu} -S^{\rho\xi\sigma}T_{\rho\xi\nu} \nonumber \\ &&=-\nabla^\xi S_{\nu\xi}^{\;\;\;\;\sigma}-S^{\xi\rho\sigma}K_{\rho\xi\nu}\;,\end{aligned}$$ where the relation $$\label{relation1} K^{(\mu\nu)\sigma}=T^{\mu(\nu\sigma)}=S^{\mu(\nu\sigma)}=0$$ has been used.
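As a consistency check of the geometric objects introduced above, one can evaluate the torsion scalar (\[scalarT\]) symbolically for the diagonal tetrad $e^a_\mu=\mathrm{diag}(1,a,a,a)$ of a spatially flat FLRW spacetime; the result, $T=-6H^2$, agrees with the value used later in Sec. \[sec3\]. A minimal sympy sketch, with explicit loops implementing the index contractions exactly as written in (\[T\]) and (\[scalarT\]):

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a', positive=True)(t)
coords = [t] + list(sp.symbols('x y z'))
idx = range(4)

e = sp.diag(1, a, a, a)                    # coframe e^a_mu
einv = e.inv()                             # frame e_a^mu
g = sp.diag(1, -a**2, -a**2, -a**2)
ginv = g.inv()

# Torsion tensor, Eq. (T): T^rho_{mu nu} = e_a^rho (d_mu e^a_nu - d_nu e^a_mu)
Tor = [[[sum(einv[r, A]*(sp.diff(e[A, n], coords[m]) - sp.diff(e[A, m], coords[n]))
             for A in idx) for n in idx] for m in idx] for r in idx]

# Index lowering/raising with the metric
Tl = [[[sum(g[r, s]*Tor[s][m][n] for s in idx) for n in idx] for m in idx] for r in idx]
Tu = [[[sum(ginv[m, p]*ginv[n, q]*Tor[r][p][q] for p in idx for q in idx)
        for n in idx] for m in idx] for r in idx]

# Torsion scalar, Eq. (scalarT), term by term
term1 = sp.Rational(1, 4)*sum(Tu[r][m][n]*Tl[r][m][n] for r in idx for m in idx for n in idx)
term2 = sp.Rational(1, 2)*sum(Tu[r][m][n]*Tl[n][m][r] for r in idx for m in idx for n in idx)
V = [sum(Tor[n][m][n] for n in idx) for m in idx]       # V_mu = T^nu_{mu nu}
term3 = -sum(V[m]*ginv[m, n]*V[n] for m in idx for n in idx)

Tscalar = sp.simplify(term1 + term2 + term3)
print(Tscalar)   # -6*(da/dt)^2/a^2, i.e. T = -6 H^2
```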
On the other hand, from the relation between the Weitzenböck and Levi-Civita connections given by Eq. (\[K\]), one can write the Riemann tensor for the Levi-Civita connection in the form $$\begin{aligned} \label{tensorR} R^\rho_{\;\;\mu\lambda\nu}\!\!\!\!\!\!&&=\partial_{\lambda}\overset{\circ}{\Gamma }{}_{\;\mu\nu}^{\rho} -\partial_{\nu}\overset{\circ}{\Gamma }{}_{\;\mu\lambda}^{\rho} +\overset{\circ}{\Gamma }{}_{\;\sigma\lambda}^{\rho}\overset{\circ}{\Gamma }{}_{\;\mu\nu}^{\sigma} -\overset{\circ}{\Gamma }{}_{\;\sigma\nu}^{\rho}\overset{\circ}{\Gamma }{}_{\;\mu\lambda}^{\sigma}\\ \nonumber &&=\nabla_\nu K^\rho_{\;\;\mu\lambda}-\nabla_\lambda K^\rho_{\;\;\mu\nu} +K^\rho_{\;\;\sigma\nu}K^\sigma_{\;\;\mu\lambda}-K^\rho_{\;\;\sigma\lambda}K^\sigma_{\;\;\mu\nu}\;,\end{aligned}$$ whose associated Ricci tensor can then be written as $$R_{\mu\nu}=\nabla_\nu K^\rho_{\;\;\mu\rho}-\nabla_\rho K^\rho_{\;\;\mu\nu} +K^\rho_{\;\;\sigma\nu}K^\sigma_{\;\;\mu\rho} -K^\rho_{\;\;\sigma\rho}K^\sigma_{\;\;\mu\nu}\;.$$ Now, by using the definitions (\[K\]) and (\[S\]) along with the relations (\[relation1\]), and considering that $S^\mu_{\;\;\rho\mu}= 2K^\mu_{\;\;\;\rho\mu}=-2T^\mu_{\;\;\;\rho\mu}$, one has [@Li-Sotiriou-Barrow-2011; @Sotiriou-Li-Barrow2011b; @Lilarg] $$\begin{aligned} &&R_{\mu\nu}=-\nabla^\rho S_{\nu\rho\mu}-g_{\mu\nu}\nabla^\rho T^\sigma_{\;\;\;\rho\sigma} -S^{\rho\sigma}_{\;\;\;\;\;\mu}K_{\sigma\rho\nu}\;, \nonumber \\ &&R=-T-2\nabla^\mu T^\nu_{\;\;\;\mu\nu}\;,\end{aligned}$$ and thus obtain $$\label{eqdivs} G_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}\,T =-\nabla^\rho S_{\nu\rho\mu}-S^{\sigma\rho}_{\;\;\;\;\mu}K_{\rho\sigma\nu}\;,$$ where $G_{\mu\nu}=R_{\mu\nu}-(1/2)\,g_{\mu\nu}\,R$ is the Einstein tensor. Finally, combining Eq. (\[nablaS\]) and Eq. (\[eqdivs\]), the field equations for $f(T)$ gravity Eq.
(\[lagran1\]) can be rewritten in the form $$\label{motion1} A_{\mu\nu}f_T+B_{\mu\nu}f_{TT}+\frac{1}{2}g_{\mu\nu} f(T) =\Theta_{\mu\nu}\;,$$ where $$\begin{aligned} \label{motion1add} &&A_{\mu \nu }=g_{\sigma\mu}e^a_\nu[e^{-1}\partial_\xi(ee^\rho_a S^{\;\;\sigma\xi}_\rho)-e^\lambda_a S^{\rho\xi\sigma} T_{\rho\xi\lambda}]\\ \nonumber &&\qquad=-\nabla^\sigma S_{\nu\sigma\mu }-S_{\;\;\;\;\mu }^{\rho\lambda }K_{\lambda \rho \nu } =G_{\mu \nu }-\frac{1}{2}g_{\mu \nu }T, \; \\ \nonumber &&B_{\mu\nu}=S^{\;\;\;\;\sigma}_{\nu\mu}\nabla_\sigma T\;.\end{aligned}$$ To close this section, we note that since $A_\mu^{\;\;\mu}=-(R+2T)$, the trace of Eq. (\[motion1\]), which can be used as an independent relation to simplify the field equation, can be expressed as $$\begin{aligned} \label{trace} -(R+2T)f_T + B f_{TT} + 2f(T)=\Theta\;,\end{aligned}$$ where $B=B_\mu^{\;\;\mu}$ and $\Theta=\Theta_\mu^{\;\;\mu}$. Energy Conditions {#sec3} ================= Strong and null energy conditions --------------------------------- The ultimate origin of the strong and null energy conditions is the Raychaudhuri equation together with the requirement that gravity is attractive. The Raychaudhuri equation gives the temporal variation of the expansion $\theta$ of a congruence of geodesics (for a review article see Ref. [@Kar-Dadhich]). For a congruence of timelike geodesics whose tangent vector field is $u^{\mu}$, the Raychaudhuri equation reads $$\label{Raych-time} \frac{d\theta}{d\tau}= - \frac{1}{3}\,\theta^2 - \sigma_{\mu\nu}\sigma^{\mu\nu} + \omega_{\mu\nu}\omega^{\mu\nu} - R_{\mu\nu}u^{\mu}u^{\nu} \;,$$ where $\theta\,$, $\sigma^{\mu\nu}$ and $\omega_{\mu\nu}$ are, respectively, the expansion, shear, and rotation associated with the congruence defined by the vector field $u^{\mu}$, and $R_{\mu\nu}$ is the Ricci tensor.
The evolution equation for the expansion of a congruence of null geodesics defined by a null vector field $k^\mu$ has a similar form to the Raychaudhuri equation (\[Raych-time\]), but with a factor $1/2$ rather than $1/3$, and $-R_{\mu\nu}k^{\mu}k^{\nu}$ instead of $-R_{\mu\nu}u^{\mu}u^{\nu}$ as the last term (see Ref. [@Carroll] for more details). Thus, it reads $$\label{Raych-null} \frac{d\theta}{d\tau}=-\frac{1}{2}\,\theta^2- \sigma_{\mu\nu}\sigma^{\mu\nu}+\omega_{\mu\nu}\omega^{\mu\nu} - R_{\mu\nu}k^\mu k^\nu \;,$$ where the kinematical quantities $\theta\,$, $\sigma^{\mu\nu}$ and $\omega_{\mu\nu}$ are now clearly associated with the congruence of null geodesics. An important point to be emphasized is that the Raychaudhuri Eqs. (\[Raych-time\]) and (\[Raych-null\]) are purely geometric statements, and as such they make no reference to any theory of gravitation. Now, since the shear is a “spatial” tensor, i.e., $\sigma^2 \equiv \sigma_{\mu\nu} \sigma^{\mu\nu}\geq 0$, from Eqs. (\[Raych-time\]) and (\[Raych-null\]) one has that for any hypersurface-orthogonal congruence ($\omega_{\mu\nu}=0$), the conditions for gravity to remain attractive ($d\theta / d\tau < 0$) are given by $$\begin{aligned} R_{\mu\nu}u^\mu u^\nu\geq 0 \;, \label{SEC} \\ R_{\mu\nu}k^\mu k^\nu \geq 0 \;. \label{NEC}\end{aligned}$$ Thus, as long as one can use the field equations of any given gravity theory to relate $R_{\mu \nu}$ to the energy-momentum tensor $T_{\mu \nu}$, the above Raychaudhuri Eqs. (\[Raych-time\]) and (\[Raych-null\]), along with the requirement that gravity is attractive, lead to Eqs. (\[SEC\]) and (\[NEC\]), which can be employed to restrict the energy-momentum tensors in the framework of the gravity theory one is concerned with. Equations (\[SEC\]) and (\[NEC\]) are ultimately the SEC and NEC stated in a coordinate-invariant way for an unfixed geometrical theory of gravitation.
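For a concrete illustration of Eq. (\[Raych-time\]), consider the comoving geodesic congruence $u^\mu=(1,0,0,0)$ of a spatially flat FLRW spacetime, for which $\sigma_{\mu\nu}=\omega_{\mu\nu}=0$ and $\theta=3H$; the Raychaudhuri equation then reduces to an identity. The following sympy sketch verifies this by computing $R_{00}$ directly from the Levi-Civita connection:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a', positive=True)(t)
coords = [t] + list(sp.symbols('x y z'))
idx = range(4)

g = sp.diag(1, -a**2, -a**2, -a**2)        # spatially flat FLRW metric
ginv = g.inv()

# Levi-Civita connection coefficients Gamma^r_{mn}
Gamma = [[[sp.Rational(1, 2)*sum(ginv[r, s]*(sp.diff(g[s, m], coords[n])
            + sp.diff(g[s, n], coords[m]) - sp.diff(g[m, n], coords[s])) for s in idx)
           for n in idx] for m in idx] for r in idx]

# Ricci tensor component R_{mn}
def ricci(m, n):
    return sp.simplify(sum(
        sp.diff(Gamma[l][m][n], coords[l]) - sp.diff(Gamma[l][m][l], coords[n])
        + sum(Gamma[l][l][s]*Gamma[s][m][n] - Gamma[l][n][s]*Gamma[s][m][l] for s in idx)
        for l in idx))

H = sp.diff(a, t)/a
theta = 3*H          # expansion of the comoving congruence; sigma = omega = 0, tau = t

# Raychaudhuri equation: d(theta)/d(tau) + theta^2/3 + R_{00} u^0 u^0 = 0
lhs = sp.simplify(sp.diff(theta, t) + sp.Rational(1, 3)*theta**2 + ricci(0, 0))
print(lhs)           # 0
```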
Hence, for example, in the framework of general relativity, they take, respectively, the forms[^3] $$\begin{aligned} R_{\mu\nu}\, u^{\mu}u^{\nu}&=&\left( T_{\mu\nu} - \frac{T}{2} g_{\mu\nu} \right)\,u^{\mu}u^{\nu}\geq 0 \,, \label{StrongEC}\end{aligned}$$ and $$\begin{aligned} \label{NullEC} R_{\mu\nu}k^{\mu}k^{\nu} = T_{\mu\nu}\, k^{\mu}k^{\nu}\geq 0\,,\end{aligned}$$ which, for example, for a perfect fluid of density $\rho$ and pressure $p\,$, i.e., for $T_{\mu\nu}= (\rho + p)\,u_{\mu}u_{\nu} - p\,g_{\mu\nu},$ reduce to the well-known forms of the SEC and NEC in general relativity $$\label{SEC-GR} \rho + 3p \geq 0 \qquad \text{and} \qquad \rho + p \geq 0 \;.$$ Energy conditions in $f(T)$ gravity {#ssec3} ----------------------------------- According to the previous section the Raychaudhuri equations together with the attractive character of the gravitational interaction give rise to Eqs. (\[SEC\]) and (\[NEC\]), which hold for any geometrical theory of gravitation. In what follows, we maintain this approach to derive the SEC and NEC in the $f(T)$ gravity context. To this end, we first rewrite the $f(T)$ field equation (\[motion1\]) in the form $$\begin{aligned} \label{einst} G_{\mu\nu}=\frac{1}{f_T}[\,\Theta_{\mu\nu}+\frac{1}{2}(Tf_T-f)g_{\mu\nu} - B_{\mu\nu}f_{TT}\,]\,.\end{aligned}$$ Here, $\Theta_{\mu\nu}$ and $\Theta$ denote, respectively, the energy momentum tensor and its trace. From Eq. (\[einst\]) and by taking into account the trace equation (\[trace\]), we have $$\begin{aligned} R_{\mu\nu}=\mathcal{T}_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}\,\mathcal{T}\;,\end{aligned}$$ where $$\begin{aligned} \label{TTT} \mathcal{T}_{\mu\nu} &=&\frac{1}{f_T}(\Theta_{\mu\nu}-f_{TT}B_{\mu\nu})\;, \\ \mathcal{T}&=&\frac{1}{f_T}(\Theta + T f_T - f- B f_{TT}).\end{aligned}$$ Now, for the homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) metric with scale factor $a(t)$, i.e., $g_{\mu\nu}=diag(1,-a^2,-a^2,-a^2)$, from Eqs. 
(\[T\]) through (\[scalarT\]) along with Eq. (\[motion1add\]), we have $$\begin{aligned} T=-6H^2\,,\end{aligned}$$ $$\begin{aligned} \label{Amu} A_{00}=6H^2\,, & \qquad A_{ij}=-2a^2(3H^2+\dot{H})\,\delta_{ij}\,,\end{aligned}$$ $$\begin{aligned} B_{ij}=24a^2H^2\dot{H}\,\delta_{ij}\,, \qquad B=-72H^2\dot{H}\,,\end{aligned}$$ where a dot denotes a derivative with respect to time, $H=\dot{a}/a$ is the Hubble parameter, and the simplest suitable tetrad basis was used [@Tamanini-Bohmer-2012]. Now, for a perfect fluid of density $\rho$ and pressure $p$, namely for $$\begin{aligned} \label{emt} \Theta_{\mu\nu}=(\rho+p)u_\mu u_\nu-p\,g_{\mu\nu} \;\; \text{with} \;\; u_\mu=(1,0,0,0)\,,\end{aligned}$$ and taking the null vector $k_\mu=(1, a, 0, 0)$, we obtain $\mathcal{T}_{\mu\nu}$ and its trace $\mathcal{T}$, namely $$\begin{aligned} \mathcal{T}_{00}=\frac{1}{f_T}\,\rho\,, \quad \; \mathcal{T}_{ij} =\frac{a^2}{f_T}(p-24H^2\dot{H}f_{TT})\,\delta_{ij}\;,\end{aligned}$$ and $$\mathcal{T}=\frac{1}{f_T}(\rho-3p+Tf_T-f+72H^2\dot{H}f_{TT})\;.$$ Thus, from Eqs. (\[SEC\]) and (\[NEC\]), for a general $f(T)$ gravity the strong energy condition (SEC) and the null energy condition (NEC) can be, respectively, written as $$\begin{aligned} \label{sec1} \!\!\!\!\!\!\! \mbox{\bf SEC}:\quad \frac{1}{2f_T}(\rho+3p+f-Tf_T-72H^2\dot{H}f_{TT})\geq 0\,,\end{aligned}$$ and $$\begin{aligned} \label{nec1} \!\!\!\!\!\!\!\!\!\!\!\mbox{\bf NEC}: \quad \frac{1}{f_T}(\rho+p-24H^2\dot{H}f_{TT})\geq 0 \,.\end{aligned}$$ We note that the well-known forms for the SEC ($\rho + 3p \geq 0$) and NEC ($\rho + p \geq 0$) in the framework of general relativity can be recovered as a particular case of the above SEC and NEC in the context of $f(T)$ gravity for the special case $f(T)=T$, as one would expect. 
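The general-relativity limit just mentioned can be checked symbolically. The sketch below (ours, using sympy) encodes the left-hand sides of the SEC and NEC above, substitutes the FLRW relation $T=-6H^2$, and verifies that $f(T)=T$ returns the GR combinations $(\rho+3p)/2$ and $\rho+p$:

```python
import sympy as sp

# Left-hand sides of the f(T) SEC and NEC on the FLRW background; for a
# given f(T) the derivatives f_T, f_TT are computed symbolically and
# T = -6 H^2 is substituted at the end.
rho, p, H, Hdot, T = sp.symbols('rho p H Hdot T', real=True)

def sec_lhs(f):
    fT, fTT = sp.diff(f, T), sp.diff(f, T, 2)
    expr = (rho + 3*p + f - T*fT - 72*H**2*Hdot*fTT) / (2*fT)
    return sp.simplify(expr.subs(T, -6*H**2))

def nec_lhs(f):
    fT, fTT = sp.diff(f, T), sp.diff(f, T, 2)
    expr = (rho + p - 24*H**2*Hdot*fTT) / fT
    return sp.simplify(expr.subs(T, -6*H**2))
```

Calling `sec_lhs(T)` and `nec_lhs(T)` (i.e., $f(T)=T$, so $f_T=1$, $f_{TT}=0$) gives $(\rho+3p)/2$ and $\rho+p$; any other choice of $f$ yields the modified combinations.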
To derive the weak and dominant energy conditions (WEC and DEC) in $f(T)$ gravity, it is important to realize that the above SEC and NEC inequalities \[Eqs.(\[sec1\]) and (\[nec1\])\] can also be recast as an extension of the SEC and NEC conditions of general relativity by suitably defining an effective energy-momentum tensor for $f(T)$ gravity. In fact, in $f(T)$ gravity theories one can define an effective energy-momentum tensor as [^4] $$\begin{aligned} \Theta^{eff}_{\mu\nu}=\frac{1}{f_T}[\Theta_{\mu\nu}+\frac{1}{2}(Tf_T-f)g_{\mu\nu} -f_{TT}B_{\mu\nu}]\;,\end{aligned}$$ from which one defines the effective energy density and the effective pressure in the FLRW background by $$\begin{aligned} \label{effrho} \rho^{eff} = g^{00}\Theta^{eff}_{00}=\frac{1}{f_T}[\rho+\frac{1}{2}(Tf_T-f)]\;,\end{aligned}$$ $$\begin{aligned} \label{effp} p^{eff} &=& -\frac{1}{3}g^{ij}\Theta^{eff}_{ij} \nonumber \\ &=&\frac{1}{f_T} [p-\frac{1}{2}(Tf_T-f)- 24H^2\dot{H}f_{TT}]\;,\end{aligned}$$ which in turn make apparent that the SEC and NEC given by Eqs. (\[sec1\]) and (\[nec1\]) can be obtained from the corresponding general relativity expressions \[Eq. (\[SEC-GR\])\] by using the above effective matter components. Thus, using the effective energy-momentum tensor approach, the weak energy condition (WEC) in $f(T)$ gravity ($\,\rho^{eff}\geq 0\,$) reduces to $$\begin{aligned} \label{wec1} \!\!\!\!\!\!\!\!\!\! \mbox{\bf WEC:} \quad \frac{1}{f_T}[\rho+\frac{1}{2}(Tf_T-f)]\geq 0\;.\end{aligned}$$ Similarly, the dominant energy condition (DEC) in $f(T)$ gravity ($\,\rho^{eff}\geq|\,p^{eff}\,|\,$) can be written in the form $$\begin{aligned} \label{dec1} \!\!\!\!\!\!\! 
\mbox{\bf DEC:} \quad \frac{1}{f_T}[\rho-p+(Tf_T-f)+24H^2\dot{H}f_{TT}]\geq 0\;.\end{aligned}$$ Constraining $\mathbf{f(T)}$ gravity theories {#sec4} ============================================= The energy conditions (\[sec1\]), (\[nec1\]), (\[wec1\]), and (\[dec1\]) can be used to place bounds on a given $f(T)$ in the context of FLRW models. To investigate such bounds, we first note that to ensure the positivity of the effective Newton gravity constant, one must have $f_T>0$ [@Zheng2011]. Thus, after some algebra, in terms of present-day values for the cosmological parameters, the energy conditions (\[sec1\]), (\[nec1\]), (\[wec1\]), and (\[dec1\]) can be, respectively, rewritten as $$\begin{aligned} \label{sec-2} &\mbox{\bf SEC:} & \nonumber \\ \rho_0&\!\!\!+3p_0+f_0+6H_0^2f_{T_0}+72(1+q_0)H_0^4f_{T_0T_0}\geq 0\,; \\ \nonumber \\ &\mbox{\bf NEC:} & \nonumber \\ &\rho_0+p_0+24(1+q_0)H_0^4f_{T_0T_0}\geq0\,;\label{nec2} \\ \nonumber \\ &\mbox{\bf WEC:} & \nonumber \\ & 2\rho_0-f_0-6H_0^2f_{T_0}\geq0\,; \label{wec2} \\ \nonumber \\ &\mbox{\bf DEC:} & \nonumber \\ \rho_0&\!\!\!\!\!-p_0-f_0-6[H_0^2f_{T_0}+4(1+q_0)H_0^4f_{T_0T_0}]\geq0\,, \label{dec2}\end{aligned}$$ where $q = - (\ddot{a}/a)\,H^{-2}$ is the deceleration parameter, and a subscript $0$ indicates the present-day value of the corresponding parameter. To make concrete applications of the above conditions to set bounds on $f(T)$, we first note that apart from the WEC \[Eq. (\[wec2\])\], all the above conditions depend on the current value of the pressure $p_0$. Therefore, for simplicity, in what follows we shall focus on the observational WEC constraints on $f(T)$ gravity. Furthermore, we take the best-fit value $H_0=0.718$ as determined by Capozziello [*et al.*]{} [@Cosmography-2011]. 
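Both the effective-fluid reading of the conditions and the present-day forms above can be verified symbolically. The sketch below (ours; $f$, $f_T$, $f_{TT}$ are treated as independent symbols on the FLRW background) checks that $\rho^{eff}+p^{eff}$ and $(\rho^{eff}+3p^{eff})/2$ reproduce the NEC and SEC combinations, and that using $\dot H=-(1+q)H^2$ and multiplying through by $f_T>0$ yields the present-day WEC and NEC:

```python
import sympy as sp

# rho_eff, p_eff as in Eqs. (effrho)-(effp), with T = -6 H^2 on the FLRW
# background; f, f_T, f_TT are independent symbols here (a sketch, not the
# authors' code).
rho, p, H, q, f, fT, fTT = sp.symbols('rho p H q f f_T f_TT', real=True)
T = -6*H**2
Hdot = -(1 + q)*H**2   # from q = -(addot/a)/H^2 and Hdot = addot/a - H^2

rho_eff = (rho + (T*fT - f)/2) / fT
p_eff = (p - (T*fT - f)/2 - 24*H**2*Hdot*fTT) / fT

nec = (rho + p - 24*H**2*Hdot*fTT) / fT                   # Eq. (nec1)
sec = (rho + 3*p + f - T*fT - 72*H**2*Hdot*fTT) / (2*fT)  # Eq. (sec1)

# NEC as rho_eff + p_eff, SEC as (rho_eff + 3 p_eff)/2:
check_nec = sp.simplify(rho_eff + p_eff - nec)
check_sec = sp.simplify((rho_eff + 3*p_eff)/2 - sec)

# Present-day forms: WEC times 2 f_T and NEC times f_T:
wec_today = sp.expand(2*(rho + (T*fT - f)/2))       # -> 2 rho - f - 6 H^2 f_T
nec_today = sp.expand(rho + p - 24*H**2*Hdot*fTT)   # -> rho + p + 24(1+q) H^4 f_TT
```

Both `check_nec` and `check_sec` simplify to zero, and the expanded present-day expressions match Eqs. (\[wec2\]) and (\[nec2\]) term by term.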
Exponential $f(T)$ gravity {#subsec41} -------------------------- As a first concrete example, we shall examine the WEC bounds on the parameter $\beta$ of the following exponential family of $f(T)$ gravity theories [@Linder10; @Wei-Ma-Qi-2011; @kazu2011]: $$\begin{aligned} \label{exp-grav} f(T)=T+\alpha T(1-e^{\beta T_0/T})\;\end{aligned}$$ with $$\begin{aligned} \alpha=-\frac{1-\Omega_{m0}}{1-(1-2\beta)\,e^\beta}\;,\end{aligned}$$ where the limit $\beta=0$ corresponds to the $\Lambda$CDM model, $\Omega_{m0}$ is the dimensionless matter density parameter, and $T_0 = T (z=0)$ is the current value of the torsion scalar. By using $T_0= -6 H_0^2$, one finds from (\[wec2\]) the following WEC constraint $$\begin{aligned} \label{expmod} \alpha\,\beta\, T_0\, e^\beta \geq 0\;.\end{aligned}$$ Now we take $\Omega_{m0}=0.272^{+0.036}_{-0.034}$ (which arises from the combination of the $557$ Type Ia Supernovae (SNe Ia) Union $2$ set, baryonic acoustic oscillations (BAO), and the cosmic microwave background (CMB) radiation at $95\%$ confidence level), along with the above observational value of $H_0$. With these values, the inequality (\[expmod\]) is satisfied if and only if $\beta>-1.256$. This makes explicit the constraint on the parameter $\beta$ of the exponential $f(T)$ gravity \[Eq.(\[exp-grav\])\] for the WEC fulfillment. Born-Infeld $f(T)$ gravity {#subsec42} -------------------------- As the second concrete example, we consider the Born-Infeld (BI) $f(T)$ gravity given by [@biws] $$\label{largbi} f(T)=\lambda\left[\left(1-\epsilon+\frac{2T}{\lambda}\right)^{1/2}-1\right]\;,$$ where $\epsilon=4\Lambda/\lambda$ is a dimensionless parameter, $\Lambda$ is the cosmological constant, and $\lambda$ is a Born-Infeld-like constant. 
This gravity theory has been considered in several cosmological contexts, which include the avoidance of the singularity in the standard model [@avsig], a route to an inflationary scenario without an inflaton [@infla], and bounds on the dynamics of the Hubble parameter [@bisum]. Clearly, the BI $f(T)$ gravity (\[largbi\]) reduces to the standard TG (often referred to as TEGR) when $\lambda \rightarrow \infty $. Here, we focus on the case $\lambda>0$ [@biws]. In this case, the WEC takes the form $$\label{biwec} \epsilon-\frac{T_0}{\lambda}+\left(1-\epsilon+\frac{2T_0}{\lambda}\right)^{1/2}-1>0\;.$$ This inequality holds for $$\label{BI-ranges} 0<\epsilon<1 \qquad \mbox{and} \qquad \lambda>-\frac{T_0}{\sqrt{\epsilon}(1-\sqrt{\epsilon})}\;,$$ which makes apparent that the range of $\epsilon$ in which the WEC is fulfilled coincides with that of an expanding universe where the cosmological constant is positive (type $II$ of Ref. [@biws]). Furthermore, by using $T_0= -6 H_0^2$, one finds from the inequalities (\[BI-ranges\]) the WEC lower bound on the parameter $\lambda$ in the BI teleparallel gravity, namely $\lambda > 12.36\,$. Final remarks {#sec5} ============= Motivated by the attempts to explain the observed accelerating expansion of the Universe with a modified teleparallel gravitational theory, there have been many recent papers on $f(T)$ gravity. Despite the arbitrariness in the choice of different functional forms of $f(T)$, which calls for ways of constraining the possible $f(T)$ gravity theories on physical grounds, several features of $f(T)$ gravity have been discussed in a number of recent articles. In this paper we have proceeded further with the investigations on the potentialities, difficulties, and limitations of $f(T)$ gravity theories by deriving the classical energy conditions in the $f(T)$ gravity context. 
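The two WEC bounds just obtained (Secs. \[subsec41\] and \[subsec42\]) can be reproduced numerically. The sketch below (ours; the values $\Omega_{m0}=0.272$ and $H_0=0.718$ are assumed from the text, and only signs and a grid minimum are checked) evaluates the exponential-model combination $\alpha\beta T_0 e^{\beta}$ of Eq. (\[expmod\]) and the Born-Infeld lower bound $-T_0/(\sqrt{\epsilon}(1-\sqrt{\epsilon}))$ of Eq. (\[BI-ranges\]):

```python
import math

OMEGA_M0 = 0.272         # assumed from the text
H0 = 0.718               # assumed from the text
T0 = -6.0 * H0**2        # torsion scalar today, T0 = -6 H0^2

def wec_exponential(beta):
    # Sign of alpha*beta*T0*exp(beta), Eq. (expmod); beta = 0 (the
    # LambdaCDM limit) is excluded since alpha is singular there.
    alpha = -(1.0 - OMEGA_M0) / (1.0 - (1.0 - 2.0*beta) * math.exp(beta))
    return alpha * beta * T0 * math.exp(beta)

def bi_lambda_bound(eps):
    # Lower bound on lambda from Eq. (BI-ranges), valid for 0 < eps < 1.
    s = math.sqrt(eps)
    return -T0 / (s * (1.0 - s))

# The BI bound is weakest at eps = 1/4, where sqrt(eps)(1-sqrt(eps)) = 1/4:
weakest = min(bi_lambda_bound(e / 1000.0) for e in range(1, 1000))
```

Sampling `wec_exponential` shows the sign change near $\beta\approx-1.256$, where the denominator of $\alpha$ vanishes, and the grid minimum of the BI bound lands at $\epsilon=1/4$, consistent with the quoted lower bound on $\lambda$.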
Starting from the Raychaudhuri equation along with the requirement that gravity is attractive, we have derived the null and strong energy conditions in the framework of $f(T)$ gravity and shown that, although similar in form, they differ from the NEC and SEC of general relativity; in the limiting case $f(T)=T$, they reduce to the well-known NEC and SEC of Einstein’s gravitational theory. The comparison of the SEC and NEC inequalities \[Eqs. (\[sec1\]) and (\[nec1\])\] with those which would be obtained by translating these energy conditions in terms of an effective energy-momentum tensor for $f(T)$ gravity enabled us to obtain the general expressions for the weak and dominant energy conditions \[Eqs. (\[wec1\]) and (\[dec1\])\], which also reduce to the corresponding known energy conditions of general relativity in the limit $f(T)=T$. As concrete examples of how these energy-condition requirements may constrain $f(T)$ gravity theories, we have discussed the WEC bounds on two different families of $f(T)$ theories, namely the exponential and Born-Infeld $f(T)$ gravity theories (Secs. \[subsec41\] and \[subsec42\]). To this end, we have used the current observational bounds on $H_0$ and $\Omega_{m0}$ to show that the WEC is fulfilled for $\beta>-1.256$ in the exponential $f(T)$ gravity, whereas for Born-Infeld $f(T)$ gravity the WEC fulfillment is guaranteed for any $\lambda > 12.36\,$ such that $0<\epsilon<1$ holds. Finally, we emphasize that although the energy conditions in $f(T)$ gravity discussed in this paper have well-motivated physical grounds (the attractive character of gravity together with the Raychaudhuri equation), the question as to whether they should be applied to any solution of $f(T)$ gravity theories remains open, and is ultimately related to the confrontation between theory and observations. 
We recall that in the context of Einstein’s gravitational theory, this confrontation indicates that all energy conditions seem to have been violated in the recent past of cosmic evolution [@EC-1; @EC-8]. M.J.R. acknowledges the support of FAPERJ under a CNE E-26/101.556/2010 grant. This work was also supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) - Brasil, under grant No. 475262/2010-7. M.J.R. thanks CNPq for the grant under which this work was carried out. We are grateful to A.F.F. Teixeira for reading the manuscript and indicating some omissions and typos. D.L. is particularly grateful to Professor P.X. Wu for his several helpful suggestions and long-time support of the author’s research. [99]{} A.G. Riess $et$ $al$., Astron. J. [**116**]{}, 1009 (1998); S. Perlmutter $et$ $al$., Astrophys. J. [**517**]{}, 565 (1999). D.N. Spergel $et$ $al$., Astrophys. J. Suppl. [**170**]{}, 377S (2007). M. Tegmark $et$ $al$., Phys. Rev. D [**69**]{}, 103501 (2004); U. Seljak $et$ $al$., Phys. Rev. D [**71**]{}, 103515 (2005); D.J. Eisenstein $et$ $al$., Astrophys. J. [**633**]{}, 560 (2005). L. Randall and R. Sundrum, Phys. Rev. Lett. **83**, 3370 (1999); **83**, 4690 (1999); G. Dvali, G. Gabadadze, and M. Porrati, Phys. Lett. B **485**, 208 (2000); C. Deffayet, S.J. Landau, J. Raux, M. Zaldarriaga, and P. Astier, Phys. Rev. D **66**, 024019 (2002); V. Sahni and Y. Shtanov, J. Cosmol. Astropart. Phys. **11**, 014 (2003); A. Lue, Phys. Rep. **423**, 1 (2006). S. Capozziello and M. Francaviglia, Gen. Relativ. Gravit. **40**, 357 (2007); A. De Felice and S. Tsujikawa, Living Rev. Rel. **13**, 3 (2010); T.P. Sotiriou and V. Faraoni, Rev. Mod. Phys. **82**, 451 (2010); S. Nojiri and S.D. Odintsov, Phys. Rep. **505**, 59 (2011). G.J. Olmo, Int. J. Mod. Phys. D **20**, 413 (2011); S. Capozziello and M. De Laurentis, Phys. Rep. **509**, 167 (2011); S. Capozziello and V. 
Faraoni, *Beyond Einstein Gravity, Fundamental Theories of Physics*, (Springer-Verlag,Berlin, 2011), vol. 170. G.R. Bengochea and R. Ferraro, Phys. Rev. D **79**, 124019 (2009). E.V. Linder, Phys. Rev. D **81**, 127301 (2010), Phys. Rev. D **82**, 109902(E) (2010). R. Myrzakulov, Eur. Phys. J. C **71**, 1752 (2011). A. Einstein, Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl. **17**, 217; 224 (1928); A. Unzicker and T. Case, arXiv:physics/0503046 A. Einstein, Math. Ann. [**102**]{}, 685 (1930); A. Einstein, Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl. **24**, 401 (1930); C. Pellegrini and J. Plebański, K. Dan. Vidensk. Selsk. Mat. Fys. Skr. [**2**]{}, 2 (1962); C. M[ø]{}ler, K. Dan. Vidensk. Selsk. Mat. Fys. Skr. **89**, No. 13 (1978); K. Hayashi and T. Nakano, Prog. Theor. Phys. [**38**]{}, 491(1967); K. Hayashi and T. Shirafuji, Phys. Rev. D [**19**]{}, 3524 (1979); [**24**]{}, 3312 (1981). R. Aldrovandi and J.G. Pereira, *An Introduction to Teleparallel Gravity*, (Instituto de Fisica Teorica, UNESP, Sao Paulo, Brazil) http://www.ift.unesp.br/users/jpereira/tele.pdf ; V.C. de Andrade, L.C.T. Guillen, J.G. Pereira, arXiv: gr-qc/0011087. V.C. de Andrade, L.C.T. Guillen, and J.G. Pereira, Phys. Rev. Lett. [**84**]{}, 4533 (2000). G.R. Bengochea, Phys. Lett. B **695**, 405 (2011). P. Wu and H. Yu, Phys. Lett. B **693**, 415 (2010). H. Wei, X.P. Ma, and H.Y. Qi, Phys. Lett. B **703**, 74 (2011). L. Iorio and E.N. Saridakis, arXiv:1203.5781v1 \[gr-qc\] J.B. Dent, S. Dutta, and E.N. Saridakis, J. Cosmol. Astropart. Phys. 01, 009 (2011). S.H. Chen, J.B. Dent, S. Dutta, and E.N. Saridakis, Phys. Rev. D **83**, 023508 (2011). R. Zheng and Q.G. Huang, J. Cosmol. Astropart. Phys. **03**, 002 (2011). P. Wu and H. Yu, Phys. Lett. B **692**, 176 (2010). T. Wang, Phys. Rev. D **84**, 024042 (2011). C.G. Böhmer, A. Mussa, and N. Tamanini, Class. Quant. Grav. **28**, 245020 (2011). P. Wu and H.W. Yu, Eur. Phys. J. C **71**, 1552 (2011). S. Capozziello, V.F. Cardone, H. 
Farajollahi, and A. Ravanpak, Phys. Rev. D **84**, 043527 (2011). B. Li, T.P. Sotiriou, J.D. Barrow, Phys. Rev. D **83**, 064035 (2011). T.P. Sotiriou, B. Li, J.D. Barrow, Phys. Rev. D **83**, 104030 (2011). M. Li, R. X. Miao, and Y. G. Miao, J. High Energy Phys. **1107**, 108 (2011). R.X. Miao, M. Li, and Y.G. Miao, J. Cosmol. Astropart. Phys. **1111**, 033 (2011). R. Zheng and Q. Huang, J. Cosmol. Astropart. Phys. [**1103**]{} 002, (2011). T.P. Sotiriou, B. Li, and J.D. Barrow, Phys. Rev. D [**83**]{}, 104030 (2011). N. Tamanini and C.G. Böhmer Phys.Rev. D [**86**]{}, 044009 (2012). K.K. Yerzhanov, S.R. Myrzakul, I.I. Kulnazarov, and R. Myrzakulov, arXiv:1006.3879; R. Yang, Eur. Phys. J. C [**71**]{} 1797 (2011); P.Y. Tsyba, I.I. Kulnazarov, K.K. Yerzhanov, and R. Myrzakulov, Int. J. Theor. Phys. [**50**]{}, 1876 (2011); S.H. Chen, J.B. Dent, S. Dutta, and E.N. Saridakis, Phys. Rev. D [**83**]{}, 023508 (2011); K. Bamba, C.Q. Geng, and C. Lee, arXiv:1008.4036; R. Myrzakulov, arXiv:1008.4486; K. Karami and A. Abdolmaleki, arXiv:1009.2459; K. Karami and A. Abdolmaleki, J.Phys.Conf.Ser. [**375**]{}, 032009 (2012) R. Yang, Euro phys. Lett. [**93**]{}, 60001 (2011); J.B. Dent, S. Dutta, and E.N. Saridakis, JCAP [**1101**]{}, 009 (2011); Y. Cai, S. Chen, J.B. Dent, S. Dutta, and E.N. Saridakis, Class. Quantum Grav. [**28**]{}, 215011 (2011); S. Chattopadhyay and U. Debnath, Int. J. Mod. Phys. D [**20**]{}, 1135 (2011); M. Li, R. Miao, and Y. Miao, J. High Energy Phys. [**1107**]{}, 108 (2011); Y. Zhang, H. Li, Y. Gong, and Z.H. Zhu, J. Cosmol. Astropart. Phys. **07**, 015 (2011). X. Meng and Y. Wang, Eur. Phys. J. C [**71**]{} 1755 (2011); H. Dong, Y.B. Wang, and X.H. Meng, Eur. Phys. J. C [**72**]{}, 2002 (2012); C.G. Böhmer, A. Mussa, and N. Tamanini, Class. Quant. Grav. [**28**]{}, 245020 (2011); H. Wei, H. Qi, and X. Ma, Phys. Lett. B [**712**]{} 430, (2012); C.Q. Geng, C. Lee, E.N. Saridakis, and Y. Wu, Phys. Lett. B [**704**]{}, 384 (2011); K. Bamba and C.Q. 
Geng, JCAP [**11**]{} 008, (2011); R. Ferraro and F. Fiorini, Phys. Rev. D [**84**]{}, 083518 (2011); R. Ferraro and F. Fiorini, Phys. Lett. B [**702**]{} 75, (2011); H. Wei, Phys. Lett. B **712**, 430 (2012); Y. Wu and C.Q. Geng, arXiv:1110.3099; D. Liu, P. Wu and H.W. Yu, arXiv:1203.2016; P.A. Gonzalez, E.N. Saridakis, and Y. Vasquez,J. High Energy Phys. [**1207**]{} 053 (2012) C.G. Böhmer, T. Harko, and F.S.N. Lobo, Phys. Rev. D [**85**]{}, 044033 (2012); K. Karami and A. Abdolmaleki, arXiv:1111.7269; H. Wei, X. Guo and L. Wang, Phys. Lett. B [**707**]{}, 2 (2012); K. Atazadeh and F. Darabi, Eur. Phys. J. C [**72**]{}, 2016(2012); H. Farajollahi, A. Ravanpak, P. Wu, Astrophys Space Sci. [**338**]{}, 195 (2012); K. Karami and A. Abdolmaleki, JCAP 04 [**007**]{} (2012); M.H. Daouda, M.E. Rodrigues, and M.J.S. Houndjo, Euro. Phys. J. C. **71**, 1817 (2011); M.H. Daouda, M.E. Rodrigues, and M.J.S. Houndjo, Euro. Phys. J. C. **72**, 1890 (2012). S.W. Hawking and G.F.R. Ellis, [*The Large Scale Structure of Spacetime*]{}, (Cambridge University Press, cambridge, England, 1973). M. Visser, [*Lorentzian Wormholes*]{}, (AIP Press, New York, 1996). R.M. Wald, *General Relativity*, (University of Chicago Press, Chicago, 1984). M. Visser, Science **276**, 88 (1997); Phys. Rev. D **56**, 7578 (1997). J. Santos, J.S. Alcaniz, and M.J. Rebouças, Phys. Rev. D **74**, 067301 (2006). J. Santos, J.S. Alcaniz, N. Pires, and M.J. Rebouças, Phys. Rev. D **75**, 083523 (2007). J.Santos, J.S. Alcaniz, M.J. Rebouças, and N. Pires, Phys. Rev. D **76**, 043519 (2007). Y. Gong, A. Wang, Q. Wu, and Y.Z. Zhang, J. Cosmol. Astropart. Phys. **08**, 018 (2007). Y. Gong and A. Wang, Phys. Lett. B **652**, 63 (2007). A.A. Sen and R.J. Scherrer, Phys. Lett. B **659**, 457 (2008); C. Catto[ë]{}n and M. Visser, Class. Quant. Grav. **25**, 165013 (2008). M.P. Lima, S. Vitenti, and M. J. Rebouças, Phys. Rev. D **77**, 083518 (2008); M.P. Lima, S.D.P. Vitenti, and M.J. Rebouças, Phys. Lett. 
B **668**, 83 (2008); M.P. Lima, S.D.P. Vitenti, M.J. Rebouças, [*in Astronomy and Relativistic Astrophysics: New Phenomena and New States of Matter in the Universe*]{}, edited by C.A.Z. Vasconcellos, B.E.J. Bodmann, H.Stoecker, M.J. Rebouças, V.B. Bezerra, and W. Greiner. (World Scientific, Singapore, 2010), p.219. C.J. Wu, C. Ma, and T.J. Zhang, Astrophys. J. **753**, 97 (2012). J. Santos, J.S. Alcaniz, M.J. Rebouças, and F.C. Carvalho, Phys. Rev. D **76**, 083513 (2007); J. Santos, M.J. Reboucas, and J.S. Alcaniz, Int. J. Mod. Phys. D **19**, 1315 (2010). J.H. Kung, Phys. Rev. D [**52**]{}, 6922 (1995); Phys. Rev. D [**53**]{}, 3017 (1996). O. Bertolami and M.C. Sequeira, Phys. Rev. D **79**, 104010 (2009). N.M. Garcia, T. Harko, F.S.N. Lobo, and J.P. Mimoso, Phys. Rev. D **83**, 104032 (2011). Y.Y. Zhao, Y.B. Wu, J. Lu, Z. Zhang, W.Li Han, and L.L. Ling Eur. Phys. J. C **72**, 1924 (2012). K. Atazadeh, A. Khaleghi, H.R. Sepangi, and Y. Tavakoli, Int. J. Mod. Phys. D **18**, 1101 (2009). A. Banijamali, B. Fazlpour, and M. R. Setare, Astrophys. Space Sci. **338**, 327 (2012). F.G. Alvarenga, M.J.S. Houndjo, A.V. Monwanou, and J.B. C. Orou, arXiv:1205.4678v2 \[gr-qc\]. R. Ferraro and F. Fiorini, Phys. Rev. D **78**, 124019 (2008). R. Ferraro and F. Fiorini, Int. J. Mod. Phys. Conf. Ser. **3**, 227 (2011). R. Ferraro and F. Fiorini, Phys. Rev. D **75**, 124019 (2007). F. Fiorini and R. Ferraro, Int. J. Mod. Phys. A **24**, 1686(2009). S. Kar and S. SenGupta, Pramana **69**, 49 (2007); N.Dadhich, arXiv:gr-qc/0511123. S. Carroll, [*Spacetime and Geometry: An Introduction to General Relativity*]{}, (Addison-Wesley, Reading, MA, 2004). : K. Bamba, C.Q. Geng, C.C. Lee and L.W. Luo, J. Cosmol. Astropart. Phys. **01**, 021 (2011). [^1]: An interesting member of this group arises by assuming extra dimensions and by taking the Lagrangian of the theory as function of the higher-dimensional Ricci scalar. This approach gives rise to the brane-world cosmology [@bw-refs]. 
[^2]: Throughout this paper we use Greek letters to denote spacetime coordinate indices, which are lowered and raised, respectively, with $g_{\mu \nu }$ and $g^{\mu \nu }$, and vary from $0$ to $3$, whereas the first alphabetic Latin lower-case letters ($a$ and $b$) are tetrad indices, which are lowered and raised with the Minkowski tensor $\eta_{ab} = \text{diag}\,(1,-1,-1,-1)$ and $ \eta^{ab}$, respectively. We denote the spatial components ($1, 2, 3$) by the middle alphabetic Latin lower-case letters $i$ and $j$. [^3]: Clearly, here $T$ is not the torsion scalar, but the trace of the energy-momentum tensor $T=T^\mu_\mu$. [^4]: A comparison with the effective energy-momentum tensor of Ref. [@Lilarg] makes it clear that the one used in the present work includes the full matter content.
--- abstract: 'Global optimality analysis in the sub-Riemannian problem on the Lie group SH(2) is considered. We cut out open dense domains in the preimage and in the image of the exponential mapping based on the description of Maxwell strata. We then prove that the exponential mapping restricted to these domains is a diffeomorphism. Based on this diffeomorphic property, the cut time, i.e., the time of loss of global optimality, is computed on $\mathrm{\ensuremath{SH(2)}}$. We also consider the global structure of the exponential mapping and obtain an explicit description of the cut locus and the optimal synthesis.' author: - 'Yasir Awais Butt, Yuri L. Sachkov, Aamer Iqbal Bhatti' bibliography: - 'ref.bib' title: 'Cut Locus and Optimal Synthesis in Sub-Riemannian Problem on the Lie Group SH(2)' --- Introduction ============ In this work we complete our study of the sub-Riemannian problem on the Lie group $\mathrm{SH}(2)$, which is the group of motions of the pseudo-Euclidean plane. The work was initiated in [@Extremal_Pseudo_Euclid], where we defined the sub-Riemannian problem. The control system comprises two 3-dimensional left-invariant vector fields and a 2-dimensional linear control vector. We applied the Pontryagin maximum principle (PMP) to the control system and obtained the corresponding Hamiltonian system. In [@intg_SH2] we proved the Liouville integrability of the Hamiltonian system. We calculated the Hamiltonian flow such that the extremal trajectories were parametrized in terms of Jacobi elliptic functions [@Extremal_Pseudo_Euclid]. Since PMP states only first-order optimality conditions, the trajectories resulting from PMP are only potentially optimal; they are called extremal trajectories or geodesics. Further analysis based on second-order optimality conditions is then needed to single out the optimal trajectories, i.e., the minimizing geodesics. It is well known that the candidate optimal trajectories lose optimality either at the Maxwell points or at the conjugate points [@agrachev_sachkov],[@max_sre],[@cut_sre1]. 
Based on this optimality analysis, one is able to determine the time of loss of global optimality, known as the cut time. Rigorous techniques for this optimality analysis have evolved over the years from research on related sub-Riemannian problems on various Lie groups, see e.g., [@max_sre], [@cut_sre1], [@Sachkov_Dido_Comp_Max], [@cut_engel]. These techniques were employed in [@Extremal_Pseudo_Euclid] and [@Max_Conj_SH2] to compute the Maxwell strata and the conjugate locus in the problem under investigation. An effective upper bound on the cut time was also computed. In this paper we extend the global optimality analysis similarly to [@cut_sre2]. We decompose the image $M=\mathrm{SH}(2)$ and the preimage of the exponential mapping into open dense sets based on the Maxwell strata and conjugate loci and prove that the exponential mapping between these sets is a diffeomorphism. This leads naturally to the proof that the cut time is equal to the first Maxwell time. Finally, we analyze the global structure of the exponential mapping and obtain an explicit characterization of the cut locus and the optimal synthesis on the manifold $\mathrm{SH}(2)$. The paper is organized as follows. In Section 2, we review the results from [@Extremal_Pseudo_Euclid] and [@Max_Conj_SH2] as a ready reference. Sections 3 and 4 contain the main results of this work. In Section 3 we state and prove the conditions for the exponential mapping to be a diffeomorphism and compute the cut time. Section 4 pertains to an explicit characterization of the Maxwell strata and the cut locus in terms of a stratification of $\mathrm{SH}(2)$. In Section 5 we conclude this work. Previous Work ============= Problem Statement\[sec:Problem-Statement\] ------------------------------------------ The Lie group $\mathrm{SH(2)}$ is a 3-dimensional group of roto-translations of the pseudo-Euclidean plane [@Ja.Vilenkin]. 
The sub-Riemannian problem on the Lie group $\mathrm{SH(2)}$ reads as follows [@Extremal_Pseudo_Euclid]: $$\begin{aligned} \dot{x} & = & u_{1}\cosh z,\quad\dot{y}=u_{1}\sinh z,\quad\dot{z}=u_{2},\label{eq:2.1}\\ q & = & (x,y,z)\in M=\mathrm{SH(2)\cong\mathbb{R}^{3}},\quad x,y,z\in\mathbb{R},\quad(u_{1},u_{2})\in\mathbb{R}^{2},\label{eq:2.2}\\ q(0) & = & (0,0,0),\qquad q(t_{1})=q_{1}=(x_{1},y_{1},z_{1}),\label{eq:2.3}\\ l & = & \int_{0}^{t_{1}}\sqrt{u_{1}^{2}+u_{2}^{2}}\, dt\to\min.\label{eq:2.4}\end{aligned}$$ By the Cauchy-Schwarz inequality, the minimization problem for the sub-Riemannian length functional $l$ in (\[eq:2.4\]) is equivalent to the problem of minimizing the following action functional with fixed $t_{1}$ [@sachkov_lectures]: $$J=\frac{1}{2}\intop_{0}^{t_{1}}(u_{1}^{2}+u_{2}^{2})dt\rightarrow\min.\label{eq:2.5}$$ Known Results\[sec:Previous-Work\] ---------------------------------- We now briefly review the results from [@Extremal_Pseudo_Euclid] and [@Max_Conj_SH2] as a ready reference in this paper. System (\[eq:2.1\]) satisfies the bracket-generating condition and is hence globally controllable [@Chow],[@Ravchevsky]. Existence of optimal trajectories for the optimal control problem (\[eq:2.1\])–(\[eq:2.5\]) follows from Filippov's theorem [@agrachev_sachkov]. We applied PMP [@agrachev_sachkov] to (\[eq:2.1\])–(\[eq:2.5\]) to derive the normal Hamiltonian system. It turns out that the vertical part of the normal Hamiltonian system is a double covering of a mathematical pendulum. 
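The Cauchy-Schwarz equivalence of (\[eq:2.4\]) and (\[eq:2.5\]) above can be illustrated numerically. The sketch below (ours, with arbitrary sample controls) checks the inequality $l^{2}\leq 2\,t_{1}J$, with equality for controls of constant norm:

```python
import math

def length_and_action(u1, u2, t1, n=20000):
    # Midpoint-rule discretization (ours) of the length l and action J for
    # given controls u1(t), u2(t) on [0, t1].
    h = t1 / n
    l = J = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s2 = u1(t)**2 + u2(t)**2
        l += math.sqrt(s2) * h      # length integrand sqrt(u1^2 + u2^2)
        J += 0.5 * s2 * h           # action integrand (u1^2 + u2^2)/2
    return l, J

# Non-constant speed: strict inequality l^2 < 2 t1 J.
l1, J1 = length_and_action(lambda t: math.cos(t), lambda t: t, 1.0)
# Constant speed |u| = 1: equality l^2 = 2 t1 J.
l2, J2 = length_and_action(lambda t: 0.6, lambda t: 0.8, 1.0)
```

This is why the two problems have the same minimizers: the inequality is saturated exactly by arc-length-parametrized trajectories.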
The normal Hamiltonian system is given as: $$\begin{aligned} \dot{\gamma} & = & c,\quad\dot{c}=-\sin\gamma,\quad\lambda=(\gamma,c)\in C\cong(2S_{\gamma}^{1})\times\mathbb{R}_{c},\quad2S_{\gamma}^{1}=\mathbb{R}/(4\pi\mathbb{Z}),\label{eq:2.6}\\ \dot{x} & = & \cos\frac{\gamma}{2}\cosh z,\quad\dot{y}=\cos\frac{\gamma}{2}\sinh z,\quad\dot{z}=\sin\frac{\gamma}{2}.\label{eq:2.7}\end{aligned}$$ The total energy integral of the pendulum (\[eq:2.6\]) is given as: $$E=\frac{c^{2}}{2}-\cos\gamma,\quad E\in[-1,+\infty).\label{eq:E_pend}$$ The initial cylinder of the vertical subsystem is decomposed into the following subsets based upon the pendulum energy that correspond to various pendulum trajectories: $$\begin{aligned} C & = & \bigcup_{i=1}^{5}C_{i},\end{aligned}$$ where, $$\begin{aligned} C_{1} & = & \left\{ \lambda\in C\,\vert\, E\in(-1,1)\right\} ,\label{eq:2.8}\\ C_{2} & = & \left\{ \lambda\in C\,\vert\, E\in(1,\infty)\right\} ,\\ C_{3} & = & \left\{ \lambda\in C\,\vert\, E=1,c\neq0\right\} ,\\ C_{4} & = & \left\{ \lambda\in C\,\vert\, E=-1,\, c=0\right\} =\left\{ (\gamma,c)\in C\,\vert\,\gamma=2\pi n,\, c=0\right\} ,\quad n\in\mathbb{N},\\ C_{5} & = & \left\{ \lambda\in C\,\vert\, E=1,\, c=0\right\} =\left\{ (\gamma,c)\in C\,\vert\,\gamma=2\pi n+\pi,\, c=0\right\} ,\quad n\in\mathbb{N}.\label{eq:2.12}\end{aligned}$$ ![\[fig:Decomposition\]Stratification of the Phase Cylinder $C$ of the Pendulum](DecomPhCyl) We defined elliptic coordinates $(\varphi,k)$ for $\lambda\in\cup_{i=1}^{3}C_{i}\subset C$ and proved that the flow of the pendulum is rectified in these coordinates. Note that $k$ was defined as the reparametrized energy and $\varphi$ was defined as the reparametrized time of motion of the pendulum [@Extremal_Pseudo_Euclid]. 
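The stratification (\[eq:2.8\])–(\[eq:2.12\]) is straightforward to encode. The classifier below (our sketch; the tolerance for the measure-zero strata is our own choice) assigns a covector $(\gamma, c)$ to its stratum via the energy (\[eq:E\_pend\]):

```python
import math

def stratum(gamma, c, tol=1e-9):
    # Pendulum energy, Eq. (E_pend): E = c^2/2 - cos(gamma).
    E = c**2 / 2.0 - math.cos(gamma)
    if abs(E + 1.0) <= tol and abs(c) <= tol:
        return 'C4'                               # stable equilibria
    if abs(E - 1.0) <= tol:
        return 'C5' if abs(c) <= tol else 'C3'    # unstable equilibria / separatrix
    return 'C1' if E < 1.0 else 'C2'              # oscillations / rotations
```

For example, $(\gamma, c) = (0, 0)$ lands in $C_4$, $(\pi, 0)$ in $C_5$, $(0, 2)$ on the separatrix $C_3$, while small and large energies give $C_1$ and $C_2$, respectively.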
Integration of the horizontal subsystem in elliptic coordinates follows from integration of the vertical subsystem and the resulting extremal trajectories are parametrized by the Jacobi elliptic functions $\mathrm{sn}(\varphi,k)$, $\mathrm{cn}(\varphi,k)$, $\mathrm{dn}(\varphi,k)$, $\mathrm{E}(\varphi,k)=\intop_{0}^{\varphi}\mathrm{dn^{2}}(t,k)dt$ (Theorems 5.1–5.5 [@Extremal_Pseudo_Euclid]). The results of integration for $\lambda\in C_{i},\quad i=1,\ldots,5,$ are summarized as: - Case 1 : $\lambda=(\varphi,k)\in C_{1}$ $$\left(\begin{array}{c} x_{t}\\ y_{t}\\ z_{t} \end{array}\right)=\left(\begin{array}{c} \frac{s_{1}}{2}\left[\left(w+\frac{1}{w\left(1-k^{2}\right)}\right)\left[\mathrm{E}(\varphi_{t})-\mathrm{E}(\varphi)\right]+\left(\frac{k}{w(1-k^{2})}-kw\right)\left[\mathrm{sn}\,\varphi_{t}-\mathrm{sn}\,\varphi\right]\right]\\ \frac{1}{2}\left[\left(w-\frac{1}{w\left(1-k^{2}\right)}\right)\left[\mathrm{E}(\varphi_{t})-\mathrm{E}(\varphi)\right]-\left(\frac{k}{w\left(1-k^{2}\right)}+kw\right)\left[\mathrm{sn}\,\varphi_{t}-\mathrm{sn}\,\varphi\right]\right]\\ s_{1}\ln\left[(\mathrm{dn}\,\varphi_{t}-k\mathrm{cn}\,\varphi_{t}).w\right] \end{array}\right),\label{eq:2.13}$$ where $w=\frac{1}{\mathrm{dn}\varphi-k\mathrm{cn}\varphi}$, $s_{1}=\mathrm{sgn}\left(\cos\frac{\gamma}{2}\right)$ and $\varphi_{t}=\varphi+t$. 
- Case 2 : $\lambda=(\psi,k)\in C_{2}$ $$\begin{aligned} x_{t} & = & \frac{1}{2}\left(\frac{1}{w(1-k^{2})}-w\right)\left[\mathrm{E}(\psi_{t})-\mathrm{E}(\psi)-k^{\prime2}\left(\psi_{t}-\psi\right)\right]\nonumber \\ & + & \frac{1}{2}\left(kw+\frac{k}{w(1-k^{2})}\right)\left[\mathrm{sn}\,\psi_{t}-\mathrm{sn}\,\psi\right],\nonumber \\ y_{t} & = & -\frac{s_{2}}{2}\left(\frac{1}{w(1-k^{2})}+w\right)\left[\mathrm{E}(\psi_{t})-\mathrm{E}(\psi)-k^{\prime2}(\psi_{t}-\psi)\right]\nonumber \\ & + & \frac{s_{2}}{2}\left(kw-\frac{k}{w(1-k^{2})}\right)\left[\mathrm{sn}\,\psi_{t}-\mathrm{sn}\,\psi\right],\nonumber \\ z_{t} & = & s_{2}\ln\left[\left(\mathrm{dn}\,\psi_{t}-k\mathrm{cn}\,\psi_{t}\right).w\right],\label{eq:2.14}\end{aligned}$$ where $\psi=\frac{\varphi}{k}$, $\quad\psi_{t}=\frac{\varphi_{t}}{k}=\psi+\frac{t}{k}$ and $w=\frac{1}{\mathrm{dn}\,\psi-k\mathrm{cn}\,\psi}$, $s_{2}=\mathrm{sgn}\, c$, $k^{\prime}=\sqrt{1-k^{2}}$. - Case 3 : $\lambda=(\varphi,k)\in C_{3}$ $$\left(\begin{array}{c} x_{t}\\ y_{t}\\ z_{t} \end{array}\right)=\left(\begin{array}{c} \frac{s_{1}}{2}\left[\frac{1}{w}\left(\varphi_{t}-\varphi\right)+w\left(\tanh\varphi_{t}-\tanh\varphi\right)\right]\\ \frac{s_{2}}{2}\left[\frac{1}{w}\left(\varphi_{t}-\varphi\right)-w\left(\tanh\varphi_{t}-\tanh\varphi\right)\right]\\ -s_{1}s_{2}\ln[w\,\textrm{sech}\,\varphi_{t}] \end{array}\right),\label{eq:2.15}$$ where $w=\cosh\varphi$. - Case 4 : $\lambda=(\varphi,k)\in C_{4}$ $$\left(\begin{array}{c} x\\ y\\ z \end{array}\right)=\left(\begin{array}{c} \mathrm{sgn}\left(\cos\frac{\gamma}{2}\right)t\\ 0\\ 0 \end{array}\right).\label{eq:2.16}$$ - Case 5 : $\lambda=(\varphi,k)\in C_{5}$ $$\left(\begin{array}{c} x\\ y\\ z \end{array}\right)=\left(\begin{array}{c} 0\\ 0\\ \mathrm{sgn}\left(\sin\frac{\gamma}{2}\right)t \end{array}\right).\label{eq:2.17}$$ The phase portrait of the pendulum admits a discrete group of symmetries $G=\{Id,\varepsilon^{1},\ldots,\varepsilon^{7}\}$. 
The symmetries $\varepsilon^{i}$ are reflections and translations with respect to the coordinate axes $(\gamma,c)$. In the phase portrait of the standard pendulum they are given as: $$\begin{alignedat}{1}\varepsilon^{1}:(\gamma,c) & \rightarrow(\gamma,-c),\\ \varepsilon^{2}:(\gamma,c) & \rightarrow(-\gamma,c),\\ \varepsilon^{3}:(\gamma,c) & \rightarrow(-\gamma,-c),\\ \varepsilon^{4}:(\gamma,c) & \rightarrow(\gamma+2\pi,c),\\ \varepsilon^{5}:(\gamma,c) & \rightarrow(\gamma+2\pi,-c),\\ \varepsilon^{6}:(\gamma,c) & \rightarrow(-\gamma+2\pi,c),\\ \varepsilon^{7}:(\gamma,c) & \rightarrow(-\gamma+2\pi,-c). \end{alignedat} \label{eq:symm}$$ According to Proposition 6.3 [@Extremal_Pseudo_Euclid], the action of reflections on endpoints of extremal trajectories can be defined as $\varepsilon^{i}:q\mapsto q^{i}$, where $q=(x,y,z)\in M,\quad q^{i}=(x^{i},y^{i},z^{i})\in M$ and $$\begin{aligned} (x^{1},y^{1},z^{1}) & =(x\cosh z-y\sinh z,\, x\sinh z-y\cosh z,\, z),\nonumber \\ (x^{2},y^{2},z^{2}) & =(x\cosh z-y\sinh z,\,-x\sinh z+y\cosh z,\,-z),\nonumber \\ (x^{3},y^{3},z^{3}) & =(x,\,-y,\,-z),\nonumber \\ (x^{4},y^{4},z^{4}) & =(-x,\, y,\,-z),\label{eq:symm_M}\\ (x^{5},y^{5},z^{5}) & =(-x\cosh z+y\sinh z,\, x\sinh z-y\cosh z,\,-z),\nonumber \\ (x^{6},y^{6},z^{6}) & =(-x\cosh z+y\sinh z,\,-x\sinh z+y\cosh z,\, z),\nonumber \\ (x^{7},y^{7},z^{7}) & =(-x,\,-y,\, z).\nonumber \end{aligned}$$ These symmetries are exploited to state the general conditions on Maxwell strata in terms of the functions $z_{t}$ and $R_{i}(q)$ given as: $$R_{1}=y\cosh\frac{z}{2}-x\sinh\frac{z}{2},\quad R_{2}=x\cosh\frac{z}{2}-y\sinh\frac{z}{2}.\label{eq:2.18}$$ We define the Maxwell sets $\mathrm{MAX}^{i},\quad i=1,\ldots,7$, resulting from the reflections $\varepsilon^{i}$ of the extremals in the preimage of the exponential mapping $N$ as: $$\mathrm{MAX}^{i}=\left\{ 
\nu=(\lambda,t)\in N=C\times\mathbb{R}^{+}\quad|\quad\lambda\neq\lambda^{i},\quad\mathrm{Exp}(\lambda,t)=\mathrm{Exp}(\lambda^{i},t)\right\} ,$$ where $\lambda^{i}=\varepsilon^{i}(\lambda).$ The corresponding Maxwell strata in the image of the exponential mapping are defined as: $$\mathrm{Max}^{i}=\mathrm{Exp}(\mathrm{MAX}^{i})\subset M.$$ In Proposition 3.7 of [@Max_Conj_SH2] we proved that the first Maxwell points corresponding to the reflection symmetries of the vertical subsystem lie on the plane $z=0$ and the corresponding Maxwell time $t_{1}^{\mathrm{Max}}(\lambda)$ is given as: $$\begin{aligned} \lambda\in C_{1} & \implies & t_{1}^{\mathrm{Max}}(\lambda)=4K(k),\label{eq:2.20}\\ \lambda\in C_{2} & \implies & t_{1}^{\mathrm{Max}}(\lambda)=4kK(k),\label{eq:2.23}\\ \lambda\in C_{3}\cup C_{4}\cup C_{5} & \implies & t_{1}^{\mathrm{Max}}(\lambda)=+\infty.\label{eq:2.24}\end{aligned}$$ Similarly, we proved that the first conjugate time $t_{1}^{\mathrm{conj}}(\lambda)$ is bounded as follows (Theorems 4.1–4.3 [@Max_Conj_SH2]): $$\begin{aligned} \lambda\in C_{1} & \implies & 4K(k)\leq t_{1}^{\mathrm{conj}}(\lambda)\leq2p_{1}^{1}(k),\label{eq:2.25}\\ \lambda\in C_{2} & \implies & 4kK(k)\leq t_{1}^{\mathrm{conj}}(\lambda)\leq2k\, p_{1}^{1}(k),\label{eq:2.26}\\ \lambda\in C_{4} & \implies & t_{1}^{\mathrm{conj}}(\lambda)=2\pi,\label{eq:2.27}\\ \lambda\in C_{3}\cup C_{5} & \implies & t_{1}^{\mathrm{conj}}(\lambda)=+\infty.\label{eq:2.28}\end{aligned}$$ where $p_{1}^{1}(k)$ is the first positive root of the function $f_{1}(p)=\mathrm{cn}p\,\mathrm{E}(p)-\mathrm{sn}p\,\mathrm{dn}p$, which is bounded as $p_{1}^{1}(k)\in(2K(k),3K(k))$. 
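The first Maxwell times (\[eq:2.20\])–(\[eq:2.23\]) are elementary to evaluate numerically. A small sketch (illustration only; SciPy's `ellipk` takes the parameter $m=k^{2}$) of their limiting behaviour as $k\rightarrow0$ and $k\rightarrow1$:

```python
# Numerical sketch (illustration only; scipy.special.ellipk takes m = k^2):
# the first Maxwell times 4K(k) (on C1) and 4kK(k) (on C2).
import math
from scipy.special import ellipk

def t_max_C1(k):
    return 4 * ellipk(k**2)        # lambda in C1

def t_max_C2(k):
    return 4 * k * ellipk(k**2)    # lambda in C2

print(abs(t_max_C1(1e-8) - 2 * math.pi) < 1e-6)  # True: 4K(k) -> 2*pi as k -> 0
print(t_max_C1(0.999999) > 25)                   # True: 4K(k) -> +inf as k -> 1
print(t_max_C2(1e-8) < 1e-6)                     # True: 4kK(k) -> 0 as k -> 0
```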
Note that we defined: $$\begin{aligned} \varphi_{t}=\tau+p,\quad\varphi=\tau-p\implies\tau & = & \frac{1}{2}\left(\varphi_{t}+\varphi\right),\, p=\frac{t}{2}\textrm{ when }\nu=(\lambda,t)\in N_{1}\cup N_{3},\label{eq:2.29}\\ \psi_{t}=\frac{\varphi_{t}}{k}=\tau+p,\quad\psi=\frac{\varphi}{k}=\tau-p\implies\tau & = & \frac{1}{2k}\left(\varphi_{t}+\varphi\right),\, p=\frac{t}{2k}\textrm{ when }\nu=(\lambda,t)\in N_{2}.\label{eq:2.30}\end{aligned}$$ Here and below $N_{i}=C_{i}\times\mathbb{R}_{+}$. Upper Bound on Cut Time ======================= In this section we describe the basic properties of the upper bound on cut time obtained in [@Max_Conj_SH2]. Define the following function $\mathbf{t}:C\rightarrow(0,+\infty]$, $$\mathbf{t}(\lambda)=\min\left(t_{1}^{\mathrm{Max}}(\lambda),t_{1}^{\mathrm{conj}}(\lambda)\right),\quad\lambda\in C.$$ Equalities (\[eq:2.20\])–(\[eq:2.28\]) yield the explicit representation of this function: $$\begin{aligned} \lambda\in C_{1} & \implies & \mathbf{t}(\lambda)=4K(k),\label{eq:ttC1}\\ \lambda\in C_{2} & \implies & \mathbf{t}(\lambda)=4kK(k),\label{eq:ttC2}\\ \lambda\in C_{4} & \implies & \mathbf{t}(\lambda)=2\pi,\label{eq:ttC4}\\ \lambda\in C_{3}\cup C_{5} & \implies & \mathbf{t}(\lambda)=+\infty.\label{eq:ttC35}\end{aligned}$$ In [@Max_Conj_SH2] we proved the upper bound: $$t_{\mathrm{cut}}(\lambda)\leq\mathbf{t}(\lambda),\quad\lambda\in C.\label{eq:tCutbound}$$ We now prove that inequality (\[eq:tCutbound\]) is in fact an equality (see Theorem \[thm:cut\_time\_exact\]). The general scheme of the proof is as follows [@cut_sre1], [@cut_engel]: 1. The exponential mapping $\mathrm{Exp}:N=C\times\mathbb{R}_{+}\rightarrow M$ parametrizes all optimal geodesics, but also non-optimal ones: in particular, all geodesics $\mathrm{Exp}(\lambda,t)$ with $t>\mathbf{t}(\lambda)$ are not optimal. 2. 
We reduce the domain of the exponential mapping so that it does not include these a priori non-optimal geodesics: $$\widehat{N}=\left\{ (\lambda,t)\in N\quad\mid\quad t\leq\mathbf{t}(\lambda)\right\} .$$ We also reduce the range of the exponential mapping so that it does not contain the initial point for which the optimal geodesic is trivial: $$\widehat{M}=M\backslash\left\{ q_{0}\right\} .$$ Then $\mathrm{Exp}:\widehat{N}\rightarrow\widehat{M}$ is surjective, but not injective, due to Maxwell points. 3. We exclude Maxwell points in the image of $\mathrm{Exp}$: $$\widetilde{M}=\left\{ q\in M\quad\mid\quad\varepsilon^{i}(q)\neq q\right\} ,$$ and reduce respectively the preimage of $\mathrm{Exp}$: $$\widetilde{N}=\mathrm{Exp}^{-1}\left(\widetilde{M}\right).$$ The mapping $\mathrm{Exp}:\widetilde{N}\rightarrow\widetilde{M}$ is injective. Moreover, it is non-degenerate since $t_{1}^{\mathrm{conj}}(\lambda)\geq\mathbf{t}(\lambda)$. 4. We take connected components in preimage and image of $\mathrm{Exp}:$ $$\widetilde{N}=\cup D_{i},\qquad\widetilde{M}=\cup M_{i}.$$ Each of the mappings $\mathrm{Exp}:D_{i}\rightarrow M_{i}$ is non-degenerate and proper. Moreover, all $D_{i}$ and $M_{i}$ are smooth 3-dimensional manifolds, connected and simply connected. By Hadamard’s global diffeomorphism theorem [@diffeo], each $\mathrm{Exp}:D_{i}\rightarrow M_{i}$ is a diffeomorphism. Thus $\mathrm{Exp}:\widetilde{N}\rightarrow\widetilde{M}$ is a diffeomorphism as well. 5. 
Further, we consider the action of the exponential mapping on the boundary of the 3-dimensional diffeomorphic domains: $$\begin{aligned} \mathrm{Exp}:N^{\prime} & \rightarrow & M^{\prime},\quad N^{\prime}=\widehat{N}\backslash\widetilde{N},\qquad M^{\prime}=\widehat{M}\backslash\widetilde{M}.\end{aligned}$$ We construct a stratification in the preimage and the image of $\mathrm{Exp}:$ $$\begin{aligned} N^{\prime} & = & \cup N_{i}^{\prime},\quad M^{\prime}=\cup M_{i}^{\prime},\\ \textrm{dim }N_{i}^{\prime},\textrm{\,\ dim }M_{i}^{\prime} & \in & \left\{ 0,1,2\right\} ,\end{aligned}$$ where all $N_{i}^{\prime}$ are disjoint, while some $M_{i}^{\prime}$ coincide with others. Further, we prove that all $\mathrm{Exp}:N_{i}^{\prime}\rightarrow M_{i}^{\prime}$ are diffeomorphisms by the same argument. 6. On the basis of the global diffeomorphic structure of the exponential mapping thus described, we get the following results: $$\begin{aligned} t_{\mathrm{cut}}(\lambda) & = & \mathbf{t}(\lambda),\quad\lambda\in C,\\ \mathrm{Max} & = & \cup\left\{ M_{i}^{\prime}\quad\mid\quad\exists\, j\neq i\textrm{ such that }M_{j}^{\prime}=M_{i}^{\prime}\right\} ,\\ \mathrm{Cut} & = & \mathrm{cl}(\mathrm{Max})\backslash\left\{ q_{0}\right\} ,\\ \mathrm{Cut}\cap\mathrm{Conj} & = & \partial(\mathrm{Max})\backslash\left\{ q_{0}\right\} .\end{aligned}$$ We show that the optimal synthesis is double valued on the Maxwell set $\mathrm{Max}$, and is one valued on $\widehat{M}\backslash\mathrm{Max}$.\ The central notion of our approach is the stratification in the preimage and in the image of $\mathrm{Exp}:$ $$\begin{aligned} {2} \widehat{N} & =\left(\cup D_{i}\right)\cup\left(\cup N_{i}^{\prime}\right),\\ \widehat{M} & =\left(\cup M_{i}\right)\cup\left(\cup M_{i}^{\prime}\right),\\ \textrm{dim}(D_{i}) & =\textrm{dim}(M_{i})=3,\\ \textrm{dim}(N_{i}^{\prime}),\textrm{\,\ dim}(M_{i}^{\prime}) & \in\left\{ 0,1,2\right\} , & {}\end{aligned}$$ such that all the corresponding strata are diffeomorphic 
via the exponential mapping, i.e., $\mathrm{Exp}:D_{i}\rightarrow M_{i}$ and $\mathrm{Exp}:N_{i}^{\prime}\rightarrow M_{i}^{\prime}$ are diffeomorphisms. It is well known [@cut_engel], [@diffeo] that for any smooth manifolds $X$ and $Y$ of equal dimensions, a smooth mapping $f:X\rightarrow Y$ is a diffeomorphism if $f$, $X$ and $Y$ satisfy the following conditions **P1** – **P4**: **P1** - $X$ is connected, **P2** - $Y$ is connected and simply connected, **P3** - $f$ is non-degenerate, **P4** - $f$ is proper, i.e., for any compact set $K\subset Y$ the inverse image $f^{-1}(K)\subset X$ is also compact. We now consider the invariance properties of the function $\mathbf{t}$ with respect to the reflections $\varepsilon^{i}\in G$ and the vertical part of the Hamiltonian vector field: $$\overrightarrow{H}_{\nu}=c\frac{\partial}{\partial\gamma}-\sin\gamma\frac{\partial}{\partial c}\in\mathrm{Vec}(C).$$ \[prop:tt\_invar\] $\qquad$\ $\quad$The function $\mathbf{t}$ is invariant w.r.t. the reflections $\varepsilon^{i}\in G$ and the flow of $\overrightarrow{H}_{\nu}$: $$\mathbf{t}\circ\varepsilon^{i}(\lambda)=\mathbf{t}\circ e^{t\overrightarrow{H}_{\nu}}(\lambda)=\mathbf{t}(\lambda),\quad\lambda\in C,\quad\varepsilon^{i}\in G,\quad t\in\mathbb{R}.$$ $\quad$The function $\mathbf{t}:C\rightarrow(0,+\infty]$ is in fact a function $\mathbf{t}(E)$ of the energy $E=\frac{c^{2}}{2}-\cos\gamma$ of the pendulum. 
The reflections $\varepsilon^{i}\in G$ (\[eq:symm\]) and the flow of $\overrightarrow{H}_{\nu}$ preserve the subsets $C_{i}$ of the cylinder $C$ and on each of these subsets, the function $\mathbf{t}$ is expressed as a function of the energy $E$ of the pendulum since we have equalities (\[eq:ttC1\])–(\[eq:ttC35\]) and, $$\begin{aligned} \lambda\in C_{1} & \implies & k=\sqrt{\frac{E+1}{2}},\\ \lambda\in C_{2} & \implies & k=\sqrt{\frac{2}{E+1}},\\ \lambda\in C_{4} & \implies & E=-1,\\ \lambda\in C_{3}\cup C_{5} & \implies & E=1.\end{aligned}$$ This proves item (2) of this proposition. Item (1) follows since the energy $E$ is invariant w.r.t. $\varepsilon^{i}$ and $\overrightarrow{H}_{\nu}$. $\square$ A plot of $\mathbf{t}(E)$ is shown in Figure \[fig:tt\]. Regularity properties of the function $\mathbf{t}(E)$ visible in its plot are proved in the following statement. ![\[fig:tt\]Plot of the function $\mathbf{t}(E)$](tcut_SH2) \[prop:tt-reg\]$\qquad$\ $\quad$The function $\mathbf{t}(\lambda)$ is smooth on $C_{1}\cup C_{2}$. $\quad$$\lim_{E\rightarrow-1}\mathbf{t}(E)=2\pi,\quad\lim_{E\rightarrow1}\mathbf{t}(E)=+\infty,\quad\lim_{E\rightarrow+\infty}\mathbf{t}(E)=0$. $\quad$The function $\mathbf{t}:C\rightarrow(0,+\infty]$ is continuous. Item (1) follows from (\[eq:ttC1\]) and (\[eq:ttC2\]). The limits in item (2) follow from (\[eq:ttC1\]) and (\[eq:ttC2\]), and from the limits $\lim_{k\rightarrow+0}K(k)=\frac{\pi}{2},\quad\lim_{k\rightarrow1-}K(k)=+\infty$. Then continuity of $\mathbf{t}(\lambda)$ follows on $C_{4}$: $$\lambda\rightarrow\bar{\lambda}\in C_{4}\implies E(\lambda)\rightarrow E(\bar{\lambda})=-1\implies\mathbf{t}(\lambda)\rightarrow2\pi=\mathbf{t}(\bar{\lambda}).$$ Continuity on $C_{3}\cup C_{5}$ follows since $$\lambda\rightarrow\bar{\lambda}\in C_{3}\cup C_{5}\implies E(\lambda)\rightarrow E(\bar{\lambda})=1\implies\mathbf{t}(\lambda)\rightarrow+\infty=\mathbf{t}(\bar{\lambda}).$$ Thus $\mathbf{t}(\lambda)$ is continuous on $C$ and item (3) is proved. 
$\square$ Decompositions in the Image of the Exponential Mapping ------------------------------------------------------ Consider the set $\widehat{M}=M\backslash\{q_{0}\}$. From Filippov’s theorem and Pontryagin’s Maximum Principle [@agrachev_sachkov], we already know that any point $q\in\widehat{M}$ can be joined with $q_{0}$ by an optimal trajectory $q(s)=\mathrm{Exp}(\lambda,s)$ such that $q(t)=q,\quad(\lambda,t)\in N$. Then $\mathrm{Exp}(N)\supset\widehat{M}$. However, Maxwell points $q\in\widehat{M}$ have a non-unique preimage under the exponential mapping. Hence the mapping $\mathrm{Exp}:N\rightarrow\widehat{M}$ is surjective, but not injective. In order to separate Maxwell points, we consider the set that contains all such points: $$M^{\prime}=\left\{ q\in M\quad\mid\quad z=0,\quad x^{2}+y^{2}\neq0\right\} ,$$ and its complement $\widetilde{M}$ in $\widehat{M}$: $$\begin{aligned} \widetilde{M} & =\left\{ q\in M\quad\mid\quad z\neq0\right\} ,\\ \widehat{M} & =\widetilde{M}\sqcup M^{\prime},\end{aligned}$$ where $\sqcup$ denotes the union of disjoint sets. ### Decompositions in $\widetilde{M}$ The plane $z=0$ divides the domain $\widetilde{M}$ into two half-spaces: $$\begin{aligned} \widetilde{M} & = & M_{1}\sqcup M_{2},\nonumber \\ M_{1} & = & \left\{ q\in M\quad\mid\quad z>0\right\} ,\label{eq:M1}\\ M_{2} & = & \left\{ q\in M\quad\mid\quad z<0\right\} .\label{eq:M2}\end{aligned}$$ Note that the decomposition of $M$ used in the description of the cut time on $\mathrm{SH}(2)$ is simpler than the analogous decompositions of $M$ in the related problems on $\mathrm{SE(2)}$ [@cut_sre1] and on the Engel group [@cut_engel]. \[prop:Ref\_Mi\]Reflections $\varepsilon^{j}\in G$ permute the domains $M_{1}$ and $M_{2}$ according to Table \[tab:1\]. 
$\mathrm{Id}$,$\varepsilon^{1}$,$\varepsilon^{6}$,$\varepsilon^{7}$ $\varepsilon^{2}$,$\varepsilon^{3}$,$\varepsilon^{4}$,$\varepsilon^{5}$ --------------------------------------------------------------------- ------------------------------------------------------------------------- $M_{1}$ $M_{2}$ $M_{2}$ $M_{1}$ : \[tab:1\]Action of $\varepsilon^{i}$ on $M_{j}$ Follows immediately from the definitions of the actions of reflections (\[eq:symm\_M\]).$\square$ \[prop:Mi-topol\]The domains $M_{1},M_{2}$ are open, connected and simply connected. From the definition of the sets $M_{1}$, $M_{2}$ (\[eq:M1\])–(\[eq:M2\]) it follows that the domains $M_{i}$ are homeomorphic to $\mathbb{R}^{3}$ and therefore they are open, connected and simply connected. $\square$ Decomposition in the Preimage of the Exponential Mapping -------------------------------------------------------- We now consider the following set $\widehat{N}\subset N$ corresponding to all potentially optimal geodesics: $$\widehat{N}=\left\{ (\lambda,t)\in N\quad\mid\quad t\leq\mathbf{t}(\lambda)\right\} .$$ By existence of the optimal geodesics, $\mathrm{Exp}(\widehat{N})\supset\widehat{M}$. 
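In passing, the reflection action (\[eq:symm\_M\]) and the permutation pattern of Table \[tab:1\] admit a direct numerical check. A sketch (illustration only; the sample point is arbitrary): each $\varepsilon^{i}$ is an involution of $M$, and $\varepsilon^{1},\varepsilon^{6},\varepsilon^{7}$ preserve the sign of $z$ while $\varepsilon^{2},\ldots,\varepsilon^{5}$ reverse it:

```python
# Numerical sketch (illustration only): the action (symm_M) of eps^1..eps^7 on
# endpoints q = (x, y, z). Each map is an involution of M; eps^1, eps^6, eps^7
# preserve the sign of z (keep M_1), eps^2..eps^5 reverse it (send M_1 to M_2).
import math

def reflect(i, q):
    x, y, z = q
    ch, sh = math.cosh(z), math.sinh(z)
    return {
        1: (x * ch - y * sh,  x * sh - y * ch,  z),
        2: (x * ch - y * sh, -x * sh + y * ch, -z),
        3: (x, -y, -z),
        4: (-x, y, -z),
        5: (-x * ch + y * sh,  x * sh - y * ch, -z),
        6: (-x * ch + y * sh, -x * sh + y * ch,  z),
        7: (-x, -y,  z),
    }[i]

q = (0.3, -1.2, 0.8)   # a sample point with z > 0, i.e. q in M_1
for i in range(1, 8):
    qi = reflect(i, q)
    assert (qi[2] > 0) if i in (1, 6, 7) else (qi[2] < 0)   # Table 1 pattern
    qii = reflect(i, qi)
    assert all(abs(a - b) < 1e-12 for a, b in zip(qii, q))  # involution
print(True)
```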
In order to separate the Maxwell points in the preimage of the exponential mapping, we further introduce the sets: $$\begin{aligned} \widehat{N} & =\widetilde{N}\sqcup N^{\prime},\\ N^{\prime} & =\left\{ (\lambda,t)\in\cup{}_{i=1}^{3}\widehat{N}_{i}\quad\mid\quad t=\mathbf{t}(\lambda)\textrm{ or }\sin\frac{\gamma_{t/2}}{2}=0\right\} \cup\widehat{N}_{4},\\ \widehat{N}_{i} & =N_{i}\cap\widehat{N},\quad i=1,\ldots,4,\\ \widetilde{N} & =\left\{ (\lambda,t)\in\cup_{i=1}^{3}N_{i}\quad\mid\quad t<\mathbf{t}(\lambda),\quad\sin\frac{\gamma_{t/2}}{2}\neq0\right\} \cup N_{5}.\end{aligned}$$ ### Decomposition in $\widetilde{N}$ We now introduce the connected components $D_{i}$ of the set $\widetilde{N}$: $$\begin{aligned} \widetilde{N} & = & D_{1}\sqcup D_{2},\\ D_{1} & = & \left\{ (\lambda,t)\in\cup_{i=1}^{3}N_{i}\quad\mid\quad t<\mathbf{t}(\lambda),\quad\sin\left(\frac{\gamma_{t/2}}{2}\right)>0\right\} ,\\ D_{2} & = & \left\{ (\lambda,t)\in\cup_{i=1}^{3}N_{i}\quad\mid\quad t<\mathbf{t}(\lambda),\quad\sin\left(\frac{\gamma_{t/2}}{2}\right)<0\right\} ,\end{aligned}$$ where $D_{i}$ are described explicitly in coordinates in Table \[tab:2\] (in the sets $N_{1},N_{2},N_{3}$). Projections of the sets $D_{i}$ to the initial phase cylinder are shown in Figure \[fig:Projs\_Di\]. We note that for $t<\mathbf{t}(\lambda)=t_{1}^{\mathrm{Max}}(\lambda)$ the values of $p$ are given by formulas (\[eq:2.29\])–(\[eq:2.30\]), and the values of $t_{1}^{\mathrm{Max}}(\lambda)$ are given in (\[eq:2.20\])–(\[eq:2.24\]). The values of $\tau$ in Table \[tab:2\] were calculated using the definition of elliptic coordinates [@Extremal_Pseudo_Euclid], formulas for Jacobi elliptic functions [@Table_Int], and the values of $\gamma$ and $c$ from Figure \[fig:Decomposition\]. Note that the enumeration of the sets $D_{i}$ is chosen to correspond to the sets $M_{i}$ in the subsequent analysis. 
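The defining condition $\sin(\gamma_{t/2}/2)\gtrless0$ can be evaluated by integrating the vertical subsystem (the pendulum) numerically. A sketch (illustration only; the sample initial point is hypothetical and chosen so that $t<\mathbf{t}(\lambda)$), which also checks conservation of the pendulum energy $E=\frac{c^{2}}{2}-\cos\gamma$ along the flow:

```python
# Numerical sketch (illustration only; sample point is hypothetical): classify
# (lambda, t) into D_1 or D_2 by the sign of sin(gamma_{t/2}/2), where gamma_{t/2}
# comes from integrating the pendulum gamma' = c, c' = -sin(gamma) up to time t/2.
import math
from scipy.integrate import solve_ivp

def pendulum_at(gamma0, c0, s):
    sol = solve_ivp(lambda _s, y: [y[1], -math.sin(y[0])],
                    (0.0, s), [gamma0, c0], rtol=1e-10, atol=1e-12)
    return sol.y[0][-1], sol.y[1][-1]

gamma0, c0, t = 1.0, 0.2, 1.5   # oscillating pendulum (lambda in C1), t < t(lambda)
g_half, c_half = pendulum_at(gamma0, c0, t / 2)
# the energy E = c^2/2 - cos(gamma) is a first integral of the pendulum
E0 = c0**2 / 2 - math.cos(gamma0)
E1 = c_half**2 / 2 - math.cos(g_half)
print(abs(E0 - E1) < 1e-8)        # True
print(math.sin(g_half / 2) > 0)   # True: this (lambda, t) lies in D_1
```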
![\[fig:Projs\_Di\]Projections of $D_{i}$ to Phase Cylinder $C$ of the Pendulum at $t=0$](D_i)

  $\lambda$                     $p$             $\tau$ in $D_{1}$   $\tau$ in $D_{2}$
  ----------------------------- --------------- ------------------- -------------------
  $C_{1}^{0}$                   $(0,2K)$        $(0,2K)$            $(2K,4K)$
  $C_{1}^{1}$                   $(0,2K)$        $(2K,4K)$           $(0,2K)$
  $C_{2}^{+}$                   $(0,2K)$        $(0,2K)$            $(2K,4K)$
  $C_{2}^{-}$                   $(0,2K)$        $(-2K,0)$           $(0,2K)$
  $C_{3}^{0+}\cup C_{3}^{1-}$   $(0,+\infty)$   $(0,+\infty)$       $(-\infty,0)$
  $C_{3}^{0-}\cup C_{3}^{1+}$   $(0,+\infty)$   $(-\infty,0)$       $(0,+\infty)$

  : \[tab:2\]Decomposition $\widetilde{N}=\cup_{i=1}^{2}D_{i}$

We now establish an important fact about the domains $D_{i}$ that is vital in proving that the exponential mapping transforms $D_{i}$ diffeomorphically. \[prop:Ref\_Di\]Reflections $\varepsilon^{j}\in G$ permute the domains $D_{1}$ and $D_{2}$ as shown in Table \[tab:3\]. 
$\mathrm{Id}$,$\varepsilon^{1}$,$\varepsilon^{6}$,$\varepsilon^{7}$ $\varepsilon^{2}$,$\varepsilon^{3}$,$\varepsilon^{4}$,$\varepsilon^{5}$ --------------------------------------------------------------------- ------------------------------------------------------------------------- $D_{1}$ $D_{2}$ $D_{2}$ $D_{1}$ : \[tab:3\]Action of $\varepsilon^{i}$ on $D_{j}\subset\widetilde{N}$ In [@Extremal_Pseudo_Euclid] we defined the action of reflections $\varepsilon^{j}:N\rightarrow N$ so that it satisfies the following properties: $$\begin{aligned} \varepsilon^{j}(\lambda,t) & = & \left(\varepsilon^{j}\circ e^{t\overrightarrow{H}_{\nu}}(\lambda),t\right),\quad\textrm{if \quad}\varepsilon_{*}^{j}\overrightarrow{H}_{\nu}=-\overrightarrow{H}_{\nu},\\ \varepsilon^{j}(\lambda,t) & = & \left(\varepsilon^{j}(\lambda),t\right),\quad\textrm{if \quad}\varepsilon_{*}^{j}\overrightarrow{H}_{\nu}=\overrightarrow{H}_{\nu},\end{aligned}$$ where $\varepsilon_{*}^{j}\left(\overrightarrow{H}_{\nu}\right)$ is the pushforward of $\overrightarrow{H}_{\nu}$ under the reflection $\varepsilon^{j}$. Recall that $\varepsilon_{*}^{j}\overrightarrow{H}_{\nu}=-\overrightarrow{H}_{\nu},\textrm{ for }j=1,2,5,6$ because these symmetries reverse the direction of time, and $\varepsilon_{*}^{j}\overrightarrow{H}_{\nu}=\overrightarrow{H}_{\nu},\textrm{ for }j=3,4,7$ because these symmetries preserve the direction of time [@Extremal_Pseudo_Euclid]. Hence, it is sufficient to prove the case $\varepsilon^{2}(D_{1})=D_{2}$, since the proof of all other cases $\varepsilon^{j}(D_{i})=D_{k}$ is similar. In order to prove the inclusion $\varepsilon^{2}(D_{1})\subset D_{2}$ we take any $(\lambda,t)=(\gamma,c,t)\in D_{1}$ and prove that $$\varepsilon^{2}:(\lambda,t)\mapsto(\lambda^{2},t)=(\gamma^{2},c^{2},t)\in D_{2}.$$ By Proposition \[prop:tt\_invar\], $$\mathbf{t}(\lambda^{2})=\mathbf{t}\circ\varepsilon^{2}\circ e^{t\overrightarrow{H}_{\nu}}(\lambda)=\mathbf{t}(\lambda).$$ Thus $t<\mathbf{t}(\lambda^{2})$. 
Moreover, at instant $t/2$ the trajectories of the vertical subsystem are given as: $$\begin{aligned} \lambda_{t/2} & = & (\gamma_{t/2},c_{t/2})=e^{\overrightarrow{H}_{\nu}t/2}(\lambda),\\ \lambda_{t/2}^{2} & = & \left(\gamma_{t/2}^{2},c_{t/2}^{2}\right)=e^{\overrightarrow{H}_{\nu}t/2}\left(\lambda^{2}\right).\end{aligned}$$ Since $\lambda^{2}=\varepsilon^{2}\circ e^{\overrightarrow{H}_{\nu}t}(\lambda)$, we have $$\begin{aligned} \lambda_{t/2}^{2} & = & e^{\overrightarrow{H}_{\nu}t/2}\circ\varepsilon^{2}\circ e^{\overrightarrow{H}_{\nu}t}(\lambda)=\varepsilon^{2}\circ e^{-\overrightarrow{H}_{\nu}t/2}\circ e^{\overrightarrow{H}_{\nu}t}(\lambda)=\varepsilon^{2}\circ e^{\overrightarrow{H}_{\nu}t/2}(\lambda)=\varepsilon^{2}(\lambda_{t/2}).\label{eq:3.8}\end{aligned}$$ In the proof of (\[eq:3.8\]) we used the fact that for any diffeomorphism $F:M\rightarrow M$ and a vector field $\overrightarrow{V}$ on a manifold $M$, $F_{*}\overrightarrow{V}=-\overrightarrow{V}\Longleftrightarrow F\circ e^{t\overrightarrow{V}}=e^{-t\overrightarrow{V}}\circ F$. Clearly, $\varepsilon^{2}(\lambda_{t/2})=\left(\gamma_{t/2}^{2},c_{t/2}^{2}\right)$, and from formula (6.3) of [@Extremal_Pseudo_Euclid] we have: $$\left(\gamma_{t/2}^{2},c_{t/2}^{2}\right)=\left(-\gamma_{t/2},c_{t/2}\right).$$ Thus $\sin\frac{\gamma_{t/2}^{2}}{2}=\sin\frac{-\gamma_{t/2}}{2}<0$. We proved that $(\lambda^{2},t)\in D_{2}$, thus $\varepsilon^{2}(D_{1})\subset D_{2}$. Similarly, it follows that $\varepsilon^{2}(D_{2})\subset D_{1}$. Since $\varepsilon^{2}\circ\varepsilon^{2}=\mathrm{Id}$, then $\varepsilon^{2}\circ\varepsilon^{2}(D_{1})=D_{1}\implies\varepsilon^{2}(D_{1})=D_{2}$. $\square$ \[prop:Di-topol\]The domains $D_{1},D_{2}\subset\widetilde{N}$ are open and connected. Since $\varepsilon^{2}:N\rightarrow N$ is a diffeomorphism and $\varepsilon^{2}(D_{1})=D_{2}$, it suffices to prove that $D_{1}$ is open and connected. 
Consider a vector field $$P=\frac{t}{2}\left(c\frac{\partial}{\partial\gamma}-\sin\gamma\frac{\partial}{\partial c}\right)\in\mathrm{Vec}(N).$$ The flow $e^{P}$ of this vector field is given as: $$e^{P}(\gamma,c,t)=e^{P}(\lambda,t)=\left(e^{\frac{t}{2}\overrightarrow{H}_{\nu}}(\lambda),t\right)=\left(\gamma_{t/2},c_{t/2},t\right).$$ Thus $e^{P}(D_{1})=\widetilde{D}_{1}$, where $$\widetilde{D}_{1}=\left\{ (\lambda,t)\in N\quad\mid\quad\sin\frac{\gamma}{2}>0,\quad t<\mathbf{t}(\lambda)\right\} .$$ The set $\widetilde{D}_{1}$ is a subgraph of the continuous function $\lambda\mapsto\mathbf{t}(\lambda)$ on the open connected 2-dimensional domain $\left\{ (\gamma,c)\in C\quad\mid\quad\gamma\in(0,2\pi),\quad c\in\mathbb{R}\right\} $, thus $\widetilde{D}_{1}$ is open and connected. Since $D_{1}=e^{-P}(\widetilde{D}_{1})$, the set $D_{1}$ is also open and connected. $\square$ \[prop:Exp Di Mi\]The following inclusions hold: 1. $\mathrm{Exp}(D_{i})\subset M_{i},\quad i=1,2,$ 2. $\mathrm{Exp}(\widetilde{N})\subset\widetilde{M},$ 3. $\mathrm{Exp}(N^{\prime})\subset M^{\prime}.$ $\qquad$ 1. It suffices to prove that $\mathrm{Exp}(D_{1})\subset M_{1}$, in view of the reflections $\varepsilon^{j}$. Notice the decomposition: $$D_{1}=\left(D_{1}\cap N_{1}\right)\sqcup\left(D_{1}\cap N_{2}\right)\sqcup\left(D_{1}\cap N_{3}\right)\sqcup\left(D_{1}\cap N_{5}\right).\label{eq:D1decomp}$$ Let $(\lambda,t)\in D_{1}\cap N_{1}=\left\{ (\lambda,t)\in N_{1}\quad\mid\quad t<\mathbf{t}(\lambda),\quad\sin\frac{\gamma_{t/2}}{2}>0\right\} $; then $p=\frac{t}{2}\in(0,2K(k))$. Further, from formula (5.3) of [@Extremal_Pseudo_Euclid] we have $s_{1}\mathrm{sn}\tau>0$. Now recall formula (3.2) of [@Max_Conj_SH2]: $$\sinh z_{t}=s_{1}\frac{2k\,\mathrm{sn}p\:\mathrm{sn}\tau}{\Delta},\quad\Delta=1-k^{2}\mathrm{sn^{2}}p\,\mathrm{sn^{2}}\tau.\label{eq:3.10}$$ Then we get $\sinh z_{t}>0$, thus $z_{t}>0$, i.e., $\mathrm{Exp}(\lambda,t)\in M_{1}$. We proved that $\mathrm{Exp}(D_{1}\cap N_{1})\subset M_{1}$. 
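Formula (\[eq:3.10\]) can be verified numerically against the Case 1 expression for $z_{t}$ from (\[eq:2.13\]). A sketch (illustration only, with $s_{1}=1$ and sample values of $k,\tau,p$; note that $\mathrm{dn}-k\,\mathrm{cn}>0$ always holds for $k<1$, so the logarithm is well defined):

```python
# Numerical sketch (illustration only; s1 = 1 and k, tau, p are sample values):
# check sinh z_t = s1 * 2k sn(p) sn(tau) / Delta against the Case 1 expression
# z_t = s1 * ln[(dn(phi_t) - k cn(phi_t)) / (dn(phi) - k cn(phi))],
# with phi_t = tau + p, phi = tau - p.
import math
from scipy.special import ellipj

k, tau, p, s1 = 0.8, 0.9, 0.6, 1
m = k**2

def sncndn(u):
    sn, cn, dn, _am = ellipj(u, m)
    return sn, cn, dn

_, cn_t, dn_t = sncndn(tau + p)   # at phi_t
_, cn_0, dn_0 = sncndn(tau - p)   # at phi
z_t = s1 * math.log((dn_t - k * cn_t) / (dn_0 - k * cn_0))

sn_p, sn_tau = sncndn(p)[0], sncndn(tau)[0]
Delta = 1 - m * sn_p**2 * sn_tau**2
rhs = s1 * 2 * k * sn_p * sn_tau / Delta
print(abs(math.sinh(z_t) - rhs) < 1e-10)  # True
```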
All other required inclusions $\mathrm{Exp}(D_{1}\cap N_{j})\subset M_{1},\quad j=2,3,5$, are proved similarly, and the inclusion $\mathrm{Exp}(D_{1})\subset M_{1}$ follows. 2. Since $\widetilde{N}=D_{1}\cup D_{2}$ and $\widetilde{M}=M_{1}\cup M_{2}$, the inclusion $\mathrm{Exp}(\widetilde{N})\subset\widetilde{M}$ follows from item (1). 3. We have $N^{\prime}=\left(N^{\prime}\cap N_{1}\right)\sqcup\left(N^{\prime}\cap N_{2}\right)\sqcup\left(N^{\prime}\cap N_{3}\right)\sqcup N_{4}$.\ Let $(\lambda,t)\in N^{\prime}\cap N_{1}=\left\{ (\lambda,t)\in\widehat{N}_{1}\quad\mid\quad t=\mathbf{t}(\lambda)\textrm{ or }\sin\frac{\gamma_{t/2}}{2}=0\right\} $; then, similarly to the proof of item (1), we get $p=2K(k)$ or $\mathrm{sn}\tau=0$, thus $z_{t}=0$ by (\[eq:3.10\]). From (3.6) of [@Max_Conj_SH2] we get $R_{2}(q_{t})=\frac{2s_{1}}{1-k^{2}}\mathrm{dn}\tau\, f_{2}(p)\neq0$, and therefore $x^{2}+y^{2}\neq0$. We proved that $\mathrm{Exp}(N^{\prime}\cap N_{1})\subset M^{\prime}$. It follows similarly that $\mathrm{Exp}(N^{\prime}\cap N_{j})\subset M^{\prime},\quad j=2,3$. Finally, if $(\lambda,t)\in\widehat{N}_{4},$ then $$q_{t}=(x_{t},y_{t},z_{t})=(t,0,0)\in M^{\prime}.$$ Consequently, $\mathrm{Exp}(N^{\prime})\subset M^{\prime}$.$\square$ \[thm:Max\_Conj\]For $\lambda\in\cup_{i=1}^{5}C_{i}$, we have $t_{1}^{\mathrm{conj}}(\lambda)\geq t_{1}^{\mathrm{Max}}(\lambda)$. This follows from equations (\[eq:2.20\])–(\[eq:2.24\]) and (\[eq:2.25\])–(\[eq:2.28\]).$\square$ \[prop:Exp Nondeg\]The restriction $\mathrm{Exp}:\widetilde{N}\rightarrow\widetilde{M}$ is non-degenerate. By Theorem \[thm:Max\_Conj\], $t_{1}^{\mathrm{conj}}(\lambda)\geq t_{1}^{\mathrm{Max}}(\lambda)$, thus $\mathbf{t}(\lambda)\leq t_{1}^{\mathrm{conj}}(\lambda)$. For any $\nu=(\lambda,t)\in\widetilde{N}$ we have $t<\mathbf{t}(\lambda)\leq t_{1}^{\mathrm{conj}}(\lambda)$, therefore the exponential mapping is non-degenerate at every $\nu=(\lambda,t)\in\widetilde{N}$. $\square$ Hence we proved properties **P1, P2** and **P3** for the exponential mapping $\mathrm{Exp}:D_{i}\rightarrow M_{i}$. 
It remains to prove condition **P4** in order to establish that the exponential mapping $\mathrm{Exp}:D_{i}\rightarrow M_{i}$ is indeed a diffeomorphism. Diffeomorphic Properties of the Exponential Mapping --------------------------------------------------- In this subsection we prove that the exponential mapping $\mathrm{Exp}:D_{i}\rightarrow M_{i},\quad i=1,2$, is proper. First we recall an equivalent formulation of the properness property. Let $X$ be a topological space and $\{x_{n}\}\subset X$ a sequence. We write $x_{n}\rightarrow\partial X$ if there is no compact $K\subset X$ such that $x_{n}\in K$ for any $n\in\mathbb{N}$. Let $X,Y$ be topological spaces and $F:X\rightarrow Y$ a continuous mapping. The mapping $F$ is proper iff for any sequence $\{x_{n}\}\subset X$ the following implication holds: $$x_{n}\rightarrow\partial X\implies F(x_{n})\rightarrow\partial Y.$$ Below we apply this properness test to the mapping $\mathrm{Exp}:D_{1}\rightarrow M_{1}$. \[lem:dM1\]Let $\{q_{n}\}\subset M_{1}$. We have $q_{n}\rightarrow\partial M_{1}$ iff there is a subsequence $\{n_{k}\}$ on which one of the following conditions holds: 1. $z\rightarrow0,$ 2. $z\rightarrow+\infty,$ 3. $x\rightarrow\infty,$ 4. $y\rightarrow\infty.$ Any compact set in $M_{1}$ is contained in a compact set $\left\{ q\in M_{1}\quad\mid\quad\varepsilon\leq z\leq\frac{1}{\varepsilon},\quad\left|x\right|\leq\frac{1}{\varepsilon},\quad\left|y\right|\leq\frac{1}{\varepsilon}\right\} $ for some $\varepsilon\in(0,1)$. $\square$ \[lem:dD1\]Let $\{\nu_{n}\}\subset D_{1}$; then $\nu_{n}\rightarrow\partial D_{1}$ iff there is a subsequence $\{n_{k}\}$ on which one of the following conditions holds: 1. $\gamma_{t/2}\rightarrow0,$ 2. $\gamma_{t/2}\rightarrow2\pi,$ 3. $c_{t/2}\rightarrow\infty,$ 4. $t\rightarrow0,$ 5. $t\rightarrow+\infty,$ 6. 
$\mathbf{t}(\lambda)-t\rightarrow0.$ Any compact set in $D_{1}$ is contained in a compact set $$\left\{ \nu\in N\,\mid\,\gamma_{t/2}\in\left[\varepsilon,2\pi-\varepsilon\right],\left|c_{t/2}\right|\leq\frac{1}{\varepsilon},\, t\in[\varepsilon,\frac{1}{\varepsilon}],\,\mathbf{t}(\lambda)-t\geq\varepsilon\right\} ,$$ for some $\varepsilon\in(0,1)$.$\square$ \[prop:Proper\]The mapping $\mathrm{Exp}:D_{i}\rightarrow M_{i},\quad i=1,2$, is proper. In view of the reflections $\varepsilon^{j}$, it suffices to consider the case $\mathrm{Exp}:D_{1}\rightarrow M_{1}$. Let $\left\{ \nu_{n}\right\} \subset D_{1}$ with $\nu_{n}\rightarrow\partial D_{1}$; we have to show that $q_{n}=\mathrm{Exp}(\nu_{n})\rightarrow\partial M_{1}$. Taking into account decomposition (\[eq:D1decomp\]), we can consider the cases $\left\{ \nu_{n}\right\} \subset D_{1}\cap N_{j},\quad j=1,2,3,5$. Let $\left\{ \nu_{n}\right\} \subset D_{1}\cap N_{1}$, $\nu_{n}\rightarrow\partial D_{1}$. We will need the following formulas for the extremals $\lambda_{t}=e^{t\overrightarrow{H}}(\lambda),\quad\lambda\in C_{1}$, obtained in [@Extremal_Pseudo_Euclid] and [@Max_Conj_SH2]: $$\begin{aligned} \sin\frac{\gamma_{t}}{2} & = & s_{1}k\,\mathrm{sn}(\varphi_{t}),\\ \frac{c_{t}}{2} & = & k\,\mathrm{cn}(\varphi_{t}),\\ \sinh z_{t} & = & s_{1}\frac{2k\,\mathrm{sn}p\,\mathrm{sn}\tau}{\Delta},\quad\Delta=1-k^{2}\mathrm{sn}^{2}p\,\mathrm{sn}^{2}\tau,\\ R_{2}(q_{t}) & = & f_{2}(p)\frac{2s_{1}}{1-k^{2}}\mathrm{dn}\tau,\quad f_{2}(p)=\mathrm{dn}p\,\mathrm{E}(p)-k^{2}\mathrm{sn}p\,\mathrm{cn}p.\end{aligned}$$ Notice that $p=\frac{t}{2},\quad\tau=\varphi+\frac{t}{2}$, and consider all the cases (1)–(6) of Lemma \[lem:dD1\]. 1. If $\gamma_{t/2}\rightarrow0$, then $\sin\frac{\gamma_{t/2}}{2}=s_{1}k\,\mathrm{sn}\tau\rightarrow0$, thus $\sinh z_{t}\rightarrow0$, so $z_{t}\rightarrow0$, hence $q_{n}\rightarrow\partial M_{1}$ (Lemma \[lem:dM1\], (1)). 2. 
If $\gamma_{t/2}\rightarrow2\pi$, then $\sin\frac{\gamma_{t/2}}{2}=s_{1}k\,\mathrm{sn}\tau\rightarrow0$, thus $\sinh z_{t}\rightarrow0$, so $z_{t}\rightarrow0$, hence $q_{n}\rightarrow\partial M_{1}$. 3. The case $c_{t/2}\rightarrow\infty$ is impossible. 4. If $t\rightarrow0$, then $p\rightarrow0$, thus $z_{t}\rightarrow0$. 5. Let $t\rightarrow+\infty$, then $p\rightarrow+\infty$. Since $p\in(0,2K(k))$, then $k\rightarrow1$. Denote $u=\mathrm{am}(p)\in(0,\pi)$. On a subsequence we have $u\rightarrow\bar{u}\in[0,\pi]$, and we will suppose so in the sequel. 1. If $\bar{u}\in[0,\frac{\pi}{2})$, then $p=F(u,k)\rightarrow F(\bar{u},1)=\intop_{0}^{\bar{u}}\frac{dt}{\cos(t)}<+\infty$, a contradiction. 2. Let $\bar{u}=\frac{\pi}{2}$, thus $\mathrm{sn}p=\sin u\rightarrow1$, $\mathrm{cn}p=\cos(u)\rightarrow0$. 1. If $\mathrm{sn}\tau\rightarrow1$, then $\Delta\rightarrow0$, thus $z_{t}\rightarrow\infty$. 2. Let $\mathrm{sn}\tau\rightarrow\bar{s}\neq1$, then $\mathrm{dn}\tau\rightarrow\sqrt{1-\bar{s}^{2}}\neq0$. Denote $$\begin{aligned} g_{2}(u) & = & f_{2}(F(u,k))=\sqrt{1-k^{2}\sin^{2}u}\mathrm{E}(u,k)-k^{2}\sin(u)\cos(u).\end{aligned}$$ We now prove that $\frac{g_{2}(u)}{1-k^{2}}\rightarrow+\infty$; then $\frac{f_{2}(p)}{1-k^{2}}\rightarrow+\infty$, thus $R_{2}(q_{t})\rightarrow\infty$, so $x_{t}^{2}+y_{t}^{2}+z_{t}^{2}\rightarrow\infty$, whence $q_{t}\rightarrow\partial M_{1}$. Denote $k^{\prime}=\sqrt{1-k^{2}}\rightarrow0$. We can suppose that on a subsequence $\frac{\cos u}{k^{\prime}}\rightarrow\alpha\in[0,+\infty]$. 
We have $$\begin{aligned} k^{2}\sin(u)\cos(u) & = & \sin(u)\cos(u)+o(k^{\prime2}),\\ \sqrt{1-k^{2}\sin^{2}u} & = & \sqrt{\cos^{2}u+k^{\prime2}-k^{\prime2}\cos^{2}u}.\end{aligned}$$ Now we estimate $E(u,k)$ from below: $$\begin{aligned} E(u,k)-\sin(u) & = & \intop_{0}^{u}\sqrt{1-k^{2}\sin^{2}t}dt-\intop_{0}^{u}\cos(t)dt=\intop_{0}^{u}\frac{1-k^{2}\sin^{2}t-\cos^{2}t}{\sqrt{1-k^{2}\sin^{2}t}+\cos t}dt\\ & > & \frac{1-k^{2}}{2}\intop_{0}^{u}\sin^{2}t\, dt\\ & = & \frac{1-k^{2}}{4}\left(u-\frac{\sin(2u)}{2}\right)\\ & = & \frac{\pi}{8}k^{\prime2}(1+o(1)).\end{aligned}$$ Thus, $$E(u,k)>\sin(u)+\frac{\pi}{8}k^{\prime2}(1+o(1)).$$ 1. Let $\alpha\in[0,+\infty).$ Then $\cos(u)=\alpha k^{\prime}+o(k^{\prime}),\quad\sin(u)=1+o(1),$ thus $$\begin{aligned} k^{2}\sin(u)\cos(u) & = & \alpha k^{\prime}+o(k^{\prime}),\\ \sqrt{1-k^{2}\sin^{2}(u)} & = & \sqrt{1+\alpha^{2}}k^{\prime}+o(k^{\prime}),\\ E(u,k) & = & 1+o(1),\\ \sqrt{1-k^{2}\sin^{2}u}\, E(u,k) & = & \sqrt{1+\alpha^{2}}k^{\prime}+o(k^{\prime}),\\ g_{2}(u) & = & \left(\sqrt{1+\alpha^{2}}-\alpha\right)k^{\prime}+o(k^{\prime}),\\ \frac{g_{2}(u)}{k^{\prime2}} & = & \frac{\left(\sqrt{1+\alpha^{2}}-\alpha\right)}{k^{\prime}}(1+o(1))\rightarrow\infty,\end{aligned}$$ and the claim follows. 2. Let $\alpha=+\infty$, thus $k^{\prime}=o(\cos(u))$. Then $$\begin{aligned} k^{2}\sin(u)\cos(u) & = & \sin(u)\cos(u)-k^{\prime2}\cos(u)+o\left(k^{\prime2}\cos(u)\right),\\ \sqrt{1-k^{2}\sin^{2}u} & = & \cos(u)\sqrt{1+\frac{k^{\prime2}}{\cos^{2}u}+o\left(\frac{k^{\prime2}}{\cos^{2}u}\right)}\\ & = & \cos(u)+\frac{1}{2}\frac{k^{\prime2}}{\cos(u)}+o\left(\frac{k^{\prime2}}{\cos(u)}\right),\\ \sqrt{1-k^{2}\sin^{2}u}\, E(u,k) & > & \cos(u)\sin(u)+\frac{1}{2}\frac{k^{\prime2}}{\cos(u)}+o\left(\frac{k^{\prime2}}{\cos(u)}\right),\\ g_{2}(u) & > & \frac{1}{2}\frac{k^{\prime2}}{\cos(u)}(1+o\left(1\right)),\\ \frac{g_{2}(u)}{k^{\prime2}} & > & \frac{1}{2\cos(u)}(1+o\left(1\right))\rightarrow+\infty,\end{aligned}$$ since $\cos(u)\rightarrow0$, and the claim follows. 3. 
Let $\bar{u}\in\left(\frac{\pi}{2},\pi\right)$, then $f_{2}(p)=g_{2}(u)\rightarrow\left|\cos\bar{u}\right|\left(E(\bar{u},1)+\sin\bar{u}\right)>0,$ thus $$\frac{f_{2}(p)}{\sqrt{1-k^{2}}}\rightarrow+\infty.$$ Since $\frac{\mathrm{dn}\tau}{\sqrt{1-k^{2}}}\geq1$, then $R_{2}(q_{t})\rightarrow\infty$, so $x_{t}^{2}+y_{t}^{2}+z_{t}^{2}\rightarrow\infty$, whence $q_{t}\rightarrow\partial M_{1}$. 4. If $\bar{u}=\pi$, then $\mathrm{sn}p=\sin(u)\rightarrow0$, thus $z_{t}\rightarrow0$. 6. Let $\mathbf{t}(\lambda)-t\rightarrow0$. Recall that $\mathbf{t}(\lambda)=4K(k)$ for $\lambda\in C_{1}$, thus $4K(k)-t\rightarrow0$. Since $k\in(0,1)$, then there is a subsequence $\{n_{m}\}$ on which $k\rightarrow\bar{k}\in[0,1]$. If $\bar{k}\in[0,1)$, then $K(k)\rightarrow K(\bar{k})<+\infty$, thus $t\rightarrow4K(\bar{k})$, so $p\rightarrow2K(\bar{k})$. Consequently, $\sinh z_{t}\rightarrow0$, whence $q_{n}\rightarrow\partial M_{1}$ (Lemma \[lem:dM1\], (1)). If $\bar{k}=1$, then $K(k)\rightarrow+\infty$, thus $t\rightarrow+\infty$, $q_{n}\rightarrow\partial M_{1}$ by item (5). Consequently, in each of the cases (1)–(6) of Lemma \[lem:dD1\] we get $q_{n}\rightarrow\partial M_{1}$ for a sequence $\{\nu_{n}\}\subset D_{1}\cap N_{1},\quad\nu_{n}\rightarrow\partial D_{1}$. All the remaining cases $\{\nu_{n}\}\subset D_{1}\cap N_{j},\quad j=2,3,5,$ are treated similarly. Summing up, for any sequence $\{\nu_{n}\}\subset D_{1}$ with $\nu_{n}\rightarrow\partial D_{1}$ we have $\mathrm{Exp}(\nu_{n})\rightarrow\partial M_{1}$. Thus the mapping $\mathrm{Exp}:D_{1}\rightarrow M_{1}$ is proper. $\square$ Now we get the main result of this section. \[thm:Exp Di Diffeo\]The mapping $\mathrm{Exp}:D_{i}\rightarrow M_{i},\quad i=1,2$, is a diffeomorphism. 
All of the conditions **P1–P4** are satisfied for the mapping $\mathrm{Exp}:D_{1}\rightarrow M_{1}$: - $D_{1}\subset N$ and $M_{1}\subset M$ are open subsets, thus 3-dimensional manifolds (Proposition \[prop:Di-topol\], Proposition \[prop:Mi-topol\]), - **P1 -** $D_{1}$ is connected (Proposition \[prop:Di-topol\]), - **P2 -** $M_{1}$ is connected and simply connected (Proposition \[prop:Mi-topol\]), - **P3 -** $\left.\mathrm{Exp}\right|_{D_{1}}$ is non-degenerate (Proposition \[prop:Exp Nondeg\]), - **P4 -** $\mathrm{Exp}:D_{1}\rightarrow M_{1}$ is proper (Proposition \[prop:Proper\]). Thus $\mathrm{Exp}:D_{1}\rightarrow M_{1}$ is a diffeomorphism. By virtue of the reflections, $\mathrm{Exp}:D_{2}\rightarrow M_{2}$ is a diffeomorphism as well. $\square$ \[cor:Diffeo\]The exponential mapping $\mathrm{Exp}:\widetilde{N}\rightarrow\widetilde{M}$ is a diffeomorphism. Follows from Theorem \[thm:Exp Di Diffeo\].$\square$ Cut Time -------- Now we can prove that inequality (\[eq:tCutbound\]) is in fact an equality for $\lambda\in C\backslash C_{4}$. \[thm:Cut\_time\]If $\lambda\in C\backslash C_{4}$, then $t_{\mathrm{cut}}(\lambda)=\mathbf{t}(\lambda)$. Let $\lambda\in C\backslash C_{4}=\cup_{i=1}^{3}C_{i}\cup C_{5}$. In view of inequality (\[eq:tCutbound\]), it remains to prove that $t_{\mathrm{cut}}(\lambda)\geq\mathbf{t}(\lambda)$. Take any $t_{1}\in(0,\mathbf{t}(\lambda))$. We need to prove that the geodesic $\mathrm{Exp}(\lambda,t)$ is optimal on the segment $t\in[0,t_{1}].$ Consider first the case $\lambda\in\cup_{i=1}^{3}C_{i}$. If $\sin\frac{\gamma_{t_{1}/2}}{2}\neq0$, then $(\lambda,t_{1})\in\widetilde{N}$, and $q_{1}=\mathrm{Exp}(\lambda,t_{1})\in\widetilde{M}$. By virtue of Proposition \[prop:Exp Di Mi\] and Theorem \[thm:Exp Di Diffeo\], the point $q_{1}$ has a unique preimage under the mapping $\mathrm{Exp}:\widehat{N}\rightarrow\widehat{M}$. Thus the geodesic $\mathrm{Exp}(\lambda,t)$ is optimal on the segment $t\in[0,t_{1}]$. 
If $\lambda\in\cup_{i=1}^{3}C_{i}$ and $\sin\frac{\gamma_{t_{1}/2}}{2}=0$, then we can choose $t_{2}\in(t_{1},\mathbf{t}(\lambda))$ such that $\sin\frac{\gamma_{t_{2}/2}}{2}\neq0$. By the argument of the preceding paragraph, the geodesic $\mathrm{Exp}(\lambda,t)$ is optimal on the segment $[0,t_{2}]$, hence on the segment $[0,t_{1}]\subset[0,t_{2}]$ as well. Finally, if $\lambda\in C_{5}$, then $(\lambda,t_{1})\in\widetilde{N}$, and the geodesic $\mathrm{Exp}(\lambda,t),\quad t\in[0,t_{1}]$, is optimal as above. We proved that $t_{\mathrm{cut}}(\lambda)\geq\mathbf{t}(\lambda)$, thus $t_{\mathrm{cut}}(\lambda)=\mathbf{t}(\lambda)$ for any $\lambda\in C\backslash C_{4}$.$\square$ We will be able to prove the equality $t_{\mathrm{cut}}(\lambda)=\mathbf{t}(\lambda)$ for $\lambda\in C_{4}$ below, after the description of the structure of the exponential mapping $\mathrm{Exp}:N^{\prime}\rightarrow M^{\prime}$. The geodesic $\mathrm{Exp}(\lambda,t),\quad\lambda\in C_{4}$, requires a separate study since it belongs to the set $M^{\prime}$ for all $t>0$. Intuitively, Theorem \[thm:Cut\_time\] rests on the fact that $\mathrm{Exp}:\widetilde{N}\rightarrow\widetilde{M}$ is a diffeomorphism: up to time $t<\mathbf{t}(\lambda)$ there is a unique point $\nu=(\lambda,s)\in\widetilde{N}$ mapped to the point $q_{1}=\mathrm{Exp}(\lambda,t)\in\widetilde{M}$, i.e., a unique extremal trajectory $q_{s}=\mathrm{Exp}(\lambda,s)\in\widetilde{M}$ joins $q_{0}\in M$ to $q_{1}\in\widetilde{M}\subset M$. Hence the trajectory $q_{s}=\mathrm{Exp}(\lambda,s)$ is optimal, and therefore $t_{\mathrm{cut}}(\lambda)=\mathbf{t}(\lambda)$. It follows that the optimal synthesis in the domain $\widetilde{M}$ is given by: $$u_{i}(q)=h_{i}(\lambda),\quad i=1,2,\quad(\lambda,t)=\mathrm{Exp}^{-1}(q)\in\widetilde{N},\quad q\in\widetilde{M},$$ where $u_{i}$ are the control variables (i.e., translational and rotational velocities) and $h_{i}$ are the optimal controls defined in (4.8) [@Extremal_Pseudo_Euclid]. 
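The synthesis formula above requires evaluating $\mathrm{Exp}^{-1}(q)$ on $\widetilde{M}$. As a rough illustration (a sketch under our own assumptions, not the construction of this paper), a diffeomorphism can be inverted numerically by Newton's method with a finite-difference Jacobian; the toy planar map `f` below merely stands in for the actual components of $\mathrm{Exp}$, which are elliptic functions.

```python
import math

def invert(f, q, x0, tol=1e-12, h=1e-7):
    """Newton's method with a finite-difference Jacobian for a planar map."""
    x, y = x0
    for _ in range(100):
        fx, fy = f(x, y)
        rx, ry = fx - q[0], fy - q[1]
        if rx * rx + ry * ry < tol * tol:
            break
        # approximate Jacobian by forward differences
        a11 = (f(x + h, y)[0] - fx) / h
        a12 = (f(x, y + h)[0] - fx) / h
        a21 = (f(x + h, y)[1] - fy) / h
        a22 = (f(x, y + h)[1] - fy) / h
        det = a11 * a22 - a12 * a21
        # Newton step: (x, y) -= J^{-1} r
        x -= (a22 * rx - a12 * ry) / det
        y -= (a11 * ry - a21 * rx) / det
    return x, y

# toy near-identity diffeomorphism standing in for Exp (an assumption)
f = lambda x, y: (x + 0.3 * math.sin(y), y + 0.3 * math.sin(x))
q = f(0.7, -0.4)
x, y = invert(f, q, (0.0, 0.0))
assert abs(x - 0.7) < 1e-8 and abs(y + 0.4) < 1e-8
```

Any other root-finding scheme would do equally well here; the essential point supplied by the theorem is that on $\widetilde{M}$ the preimage is unique, so the computed $\nu=(\lambda,t)$ is the only candidate.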
Exponential Mapping on the Boundary of Diffeomorphic Domains ============================================================ Until now we have studied the mapping $\mathrm{Exp}:\widetilde{N}\rightarrow\widetilde{M}$ and proved that it is a diffeomorphism. This allowed us to prove that the cut time $t_{\mathrm{cut}}(\lambda)=t_{1}^{\mathrm{Max}}(\lambda),\quad\lambda\in C\backslash C_{4}$. In this section we obtain the global structure of the exponential mapping in order to characterize the cut locus and the Maxwell strata and to construct the optimal synthesis. Specifically, we study the mapping $\mathrm{Exp}:N^{\prime}\rightarrow M^{\prime}$, where: $$\begin{aligned} N^{\prime} & = & \left\{ (\lambda,t)\in\cup_{i=1}^{3}N_{i}\quad\mid\quad t=t_{1}^{\mathrm{Max}}(\lambda)\quad\textrm{or}\quad\sin\left(\frac{\gamma_{t/2}}{2}\right)=0\right\} \cup\left\{ (\lambda,t)\in N_{4}\quad\mid\quad t\leq2\pi=t_{1}^{\mathrm{conj}}(\lambda)\right\} ,\\ M^{\prime} & = & \left\{ q\in M\quad\mid\quad x^{2}+y^{2}\neq0,\quad z=0\right\} .\end{aligned}$$ Stratification of $N^{\prime}$ ------------------------------ We define subsets $N_{j}^{\prime}\subset N^{\prime},\quad j=1,\ldots,40$, as follows: - for $j\in\left\{ 1,9,17,21,25,29\right\} $ the sets $N_{j}^{\prime}$ are given by Table \[tab:Nj\], for $j=35$ by Table \[tab:N35\] and for $j\in\left\{ 33,39\right\} $ by Table \[tab:N33\_39\], - for all the remaining $j$ the sets $N_{j}^{\prime}$ are defined by the action of reflections $\varepsilon^{i}$ as in (\[eq:epsiNj\])–(\[eq:eps4Nj\]): $j$ $\lambda$ $p$ $\tau$ $k$ ----- ------------- ------ --------- --------- 1 $C_{1}^{0}$ $2K$ $(0,K)$ $(0,1)$ 9 $C_{2}^{+}$ $2K$ $(0,K)$ $(0,1)$ 17 $C_{1}^{0}$ $2K$ $K$ $(0,1)$ 21 $C_{1}^{0}$ $2K$ $0$ $(0,1)$ 25 $C_{2}^{+}$ $2K$ $0$ $(0,1)$ 29 $C_{2}^{+}$ $2K$ $K$ $(0,1)$ : \[tab:Nj\]Decomposition $N_{j}^{\prime},\quad j\in\left\{ 1,9,17,21,25,29\right\} $ $\lambda$ $p$ $\tau$ $k$ -------------- --------------- -------- -------------------- $C_{1}^{0}$ 
$(0,2K)$ 0 $\left(0,1\right)$ $C_{2}^{+}$ $(0,2K)$ 0 $\left(0,1\right)$ $C_{3}^{0+}$ $(0,+\infty)$ 0 1 : \[tab:N35\]Decomposition $N_{j}^{\prime},\quad j=35$ $j$ $\lambda$ $t$ ----- ------------- ------------ 33 $C_{4}^{0}$ $2\pi$ 39 $C_{4}^{0}$ $(0,2\pi)$ : \[tab:N33\_39\]Decomposition $N_{j}^{\prime},\quad j\in\left\{ 33,39\right\} $ $$\begin{aligned} \varepsilon^{i}\left(N_{j}^{\prime}\right) & = & N_{j+i}^{\prime},\quad i=1,\ldots,7,\quad j=1,9,\label{eq:epsiNj}\\ \varepsilon^{2i}\left(N_{17}^{\prime}\right) & = & N_{17+i}^{\prime},\quad i=1,2,3,\label{eq:eps2iNj}\\ \varepsilon^{2+i}\left(N_{j}^{\prime}\right) & = & N_{j+i}^{\prime},\quad i=1,2,3,\quad j=21,25,29,35,\label{eq:eps2+iNj}\\ \varepsilon^{4}\left(N_{j}^{\prime}\right) & = & N_{j+1}^{\prime},\quad j=33,39.\label{eq:eps4Nj}\end{aligned}$$ The following stratification of the set $N^{\prime}$, shown in Figures \[fig:N’decomp1\] and \[fig:N’decomp2\], follows from the definition of the sets $N_{j}^{\prime}$: $$N^{\prime}=\sqcup_{j=1}^{40}N_{j}^{\prime}.\label{eq:N'Decomposn}$$ ![\[fig:N’decomp1\]The sets $N_{j}^{\prime}$ with $t=t_{1}^{\mathrm{Max}}(\lambda)\quad\textrm{or}\quad\sin\left(\frac{\gamma_{t/2}}{2}\right)=0$](Nj) ![\[fig:N’decomp2\]The sets $N_{j}^{\prime}$ with $t<t_{1}^{\mathrm{Max}}(\lambda)$, $\sin\frac{\gamma_{t/2}}{2}=0$](Nj2) From Figures \[fig:N’decomp1\], \[fig:N’decomp2\] we see that the sets $N_{j}^{\prime}$ given in Tables \[tab:Nj\], \[tab:N35\], \[tab:N33\_39\] pertain to the quadrant of the phase portrait of the vertical subsystem for which $\lambda=(\gamma,c)\in C$ such that $\gamma\in[0,\pi]$ and $c\in[0,\infty)$. For $\lambda=(\gamma,c)$ in other parts of the phase portrait, the sets $N_{j}^{\prime}$ are obtained by the reflection symmetries (\[eq:epsiNj\])–(\[eq:eps4Nj\]) of the vertical subsystem. 
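The index bookkeeping behind this stratification can be sanity-checked mechanically: the base strata of Tables \[tab:Nj\], \[tab:N35\], \[tab:N33\_39\] together with their reflected copies under (\[eq:epsiNj\])–(\[eq:eps4Nj\]) must cover $j=1,\ldots,40$ exactly once, as required by (\[eq:N'Decomposn\]). A small sketch:

```python
# Base strata from Tables [tab:Nj], [tab:N35], [tab:N33_39] plus their images
# under the reflections (eq:epsiNj)-(eq:eps4Nj) must cover j = 1..40 once.
base = {1, 9, 17, 21, 25, 29, 33, 35, 39}

images = set()
for j in (1, 9):                       # eps^i(N_j') = N_{j+i}',  i = 1..7
    images |= {j + i for i in range(1, 8)}
images |= {17 + i for i in (1, 2, 3)}  # eps^{2i}(N_17') = N_{17+i}'
for j in (21, 25, 29, 35):             # eps^{2+i}(N_j') = N_{j+i}', i = 1,2,3
    images |= {j + i for i in (1, 2, 3)}
for j in (33, 39):                     # eps^4(N_j') = N_{j+1}'
    images.add(j + 1)

assert base.isdisjoint(images)         # no stratum is produced twice
assert base | images == set(range(1, 41))
print(len(base | images))  # -> 40
```

This confirms that the disjoint union in (\[eq:N'Decomposn\]) involves exactly forty pairwise distinct strata.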
Stratification of a Quadrant of the Plane $z=0$ ----------------------------------------------- Define the following curves and points in the quadrant $Q=\left\{ (x,y)\in\mathbb{R}^{2}\quad\mid\quad x\geq0,\, y\leq0\right\} $ (see Figure \[fig:g1g5\]): ![\[fig:g1g5\]Stratification of the quadrant $Q$](figg1g5sp) $$\begin{aligned} \gamma_{1}:\quad & x=0,\quad y=y_{1}(k)=-\frac{4a(k)}{\sqrt{1-k^{2}}},\quad k\in(0,1),\\ \gamma_{2}:\quad & x=x_{2}(k)=\frac{4k\, a(k)}{1-k^{2}},\quad y=y_{2}(k)=-\frac{4a(k)}{1-k^{2}},\quad k\in(0,1),\\ \gamma_{3}:\quad & x=x_{3}(k)=\frac{4E(k)}{1-k^{2}},\quad y=y_{3}(k)=-\frac{4k\, E(k)}{1-k^{2}},\quad k\in(0,1),\\ \gamma_{4}:\quad & x=x_{4}(t)=t,\quad y=0,\quad t\in(0,2\pi),\\ \gamma_{5}:\quad & x=x_{5}(k)=\frac{4E(k)}{\sqrt{1-k^{2}}},\quad y=0,\quad k\in(0,1),\\ P:\quad & x=2\pi,\quad y=0,\\ O:\quad & x=0,\quad y=0,\end{aligned}$$ where $a(k)=E(k)-(1-k^{2})K(k),\quad k\in(0,1)$. The curves $\gamma_{1},\ldots,\gamma_{5}$ result from substitution of $t=t_{1}^{\mathrm{Max}}(\lambda)$ and $\varphi=\tau-p$ from Table \[tab:Nj\] into the equations of extremal trajectories for $\lambda\in\cup_{i=1}^{5}C_{i}$. The curves $\gamma_{1},\ldots,\gamma_{5}$ and the point $P$ are the images of certain sets $\mathrm{Exp}\left(N_{j}^{\prime}\right)$ under the projection $$p:\left\{ q\in M\quad\mid\quad z=0\right\} \rightarrow\mathbb{R}_{x,y}^{2},\quad\left(x,y,0\right)\mapsto(x,y).\label{eq:P}$$ $$\begin{aligned} \gamma_{1} & = & p(N_{29}^{\prime}),\quad\gamma_{2}=p(N_{25}^{\prime}),\quad\gamma_{3}=p(N_{21}^{\prime}),\\ \gamma_{4} & = & p(N_{39}^{\prime}),\quad\gamma_{5}=p(N_{17}^{\prime}),\quad P=p(N_{33}^{\prime}).\end{aligned}$$ These equalities can be verified easily. From [@Max_Conj_SH2] we know that the first Maxwell points with $t=t_{1}^{\mathrm{Max}}(\lambda)$ and conjugate points with $t=t_{1}^{\mathrm{Max}}(\lambda)$ and $\mathrm{sn}\tau\,\mathrm{cn}\tau=0$ lie in the plane $z=0$. Hence, the curves $\gamma_{1},\ldots,\gamma_{5}$ decompose the fourth quadrant of the plane $z=0$ into various regions (see Figure \[fig:g1g5\]). The regularity and mutual disposition of the curves $\gamma_{1},\ldots,\gamma_{5}$ are described in the following lemmas. 
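The elementary properties of these curves are easy to check numerically. A minimal sketch (assuming the parametrizations $x_{2}=4ka(k)/(1-k^{2})$, $y_{2}=-4a(k)/(1-k^{2})$, $x_{3}=4E(k)/(1-k^{2})$, $y_{3}=-4kE(k)/(1-k^{2})$, which match the boundary values computed in the proofs below), with $K$ and $E$ evaluated by a pure-Python Simpson quadrature:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def K(k):  # complete elliptic integral of the first kind
    return simpson(lambda t: 1.0 / math.sqrt(1 - (k * math.sin(t)) ** 2), 0, math.pi / 2)

def E(k):  # complete elliptic integral of the second kind
    return simpson(lambda t: math.sqrt(1 - (k * math.sin(t)) ** 2), 0, math.pi / 2)

def a(k):  # a(k) = E(k) - (1 - k^2) K(k)
    return E(k) - (1 - k * k) * K(k)

k, h = 0.6, 1e-5
# da/dk = k K(k), via a central difference
assert abs((a(k + h) - a(k - h)) / (2 * h) - k * K(k)) < 1e-6
# small-k asymptotics a(k) ~ (pi/4) k^2
assert abs(a(0.01) / 0.01 ** 2 - math.pi / 4) < 1e-3
# gamma_2: x_2 / y_2 = -k for the assumed parametrization
x2, y2 = 4 * k * a(k) / (1 - k * k), -4 * a(k) / (1 - k * k)
assert abs(x2 / y2 + k) < 1e-12
# gamma_3: x_3 + y_3 = 4 E(k) / (1 + k), which tends to 2 as k -> 1
x3, y3 = 4 * E(k) / (1 - k * k), -4 * k * E(k) / (1 - k * k)
assert abs((x3 + y3) - 4 * E(k) / (1 + k)) < 1e-9
```

Such spot checks do not replace the proofs, but they catch sign and factor errors in the derivative formulas quickly.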
\[lem:a(k)\]The function $a(k)$ satisfies the following properties: $$\begin{aligned} a:(0,1) & \rightarrow & (0,1)\textrm{ is a diffeomorphism,}\label{eq:adiff}\\ k & \rightarrow & 0\implies a(k)=\frac{\pi}{4}k^{2}+o(k^{2}),\label{eq:ak0}\\ k & \rightarrow & 1-0\implies a(k)=1-\frac{1}{2}k^{\prime2}\ln\left(\frac{1}{k^{\prime}}\right)+O(k^{\prime2})\label{eq:ak1}\end{aligned}$$ where $k^{\prime}=\sqrt{1-k^{2}}$. Moreover, the function $a(k)$ is convex. If $k\rightarrow0$, then $$\begin{aligned} K(k) & = & \frac{\pi}{2}\left(1+\frac{k^{2}}{4}\right)+o(k^{2}),\\ E(k) & = & \frac{\pi}{2}\left(1-\frac{k^{2}}{4}\right)+o(k^{2}),\end{aligned}$$ which gives asymptotics (\[eq:ak0\]). If $k\rightarrow1-0$, then $$\begin{aligned} K(k) & = & \ln\left(\frac{4}{k^{\prime}}\right)+o(1),\\ E(k) & = & 1+\frac{1}{2}k^{\prime2}\ln\left(\frac{1}{k^{\prime}}\right)+O(k^{\prime2}),\end{aligned}$$ which gives asymptotics (\[eq:ak1\]). Finally, property (\[eq:adiff\]) follows since $$\begin{aligned} \frac{da}{dk} & = & k\, K(k)>0,\\ \lim_{k\rightarrow0}a(k) & = & 0,\\ \lim_{k\rightarrow1-0}a(k) & = & 1.\end{aligned}$$ The function $a(k)$ is convex since $\frac{da}{dk}=k\, K(k)$ increases $\forall k\in(0,1)$. $\square$ \[lem:g1\]The function $y=y_{1}(k)$ defines a diffeomorphism $y_{1}:(0,1)\rightarrow(-\infty,0).$ Moreover, $$\begin{aligned} \lim_{k\rightarrow0^{+}}y_{1}(k) & = & 0,\label{eq:lim_y1_0}\\ \lim_{k\rightarrow1^{-}}y_{1}(k) & = & -\infty.\label{eq:lim_y1_inf}\end{aligned}$$ The function $y=y_{1}(k)$ is a strictly decreasing function with: $$\frac{dy_{1}}{dk}=\frac{-4kE(k)}{(1-k^{2})^{\frac{3}{2}}}<0,\quad k\in(0,1).$$ Further, Lemma \[lem:a(k)\] yields the asymptotics: $$\begin{aligned} k & \rightarrow & 0\implies y_{1}(k)=\frac{-4a(k)}{\sqrt{1-k^{2}}}\rightarrow0,\\ k & \rightarrow & 1-0\implies y_{1}(k)\thicksim-\frac{4}{k^{\prime}}\rightarrow-\infty,\end{aligned}$$ and the statement of this lemma follows. 
$\square$ \[lem:g4\]The function $x=x_{4}(t)$ defines a diffeomorphism $x_{4}:(0,2\pi)\rightarrow(0,2\pi)$. Moreover, $$\begin{aligned} \lim_{t\rightarrow0^{+}}x_{4}(t) & = & 0,\\ \lim_{t\rightarrow2\pi^{-}}x_{4}(t) & = & 2\pi.\end{aligned}$$ Clearly $x_{4}(t)$ is a smooth bijection with a smooth inverse. Hence it is a diffeomorphism. The limits can be calculated by direct substitution in $x_{4}(t)$. $\square$ \[lem:g5\]The function $x=x_{5}(k)$ defines a diffeomorphism $x_{5}:(0,1)\rightarrow(2\pi,+\infty).$ Moreover, $$\begin{aligned} \lim_{k\rightarrow0^{+}}x_{5}(k) & = & 2\pi,\\ \lim_{k\rightarrow1^{-}}x_{5}(k) & = & +\infty.\end{aligned}$$ The function $x=x_{5}(k)$ is a strictly increasing function with: $$\frac{dx_{5}}{dk}=\frac{4a(k)}{k(1-k^{2})^{\frac{3}{2}}}>0,$$ and $$\begin{aligned} k & \rightarrow & 0\implies E(k)\rightarrow\frac{\pi}{2}\implies x_{5}(k)\rightarrow2\pi,\\ k & \rightarrow & 1-0\implies E(k)\rightarrow1\implies x_{5}(k)\rightarrow+\infty,\end{aligned}$$ and the statement of the lemma follows. $\square$ \[lem:g2\]The functions $x=x_{2}(k),\quad y=y_{2}(k),\quad k\in(0,1),$ define parametrically a function $x=x_{2}(y)$ which is a diffeomorphism $x_{2}:(-\infty,0)\rightarrow(0,+\infty)$ with $\lim_{y\rightarrow-\infty}x_{2}(y)=+\infty,\quad\lim_{y\rightarrow0^{-}}x_{2}(y)=0$. Moreover, $$-y-2<x_{2}(y)<-y,\quad y\in(-\infty,0).\label{eq:x2bound}$$ The curve $\gamma_{2}$ is convex, has near the origin the asymptotics $$y=-\pi^{\frac{1}{3}}x^{\frac{2}{3}}+o\left(x^{\frac{2}{3}}\right),\quad x\rightarrow0,\label{eq:g2as}$$ and has an asymptote $y+x+2=0$ as $x\rightarrow\infty$. 
Notice that $$\begin{aligned} k & \rightarrow & 0\implies x_{2}(k)\rightarrow0,\quad y_{2}(k)\rightarrow0,\\ k & \rightarrow & 1\implies x_{2}(k)\rightarrow+\infty,\quad y_{2}(k)\rightarrow-\infty.\end{aligned}$$ Also, $$\begin{aligned} \frac{dx_{2}}{dk} & = & \frac{4\left(\left(1+k^{2}\right)E(k)-(1-k^{2})K(k)\right)}{\left(1-k^{2}\right)^{2}}=\frac{4\left(a(k)+k^{2}E(k)\right)}{\left(1-k^{2}\right)^{2}}>0,\\ \frac{dy_{2}}{dk} & = & -\frac{4k\left(2E(k)-(1-k^{2})K(k)\right)}{\left(1-k^{2}\right)^{2}}=-\frac{4k\left(a(k)+E(k)\right)}{\left(1-k^{2}\right)^{2}}<0,\end{aligned}$$ thus the functions $x_{2}(k)$ and $y_{2}(k)$ define diffeomorphisms $x_{2}:(0,1)\rightarrow(0,+\infty)$ and $y_{2}:(0,1)\rightarrow(-\infty,0)$. So these functions define parametrically the diffeomorphism $$\begin{aligned} x & = & x_{2}(y),\quad y\in(-\infty,0),\quad x\in(0,+\infty),\\ y & = & y_{2}(x),\quad x\in(0,+\infty),\quad y\in(-\infty,0).\end{aligned}$$ Notice that $$\begin{aligned} \lim_{y\rightarrow-\infty}x_{2}(y) & = & \lim_{k\rightarrow1}x_{2}(k)=+\infty,\\ \lim_{y\rightarrow0-}x_{2}(y) & = & \lim_{k\rightarrow0+}x_{2}(k)=0.\end{aligned}$$ Now we show that the curve $\gamma_{2}$ is convex. We have $$\begin{aligned} \frac{dy_{2}}{dx} & = & \frac{dy_{2}/dk}{dx_{2}/dk}=\alpha(k),\nonumber \\ \alpha(k) & = & -k\frac{2E(k)-(1-k^{2})K(k)}{(1+k^{2})E(k)-(1-k^{2})K(k)},\label{eq:alpha(k)}\\ \frac{d\alpha}{dk} & = & -\left(1-k^{2}\right)\frac{3E^{2}(k)-(5-k^{2})E(k)\, K(k)+2(1-k^{2})K^{2}(k)}{\left(\left(1+k^{2}\right)E(k)-\left(1-k^{2}\right)K(k)\right)^{2}}.\label{eq:da(k)}\end{aligned}$$ Since $a(k)=E(k)-\left(1-k^{2}\right)K(k)\in(0,1)$, then $\frac{E(k)}{K(k)}\in\left(\left(1-k^{2}\right),1\right)$. But the quadratic $3t^{2}-\left(5-k^{2}\right)t+2\left(1-k^{2}\right)$ is negative for $t=\frac{E(k)}{K(k)}\in\left(1-k^{2},1\right)$, thus the numerator of fraction (\[eq:da(k)\]) is negative. 
Therefore, $\frac{d\alpha}{dk}>0$, i.e., $\frac{dy_{2}}{dx}$ is increasing in $k\in(0,1)$, hence also increasing in $x\in(0,+\infty)$. Thus the function $y_{2}(x)$ and its graph, i.e., the curve $\gamma_{2}$, are convex. The second inequality in (\[eq:x2bound\]) follows since $$\frac{x_{2}(k)}{y_{2}(k)}=-k>-1,\quad k\in(0,1).$$ The first inequality in (\[eq:x2bound\]) and existence of the asymptote $y+x+2=0$ follow from the equalities: $$\begin{aligned} \lim_{k\rightarrow1-}\frac{y_{2}(k)}{x_{2}(k)} & = & -1,\\ \lim_{k\rightarrow1-}\left(x_{2}(k)+y_{2}(k)\right) & = & -2,\\ \left(x_{2}(k)+y_{2}(k)\right)+2 & = & \frac{2}{1+k}\left(1+k-2a(k)\right)>0,\end{aligned}$$ since $a(k)<k<\frac{1+k}{2}$ for $k\in(0,1)$. Finally asymptotics (\[eq:g2as\]) follows since $$x_{2}(k)=\pi k^{3}+o(k^{3}),\quad y_{2}(k)=-\pi k^{2}+o(k^{2}),\quad k\rightarrow0.$$ $\square$ A plot of the curve $\gamma_{2}$ with its bounds given by (\[eq:x2bound\]) is shown in Figure \[fig:g2\]. ![\[fig:g2\]The curve $\gamma_{2}$ and its bounds $y+x=-2,\quad y+x=0$.](figg2b1) \[lem:g3\]The functions $x=x_{3}(k),\quad y=y_{3}(k),$ define parametrically a function $x=x_{3}(y)$ which is a diffeomorphism $x_{3}:(-\infty,0)\rightarrow(2\pi,+\infty)$ with $\lim_{y\rightarrow-\infty}x_{3}(y)=+\infty,\quad\lim_{y\rightarrow0^{-}}x_{3}(y)=2\pi$. Moreover, $$x_{3}(y)>2\pi,\quad x_{3}(y)>2-y,\quad y\in(-\infty,0).\label{eq:x3bound}$$ The curve $\gamma_{3}$ is convex and has an asymptote $y+x=2$ as $x\rightarrow\infty$. 
Notice that $$\begin{aligned} k & \rightarrow & 0\implies x_{3}(k)\rightarrow2\pi,\quad y_{3}(k)\rightarrow0,\\ k & \rightarrow & 1\implies x_{3}(k)\rightarrow+\infty,\quad y_{3}(k)\rightarrow-\infty.\end{aligned}$$ Furthermore, $$\begin{aligned} \frac{dx_{3}}{dk} & = & \frac{4\left(\left(1+k^{2}\right)E(k)-(1-k^{2})K(k)\right)}{k\left(1-k^{2}\right)^{2}}=\frac{4\left(a(k)+k^{2}E(k)\right)}{k\left(1-k^{2}\right)^{2}}>0,\\ \frac{dy_{3}}{dk} & = & -\frac{4\left(2E(k)-(1-k^{2})K(k)\right)}{\left(1-k^{2}\right)^{2}}=-\frac{4\left(a(k)+E(k)\right)}{\left(1-k^{2}\right)^{2}}<0,\end{aligned}$$ thus the functions $x_{3}(k)$ and $y_{3}(k)$ define diffeomorphisms $x_{3}:(0,1)\rightarrow(2\pi,+\infty)$ and $y_{3}:(0,1)\rightarrow(-\infty,0)$. So these functions define parametrically a diffeomorphism $$\begin{aligned} x & = & x_{3}(y),\quad y\in(-\infty,0),\quad x\in(2\pi,+\infty).\end{aligned}$$ Notice that $$\begin{aligned} \lim_{y\rightarrow-\infty}x_{3}(y) & = & \lim_{k\rightarrow1}x_{3}(k)=+\infty,\\ \lim_{y\rightarrow0-}x_{3}(y) & = & \lim_{k\rightarrow0+}x_{3}(k)=2\pi.\end{aligned}$$ Since $\frac{dx_{3}}{dk}>0$, therefore $x_{3}(k)>2\pi$ for $k\in(0,1)$, which gives the first inequality in (\[eq:x3bound\]). The second inequality in (\[eq:x3bound\]) and existence of the asymptote $y+x=2$ follow from the equalities: $$\begin{aligned} \lim_{k\rightarrow1}\frac{y_{3}(k)}{x_{3}(k)} & = & -1,\\ \lim_{k\rightarrow1}\left(x_{3}(k)+y_{3}(k)\right) & = & 2,\\ \left(x_{3}(k)+y_{3}(k)\right)-2 & = & \frac{4}{1+k}\left(E(k)-\frac{1+k}{2}\right)>0.\end{aligned}$$ Finally, convexity of the curve $\gamma_{3}$ follows since $$\frac{dy_{3}}{dx}=\frac{dy_{3}/dk}{dx_{3}/dk}=\alpha(k),$$ where $\alpha(k)$ is given by (\[eq:alpha(k)\]), which is increasing by the proof of Lemma \[lem:g2\]. $\square$ A plot of the curve $\gamma_{3}$ with its bounds given by (\[eq:x3bound\]) is shown in Figure \[fig:g3\]. 
![\[fig:g3\]The curve $\gamma_{3}$ and its bounds $y+x=2,\quad x=2\pi$.](figg3b1) \[lem:g23\]For any $y\in(-\infty,0)$, we have $x_{2}(y)<x_{3}(y)$. It follows from Lemmas \[lem:g2\] and \[lem:g3\] that $x_{2}(y)<-y<2-y<x_{3}(y),\quad y\in(-\infty,0)$. $\square$ Lemmas \[lem:g1\]–\[lem:g23\] allow us to define the following domains in the plane $Q\subset\mathbb{R}_{x,y}^{2}$: $$\begin{aligned} m_{1} & = & \left\{ (x,y)\in\mathbb{R}^{2}\quad\mid\quad y<0,\quad0<x<x_{2}(y)\right\} ,\\ m_{2} & = & \left\{ (x,y)\in\mathbb{R}^{2}\quad\mid\quad y<0,\quad x_{2}(y)<x<x_{3}(y)\right\} ,\\ m_{3} & = & \left\{ (x,y)\in\mathbb{R}^{2}\quad\mid\quad y<0,\quad x_{3}(y)<x\right\} ,\end{aligned}$$ see Figure \[fig:g1g5\]. \[lem:m13\]The domains $m_{1},m_{2},m_{3}\subset\mathbb{R}_{x,y}^{2}$ are open, connected and simply connected, with the following boundaries: $$\begin{aligned} \partial m_{1} & = & \gamma_{1}\cup\gamma_{2}\cup\{O\},\\ \partial m_{2} & = & \gamma_{2}\cup\gamma_{3}\cup\gamma_{4}\cup\{O,P\},\\ \partial m_{3} & = & \gamma_{3}\cup\gamma_{5}\cup\{P\}.\end{aligned}$$ Moreover, the quadrant $Q$ has the following decomposition into disjoint subsets: $$Q=\left(\cup_{i=1}^{3}m_{i}\right)\cup\left(\cup_{i=1}^{5}\gamma_{i}\right)\cup\{O,P\}.$$ Follows from the definition of the domains $m_{i}$ and from Lemmas \[lem:g1\]–\[lem:g23\]. $\square$ Define the inverse images of the sets $m_{i},\gamma_{i},$ and $P$ via the projection $p$ (\[eq:P\]): $$\begin{aligned} M_{9}^{\prime}=p^{-1}(m_{1}), & \quad M_{35}^{\prime}=p^{-1}(m_{2}), & \quad M_{1}^{\prime}=p^{-1}(m_{3}),\\ M_{29}^{\prime}=p^{-1}(\gamma_{1}), & \quad M_{25}^{\prime}=p^{-1}(\gamma_{2}), & \quad M_{21}^{\prime}=p^{-1}(\gamma_{3}),\\ M_{39}^{\prime}=p^{-1}(\gamma_{4}), & \quad M_{17}^{\prime}=p^{-1}(\gamma_{5}), & \quad M_{33}^{\prime}=p^{-1}(P).\end{aligned}$$ Explicitly, these sets are defined in Table \[tab:M\_j\]. 
$j$ $y$ $x$ $z$ ----- --------------- ----------------------- ----- 1 $(-\infty,0)$ $(x_{3}(y),+\infty)$ 0 9 $(-\infty,0)$ $(0,x_{2}(y))$ 0 17 0 $(2\pi,+\infty)$ 0 21 $(-\infty,0)$ $x_{3}(y)$ 0 25 $(-\infty,0)$ $x_{2}(y)$ 0 29 $(-\infty,0)$ 0 0 33 0 $2\pi$ 0 35 $(-\infty,0)$ $(x_{2}(y),x_{3}(y))$ 0 39 0 $(0,2\pi)$ 0 : \[tab:M\_j\]Definition of $M_{j}^{\prime}\subset p^{-1}(Q).$ Now we aim to prove that all the mappings $\mathrm{Exp}:N_{j}^{\prime}\rightarrow M_{j}^{\prime}$ are diffeomorphisms for the sets $N_{j}^{\prime}$ and $M_{j}^{\prime}$ defined by Tables \[tab:Nj\], \[tab:N35\], \[tab:N33\_39\], \[tab:M\_j\]. \[lem:Exp\_curve\]For any $j\in\left\{ 17,21,25,29,33,39\right\} $ the mapping $\mathrm{Exp}:N_{j}^{\prime}\rightarrow M_{j}^{\prime}$ is a diffeomorphism. Follows immediately from the lemmas above: - Lemma \[lem:g5\] for $j=17$, - Lemma \[lem:g3\] for $j=21$, - Lemma \[lem:g2\] for $j=25$, - Lemma \[lem:g1\] for $j=29$, - Lemma \[lem:g4\] for $j=39$, - and it is obvious for $j=33$.$\square$ Now we consider the mappings of 2-dimensional domains. \[lem:ExpN9\]The mapping $\mathrm{Exp}:N_{9}^{\prime}\rightarrow M_{9}^{\prime}$ is a diffeomorphism. 
In the coordinates $p=\frac{t}{2k}$ and $\tau=\left(\varphi+\frac{t}{2}\right)/k,$ the domain $N_{9}^{\prime}$ is given as follows: $$N_{9}^{\prime}:\lambda\in C_{2}^{+},\quad s_{2}=0,\quad p=2K(k),\quad\tau\in(0,K(k)),\quad k\in(0,1).$$ Introduce further the coordinate $u=\mathrm{am}(\tau)$, then, $$N_{9}^{\prime}:s_{2}=0,\quad p=2K(k),\quad u\in\left(0,\frac{\pi}{2}\right),\quad k\in(0,1).$$ In these coordinates the exponential mapping $\mathrm{Exp}(\lambda,t)=(x,y,z)$ is given as follows: $$\begin{aligned} x & = & x_{9}(u,k)=\frac{4ka(k)\cos(u)}{1-k^{2}},\\ y & = & y_{9}(u,k)=-\frac{4a(k)\sqrt{1-k^{2}\sin^{2}(u)}}{1-k^{2}},\\ z & = & 0.\end{aligned}$$ Consider the mapping: $$\begin{aligned} f_{9}:D_{u,k} & \rightarrow & \mathbb{R}_{x,y}^{2},\quad(u,k)\mapsto(x_{9},y_{9}),\\ D_{u,k} & = & \left(0,\frac{\pi}{2}\right)_{u}\times(0,1)_{k}.\end{aligned}$$ We have to show that the mapping $f_{9}:D\rightarrow m_{1}$ is a diffeomorphism. 1. First we show that $f_{9}(D)\subset m_{1}$.\ We fix any $k\in(0,1)$ and show that the curve $\Gamma:u\mapsto(x_{9},y_{9}),\quad u\in\left(0,\frac{\pi}{2}\right),$ is contained in $m_{1}$. Compute first the boundary points of $\Gamma$: $$\begin{aligned} u & \rightarrow & 0\implies\Gamma(u)\rightarrow(x_{2}(k),y_{2}(k))\in\gamma_{2},\\ u & \rightarrow & \frac{\pi}{2}\implies\Gamma(u)\rightarrow(0,y_{1}(k))\in\gamma_{1}.\end{aligned}$$ Further, since $$\begin{aligned} \frac{\partial x_{9}}{\partial u} & = & -\frac{4ka(k)}{1-k^{2}}\sin(u)<0,\\ \frac{\partial y_{9}}{\partial u} & = & \frac{4k^{2}a(k)}{1-k^{2}}\frac{\sin(u)\cos(u)}{\sqrt{1-k^{2}\sin^{2}(u)}}>0,\end{aligned}$$ then the curve $\Gamma$ is a graph of the smooth function $x\mapsto y_{9}(x)$. Since $$\frac{dy_{9}}{dx}=\frac{\partial y_{9}/\partial u}{\partial x_{9}/\partial u}=-\frac{k\cos(u)}{\sqrt{1-k^{2}\sin^{2}(u)}},\quad\textrm{for }u\in\left(0,\frac{\pi}{2}\right),$$ then the curve $\Gamma$ is concave. 
Moreover, $$\left.\frac{dy_{9}}{dx}\right|_{u=0}=-k>\alpha(k)=\frac{dy_{2}}{dx},$$ where $\alpha(k)$ is given by (\[eq:alpha(k)\]). Since the curve $\gamma_{2}$ is convex, it follows that the curve $\Gamma$ lies below the curve $\gamma_{2}$. Thus $\Gamma\subset m_{1}.$ Consequently, $f_{9}(D)\subset m_{1}$. 2. Since $$\frac{\partial(x_{9},y_{9})}{\partial(u,k)}=\frac{16k^{2}E(k)a(k)\sin(u)}{\left(1-k^{2}\right)^{2}\sqrt{1-k^{2}\sin^{2}(u)}}>0,$$ then the mapping $f_{9}:D\rightarrow m_{1}$ is non-degenerate. 3. Finally we show that the mapping $f_{9}:D\rightarrow m_{1}$ is proper.\ It is obvious that a sequence $(u_{n},k_{n})\rightarrow\partial D$ iff it has a subsequence on which at least one of the conditions holds: $$u\rightarrow0,\quad u\rightarrow\frac{\pi}{2},\quad k\rightarrow0,\quad k\rightarrow1.\label{eq:u0pi/2}$$ On the other hand, a sequence $(x_{n},y_{n})\rightarrow\partial m_{1}$ iff it has a subsequence on which at least one of the conditions holds: $$x\rightarrow0,\quad x\rightarrow+\infty,\quad y\rightarrow0,\quad y\rightarrow-\infty,\quad x_{2}(y)-x\rightarrow0.\label{eq:x0+infty}$$ We show that in each of the cases (\[eq:u0pi/2\]) we have one of the cases (\[eq:x0+infty\]). If $k\rightarrow0$, then $x_{9}\rightarrow0$ and $y_{9}\rightarrow0$. We can assume below that $k\rightarrow\bar{k}\in(0,1]$.\ Let $\bar{k}\in(0,1)$. If $u\rightarrow0$, then $\left(x_{9},y_{9}\right)\rightarrow\left(x_{2}(k),y_{2}(k)\right)\in\gamma_{2}$, thus $x_{2}(y)-x\rightarrow0$. If $u\rightarrow\frac{\pi}{2}$, then $x_{9}\rightarrow0$. Let $\bar{k}=1$. If $u\rightarrow0$, then $x_{9}\rightarrow\infty$. If $u\rightarrow\frac{\pi}{2}$, then $y_{9}\rightarrow-\infty$.\ We proved that the mapping $f_{9}:D\rightarrow m_{1}$ is proper. 4. 
The sets $D,\, m_{1}\subset\mathbb{R}^{2}$ are open, connected and simply connected.\ Thus $f_{9}:D\rightarrow m_{1}$ is a diffeomorphism, as well as $\mathrm{Exp}:N_{9}^{\prime}\rightarrow M_{9}^{\prime}$.$\square$ \[lem:ExpN1\]The mapping $\mathrm{Exp}:N_{1}^{\prime}\rightarrow M_{1}^{\prime}$ is a diffeomorphism. In the coordinates $p=\frac{t}{2}$ and $\tau=\varphi+\frac{t}{2},$ the domain $N_{1}^{\prime}$ is given as follows: $$N_{1}^{\prime}:\lambda\in C_{1}^{0},\quad s_{1}=0,\quad p=2K(k),\quad\tau\in(0,K(k)),\quad k\in(0,1).$$ Introduce further the coordinate $u=\mathrm{am}(\tau)$, then $$N_{1}^{\prime}:s_{1}=0,\quad p=2K(k),\quad u\in\left(0,\frac{\pi}{2}\right),\quad k\in(0,1).$$ In these coordinates the exponential mapping $\mathrm{Exp}(\lambda,t)=(x,y,z)$ is given as follows: $$\begin{aligned} x & = & x_{1}(u,k)=\frac{4E(k)\sqrt{1-k^{2}\sin^{2}(u)}}{1-k^{2}},\\ y & = & y_{1}(u,k)=-\frac{4k\, E(k)\cos(u)}{1-k^{2}},\\ z & = & 0.\end{aligned}$$ Consider the mapping: $$\begin{aligned} f_{1}:D_{u,k} & \rightarrow & \mathbb{R}_{x,y}^{2},\quad(u,k)\mapsto(x_{1},y_{1}),\\ D_{u,k} & = & \left(0,\frac{\pi}{2}\right)_{u}\times(0,1)_{k}.\end{aligned}$$ We have to show that the mapping $f_{1}:D\rightarrow m_{3}$ is a diffeomorphism. 1. First we show that $f_{1}(D)\subset m_{3}$.\ If $(u,k)\in D$, then $x_{1}(u,k)>0,\quad y_{1}(u,k)<0$, thus $f_{1}(D)\subset\mathbb{R}_{+-}^{2}=\left\{ (x,y)\in\mathbb{R}^{2}\quad\mid\quad x>0,\quad y<0\right\} $. The boundary of the domain $m_{3}$ in $\mathbb{R}_{+-}^{2}$ is the curve $\gamma_{3}$ and along this curve we have $\frac{y_{3}(k)}{x_{3}(k)}=-k$. 
Thus $$\gamma_{3}=\left\{ (x,y)\in\mathbb{R}_{+-}^{2}\quad\mid\quad x=\frac{4E\left(-\frac{y}{x}\right)}{1-\frac{y^{2}}{x^{2}}}\right\} ,$$ so $$m_{3}=\left\{ (x,y)\in\mathbb{R}_{+-}^{2}\quad\mid\quad x>\frac{4E\left(-\frac{y}{x}\right)}{1-\frac{y^{2}}{x^{2}}}\right\} .$$ Consider the function $$\varphi_{1}(u,k)=\left.x-\frac{4E\left(-\frac{y}{x}\right)}{1-\frac{y^{2}}{x^{2}}}\right|_{x=x_{1}(u,k),\, y=y_{1}(u,k)}.$$ We have to show that $\varphi_{1}(u,k)>0$ for $(u,k)\in D$. Since $$\begin{aligned} \varphi_{1}(u,k) & = & \frac{4E(k)\sqrt{1-k^{2}\sin^{2}(u)}}{1-k^{2}}-\frac{4E(\bar{k})}{1-\frac{k^{2}\cos^{2}u}{1-k^{2}\sin^{2}u}}\\ & = & \frac{4\sqrt{1-k^{2}\sin^{2}(u)}}{1-k^{2}}\left(E(k)-E(\bar{k})\sqrt{1-k^{2}\sin^{2}(u)}\right),\end{aligned}$$ where $\bar{k}=\frac{k\cos(u)}{\sqrt{1-k^{2}\sin^{2}u}}$, we have to show that $$\varphi_{2}(u,k)=E(k)-E(\bar{k})\sqrt{1-k^{2}\sin^{2}(u)}>0,\quad(u,k)\in D.$$ Since $\varphi_{2}(0,k)=0$ and $$\frac{\partial\varphi_{2}}{\partial u}=\frac{\tan(u)}{\sqrt{1-k^{2}\sin^{2}(u)}}\varphi_{3}(u,k),$$ where $\varphi_{3}(u,k)=\left(1-k^{2}\sin^{2}(u)\right)E(\bar{k})-\left(1-k^{2}\right)K(\bar{k})$, it is sufficient to show that $\varphi_{3}(u,k)>0$ for all $(u,k)\in D$. By Lemma \[lem:a(k)\], we have $$a(k)=E(k)-\left(1-k^{2}\right)K(k)>0,\quad k\in(0,1),$$ thus $$\begin{aligned} a(\bar{k}) & = & E(\bar{k})-\left(1-\bar{k}^{2}\right)K(\bar{k})\\ & = & \frac{\left(1-k^{2}\sin^{2}(u)\right)E(\bar{k})-\left(1-k^{2}\right)K(\bar{k})}{1-k^{2}\sin^{2}(u)}>0.\end{aligned}$$ That is, $\varphi_{3}(u,k)>0,\quad\forall(u,k)\in D$. Thus it follows that $f_{1}(D)\subset m_{3}$, i.e., $\mathrm{Exp}(N_{1}^{\prime})\subset M_{1}^{\prime}$. 2. Since $$\frac{\partial(x_{1},y_{1})}{\partial(u,k)}=-\frac{16E(k)\, a(k)\sin(u)}{\left(1-k^{2}\right)^{2}\sqrt{1-k^{2}\sin^{2}(u)}}<0,$$ then the mapping $f_{1}:D\rightarrow m_{3}$ is non-degenerate. 3. 
Finally we show that the mapping $f_{1}:D\rightarrow m_{3}$ is proper.\ To this end, we show that if a sequence $(u_{n},k_{n})\in D$ satisfies one of the conditions: $$u\rightarrow0,\quad u\rightarrow\frac{\pi}{2},\quad k\rightarrow0,\quad k\rightarrow1,$$ then its image $(x_{n},y_{n})=f_{1}(u_{n},k_{n})$ satisfies one of the conditions: $$x\rightarrow0,\quad x\rightarrow+\infty,\quad y\rightarrow0,\quad y\rightarrow-\infty,\quad x_{3}(y)-x\rightarrow0.$$ We can assume that $k\rightarrow\bar{k}\in[0,1],\quad u\rightarrow\bar{u}\in[0,\frac{\pi}{2}]$. If $\bar{k}=0$, then $y_{1}\rightarrow0$.\ Let $\bar{k}\in(0,1)$. If $\bar{u}=0$, then $\left(x_{1},y_{1}\right)\rightarrow\left(x_{3}(k),y_{3}(k)\right)\in\gamma_{3}$, thus $x_{3}(y)-x\rightarrow0$. If $\bar{u}=\frac{\pi}{2}$, then $y_{1}\rightarrow0$. Let $\bar{k}=1$. If $\bar{u}\in[0,\frac{\pi}{2})$, then $x_{1}\rightarrow\infty$, $y_{1}\rightarrow-\infty$. Let $\bar{u}=\frac{\pi}{2}$, then $$\begin{aligned} y_{1} & \sim & -\frac{4\cos(u)}{1-k^{2}},\\ x_{1} & \sim & 4\sqrt{\frac{1}{1-k^{2}}+k^{2}\left(\frac{\cos(u)}{1-k^{2}}\right)^{2}}.\end{aligned}$$ We can assume that $\frac{\cos(u)}{1-k^{2}}\rightarrow d\in[0,+\infty]$. If $d\in[0,+\infty)$, then $x_{1}\rightarrow+\infty$, and if $d=+\infty$, then $y_{1}\rightarrow-\infty$.\ We proved that the mapping $f_{1}:D\rightarrow m_{3}$ is proper. 4. The sets $D,\, m_{3}\subset\mathbb{R}^{2}$ are open, connected and simply connected. Thus $f_{1}:D\rightarrow m_{3}$ is a diffeomorphism, as well as the mapping $\mathrm{Exp}:N_{1}^{\prime}\rightarrow M_{1}^{\prime}$.$\square$ \[lem:ExpN35\]The mapping $\mathrm{Exp}:N_{35}^{\prime}\rightarrow M_{35}^{\prime}$ is a diffeomorphism. 
It follows from Tables \[tab:N35\], \[tab:M\_j\] that $$\begin{aligned} N_{35}^{\prime} & = & \left\{ (\lambda,t)\in N\quad\mid\quad\gamma_{\frac{t}{2}}=0,\quad c_{\frac{t}{2}}>0,\quad t\in(0,\mathbf{t}(\lambda))\right\} ,\\ M_{35}^{\prime} & = & \left\{ q\in M\quad\mid\quad z=0,\quad y<0,\quad x_{2}(y)<x<x_{3}(y)\right\} .\end{aligned}$$ Further we have an obvious decomposition $$\begin{aligned} N_{35}^{\prime} & = & N_{35,1}^{\prime}\sqcup N_{35,2}^{\prime}\sqcup N_{35,3}^{\prime},\\ N_{35,j}^{\prime} & = & N_{35}^{\prime}\cap N_{j},\quad j=1,2,3.\end{aligned}$$ 1. We show first that $\mathrm{Exp}(N_{35}^{\prime})\subset M_{35}^{\prime}$.\ Consider the set $N_{35,2}^{\prime}$. In the coordinates $p=\frac{t}{2k}$ and $\tau=\left(\varphi+\frac{t}{2}\right)/k,$ the domain $N_{35,2}^{\prime}$ is given as follows: $$N_{35,2}^{\prime}:\lambda\in C_{2}^{+},\quad s_{2}=1,\quad p\in(0,2K(k)),\quad\tau=0,\quad k\in(0,1).$$ Introduce further the coordinate $u=\mathrm{am}(p)$, then $$N_{35,2}^{\prime}:\lambda\in C_{2}^{+},\quad s_{2}=1,\quad u\in(0,\pi),\quad\tau=0,\quad k\in(0,1).$$ In these coordinates the exponential mapping $\mathrm{Exp}(\lambda,t)=(x,y,z),\quad(\lambda,t)\in N_{35,2}^{\prime}$ is given as follows: $$\begin{aligned} x & = & x_{35}(u,k)=\frac{2k}{1-k^{2}}\left[\sin(u)\sqrt{1-k^{2}\sin^{2}(u)}-\cos(u)\,\alpha(u,k)\right],\\ y & = & y_{35}(u,k)=-\frac{2}{1-k^{2}}\left[\sqrt{1-k^{2}\sin^{2}(u)}\,\alpha(u,k)-k^{2}\sin(u)\cos(u)\right],\\ z & = & 0,\end{aligned}$$ where $\alpha(u,k)=E(u,k)-\left(1-k^{2}\right)F(u,k).$ Thus $\mathrm{Exp}(N_{35,2}^{\prime})\subset\left\{ q\in M\quad\mid\quad z=0\right\} $. Now we show that $x_{35}(u,k)>0,\quad y_{35}(u,k)<0$ for $(u,k)\in(0,\frac{\pi}{2})\times(0,1)$.
We have to prove the double inequality $$\begin{aligned} \alpha_{1}(u,k) & < & \alpha(u,k)<\alpha_{2}(u,k),\quad(u,k)\in(0,\frac{\pi}{2})\times(0,1),\\ \alpha_{1}(u,k) & = & \frac{k^{2}\sin(u)\cos(u)}{\sqrt{1-k^{2}\sin^{2}(u)}},\\ \alpha_{2}(u,k) & = & \frac{\sin(u)\sqrt{1-k^{2}\sin^{2}(u)}}{\cos(u)}.\end{aligned}$$ This double inequality follows since $$\begin{aligned} \alpha_{1}(0,k) & = & \alpha(0,k)=\alpha_{2}(0,k)=0,\\ \frac{\partial}{\partial u}\left(\alpha(u,k)-\alpha_{1}(u,k)\right) & = & \left(1-k^{2}\right)\sin^{2}(u)>0,\\ \frac{\partial}{\partial u}\left(\alpha_{2}(u,k)-\alpha(u,k)\right) & = & 1-k^{2}>0.\end{aligned}$$ Thus $x_{35}(u,k)>0,\quad y_{35}(u,k)<0$ for $(u,k)\in\left(0,\frac{\pi}{2}\right)\times(0,1)$. If $u\in[\frac{\pi}{2},\pi),\quad k\in(0,1)$, then $\sin(u)>0,\quad\cos(u)\leq0,\quad\alpha(u,k)>0$, thus $x_{35}(u,k)>0,\quad y_{35}(u,k)<0$. We proved that\ $\mathrm{Exp}(N_{35,2}^{\prime})\subset\left\{ q\in M\quad\mid\quad z=0,\quad x>0,\quad y<0\right\} $. The sets $N_{35,1}^{\prime}$ and $N_{35,3}^{\prime}$ are considered similarly. Thus it follows that $$\mathrm{Exp}(N_{35}^{\prime})\subset\mathbb{R}_{+-}^{2}:=\left\{ q\in M\quad\mid\quad z=0,\quad x>0,\quad y<0\right\} .$$ We now show that $\mathrm{Exp}(N_{35}^{\prime})\subset M_{35}^{\prime}$. Notice the decomposition $$\mathbb{R}_{+-}^{2}=M_{1}^{\prime}\sqcup M_{9}^{\prime}\sqcup M_{21}^{\prime}\sqcup M_{25}^{\prime}\sqcup M_{35}^{\prime}.$$ By contradiction, let $\mathrm{Exp}(N_{35}^{\prime})\not\subset M_{35}^{\prime}$, then $\mathrm{Exp}(N_{35}^{\prime})\cap\left(M_{1}^{\prime}\sqcup M_{9}^{\prime}\sqcup M_{21}^{\prime}\sqcup M_{25}^{\prime}\right)\ni q$. Let $q\in\mathrm{Exp}(N_{35}^{\prime})\cap M_{1}^{\prime}$ (the cases of intersection with $M_{9}^{\prime},M_{21}^{\prime},M_{25}^{\prime}$ are considered similarly). 
Then there exist $\left(\lambda_{35},t_{35}\right)\in N_{35}^{\prime}$, $\left(\lambda_{1},t_{1}\right)\in N_{1}^{\prime}$ such that $q=\mathrm{Exp}\left(\lambda_{35},t_{35}\right)=\mathrm{Exp}\left(\lambda_{1},t_{1}\right)$. Notice that $$\begin{aligned} \left(\lambda_{35},t_{35}\right) & \in & N_{35}^{\prime}\implies t_{35}<t_{\mathrm{cut}}\left(\lambda_{35}\right),\label{eq:lam35}\\ \left(\lambda_{1},t_{1}\right) & \in & N_{1}^{\prime}\implies t_{1}<t_{\mathrm{cut}}\left(\lambda_{1}\right).\label{eq:lam1}\end{aligned}$$ If $t_{35}<t_{1}$, then the trajectory $\mathrm{Exp}\left(\lambda_{1},t\right),\quad t\in[0,t_{1}],$ is not optimal, which contradicts (\[eq:lam1\]). If $t_{35}\geq t_{1}$, then the trajectory $\mathrm{Exp}(\lambda_{35},t),\quad t\in[0,t_{35}+\varepsilon]$ is not optimal for small $\varepsilon>0$, which contradicts (\[eq:lam35\]). Thus $\mathrm{Exp}(N_{35}^{\prime})\cap M_{1}^{\prime}=\emptyset$. Then it follows that $\mathrm{Exp}(N_{35}^{\prime})\subset M_{35}^{\prime}$. 2. We now prove that $\mathrm{Exp}:N_{35}^{\prime}\rightarrow M_{35}^{\prime}$ is non-degenerate.\ Let $\nu=(\lambda,t)\in N_{35,2}^{\prime}$. In the coordinates $(p,\tau,k)$ on $N_{35,2}^{\prime}$, we have $p\in(0,2K(k)),\quad\tau=0,\quad k\in(0,1)$. Since $t<4K(k)=t_{\mathrm{cut}}(\lambda)\leq t_{1}^{\mathrm{conj}}(\lambda)$, the Jacobian $\frac{\partial q}{\partial\nu}(\nu)\neq0$. We have $$\frac{\partial q}{\partial\nu}=\frac{\partial(x,y,z)}{\partial(p,\tau,k)}=\left|\begin{array}{ccc} x_{p} & x_{\tau} & x_{k}\\ y_{p} & y_{\tau} & y_{k}\\ z_{p} & z_{\tau} & z_{k} \end{array}\right|.$$ Since $\mathrm{Exp}\left(N_{35,2}^{\prime}\right)\subset\left\{ q\in M\quad\mid\quad z=0\right\} ,$ we have $z_{p}(\nu)=z_{k}(\nu)=0$, thus $$\frac{\partial q}{\partial\nu}(\nu)=\frac{\partial(x,y)}{\partial(p,k)}(\nu)\, z_{\tau}(\nu)\neq0,$$ so $\frac{\partial(x,y)}{\partial(p,k)}(\nu)\neq0$.
Since $\nu\in N_{35,2}^{\prime}$ is arbitrary, then $\left.\mathrm{Exp}\right|_{N_{35,2}^{\prime}}$ is non-degenerate. Similarly it follows that $\mathrm{Exp}$ is non-degenerate at any point $\nu\in N_{35,1}^{\prime}\cup N_{35,3}^{\prime}$. 3. The mapping $\mathrm{Exp}:N_{35}^{\prime}\rightarrow M_{35}^{\prime}$ is proper. This follows similarly to the proof of properness of $\mathrm{Exp}:D_{1}\rightarrow M_{1}$. 4. It is obvious that $M_{35}^{\prime}$ is a connected, simply connected 2-dimensional manifold. In order to prove the same property for $N_{35}^{\prime}$, consider the vector field $$\overrightarrow{P}=c\frac{\partial}{\partial\gamma}-\sin\gamma\frac{\partial}{\partial c}\in\mathrm{Vec}(N).$$ Since $$e^{t/2\overrightarrow{P}}\left(N_{35}^{\prime}\right)=\left\{ (\lambda,t)\in N\quad\mid\quad\gamma=0,\quad c>0,\quad t<\mathbf{t}(\lambda)\right\}$$ is a connected, simply connected 2-dimensional manifold, the same properties hold for the set $N_{35}^{\prime}$.\ Then it follows that $\mathrm{Exp}:N_{35}^{\prime}\rightarrow M_{35}^{\prime}$ is a diffeomorphism. 
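The two elliptic-integral inequalities used in the proofs above, $\varphi_{2}(u,k)>0$ and $\alpha_{1}(u,k)<\alpha(u,k)<\alpha_{2}(u,k)$ on $(0,\frac{\pi}{2})\times(0,1)$, admit a quick numerical spot-check. The following sketch is illustrative only and is not part of the proofs; the elliptic integrals are evaluated by a simple Simpson quadrature rather than a library routine.

```python
# Numerical spot-check (not a proof) of phi_2(u,k) > 0 and
# alpha_1(u,k) < alpha(u,k) < alpha_2(u,k) on a grid in (0, pi/2) x (0, 1).
import math

def simpson(f, a, b, n=200):
    # composite Simpson rule with an even number n of panels
    h = (b - a) / n
    s = f(a) + f(b) + 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

def F_inc(u, k):   # incomplete elliptic integral of the first kind F(u, k)
    return simpson(lambda t: 1.0 / math.sqrt(1 - k**2 * math.sin(t)**2), 0.0, u)

def E_inc(u, k):   # incomplete elliptic integral of the second kind E(u, k)
    return simpson(lambda t: math.sqrt(1 - k**2 * math.sin(t)**2), 0.0, u)

def E_comp(k):     # complete elliptic integral E(k)
    return E_inc(math.pi / 2, k)

def phi2(u, k):
    kbar = k * math.cos(u) / math.sqrt(1 - k**2 * math.sin(u)**2)
    return E_comp(k) - E_comp(kbar) * math.sqrt(1 - k**2 * math.sin(u)**2)

def alpha(u, k):   # alpha(u,k) = E(u,k) - (1 - k^2) F(u,k)
    return E_inc(u, k) - (1 - k**2) * F_inc(u, k)

for i in range(1, 10):
    for j in range(1, 10):
        u, k = i * math.pi / 20, j * 0.1
        s = math.sqrt(1 - k**2 * math.sin(u)**2)
        assert phi2(u, k) > 0
        assert k**2 * math.sin(u) * math.cos(u) / s < alpha(u, k) < math.sin(u) * s / math.cos(u)
```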
$\square$ Stratification of the set $M^{\prime}$ -------------------------------------- Define subsets $M_{j}^{\prime}\subset M^{\prime},\quad j=1,\ldots,40,$ as follows: - For $j\in\left\{ 1,9,17,21,25,29,33,35,39\right\} ,$ the sets $M_{j}^{\prime}$ are given by Table \[tab:M\_j\], - For the remaining $j$, the sets $M_{j}^{\prime}$ are given by equalities (\[eq:epsiMj\])–(\[eq:eps4Mj\]): $$\begin{aligned} \varepsilon^{i}\left(M_{j}^{\prime}\right) & = & M_{j+i}^{\prime},\quad i=1,\ldots,7,\quad j=1,9,\label{eq:epsiMj}\\ \varepsilon^{2i}\left(M_{17}^{\prime}\right) & = & M_{17+i}^{\prime},\quad i=1,2,3,\label{eq:eps2iMj}\\ \varepsilon^{2+i}\left(M_{j}^{\prime}\right) & = & M_{j+i}^{\prime},\quad i=1,2,3,\quad j=21,25,29,35,\label{eq:eps2+iMj}\\ \varepsilon^{4}\left(M_{j}^{\prime}\right) & = & M_{j+1}^{\prime},\quad j=33,39.\label{eq:eps4Mj}\end{aligned}$$ A stratification of $M^{\prime}$ is given as: $$M^{\prime}=\sqcup_{j=1}^{40}M_{j}^{\prime}.\label{eq:M'Decomposn}$$ This follows from Lemma \[lem:m13\] and the description of the action of the reflections $\varepsilon^{i}$ in the plane $\left\{ z=0\right\}$, see Table \[tab:epsiz\_0\].

  $i$   1      2     3      4      5      6      7
  ----- ------ ----- ------ ------ ------ ------ ------
  $x$   $x$    $x$   $x$    $-x$   $-x$   $-x$   $-x$
  $y$   $-y$   $y$   $-y$   $y$    $-y$   $y$    $-y$

  : \[tab:epsiz\_0\]Action of $\varepsilon^{i}$ in the plane $\left\{ z=0\right\} $

$\square$ Stratification (\[eq:M'Decomposn\]) is shown in Figure \[fig:M'decomp\]. ![\[fig:M'decomp\]Stratification of $M^{\prime}$](figMpdecomp_No) \[thm:expNi\_Mi\]For any $i=1,\ldots,40,$ the mapping $\mathrm{Exp}:N_{i}^{\prime}\rightarrow M_{i}^{\prime}$ is a diffeomorphism. This follows from Lemmas \[lem:Exp\_curve\]–\[lem:ExpN35\] via the symmetries $\varepsilon^{i}$ of the exponential mapping.
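The sign pattern of Table \[tab:epsiz\_0\] can be encoded compactly, which is convenient when checking the identities for the sets $M_{j}^{\prime}$ numerically; the following is an illustrative sketch (the function name is ours, not from the paper).

```python
# Action of the reflections eps^i (i = 1..7) in the plane {z = 0}:
# per Table [tab:epsiz_0], x changes sign exactly for i = 4..7,
# and y changes sign exactly for odd i.
def eps(i, x, y):
    sx = 1 if i <= 3 else -1
    sy = -1 if i % 2 == 1 else 1
    return (sx * x, sy * y)
```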
$\square$ Define the following important sets: - the cut locus $\mathrm{Cut}=\left\{ \mathrm{Exp}(\lambda,t_{\mathrm{cut}}(\lambda))\quad\mid\quad\lambda\in C\right\} ,$ - the first Maxwell set\ $\mathrm{Max}=\left\{ q_{1}\in M\quad\mid\quad\exists\textrm{ minimizers }q^{\prime}(t)\not\equiv q^{\prime\prime}(t),\quad t\in[0,t_{1}],\textrm{ such that }q^{\prime}(t_{1})=q^{\prime\prime}(t_{1})=q_{1}\right\} .$ - the first conjugate locus $\mathrm{Conj}=\left\{ \mathrm{Exp}(\lambda,t_{1}^{\mathrm{conj}}(\lambda))\quad\mid\quad\lambda\in C\right\} ,$ - the complement of the cut locus in $M^{\prime}$, i.e., $\mathrm{Rest}=M^{\prime}\backslash\mathrm{Cut}$. We have the following explicit description of these sets: $$\begin{aligned} \mathrm{Cut} & = & \cup\left\{ M_{i}^{\prime}\quad\mid\quad i=1,\ldots,34\right\} ,\\ \mathrm{Max} & = & \cup\left\{ M_{i}^{\prime}\quad\mid\quad i=1,\ldots,20,29,\ldots,32\right\} ,\\ \mathrm{Conj}\,\cap\,\mathrm{Cut} & = & \cup\left\{ M_{i}^{\prime}\quad\mid\quad i=21,\ldots,28,33,34\right\} ,\\ \mathrm{Rest} & = & \cup\left\{ M_{i}^{\prime}\quad\mid\quad i=35,\ldots,40\right\} .\end{aligned}$$ Thus we get the following decomposition of the set $M^{\prime}$: $$\begin{aligned} M^{\prime} & = & \mathrm{Cut}\,\sqcup\,\mathrm{Rest},\\ \mathrm{Cut} & = & \mathrm{Max}\,\sqcup(\mathrm{Conj}\,\cap\,\mathrm{Cut}).\end{aligned}$$ The global structure of the cut locus is shown in Figure \[fig:cut\_locus\]. ![\[fig:cut\_locus\]Cut Locus](cut_sh2) From our analysis of the exponential mapping, we get the following description of the cut time and the optimal synthesis on $\mathrm{SH}(2)$. \[thm:cut\_time\_exact\]We have the following explicit description of the cut time: $t_{\mathrm{cut}}(\lambda)=\mathbf{t}(\lambda)$ for any $\lambda\in C$.
In detail: $$\begin{aligned} \lambda & \in & C_{1}\implies t_{\mathrm{cut}}(\lambda)=t_{1}^{\mathrm{Max}}(\lambda)=4K(k),\\ \lambda & \in & C_{2}\implies t_{\mathrm{cut}}(\lambda)=t_{1}^{\mathrm{Max}}(\lambda)=4kK(k),\\ \lambda & \in & C_{4}\implies t_{\mathrm{cut}}(\lambda)=t_{1}^{\mathrm{conj}}(\lambda)=2\pi,\\ \lambda & \in & C_{3}\cup C_{5}\implies t_{\mathrm{cut}}(\lambda)=+\infty.\end{aligned}$$ If $\lambda\in C\backslash C_{4}$, then we know from Theorem \[thm:Cut\_time\] that $t_{\mathrm{cut}}(\lambda)=\mathbf{t}(\lambda)=t_{1}^{\mathrm{Max}}(\lambda)$. It remains to consider the case $\lambda\in C_{4}^{0}\cup C_{4}^{1}$. Let $\lambda\in C_{4}^{0}$, then $q_{t}=\mathrm{Exp}(\lambda,t)=(t,0,0).$ For any $t\in[0,t_{1}],\quad t_{1}=\mathbf{t}(\lambda)=2\pi$, the point $q_{t}$ is connected with $q_{0}$ by a unique geodesic $\mathrm{Exp}(\lambda^{1},s),\quad s\in(0,s_{1}]$, with $(\lambda^{1},s_{1})\in\widehat{N}$, namely $(\lambda^{1},s_{1})=(\lambda,t)\in N_{39}^{\prime}$ for $t\in(0,2\pi)$, and $(\lambda^{1},s_{1})=(\lambda,t)\in N_{33}^{\prime}$ for $t=2\pi$. Thus the geodesic $q_{t},\quad t\in[0,t_{1}]$ is a minimizer. It follows that $t_{\mathrm{cut}}(\lambda)=\mathbf{t}(\lambda)=t_{1}^{\mathrm{conj}}(\lambda)=2\pi$ for $\lambda\in C_{4}^{0}$. By applying a reflection $\varepsilon^{i}$, we get a similar equality for $\lambda\in C_{4}^{1}$. $\square$ From the above description of the structure of the exponential mapping, we get the following statement. \[thm:syn\] 1. For every point $q_{1}\in\widetilde{M}\cup\mathrm{Rest}$, there exists a unique minimizer $q(t),\quad t\in[0,t_{1}]$, for which the endpoint $q(t_{1})=q_{1}$ is neither a cut point nor a conjugate point. 2. For any point $q_{1}\in\mathrm{Max}$, there exist exactly two minimizers that connect $q_{0}$ to $q_{1}$ for which $q_{1}$ is a cut point but not a conjugate point. 3. 
For any point $q_{1}\in\mathrm{Conj}\,\cap\,\mathrm{Cut}$, there exists a unique minimizer that connects $q_{0}$ to $q_{1}$ for which $q_{1}$ is both a cut and a conjugate point, but not a Maxwell point. Sub-Riemannian Caustics and Sphere ================================== In [@Max_Conj_SH2] we presented plots of the sub-Riemannian sphere and the sub-Riemannian wavefront in the rectifying coordinates $(R_{1},R_{2},z)$. Here we perform another graphic study of the essential sub-Riemannian objects, i.e., the sub-Riemannian caustic and the sub-Riemannian sphere. Recall that the sub-Riemannian caustic, which is the first conjugate locus, is given as: $$\mathrm{Conj}=\left\{ \mathrm{Exp}\left(\lambda,t_{1}^{\mathrm{conj}}(\lambda)\right)\quad|\quad\lambda\in C\right\} .$$ The caustic is presented in Figure \[fig:Caustic\_C1\]. The component starting at $(0,0,0)$ is the local component of the caustic, whereas the other two parts, on the right and on the left, belong to the global component of the first caustic. The red surface inside the local and global components of the caustic is the cut locus; note that the boundary of the cut locus forms the boundary of the caustic. A zoomed view of the local component of the caustic is shown separately in Figure \[fig:Caustic\_Local\]. It is evidently a four-cusp surface, as predicted in [@Agrchev_Barilari_Boscain_SR]. A combined plot of the first and second caustics is shown in Figure \[fig:Caustic\_1\_2\]. Note that in the local component of the caustic the first caustic is drawn solid and the second caustic transparent, whereas in the global component the second caustic is drawn solid and the first caustic transparent.
![\[fig:Caustic\_C1\]Sub-Riemannian caustic and cut locus](Caustic) ![\[fig:Caustic\_Local\]Local component of sub-Riemannian caustic and cut locus](Caustic_Local) ![\[fig:Caustic\_1\_2\]Sub-Riemannian first and second caustic](sh2_conj12_trans) The sub-Riemannian sphere $S_{R}(q_{0};R)$ at $q_{0}$ is the set of end-points of minimizing geodesics of sub-Riemannian length $R$ and starting from $q{}_{0}$: $$\begin{aligned} S_{R} & = & \left\{ \mathrm{Exp}(\lambda,R)\in M\quad\vert\quad\lambda\in C,\quad t_{\mathrm{cut}}(\lambda)\geq R\right\} =\left\{ q\in M\quad\vert\quad d(q_{0},q)=R\right\} .\end{aligned}$$ The following plots are presented: 1. Sphere of radius $R=\pi$ (Figure \[fig:sphereRpiR12\]), 2. Sphere of radius $R=2\pi$ (Figure \[fig:sphereR2piR12\]), 3. Intersection of the cut locus with the hemisphere $z<0$ of radius $R=\pi$ (Figure \[fig:cutnegsphereRpiR12\]), 4. Intersection of the cut locus with the hemisphere $z<0$ of radius $R=2\pi$ (Figure \[fig:cutnegsphereR2piR12\]), 5. Intersection of the cut locus with the hemisphere $z<0$ of radius $R=3\pi$ (Figure \[fig:cutnegsphereR3piR12\]), 6. Matryoshka of hemispheres $z<0$ of radii $R=\pi$ and $R=2\pi$ (Figure \[fig:matr2R12\]). 
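The membership test $t_{\mathrm{cut}}(\lambda)\ge R$ behind these sphere plots uses the explicit cut times of Theorem \[thm:cut\_time\_exact\]. A minimal sketch follows; the case labels and helper names are our own conventions, and $K(k)$ is evaluated here by Simpson quadrature rather than a library routine.

```python
# Cut times on SH(2) per Theorem [thm:cut_time_exact], and the sphere test
# t_cut(lambda) >= R used to plot S_R.  Illustrative sketch only.
import math

def K(k, n=400):
    # complete elliptic integral of the first kind, composite Simpson rule
    h = (math.pi / 2) / n
    f = lambda t: 1.0 / math.sqrt(1 - k**2 * math.sin(t)**2)
    s = f(0.0) + f(math.pi / 2) + 4 * sum(f((2*i - 1) * h) for i in range(1, n // 2 + 1)) \
        + 2 * sum(f(2*i * h) for i in range(1, n // 2))
    return s * h / 3

def cut_time(case, k=None):
    if case == "C1":
        return 4 * K(k)
    if case == "C2":
        return 4 * k * K(k)
    if case == "C4":
        return 2 * math.pi
    if case in ("C3", "C5"):
        return math.inf
    raise ValueError(case)

def on_sphere(case, k, R):
    # Exp(lambda, R) lies on the sphere S_R iff the geodesic is still minimizing at time R
    return cut_time(case, k) >= R
```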
![\[fig:sphereRpiR12\]Sub-Riemannian sphere of radius $R=\pi$](sphereRpiR12) ![\[fig:sphereR2piR12\]Sub-Riemannian sphere of radius $R=2\pi$](sphereR2piR12) ![\[fig:cutnegsphereRpiR12\]Intersection of the cut locus with the hemisphere $z<0$ of radius $R=\pi$ ](cutnegsphereRpiR12) ![\[fig:cutnegsphereR2piR12\]Intersection of the cut locus with the hemisphere $z<0$ of radius $R=2\pi$ ](cutnegsphereR2piR12) ![\[fig:cutnegsphereR3piR12\]Intersection of the cut locus with the hemisphere $z<0$ of radius $R=3\pi$ ](cutnegsphereR3piR12) ![\[fig:matr2R12\]Matryoshka of hemispheres $z<0$ of radii $R=\pi$ and $R=2\pi$ ](matr2R12) Conclusion ========== We considered the global optimality analysis and the structure of the exponential mapping for the sub-Riemannian problem on the Lie group $\mathrm{SH}(2)$. We cut out open dense domains, bounded by Maxwell strata, in the preimage and in the image of the exponential mapping, and proved that the restriction of the exponential mapping to these domains is a diffeomorphism. This leads to the proof that the cut time in the sub-Riemannian problem on the Lie group $\mathrm{SH}(2)$ is equal to the first Maxwell time. We then described the global structure of the exponential mapping and obtained a stratification of the cut locus in the plane $z=0$. Consequently, the problem of finding optimal trajectories from an initial point $q_{0}\in M$ to any point $q_{1}\in M$ with $z\neq0$ is reduced to solving a set of algebraic equations. Summing up, a complete optimal synthesis for the sub-Riemannian problem on the Lie group $\mathrm{SH}(2)$ was constructed.
--- abstract: 'We determine the structure over ${\mathbb{Z}}$ of the ring of symmetric Hermitian modular forms with respect to $\mathbb{Q}(\sqrt{-1})$ of degree $2$ (with a character), whose Fourier coefficients are integers. Namely, we give a set of generators consisting of $24$ modular forms. As an application of our structure theorem, we give the Sturm bounds for such modular forms of weight $k$ with $4\mid k$ in the cases $p=2$, $3$. We remark that the bounds for $p\ge 5$ are already known.' author: - Toshiyuki Kikuta title: A ring of symmetric Hermitian modular forms of degree $2$ with integral Fourier coefficients --- [**2010 Mathematics subject classification**]{}: Primary 11F30 $\cdot$ Secondary 11F55\ [**Key words**]{}: ring of modular forms, Hermitian modular forms, generators. Introduction ============ Let $e_4$, $e_6$ be the normalized Eisenstein series of respective weights $4$, $6$ for $\Gamma _1:=SL_2(\mathbb{Z})$ and $\delta $ the Ramanujan delta function defined by $\delta =2^{-6}\cdot 3^{-3}(e_4^3-e_6^2)$. For the ${\mathbb{Z}}$-module $M_k(\Gamma _1;{\mathbb{Z}})$ consisting of modular forms of weight $k$ for $\Gamma _1$ whose Fourier coefficients are in ${\mathbb{Z}}$, we define an algebra over ${\mathbb{Z}}$ as $$A(\Gamma _1;{\mathbb{Z}}):=\bigoplus_{k\in {\mathbb{Z}}}M_k(\Gamma _1;{\mathbb{Z}}).$$ It is a well-known classical result that all the Fourier coefficients of the modular forms $e_4$, $e_6$, $\delta $ are integers and that they generate $A(\Gamma _1;{\mathbb{Z}})$. Namely we have $$A(\Gamma _1;{\mathbb{Z}})={\mathbb{Z}}[e_4,e_6,\delta ].$$ In the case of Siegel modular forms for the symplectic group $\Gamma _2:=Sp_2(\mathbb{Z})$ of degree $2$, there is a famous result of Igusa: he showed that the corresponding ring over ${\mathbb{Z}}$ is generated by $15$ modular forms, and also that this set of generators is minimal.
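The normalization $\delta=2^{-6}\cdot 3^{-3}(e_{4}^{3}-e_{6}^{2})$ and the integrality of its coefficients can be checked directly on truncated $q$-expansions with exact integer arithmetic. The following sketch (truncation order chosen only for illustration) recovers the first Ramanujan tau values.

```python
# Verify delta = (e4^3 - e6^2)/1728 on the first few q-expansion coefficients.
N = 8  # truncation order in q

def sigma(n, j):
    # divisor power sum sigma_j(n)
    return sum(d**j for d in range(1, n + 1) if n % d == 0)

e4 = [1] + [240 * sigma(n, 3) for n in range(1, N)]
e6 = [1] + [-504 * sigma(n, 5) for n in range(1, N)]

def mul(a, b):
    # truncated product of q-expansions
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

e4_cubed = mul(mul(e4, e4), e4)
e6_sq = mul(e6, e6)
delta = [(x - y) // 1728 for x, y in zip(e4_cubed, e6_sq)]  # 1728 = 2^6 * 3^3
# Ramanujan tau values: tau(1)=1, tau(2)=-24, tau(3)=252, tau(4)=-1472
assert delta[:5] == [0, 1, -24, 252, -1472]
```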
In this paper, we consider the ring of symmetric Hermitian modular forms of degree $2$ with respect to $\mathbb{Q}(\sqrt{-1})$ whose Fourier coefficients are in $\mathbb{Z}$. Since it seems difficult to give generators of the full space, we restrict ourselves to the case where the weights are multiples of $4$. We remark that the ring of Siegel modular forms whose weights are multiples of $4$ is generated over $\mathbb{Z}$ by $23$ modular forms; this is an easy consequence of Igusa’s result. In our case, there exists a set of generators consisting of $24$ modular forms whose weights are $$\begin{aligned} &4,\ 8,\ 12,\ 12,\ 12,\ 16,\ 16,\ 20,\ 24,\ 24,\ 28,\ 28,\ 32,\\ &36,\ 36,\ 36,\ 40,\ 40,\ 48,\ 48,\ 52,\ 60,\ 60,\ 72,\ 84. \end{aligned}$$ The precise statement can be found in Theorem \[Thm1\]. We construct these generators explicitly in Subsection \[Const\]. As an application of this result, we obtain the Sturm bounds for $p=2$, $3$ for Hermitian modular forms whose weights are multiples of $4$ (Theorem \[Thm2\]). We remark that the Sturm bounds for $p\ge 5$ are already known in [@Ki-Na2]. Preliminaries {#sec:4} ============= Hermitian modular forms of degree $2$ {#sec:4.1} ------------------------------------- We deal with Hermitian modular forms of degree $2$ only for ${\boldsymbol K}:=\mathbb{Q}(\sqrt{-1})$. Let ${\mathcal O}$ be the ring of Gaussian integers, i.e. ${\mathcal O}=\mathbb{Z}[\sqrt{-1}]$. Let $\mathbb{H}_2$ be the Hermitian upper half-space of degree $2$ defined as $$\mathbb{H}_2:=\{ Z\in M_2(\mathbb{C})\;|\; \tfrac{1}{2i}(Z-{}^t\overline{Z})>0\; \}$$ where ${}^t\overline{Z}$ is the transposed complex conjugate of $Z$.
The Hermitian modular group of degree $2$ $$U_2(\mathcal{O}):=\left\{\;M\in M_{4}(\mathcal{O})\;|\; {}^t\overline{M}J_2M=J_2,\; J_2=\begin{pmatrix} 0_2 & -1_2 \\ 1_2 & 0_2 \end{pmatrix}\right\}$$ acts on $\mathbb{H}_2$ by the fractional transformation $$\mathbb{H}_2\ni Z\longmapsto M\langle Z\rangle :=(AZ+B)(CZ+D)^{-1}, \;M=\begin{pmatrix} A & B \\ C & D\end{pmatrix}\in U_2(\mathcal{O}).$$ We denote by $M_k(U_2({\mathcal O}))=M_k(U_2({\mathcal O}),{\det}^k)$ the space of symmetric Hermitian modular forms of weight $k$ and character ${\det}^k$ with respect to $U_2({\mathcal O})$. (We deal with modular forms with character ${\det }^{k}$, but we drop this from the notation.) Namely, it consists of holomorphic functions $F:\mathbb{H}_2\longrightarrow \mathbb{C}$ satisfying $$F\mid_kM(Z):={\det}(CZ+D)^{-k}F(M\langle Z\rangle )={\det (M)}^k\cdot F(Z),$$ for all $M=\begin{pmatrix}A & B \\ C & D\end{pmatrix} \in U_2({\mathcal O})$ and $F({}^tZ)=F(Z)$. Note that one has $M_k(U_2({\mathcal O}))=\{0\}$ if $k$ is odd. The cusp forms are characterized by the condition $$\Phi \Big(F\mid_k\begin{pmatrix} {}^t\overline{U} & 0 \\ 0 & U \end{pmatrix}\Big) \equiv 0\quad \text{for}\; \text{all}\; U\in GL_2(\mathbb{Q}(\sqrt{-1}))$$ where $\Phi$ is the Siegel $\Phi$-operator. We denote by $S_k(U_2({\mathcal O}))$ the subspace consisting of all cusp forms in $M_k(U_2({\mathcal O}))$.
Fourier expansion {#sec:2.2} ----------------- Since every $F$ in $M_k(U_2({\mathcal O}))$ satisfies the condition $$F(Z+B)=F(Z) \quad \text{for}\;\text{all}\; B\in Her_2(\mathcal{O}),$$ it has a Fourier expansion of the form $$F(Z)=\sum_{0\leq H\in\Lambda_2(\boldsymbol{K})}a_F(H)e^{2\pi i\text{tr}(HZ)},$$ where $$\Lambda_2(\boldsymbol{K}):=\{ H=(h_{ij})\in Her_2(\boldsymbol{K})\; |\; h_{ii}\in\mathbb{Z}, 2 h_{ij}\in\mathcal{O} \}.$$ We write $H=(m,r,s,n)$ for $H=\begin{pmatrix} m & \frac{r+si}{2} \\ \frac{r-si}{2} & n \end{pmatrix}\in \Lambda _2(\boldsymbol{K})$ and also $a_F(m,r,s,n)$ for $a_F\begin{pmatrix}m & \frac{r+si}{2} \\ \frac{r-si}{2} & n \end{pmatrix}$ simply. Let $R$ be a subring of ${\mathbb{C}}$; we define $M_k(U_2({\mathcal O});R)$ as the $R$-module of all $F\in M_k(U_2({\mathcal O}))$ such that $a_F(H)\in R$ for any $H\in \Lambda _2(\boldsymbol{K})$. We put also $S_k(U_2({\mathcal O});R):=M_k(U_2({\mathcal O});R)\cap S_k(U_2({\mathcal O}))$. We put $$\begin{aligned} &\dot{q}_{11}:=\exp(2\pi i z_{11}),\quad \dot{q}_{22}:=\exp(2\pi i z_{22}), \\ &\dot{q}_{12}:=\exp\left( 2\pi i \frac{z_{12}-z_{21}}{-2i} \right),\quad \ddot{q}_{12}:=\exp\left(2\pi i \frac{z_{12}+z_{21}}{2}\right).\end{aligned}$$ Then for $H=(m,r,s,n)$ we have $$e^{2\pi i\text{tr}(HZ)}=\dot{q}_{11}^{m}\dot{q}_{12}^{r}\ddot{q}_{12}^{s}\dot{q}_{22}^{n}.$$ Hence any element $F\in M_k(U_2({\mathcal O});R)$ can be regarded as an element of $$R[\![\dot{\boldsymbol{q}}]\!]:= R[\dot{q}_{12}^{\pm 1},\ddot{q}_{12}^{\pm 1}][\![\dot{q}_{11},\dot{q}_{22}]\!].$$ This notation is useful for calculating the Fourier expansions of Hermitian modular forms.
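To illustrate this point of view: a truncated element of $R[\![\dot{\boldsymbol{q}}]\!]$ can be stored as a finite map from exponent tuples $(m,r,s,n)$ to coefficients, and multiplication of modular forms then just adds exponents. The data layout and function name below are our own conventions, not from the paper.

```python
# Multiply two truncated Fourier expansions; keys are the exponents
# (m, r, s, n) of q11^m * (q12-dot)^r * (q12-ddot)^s * q22^n.
def series_mul(f, g, order):
    h = {}
    for (m1, r1, s1, n1), a in f.items():
        for (m2, r2, s2, n2), b in g.items():
            m, n = m1 + m2, n1 + n2
            if m < order and n < order:  # truncate in q11, q22
                key = (m, r1 + r2, s1 + s2, n)
                h[key] = h.get(key, 0) + a * b
    return {k: v for k, v in h.items() if v != 0}
```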
We consider the Hermitian Eisenstein series of degree $2$ $$E_k(Z):=\sum_{M=\left(\begin{smallmatrix} * & * \\ C & D \end{smallmatrix}\right)} ({\det}M)^{\frac{k}{2}}{\det}(CZ+D)^{-k},\quad Z\in\mathbb{H}_2,$$ where $k>4$ is even and $M=\begin{pmatrix} * & * \\ C & D \end{pmatrix}$ runs over a set of representatives of $\left\{\begin{pmatrix} * & * \\ 0_2& * \end{pmatrix} \right\} \backslash U_2(\mathcal{O})$. Then we have $$E_k\in M_k(U_2(\mathcal{O})).$$ Moreover $E_4\in M_4(U_2(\mathcal{O}))$ is constructed by the Maass lift ([@Kri]). The Fourier coefficient of $E_k$ is given by the following formula: \[GHE\] The Fourier coefficient $a_{E_k}(H)$ of $E_k$ is given as follows. $$\begin{aligned} & a_{E_k}(H)\\ &=\begin{cases} 1 & \text{if}\;\; H=0_2,\\ \displaystyle -\frac{2k}{B_k}\,\sigma_{k-1}(\varepsilon (H)) & \text{if}\;\; {\rm rank}(H)=1,\\ \displaystyle \frac{4k(k-1)}{B_k\cdot B_{k-1,\chi_{-4}}}\sum_{0< d|\varepsilon (H)} d^{k-1} G_{\boldsymbol{K}}(k-2,4\,{\det}(H)/d^2) & \text{if}\;\; {\rm rank}(H)=2. \end{cases}\end{aligned}$$ where\ $B_m$ is the $m$-th Bernoulli number,\ $B_{m,\chi_{-4}}$ is the $m$-th generalized Bernoulli number associated with the Kronecker character $\chi_{-4}=\left(\frac{-4}{*}\right)$,\ $\varepsilon (H):={\rm max}\{ l\in\mathbb{N}\,|\, l^{-1}H\in \Lambda_2(\boldsymbol{K})\,\}$,\ and $$\label{GK} \begin{split} &G_{\boldsymbol{K}}(m,N):=\frac{1}{1+|\chi_{-4}(N)|} (\sigma_{m,\chi_{-4}}(N)- \sigma^*_{m,\chi_{-4}}(N))\\ &\sigma_{m,\chi_{-4}}(N):=\sum_{0< d|N}\chi_{-4}(d)d^m,\quad \sigma^*_{m,\chi_{-4}}(N):=\sum_{0< d|N}\chi_{-4}(N/d)d^m. \end{split}$$ We can construct cusp forms by the Hermitian Eisenstein series (cf. 
[@D-K], Corollary 2); $$\begin{aligned} &E_{10}-E_4E_6\in S_{10}(U_2(\mathcal{O})),\\ &E_{12}-\frac{441}{691}E_4^3-\frac{250}{691}E_6^2\in S_{12}(U_2(\mathcal{O})).\end{aligned}$$ Siegel modular forms of degree $2$ ---------------------------------- Let $M_k(\Gamma_2)$ denote the space of Siegel modular forms of weight $k$ $(\in\mathbb{Z})$ for the Siegel modular group $\Gamma_2:=Sp_2(\mathbb{Z})$ and $S_k(\Gamma_2)$ the subspace of cusp forms. Any Siegel modular form $F$ in $M_k(\Gamma_2)$ has a Fourier expansion of the form $$F(Z)=\sum_{0\leq T\in\Lambda_2}a_F(T)e^{2\pi i\text{tr}(TZ)},$$ where $$\Lambda_2=Sym_2^*(\mathbb{Z}) :=\{ T=(t_{ij})\in Sym_2(\mathbb{Q})\;|\; t_{ii},\;2t_{ij}\in\mathbb{Z}\; \}$$ (the lattice in $Sym_2(\mathbb{R})$ of half-integral, symmetric matrices). We write $T=(m,r,n)$ for $T=\begin{pmatrix}m & \frac{r}{2} \\ \frac{r}{2} & n \end{pmatrix}$ and also $a_F(m,r,n)$ for $a_F\begin{pmatrix}m & \frac{r}{2} \\ \frac{r}{2} & n\end{pmatrix}$. Taking $q_{ij}:=\text{exp}(2\pi iz_{ij})$ with $Z=(z_{ij})\in\mathbb{H}_2$, we have for $T=(m,r,n)$ $$e^{2\pi i\text{tr}(TZ)}=q_{11}^{m}q_{12}^{r}q_{22}^{n}.$$ For any subring $R\subset\mathbb{C}$, we adopt the notation $$\begin{aligned} & M_k(\Gamma_2;R):=\{ F=\sum_{T\in\Lambda_2}a_F(T)q^T\;|\; a_F(T)\in R\;(\forall T\in\Lambda_2)\;\},\\ & S_k(\Gamma_2;R):=M_k(\Gamma_2;R)\cap S_k(\Gamma_2).\end{aligned}$$ Any element $F\in M_k(\Gamma_2;R)$ can be regarded as an element of $$R[\![\boldsymbol{q}]\!]:=R[q_{12}^{-1},q_{12}][\![ q_{11},q_{22}]\!].$$ The space $\mathbb{H}_2$ contains the Siegel upper half-space of degree $2$ $$\mathbb{S}_2:=\mathbb{H}_2\cap Sym_2(\mathbb{C}).$$ Hence we can define the restriction map $$\begin{aligned} R[\![\dot{\boldsymbol{q}}]\!]\longrightarrow R[\![\boldsymbol{q}]\!]\end{aligned}$$ via the correspondence $F\mapsto F|_{\mathbb{S}_2}:=F(z_{ij})|_{z_{21}=z_{12}}$ (this means $\dot{q}_{12}\mapsto 1$, $\ddot{q}_{12}\mapsto q_{12}$).
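On Fourier coefficients this restriction amounts to summing over the exponent $r$ of $\dot{q}_{12}$: the Siegel coefficient at $(m,s,n)$ is $\sum_{r}a_{F}(m,r,s,n)$. A sketch, with the dictionary layout being our own convention:

```python
# Restriction F -> F|_{S_2} at the level of Fourier expansions: the substitution
# q12-dot -> 1, q12-ddot -> q12 sums the Hermitian coefficients a_F(m, r, s, n)
# over r, producing Siegel coefficients indexed by (m, s, n).
from collections import defaultdict

def restrict_to_siegel(hermitian_coeffs):
    siegel = defaultdict(int)
    for (m, r, s, n), a in hermitian_coeffs.items():
        siegel[(m, s, n)] += a
    return dict(siegel)
```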
In particular, if $F\in M_k(U_2({\mathcal O});R)\subset R[\![\dot{\boldsymbol{q}}]\!]$ then we have $F|_{\mathbb{S}_2}\in M_k(\Gamma _2;R)\subset R[\![\boldsymbol{q}]\!]$. This follows from the respective modularity conditions. Igusa’s generators over $\mathbb{Z}$ {#sec:3.1} ------------------------------------ Let $k$ be an even integer with $k\ge 4$. The Siegel Eisenstein series $$G_k(Z):=\sum_{M=\left(\begin{smallmatrix}*&* \\ C & D\end{smallmatrix}\right)} {\det}(CZ+D)^{-k},\quad Z\in\mathbb{S}_2$$ defines an element of $M_k(\Gamma _2;\mathbb{Q})$. Here $M=\begin{pmatrix}* & * \\ C & D\end{pmatrix}$ runs over a set of representatives of $\left\{\begin{pmatrix}* & * \\ 0_2 & * \end{pmatrix}\right\}\backslash\Gamma _2$. We write $X_4:=G_4$ and $X_6:=G_6$. We set $$\label{Siegel cusp} \begin{split} X_{10}:&=-\frac{43867}{2^{10}\cdot 3^5\cdot 5^2\cdot 7\cdot 53}(G_{10}-G_4G_6), \\ X_{12}:&=-\frac{691\cdot 1847}{2^{13}\cdot 3^6\cdot 5^3\cdot 7^2} (G_{12}-\frac{441}{691}G_4^3-\frac{250}{691}G_6^2). \end{split}$$ Then we have $X_k\in S_k(\Gamma_2;{\mathbb{Z}})$ $(k=10,12)$ and $a_{X_{10}}(1,1,1)=a_{X_{12}}(1,1,1)=1$. Let $k$ be an even integer with $k\ge 4$ and $G_k$ the normalized Siegel Eisenstein series of weight $k$.
We set $$\begin{aligned} &Y_{12} := 2^{-6}\cdot 3^{-3}(X_4^3 - X_6^2)+2^4\cdot 3^2X_{12},\\ &X_{16} := 2^{-2}\cdot 3^{-1}(X_4X_{12} - X_6 X_{10}),\\ &X_{18} := 2^{-2}\cdot 3^{-1}(X_6 X_{12}-X_4^2X_{10}),\\ &X_{24} := 2^{-3}\cdot 3^{-1}(X_{12}^2 - X_4 X_{10}^2),\\ &X_{28} := 2^{-1}\cdot 3^{-1}(X_4 X_{24} - X_{10} X_{18}),\\ &X_{30} := 2^{-1}\cdot 3^{-1}(X_6 X_{24} - X_4 X_{10} X_{16}),\\ &X_{36} :=2^{-1}\cdot 3^{-2}(X_{12} X_{24} - X_{10}^2 X_{16}),\\ &X_{40} :=2^{-2}(X_4 X_{36} - X_{10} X_{30}),\\ &X_{42} := 2^{-2}\cdot 3^{-1}(X_{12} X_{30} - X_4 X_{10} X_{28}),\\ &X_{48} := 2^{-2}(X_{12}X_{36} - X_{24}^2).\end{aligned}$$ We write $$\begin{aligned} A^{(m)}(\Gamma _2;\mathbb{Z}):=\bigoplus _{k\in m\mathbb{Z}}M_k(\Gamma _2;{\mathbb{Z}}).\end{aligned}$$ The following structure theorem is due to Igusa. One has $X_k\in M_k(\Gamma _2;{\mathbb{Z}})$ ($k=4$, $6$, $\ldots $, $48$) and $Y_{12}\in M_{12}(\Gamma _2;{\mathbb{Z}})$, and the graded ring $A^{(2)}(\Gamma _2;\mathbb{Z})$ is generated over $\mathbb{Z}$ by them. Moreover, this set of $14$ generators is minimal. Actually, he determined the structure of the full space $A^{(1)}(\Gamma _2;{\mathbb{Z}})$ by using the cusp form of weight $35$. However, since we do not use this result, we do not describe it in detail. From his result, we immediately have the following property.
\[Cor:S\_gen\] The ring $A^{(4)}(\Gamma _2;\mathbb{Z})$ is generated over $\mathbb{Z}$ by the following $23$ generators: $$\begin{aligned} &S_4:=X_4,\quad S_{12}:=X_{12},\quad T_{12}:=Y_{12},\quad U_{12}:=X_6^2,\quad S_{16}:=X_{10}X_6,\\ &T_{16}:=X_{16},\quad S_{20}:=X_{10}^2,\quad S_{24}:=X_{24},\quad T_{24}:=X_6X_{18},\\ &S_{28}:=X_{28},\quad T_{28}:=X_{10}X_{18},\quad S_{36}:=X_{36},\quad T_{36}:=X_{18}^2,\\ &U_{36}:=X_6X_{30},\quad S_{40}:=X_{40},\quad T_{40}:=X_{10}X_{30},\quad S_{48}:=X_{48},\\ &T_{48}:=X_{18}X_{30},\quad S_{52}:=X_{42}X_{10},\quad S_{60}:=X_{30}^2,\quad T_{60}:=X_{18}X_{42},\\ &S_{72}:=X_{30}X_{42},\quad S_{84}:=X_{42}^2.\end{aligned}$$ For later use, we introduce the Sturm bounds for Siegel modular forms of degree $2$. \[Stbd0\] Let $k$ be a positive integer and $p$ any prime. Let $F\in M_{k}(\Gamma _2;\mathbb{Z}_{(p)})$. Suppose that $a_F(m,r,n)\equiv 0$ mod $p$ for any $m$, $r$, $n$ with $$m,\ n\le [k/10]$$ and $4mn-r^2\ge 0$. Then we have $F\equiv 0$ mod $p$.
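Computationally, Theorem \[Stbd0\] certifies $F\equiv 0\bmod p$ from finitely many Fourier coefficients. An illustrative sketch follows; the coefficient dictionary and function name are our own assumptions, not from the paper.

```python
# Check F = 0 (mod p) via the degree-2 Sturm bound: only the coefficients
# a_F(m, r, n) with m, n <= [k/10] (and 4mn - r^2 >= 0) need to be tested.
def is_zero_mod_p(coeffs, k, p):
    bound = k // 10
    for (m, r, n), a in coeffs.items():
        if m <= bound and n <= bound and 4 * m * n - r * r >= 0 and a % p != 0:
            return False
    return True
```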
Structure over ${\mathbb{Z}}[1/2,1/3]$ -------------------------------------- We set $H_{4}:=E_4$ and $$\begin{aligned} &H_8 :=-\frac{61}{2^{10}\cdot 3^2 \cdot 5^2} (E_{8}-H_4^2),\\ &F_{10} := -\frac{277}{2^9\cdot 3^3\cdot 5^2 \cdot 7} (E_{10}-H_4 \cdot E_6),\\ &H_{12} := -\frac{19\cdot 691\cdot 2659}{2^{11}\cdot 3^7\cdot 5^3\cdot 7^2\cdot 73}\\ &~~~~~~~~~\times \left(E_{12}- \frac{3^2\cdot 7^2}{691}H _4^3 -\frac{2\cdot 5^3}{691}E_6^2 + \frac{2^9\cdot 3^4\cdot 5^2\cdot 7^2\cdot 6791}{19\cdot 691\cdot 2659}H _4\cdot H _8\right).\end{aligned}$$ We define the graded ring $A^{(m)}(U_2({\mathcal O});R)$ over $R$ by $$A^{(m)}(U_2({\mathcal O});R)=\bigoplus _{k\in m\mathbb{Z}}M_k(U_2({\mathcal O});R).$$ \[Thm:Ki-Na\] All of $H_4$, $E_6$, $H_8$, $F_{10}$, $H_{12}$ have Fourier coefficients in $\mathbb{Z}$ and they generate the graded ring $$A^{(2)}(U_2({\mathcal O});{\mathbb{Z}}[1/2,1/3]).$$ Moreover, these $5$ generators are algebraically independent over $\mathbb{C}$ and we have $$H_4|_{\mathbb{S}_2}=X_4, \quad E_6|_{\mathbb{S}_2}=X_6,\quad H_8|_{\mathbb{S}_2}=0,\quad F_{10}|_{\mathbb{S}_2}=6X_{10},\quad H_{12}|_{\mathbb{S}_2}=X_{12}.$$ The ring $A^{(2)}(U_2({\mathcal O});R)$ coincides with the ring $A^{(1)}(U_2({\mathcal O});R)$ of the full space of symmetric Hermitian modular forms, since $M_k(U_2({\mathcal O}))=\{0\}$ for odd $k$. Let $p$ be a prime and $\mathbb{Z}_{(p)}$ the localization of ${\mathbb{Z}}$ at the prime ideal $(p)=p{\mathbb{Z}}$, namely, $\mathbb{Z}_{(p)}:=\mathbb{Q}\cap\mathbb{Z}_p$. The following lemma will be needed in later sections.
For a formal Fourier series of the form $F=\sum a_F(H)e^{2\pi i {\rm tr}(HZ)}$, we define $v_p(F)\in\mathbb{Z}$ as usual by $$\label{vp} v_p(F):=\underset{{H\in\Lambda_2({\mathcal O})}}{\text{inf}}\text{ord}_p(a_F(H)).$$ \[Lem:ord\] For any $F_i=\sum a_ {F_i}(H)e^{2\pi i {\rm tr}(HZ)}$ ($i=1$, $2$) with $v_p(F_i)<\infty $, we have $$v_p(F_1F_2)=v_p(F_1)+v_p(F_2).$$ We can easily prove this property if we define an order on the elements of $\Lambda _2({\mathcal O})$ in the same way as in [@Ki-Na2]. We will need the Sturm bounds in later sections. \[Thm:Na-Ta\] Let $p$ be a prime with $p\ge 5$. Suppose that $F\in M_k(U_2({\mathcal O});\mathbb{Z}_{(p)})$ satisfies $a_F(m,r,s,n)\equiv 0$ mod $p$ for all $m$, $n\le [k/8]$. Then we have $F\equiv 0$ mod $p$. In [@Ki-Na2], Theorem 2, we obtained bounds of a similar type, but they are not the same. The proof can be modified in a similar way as in [@Na-Ta], Proposition 4.5. In general, the Sturm bounds imply the ordinary vanishing conditions. \[Cor:Na-Ta\] Suppose that $F\in M_k(U_2({\mathcal O});\mathbb{Q})$ satisfies $a_F(m,r,s,n)=0$ for all $m$, $n\le [k/8]$. Then we have $F=0$. We may apply Theorem \[Thm:Na-Ta\] to $F$ for infinitely many primes $p\ge 5$. Structure over $\mathbb{Z}$ =========================== Construction of generators {#Const} -------------------------- We set $$\begin{aligned} &I_{12} := 2^{-6}\cdot 3^{-3}(H _4^3 - E_6^2) + 2^4\cdot 3^2\cdot H _{12},\\ &J_{12}:=E_6^2,\\ &H_{16}:=2^{-1}\cdot 3^{-1}(E_6F_{10}-H _4^2H _8),\\ &I_{16} := 2^{-2}\cdot 3^{-1}(H _4 H _{12} - H_{16}),\\ &H_{20}:=2^{-2} \cdot 3^{-2}(F_{10}^2-H _4 H _8^2 -2^{2}\cdot 3 H _8 H_{12}),\\ &H_{24}:=2^{-3}\cdot 3^{-1} (H_{12}^2- H_4 H_{20}) - 2^{-1}\cdot 3^{-1}H_8\cdot I_{16}.\end{aligned}$$ In order to construct further generators, we temporarily use the letters $K$, $L$.
$$\begin{aligned} &K_{14}:=2^{-1}\cdot 3^{-1}(H_4 F_{10}-E_6 H_8),\\ &K_{18}:=2^{-2}\cdot 3^{-1}(E_6 H_{12} - H_4 K_{14}),\\ &K_{22}:=2^{-1}\cdot 3^{-1}(F_{10} H_{12}-H_8 K_{14}),\\ &K_{26}:=2^{-1}\cdot 3^{-1}(F_{10} I_{16}-H_{8} K_{18}),\\ &K_{30}:=2^{-1}\cdot 3^{-1}(E_6H_{24}-K_{14}I_{16}) +3^{-1}H_8F_{10}I_{12},\\ &L_{30}:=2^{-1}\cdot 3^{-1}(F_{10} H_{20}-H_8 K_{22}),\\ &K_{34}:=2^{-1}\cdot 3^{-1}(F_{10}H_{24}-H_{8}K_{26}), \\ &K_{42}:=2^{-2}\cdot 3^{-1}(H_{12} K_{30}-K_{14} H_{28})- 2^{-1}H_8I_{12}K_{22}. \end{aligned}$$ From these definitions and Theorem \[Thm:Ki-Na\], it is easy to see that $$\begin{aligned} &K_{14}|_{\mathbb{S}_2}=X_4X_{10},\quad K_{18}|_{\mathbb{S}_2}=X_{18},\quad K_{22}|_{\mathbb{S}_2}=X_{10}X_{12},\\ &K_{26}|_{\mathbb{S}_2}=X_6X_{16},\quad K_{30}|_{\mathbb{S}_2}=X_{30},\quad L_{30}|_{\mathbb{S}_2}=X_{10}^3,\\ &K_{34}|_{\mathbb{S}_2}=X_{10}X_{24},\quad K_{42}|_{\mathbb{S}_2}=X_{42}. \end{aligned}$$ Finally we put $$\begin{aligned} &I_{24}:=E_6K_{18},\\ &H_{28}:=2^{-1}\cdot 3^{-1}(H_4H_{24} - I_{28}) - 3^{-1}H_8^2I_{12},\\ &I_{28}:=2^{-1}\cdot 3^{-1}(F_{10}\cdot K_{18}-H_4\cdot H_8\cdot I_{16}),\\ &H_{36}:=2^{-1}\cdot 3^{-2}(H_{12}H_{24} - H_{20}I_{16}) + 7\cdot 3^{-2}H_{8}H_{28}+ 3^{-1}H_8^3H_{12},\\ &I_{36}:=K_{18}^2,\qquad J_{36}:=E_6K_{30},\\ &H_{40}:=2^{-2}(H_4H_{36} - \frac{1}{2\cdot 3}F_{10}\cdot K_{30}) - 5\cdot 2^{-3}\cdot 3^{-1}H_4\cdot H_8\cdot H_{28} \\ &~~~~~~~~~~~~~~+ 2^{-2}\cdot {H_8}^3\cdot H_{16} + 2^{-1}H_8\cdot I_{12}\cdot H_{20},\\ &I_{40}:=2^{-1}\cdot 3^{-1}(F_{10}K_{30}-H_4\cdot H_8\cdot H_{28}),\\ &H_{48}:=2^{-2}(H_{12}\cdot H_{36}-H_{24}^2)-2^{-3}H_8(H_{12} H_{28}+ 2 H_{40} \\ &~~~~~~~~~~~~~~+4 F_{10}^2 H_{12} H_8- 2 H_{20} H_4 H_8^2 - 2 H_{12} H_4 H_8^3+ 4 H_{20} H_8 I_{12} \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~+ 2 H_{12} H_8^2 I_{12} - H_{24} I_{16} - 2 H_8^3 I_{16} + 2 I_{40}),\\ &I_{48}:=K_{18}K_{30},\\ &H_{52}:=2^{-1}\cdot 3^{-1}(F_{10} K_{42} - 2 F_{10}^2 H_{12}^2 H_8 - 2^2 H_{12} H_{20} H_8 I_{12}\\
&~~~~~~~~~~~~~~- 5 F_{10} K_{22} H_8 I_{12} - H_{28} H_8 I_{16} - H_8^3 I_{12} I_{16}),\\ &H_{60}:=K_{30}^2,\qquad I_{60}:=K_{18}K_{42},\qquad H_{72}:=K_{30}K_{42},\qquad H_{84}:=K_{42}^2.\end{aligned}$$ By their definitions and Theorem \[Thm:Ki-Na\], we can easily confirm the following property. We have $$\begin{aligned} H_{k_1}|_{\mathbb{S}_2}=S_{k_1},\ I_{k_2}|_{\mathbb{S}_2}=T_{k_2}\quad \text{and}\quad J_{k_3}|_{\mathbb{S}_2}=U_{k_3} \end{aligned}$$ for each $k_1$, $k_2$, $k_3$ with $$\begin{aligned} &k_1\in \{4,12,16,20,24,28,36,40,48,52,60,72,84\},\\ &k_2\in \{12,16,24,28,36,40,48,60\},\quad k_3\in \{12,36\}.\end{aligned}$$ Integralities of generators {#Int} --------------------------- Our first purpose is to prove that all Fourier coefficients of the modular forms constructed in the previous subsection are integers. We start by proving several lemmas. We put $H_4=1+2^{4}\cdot 3S$, $E_6=1+2^3\cdot 3^2T$ with $S$, $T\in {\mathbb{Z}}[\![\dot{\boldsymbol{q}}]\!]$. \[Lem0\] We have $S\equiv T$ mod $2^2\cdot 3$. For $H\in \Lambda _2({\mathcal O})$ with ${\rm rank}(H)=1$, we have $$\begin{aligned} &a_{H_4}(H)= 2^4\cdot 3 \cdot 5\sum _{0<d\mid \varepsilon (H)}d^{3}, \\ &a_{E_6}(H)=-2^3\cdot 3^2 \cdot 7 \sum _{0<d\mid \varepsilon (H)}d^{5}. \end{aligned}$$ The assertion (for ${\rm rank}(H)=1$) follows from $5\equiv -7$ mod $2^2 \cdot 3$ and an application of the Euler congruence $$\sum _{0<d\mid \varepsilon (H)}d^{3}\equiv \sum _{0<d\mid \varepsilon (H)}d^{5} \bmod{2^2 \cdot 3}.$$ Let $H\in \Lambda _2({\mathcal O})$ with ${\rm rank}(H)=2$.
Then $$\begin{aligned} &a_{H_4}(H)=-2^6\cdot 3\cdot 5\sum _{0<d\mid \varepsilon (H)}d^3G_{{\boldsymbol K}}(3,4\det H/d^2 ),\\ &a_{E_6}(H)=-2^5\cdot 3^2\cdot 5^{-1}\cdot 7 \sum _{0<d\mid \varepsilon (H)}d^5G_{{\boldsymbol K}}(5,4\det H/d^2 ).\end{aligned}$$ The Euler congruence implies that $$\sum _{0<d\mid \varepsilon (H)}d^3G_{{\boldsymbol K}}(3,4\det H/d^2 )\equiv \sum _{0<d\mid \varepsilon (H)}d^5G_{{\boldsymbol K}}(5,4\det H/d^2 ) \bmod 2^2\cdot 3.$$ On the other hand, we have $$\begin{aligned} 2^2\cdot 5\equiv 2^2\cdot 5^{-1}\cdot 7 \bmod{2^2\cdot 3}. \end{aligned}$$ Therefore the assertion holds. By this lemma, we can put $T=S+2^2\cdot 3 U$ with $U\in {\mathbb{Z}}[\![\dot{\boldsymbol{q}}]\!]$. Then we have $$\begin{aligned} &H_4=1+2^4\cdot 3 S, \\ &E_6=1+2^3\cdot 3^2 S+2^5\cdot 3^3 U. \end{aligned}$$ This is one of the important facts for our arguments on the integrality of the generators. #### Forms of weight $\bold{12}$ We remark that $J_{12}=E_6^2\in M_{12}(U_2({\mathcal O});{\mathbb{Z}})$ follows from $E_6\in M_{6}(U_2({\mathcal O});{\mathbb{Z}})$. \[Lem1\] We have $I_{12}\in M_{12}(U_2({\mathcal O});{\mathbb{Z}})$. We know by Theorem \[Thm:Ki-Na\] that $H_{12}\in M_{12}(U_2({\mathcal O});{\mathbb{Z}})$. Hence, it suffices to prove that $2^{-6}\cdot 3^{-3}(H_4^3 -E_6^2)\in M_{12}(U_2({\mathcal O});{\mathbb{Z}})$. This can be confirmed by the expansion $$2^{-6}\cdot 3^{-3}(H_4^3 -E_6^2)=S^2 + 64 S^3 - U - 72 S U - 432 U^2,$$ where $S$, $U$ are defined as above. #### Forms of weight $\bold{14}$, $\bold{16}$, $\bold{18}$ For the proof of their integrality, we use (as in [@Ki-Na]) the correspondence between the Maass space and the Kohnen plus subspace, which was given by Krieg [@Kri]. We review it briefly. We define the congruence subgroup of $\Gamma _1=SL_2({\mathbb{Z}})$ with level $N$ ($N\in \mathbb{N}$) as $$\Gamma _0^{(1)}(N):=\left\{\begin{pmatrix} a&b\\ c& d\end{pmatrix}\in \Gamma _1 \left| \right.
c\equiv 0 \bmod{N} \right\}.$$ Let $M_k(\Gamma _0^{(1)}(4),\chi _{-4}^k)$ be the space of elliptic modular forms with character $\chi _{-4}^k$ for $\Gamma _0^{(1)}(4)$. Let ${\mathcal M}_k(U_2({\mathcal O}))$ be the Maass space consisting of all $F \in M_k(U_2({\mathcal O}))$ satisfying the Maass relation. For the precise definition, see [@Kri], p.676. The Hermitian modular forms version of the Kohnen plus subspace is defined as $$\begin{aligned} &M_k^+(\Gamma _0^{(1)}(4),\chi _{-4}^k)\\ &~~~~~~~:=\left\{f=\sum _{n=0}^{\infty} a_f(n)q^n \in M_k(\Gamma _0^{(1)}(4),\chi _{-4}^k) \left| \right. a_f(n)=0\ \forall n\equiv 1 \bmod{4} \right\}.\end{aligned}$$ Krieg [@Kri] gave an isomorphism of vector spaces $$M_{k-1}^+(\Gamma _0^{(1)}(4),\chi _{-4}^{k-1})\longrightarrow {\mathcal M}_k(U_2({\mathcal O})).$$ Let $$\theta :=1+2\sum _{n\ge 1}q^{n^2},\quad f_2:=\sum _{n\ge 1,\ n\,\mathrm{odd}}\sigma _1(n)q^n$$ with $\sigma _1(n):=\sum _{0<d\mid n}d$ and $q:=e^{2\pi i \tau }$, $\tau \in \mathbb{H} _1:=\{ \tau =x+iy \; | \; y>0 \}$. Then it is known that $\theta^2\in M_1(\Gamma _0^{(1)}(4),\chi _{-4})$ and $f_2\in M_2(\Gamma _0^{(1)}(4),1)$ and that they generate the graded ring $$\begin{aligned} \bigoplus _{k\in \mathbb{Z}}M_k(\Gamma _0^{(1)}(4),\chi _{-4}^k). \end{aligned}$$ Hence we can construct a Hermitian modular form ${\rm Lift}(h) \in M_k(U_2({\mathcal O}))$ from a polynomial $h\in {\mathbb{C}}[\theta ^2, f_2]$ (such that $h\in M_{k-1}^+(\Gamma _0^{(1)}(4),\chi _{-4}^{k-1})$), by the relation between their Fourier coefficients $$a_{{\rm Lift}(h)}(H)=\sum _{0<d \mid \varepsilon (H)}d^{k-1}\frac{1}{1+|\chi _{-4}(4\det H/d^2 )|}a_h(4\det H/d^2).$$ \[Lem2\] We have $I_{16} \in M_{16}(U_2({\mathcal O});\mathbb{Z})$ and $K_{k}\in M_{k}(U_2({\mathcal O});\mathbb{Z})$ for $k=14$, $18$.
We set $$\begin{aligned} h_{15}&:=\theta^{14} f_2^4 -28\theta^{10} f_2^5 +192\theta^6f_2^6\\ &=q^4 + 12 q^6 + 64 q^7 + 36 q^8 - 128 q^{10} - 1152 q^{11} - 936 q^{12} - 504 q^{14} \cdots\\ &=\sum _{n\ge 4}a_{h_{15}}(n)q^n. \end{aligned}$$ Then we have $h_{15}\in M_{15}(\Gamma _0^{(1)}(4),\chi _{-4})$. By an easy numerical experiment, we can confirm that $a_{h_{15}}(n)=0$ for all $n$ with $n\le 500$ and $n\equiv 1$ mod $4$. In fact, we can prove $h_{15}\in M^+_{15}(\Gamma _0^{(1)}(4),\chi _{-4})$ as follows. We consider $$\begin{aligned} h_{15}+h_{15}|T_{\chi _{-4}}-h_{15}|U(2)V(2)=\sum _{n\equiv 1 \bmod{4}}a_{h_{15}}(n)q^n \in M_{15}(\Gamma _0^{(1)}(32),\chi _{-4}),\end{aligned}$$ where $U(l)$, $V(l)$ are the usual operators and $T_{\chi }$ is the twisting operator by the Dirichlet character $\chi $ given in Shimura [@Shim]. Namely, their action on $f=\sum _{n=0}^\infty a_f(n) q^n$ is described as $$\begin{aligned} &f|U(l)=\sum _{n=0}^\infty a_{f}(ln)q^n, \\ &f|V(l)=\sum _{n=0}^\infty a_{f}(n)q^{ln}, \\ &f|T_{\chi }=\sum _{n=0}^\infty \chi (n)a_{f}(n)q^{n}. \end{aligned}$$ We remark that we have (at least) $f|U(l)\in M_k(\Gamma _0^{(1)}(Nl),\psi )$, $f|V(l)\in M_k(\Gamma _0^{(1)}(Nl^2),\psi )$ and $f|T_{\chi }\in M_k(\Gamma _0^{(1)}(Nr^2),\psi \chi ^2)$ when $f\in M_k(\Gamma ^{(1)}_0(N),\psi )$, where $r$ is the conductor of $\chi $. Since the Sturm bound for $M_{15}(\Gamma _0^{(1)}(32),\chi _{-4})$ is $$\frac{15}{12}[\Gamma _1:\Gamma _0^{(1)}(32)]=\frac{15}{12}\cdot 32\left(1+\frac{1}{2}\right)=60,$$ our numerical experiment for $n\le 500$ is sufficient. Namely this shows that $\sum _{n\equiv 1 \bmod{4}}a_{h_{15}}(n)q^n=0$ and hence $h_{15}\in M_{15}^+(\Gamma _0^{(1)}(4),\chi _{-4})$.
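The displayed coefficients of $h_{15}$, and the vanishing of $a_{h_{15}}(n)$ for $n\equiv 1 \bmod 4$ up to the bound $n\le 500$, can be reproduced by truncated power-series arithmetic. A minimal sketch in Python (note that the weight-$2$ generator $f_2$ is supported on odd $n$, i.e. $f_2=\sum_{n\,\mathrm{odd}}\sigma_1(n)q^n$; `mul` and `power` are our own helper names):

```python
def mul(f, g):
    # truncated product of two q-expansions given as coefficient lists
    N = len(f)
    h = [0] * N
    for i, a in enumerate(f):
        if a:
            for j in range(N - i):
                h[i + j] += a * g[j]
    return h

def power(f, e):
    # truncated e-th power of a q-expansion
    h = [1] + [0] * (len(f) - 1)
    for _ in range(e):
        h = mul(h, f)
    return h

N = 501
theta = [0] * N                  # theta = 1 + 2 * sum_{n >= 1} q^{n^2}
theta[0] = 1
n = 1
while n * n < N:
    theta[n * n] = 2
    n += 1

f2 = [0] * N                     # f2 = sum over odd n of sigma_1(n) q^n
for d in range(1, N):
    for m in range(d, N, d):     # d runs over the divisors of m
        if m % 2 == 1:
            f2[m] += d

# h15 = theta^14 f2^4 - 28 theta^10 f2^5 + 192 theta^6 f2^6
h15 = [a - 28 * b + 192 * c
       for a, b, c in zip(mul(power(theta, 14), power(f2, 4)),
                          mul(power(theta, 10), power(f2, 5)),
                          mul(power(theta, 6), power(f2, 6)))]
```

One finds $h_{15}=q^4+12q^6+64q^7+36q^8+\cdots$, and `h15[n] == 0` for every $n\le 500$ with $n\equiv 1 \bmod 4$, in agreement with the expansion above.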
Therefore we can apply the isomorphism constructed by Krieg: there exists ${\rm Lift}(h_{15})\in M_{16}(U_2({\mathcal O}))$ satisfying $$\begin{aligned} a_{{\rm Lift}(h_{15})}(H) =\sum _{0<d\mid \varepsilon (H)}\frac{d^{15}}{1+|\chi _{-4}(4\det H/d^2)|} a_{h_{15}}(4\det H/d^2).\end{aligned}$$ By the definition of $h_{15}$, we see that $h_{15}\equiv f_2^4$ mod $2$ because $\theta \equiv 1$ mod $2$. This implies immediately $$\frac{1}{1+|\chi _{-4}(4\det H/d^2)|}a_{h_{15}}(4\det H/d^2)\in \mathbb{Z}$$ for each $d$. Namely ${\rm Lift}(h_{15})\in M_{16}(U_2({\mathcal O});\mathbb{Z})$ follows. By a direct calculation, we see that $$a_{I_{16}}(m,r,s,n)=a_{{\rm Lift}(h_{15})}(m,r,s,n)-56a_{H_8^2}(m,r,s,n)$$ for all $(m,r,s,n)\in \Lambda _{2}({\mathcal O})$ with $m$, $n\le 2=[16/8]$. Applying Corollary \[Cor:Na-Ta\], we obtain $$I_{16}={\rm Lift}(h_{15})-56H_{8}^2.$$ Since ${\rm Lift}(h_{15})-56H_{8}^2\in M_{16}(U_2({\mathcal O});\mathbb{Z})$, we have the assertion $I_{16}\in M_{16}(U_2({\mathcal O});\mathbb{Z})$. Similarly, if we set $$\begin{aligned} &h_{13}:=2\theta^{14}f_2^3 - 60\theta^{10}f_2^4 + 448\theta^6f_2^5\in M^+_{13}(\Gamma _0^{(1)}(4),\chi _{-4}),\\ &h_{17}:= \theta^{18} f_2^4 - 36\theta^{14}f_2^5 + 368\theta^{10}f_2^6 - 768\theta^6f_2^7\in M^+_{17}(\Gamma _0^{(1)}(4),\chi _{-4}), \end{aligned}$$ then we can prove the following equalities $$\begin{aligned} &K_{14}={\rm Lift}(h_{13}), \\ &K_{18}={\rm Lift}(h_{17})+256H_8F_{10}. \end{aligned}$$ The assertions for $K_{14}$, $K_{18}$ follow from these facts immediately. We will give the numerical data used in the proofs in Subsection \[ProofPlus\]. \[Lem3\] We have\ (1) $H_{16}\in M_{16}(U_2({\mathcal O});\mathbb{Z})$,\ (2) $6 H_4 H_{12} - E_6 F_{10} + H_4^2 H_8\equiv 0$ mod $2^3\cdot 3^2$.
\(1) By the definition of $I_{16}$, we have $$2^2\cdot 3 I_{16}=H_4H_{12}-H_{16}.$$ Since $2^2\cdot 3I_{16}\equiv 0$ mod $2^2\cdot 3$ because $I_{16}\in M_{16}(U_2({\mathcal O});{\mathbb{Z}})$, we have $H_{16}\in M_{16}(U_2({\mathcal O});{\mathbb{Z}})$. \(2) By the definition of $H_{16}$, we have $$2\cdot 3 H_{16}=E_6F_{10}-H_4^2H_8.$$ Hence we can write $$2^3\cdot 3^2 I_{16}=6H_4 H_{12}-E_6F_{10}+H_4^2H_8.$$ Since $I_{16}\in M_{16}(U_2({\mathcal O});{\mathbb{Z}})$, we have $6H_4 H_{12}-E_6F_{10}+H_4^2H_8\equiv 0$ mod $2^3\cdot 3^2$. Using the fact that $H_4\equiv 1$ mod $2^4\cdot 3$, $E_6\equiv 1$ mod $2^3\cdot 3^2$, we get $$6H_{12}-F_{10}+H_4^2H_8 \equiv 0 \bmod{2^3\cdot 3^2}.$$ From (2) in this lemma, we may write $$6H_{12}-F_{10}+H_4^2H_8=2^3\cdot 3^2 V$$ with $V\in {\mathbb{Z}}[\![\dot{\boldsymbol{q}}]\!]$. This description is another important fact for our arguments. #### Forms of weight $\boldsymbol{k}$ with $\boldsymbol{k \ge 20}$ First we remark that $I_{24}\in M_{24}(U_2({\mathcal O});{\mathbb{Z}})$ is trivial because $I_{24}=E_6K_{18}$ and $E_6\in M_{6}(U_2({\mathcal O});{\mathbb{Z}})$, $K_{18}\in M_{18}(U_2({\mathcal O});{\mathbb{Z}})$. Similarly, the integralities of $I_{36}=K_{18}^2$, $J_{36}=E_6K_{30}$, $I_{48}=K_{18}K_{30}$, $H_{60}=K_{30}^2$, $I_{60}=K_{18}K_{42}$, $H_{72}=K_{30}K_{42}$, $H_{84}=K_{42}^2$ follow from those of $E_6$, $K_{18}$, $K_{30}$, $K_{42}$. We have the integralities of all the generators constructed in Section \[Subsec:gen\].
By the definition of $H_{20}$, we can write $$H_{20}=2^{-2}\cdot 3^{-2}(F_{10}^2 - 12 H_{12} H_8 - H_4 H_8^2).$$ If we use the descriptions $$\begin{aligned} &F_{10}=6H_{12}+H_{4}^2H_8-2^3\cdot 3^2V,\\ &H_4=1+2^4\cdot 3 S, \\ &E_6=1+2^3\cdot 3^2 S+2^5\cdot 3^3 U, \end{aligned}$$ then we have $$\begin{aligned} H_{20}&= H_{12}^2 + 32 H_{12} H_8 S + 4 H_8^2 S + 768 H_{12} H_8 S^2 + 384 H_8^2 S^2 \\ &+ 12288 H_8^2 S^3 + 147456 H_8^2 S^4 + 24 H_{12} V + 4 H_8 V + 384 H_8 S V \\ &+ 9216 H_8 S^2 V + 144 V^2. \end{aligned}$$ This shows $H_{20}\in M_{20}(U_2({\mathcal O});\mathbb{Z})$. Similarly, we can prove the integrality of all the generators. In fact, we can confirm that all the generators can be written as polynomials in $H_{12}$, $H_8$, $S$, $U$, $V\in {\mathbb{Z}}[\![\dot{\boldsymbol{q}}]\!]$ with integral coefficients (see Subsection \[List\]). We have now proved the integrality of our generators: All of the modular forms $$\begin{aligned} &H_4,\ H_8,\ H_{12},\ I_{12},\ J_{12},\ H_{16},\ I_{16},\ H_{20},\ H_{24},\ I_{24},\ H_{28},\ I_{28},\\ &H_{36},\ I_{36},\ J_{36},\ H_{40},\ I_{40},\ H_{48},\ I_{48},\ H_{52},\ H_{60},\ I_{60},\ H_{72},\ H_{84}\end{aligned}$$ and also $$\begin{aligned} &K_{14},\ K_{18},\ K_{22},\ K_{26},\ K_{30},\ L_{30},\ K_{34},\ K_{38},\ K_{42}\end{aligned}$$ are elements of ${\mathbb{Z}}[\![\dot{\boldsymbol{q}}]\!]$. Proof of the structure theorem ------------------------------ We are now in a position to prove the following main result. \[Thm1\] The graded ring $A^{(4)}(U_2({\mathcal O});\mathbb{Z})$ over $\mathbb{Z}$ is generated by $24$ modular forms $$\begin{aligned} &H_4,\ H_8,\ H_{12},\ I_{12},\ J_{12},\ H_{16},\ I_{16},\ H_{20},\ H_{24},\ I_{24},\ H_{28},\ I_{28},\\ &H_{36},\ I_{36},\ J_{36},\ H_{40},\ I_{40},\ H_{48},\ I_{48},\ H_{52},\ H_{60},\ I_{60},\ H_{72},\ H_{84}.
\end{aligned}$$ In other words, for any $F\in M_k(U_2({\mathcal O});\mathbb{Z})$ with $4\mid k$, there exists a polynomial with $24$ variables having coefficients in $\mathbb{Z}$ such that $F=P(H_4,H_8,H_{12},\cdots , H_{84})$. We prove it by induction on the weight. For $k=4$, the statement is clearly true. Suppose that the statement is true for all $k$ with $k<k_0$. Let $F\in M_{k_0}(U_2({\mathcal O});\mathbb{Z})$. Then there exists a polynomial $P$ with $23$ variables having coefficients in $\mathbb{Z}$ such that $F|_{\mathbb{S}_2}=P(S_4,S_{12},T_{12},\cdots, S_{84})$ because of Corollary \[Cor:S\_gen\]. Then we have $F-P(H_4,H_{12},I_{12},\cdots ,H_{84})\in M_{k_0}(U_2({\mathcal O});\mathbb{Z})$ and $(F-P(H_4,H_{12},I_{12},\cdots , H_{84}))|_{\mathbb{S}_2}=0$. Therefore there exists $F'\in M_{k_0-8}(U_2({\mathcal O});\mathbb{Q})$ such that $F-P(H_4,H_{12},I_{12},\cdots , H_{84})=H_8F'$. Since all Fourier coefficients of $P(H_4,H_{12},I_{12},\cdots , H_{84})$ are in $\mathbb{Z}$, we have $H_8F'\in M_{k_0}(U_2({\mathcal O});\mathbb{Z})$. Since $v_p(H_8)=0$ for any prime $p$, we have $F'\in M_{k_0-8}(U_2({\mathcal O});\mathbb{Z})$ because of Lemma \[Lem:ord\]. By the induction hypothesis, there exists a polynomial $P'$ such that $F'=P'(H_4,H_8,H_{12},\cdots , H_{84})$. Therefore we have $$F=P(H_4,H_{12},I_{12},\cdots , H_{84})+H_8P'(H_4,H_8,H_{12},\cdots , H_{84}).$$ This completes the proof of Theorem \[Thm1\]. To determine the structure of $A^{(2)}(U_2({\mathcal O});{\mathbb{Z}})$ by our method, we need $K_{46}\in M_{46}(U_2({\mathcal O});{\mathbb{Z}})$ such that $K_{46}|_{\mathbb{S}_2}=X_{10}X_{36}$. However, we predict that there does not exist such a $K_{46}$, due to the leading terms of Fourier expansions. This is one of the main reasons why we restricted ourselves to the case where the weights are multiples of $4$. We remark also that we can construct $K_{46}'\in M_{46}(U_2({\mathcal O});{\mathbb{Z}})$ such that $K_{46}'|_{\mathbb{S}_2}=3X_{10} X_{36}$.
An Application -------------- As an application, we have the following Sturm bounds for any $k$ with $4\mid k$. \[Thm2\] Let $p$ be any prime and $k$ an integer with $4\mid k$. Suppose that $F\in M_{k}(U_2({\mathcal O});\mathbb{Z})$ satisfies $a_F(m,r,s,n)\equiv 0$ mod $p$ for all $m$, $n\in \mathbb{Z}$ with $$0\le m,\ n\le \left[\frac{k}{8}\right].$$ Then we have $F\equiv 0$ mod $p$. For the primes $p\ge 5$, we can prove the statement in a similar way. Hence we prove the essential case $p=2$, $3$ only. \[LemA1\] Let $p=2$ or $3$ and let $k$ be an integer with $4\mid k$. If $F\in M_k(U_2({\mathcal O});\mathbb{Z})$ satisfies $F|_{\mathbb{S}_2}\equiv 0$ mod $p$, then there exists $F'\in M_{k-8}(U_2({\mathcal O});\mathbb{Z})$ such that $F\equiv H_8 F'$ mod $p$. For $k=4$, $8$, we have as free $\mathbb{Z}$-modules $$\begin{aligned} &M_{4}(U_2({\mathcal O});\mathbb{Z})=H_4\mathbb{Z},\\ &M_{8}(U_2({\mathcal O});\mathbb{Z})=H_4^2\mathbb{Z}\oplus H_8\mathbb{Z}. \end{aligned}$$ If $k\neq 8$ and $F\not \equiv 0$ mod $p$, then $F|_{\mathbb{S}_2}\equiv 0$ mod $p$ is impossible. If $k=8$, then $F|_{\mathbb{S}_2}\equiv 0$ mod $p$ is possible only if $F\equiv cH_8$ mod $p$ for some $c\in \mathbb{Z}$. Therefore the statements for $k=4$, $8$ are true. We prove the case $k\ge 12$ with $4\mid k$. Since $F|_{\mathbb{S}_2}\equiv 0$ mod $p$, we have $\frac{1}{p} F|_{\mathbb{S}_2}\in M_k(\Gamma_2;\mathbb{Z})$. By Corollary \[Cor:S\_gen\], there exists an isobaric polynomial $P$ with coefficients in $\mathbb{Z}$ such that $\frac{1}{p}F|_{\mathbb{S}_2}=P(S_4,S_{12},\cdots ,S_{84})$. If we put $$G:=P(H_4,H_{12},\cdots ,H_{84}),$$ then we have $G\in M_{k}(U_2({\mathcal O});\mathbb{Z})$ and $(F-pG)|_{\mathbb{S}_2}=0$. By the result of Dern-Krieg [@D-K], there exists $F'\in M_{k-8}(U_2({\mathcal O});\mathbb{Q})$ such that $F-pG=H_8F'$. Since $v_p(F-pG)\ge 0$ and $v_p(H_8)=0$ for any $p\ge 2$, it follows that $F'\in M_{k-8}(U_2({\mathcal O});\mathbb{Z})$.
Then we have $F\equiv H_8F'$ mod $p$. This completes the proof of Lemma \[LemA1\]. We prove Theorem \[Thm2\]. For $k=4$, $8$, we have as free $\mathbb{Z}$-modules $$\begin{aligned} &M_{4}(U_2({\mathcal O});\mathbb{Z})=H_4\mathbb{Z},\\ &M_{8}(U_2({\mathcal O});\mathbb{Z})=H_4^2\mathbb{Z}\oplus H_8\mathbb{Z}.\end{aligned}$$ Since $H_4\equiv 1$ mod $p$ and $H_8\not \equiv c$ mod $p$ for any $c\in {\mathbb{Z}}$, the statements for $k=4$, $8$ are trivial. Let $k\ge 12$. Since $[k/8]\ge [k/10]$, we can apply the Sturm bound in Theorem \[Stbd0\] to $F|_{\mathbb{S}_2}$, and then we have $F|_{\mathbb{S}_2}\equiv 0$ mod $p$. By Lemma \[LemA1\], there exists $F'\in M_{k-8}(U_2({\mathcal O});\mathbb{Z})$ such that $F\equiv H_8F'$ mod $p$. Then $F'$ has the property that $a_{F'}(m,r,s,n)\equiv 0$ mod $p$ for any $m$, $n\in \mathbb{Z}$ with $$0\le m,\ n\le \left[\frac{k}{8}\right]-1=\left[\frac{k-8}{8}\right].$$ This is due to the explicit form of the Fourier expansion of $H_8$ (for the same reason as in [@Ki-Ta], Lemma 5.1): $$\begin{aligned} H_8&=\dot{q}_1 \dot{q}_2 (4 -2\dot{q}_{12}^{-1}- 2\dot{q}_{12} - 2\ddot{q}_{12}^{-1} \\ &+ \dot{q}_{12}^{-1}\ddot{q}_{12}^{-1} + \dot{q}_{12}\ddot{q}_{12}^{-1} - 2 \ddot{q}_{12} + \dot{q}_{12}^{-1}\ddot{q}_{12} + \dot{q}_{12}\ddot{q}_{12})+\cdots. \end{aligned}$$ Note here that $4\mid k-8$, so we can apply the above argument to $F'$. If we apply this argument repeatedly, we have $F\equiv 0$ mod $p$. This completes the proof of Theorem \[Thm2\]. Completion of the proofs by numerical data ========================================== Fourier expansions of $\boldsymbol{h_{13}}$, $\boldsymbol{h_{15}}$, $\boldsymbol{h_{17}}$ {#ProofPlus} ----------------------------------------------------------------------------------------- In the proof of Lemma \[Lem2\], we relied on numerical data, which we give here. Let $$b_k:=\frac{k}{12}[\Gamma _1:\Gamma _0^{(1)}(32)]$$ be the Sturm bounds we mentioned in the proof of Lemma \[Lem2\].
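With the standard index formula $[\Gamma_1:\Gamma_0^{(1)}(N)]=N\prod_{p\mid N}(1+1/p)$, these bounds are a one-line computation. A small sketch in Python (`index_gamma0` and `sturm_bound` are our own names):

```python
from fractions import Fraction

def index_gamma0(N):
    # [SL_2(Z) : Gamma_0(N)] = N * prod_{p | N} (1 + 1/p)
    idx = Fraction(N)
    n, p = N, 2
    while p * p <= n:
        if n % p == 0:
            idx *= 1 + Fraction(1, p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:                    # leftover prime factor of N
        idx *= 1 + Fraction(1, n)
    return int(idx)

def sturm_bound(k, N):
    # b_k = (k/12) * [Gamma_1 : Gamma_0^{(1)}(N)]
    return Fraction(k, 12) * index_gamma0(N)
```

For $N=32$ the index is $32\cdot(1+\tfrac{1}{2})=48$.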
Then we have $b_{13}=52$, $b_{15}=60$, $b_{17}=68$. Therefore the following numerical data are sufficient for our purpose. [ $$\begin{aligned} h_{13}&:=2\theta^{14}f_2^3 - 60\theta^{10}f_2^4 + 448\theta^6f_2^5\\& =2q^3 - 4 q^4 + 112 q^6 - 4 q^7 - 432 q^8 - 640 q^{10} - 594 q^{11} + 5504 q^{12} - 4320 q^{14} + 9380 q^{15} - 20288 q^{16} \\& + 46848 q^{18} - 71622 q^{19} - 16200 q^{20} - 123376 q^{22} + 331668 q^{23} + 282112 q^{24} - 65664 q^{26} - 978492 q^{27} \\& - 453376 q^{28} + 709600 q^{30} + 1749808 q^{31} - 1112832 q^{32} - 120064 q^{34} - 1329480 q^{35} + 3895356 q^{36} \\& - 2315088 q^{38} - 1756316 q^{39} - 152160 q^{40} - 2846208 q^{42} + 7579934 q^{43} - 11366784 q^{44} + 16414816 q^{46} \\& - 17552376 q^{47} + 10176512 q^{48} + 5875200 q^{50} + 33105284 q^{51} + 3775288 q^{52}+\cdots ,\end{aligned}$$ $$\begin{aligned} h_{15}&:=\theta^{14} f_2^4 -28\theta^{10} f_2^5 +192\theta^6f_2^6\\& =q^4 + 12 q^6 + 64 q^7 + 36 q^8 - 128 q^{10} - 1152 q^{11} - 936 q^{12} - 504 q^{14} + 7872 q^{15} + 8144 q^{16} + 16128 q^{18}\\& - 18816 q^{19} - 32022 q^{20} - 121100 q^{22} - 51264 q^{23} + 26976 q^{24} + 464256 q^{26} + 408960 q^{27} + 258448 q^{28} \\&- 909576 q^{30} - 577024 q^{31} - 971712 q^{32} + 355072 q^{34} - 2085120 q^{35} + 525753 q^{36} + 2238876 q^{38}\\& + 7869888 q^{39} + 4278504 q^{40} - 5027328 q^{42} - 853760 q^{43} - 9440856 q^{44} + 8767832 q^{46} - 36277632 q^{47}\\& - 1162368 q^{48} - 26012160 q^{50} + 46803840 q^{51} + 24912602 q^{52} + 40240728 q^{54} + 71676992 q^{55}\\& - 22735296 q^{56} + 47704960 q^{58} - 187329024 q^{59} + 8247408 q^{60}+\cdots , \end{aligned}$$ $$\begin{aligned} h_{17}&:= \theta^{18} f_2^4 - 36\theta^{14}f_2^5 + 368\theta^{10}f_2^6 - 768\theta^6f_2^7\\ &=q^4 - 12 q^6 - 128 q^7 - 228 q^8 - 800 q^{10} - 768 q^{11} + 1872 q^{12} + 15576 q^{14} + 36480 q^{15} + 9296 q^{16} \\& - 108864 q^{18} - 297216 q^{19} - 178110 q^{20} + 356140 q^{22} + 845952 q^{23} + 816576 q^{24} - 682656 q^{26}\\& + 1071360 q^{27} - 803744 q^{28} + 
3381480 q^{30} - 12461056 q^{31} - 5338176 q^{32} - 23163968 q^{34} + 20912640 q^{35} \\& + 16663617 q^{36} + 79051812 q^{38} + 40330368 q^{39} + 2424120 q^{40} - 99195264 q^{42}\\& - 169433600 q^{43} - 64675536 q^{44} - 142870072 q^{46} + 63431424 q^{47} - 965376 q^{48} + 629961600 q^{50} \\& + 381400320 q^{51} + 220457666 q^{52} - 671789592 q^{54} - 295596160 q^{55} + 283752576 q^{56} + 90976480 q^{58}\\& + 62678016 q^{59} - 1557183840 q^{60} - 135149088 q^{62} - 2319442560 q^{63} - 394334976 q^{64} - 99539136 q^{66} \\& + 1338126080 q^{67} + 6624813570 q^{68}+\cdots. \end{aligned}$$ ]{} Proof of integralities of the generators {#List} ---------------------------------------- In this subsection, we list the descriptions of our generators as polynomials in the variables $H_{12}$, $H_8$, $S$, $U$, $V\in {\mathbb{Z}}[\![\dot{\boldsymbol{q}}]\!]$, where $S$, $U$, $V$ are defined by $$\begin{aligned} &F_{10}=6H_{12}+H_{4}^2H_8-2^3\cdot 3^2V,\\ &H_4=1+2^4\cdot 3 S, \\ &E_6=1+2^3\cdot 3^2 S+2^5\cdot 3^3 U. \end{aligned}$$ The list below establishes the integrality of the corresponding generators, as claimed in Subsection \[Int\]. Namely, we show in the following that our generators are elements of the ring ${\mathbb{Z}}[H_{12},H_8,S,U,V]$.
$$\begin{aligned} K_{22}&=H_{12}^2 + 8 H_{12} H_8 S - 2 H_8^2 S + 384 H_{12} H_8 S^2 - 192 H_8^2 S^2 - 3072 H_8^2 S^3 + 24 H_8^2 U + 12 H_{12} V - 2 H_8 V - 96 H_8 S V, \\ H_{24}&= -2 H_{12}^2 S + H_{12} H_8 S + 96 H_{12} H_8 S^2 + 8 H_8^2 S^2 + 1536 H_{12} H_8 S^3 + 896 H_8^2 S^3 + 30720 H_8^2 S^4 + 294912 H_8^2 S^5 - 12 H_{12} H_8 U\\ & - 2 H_8^2 U - 192 H_8^2 S U - 4608 H_8^2 S^2 U + H_{12} V + 48 H_{12} S V+ 12 H_8 S V + 1152 H_8 S^2 V + 18432 H_8 S^3 V - 144 H_8 U V + 6 V^2\\& + 288 S V^2, \\ K_{26}&=2 H_{12}^2 S + H_{12} H_8 S + 96 H_{12} H_8 S^2 + 8 H_8^2 S^2 + 3072 H_{12} H_8 S^3+ 1280 H_8^2 S^3 + 61440 H_8^2 S^4 + 884736 H_8^2 S^5 + 72 H_{12}^2 U\\& + 36 H_{12} H_8 U + 4 H_8^2 U + 2304 H_{12} H_8 S U + 480 H_8^2 S U + 55296 H_{12} H_8 S^2 U + 27648 H_8^2 S^2 U + 884736 H_8^2 S^3 U + 10616832 H_8^2 S^4 U \\ &+ H_{12} V + 96 H_{12} S V + 24 H_8 S V + 2304 H_8 S^2 V+ 55296 H_8 S^3 V + 1728 H_{12} U V + 288 H_8 U V + 27648 H_8 S U V\\& + 663552 H_8 S^2 U V + 12 V^2 + 864 S V^2 + 10368 U V^2,\\ H_{28}&=-48 H_{12} H_8^2 + 16 H_{12}^2 S^2 + 8 H_{12} H_8 S^2 + H_8^2 S^2 + 640 H_{12} H_8 S^3+ 192 H_8^2 S^3 + 12288 H_{12} H_8 S^4 + 12288 H_8^2 S^4\\ &+ 294912 H_8^2 S^5 + 2359296 H_8^2 S^6- 12 H_{12}^2 U - 4 H_{12} H_8 U - 288 H_{12} H_8 S U - 24 H_8^2 S U - 4608 H_{12} H_8 S^2 U\\ &- 2304 H_8^2 S^2 U - 36864 H_8^2 S^3 U + 144 H_8^2 U^2 + 4 H_{12} S V + 2 H_8 S V +384 H_{12} S^2 V + 288 H_8 S^2 V + 12288 H_8 S^3 V + 147456 H_8 S^4 V \\ &- 144 H_{12} U V- 24 H_8 U V - 1152 H_8 S U V + V^2 + 96 S V^2 + 2304 S^2 V^2,\\ I_{28}&=-2 H_{12}^2 S - H_{12} H_8 S - 192 H_{12}^2 S^2 - 192 H_{12} H_8 S^2 - 16 H_8^2 S^2- 9984 H_{12} H_8 S^3 - 2560 H_8^2 S^3 - 147456 H_{12} H_8 S^4 - 147456 H_8^2 S^4 \\ &- 3538944 H_8^2 S^5 - 28311552 H_8^2 S^6 + 72 H_{12}^2 U + 36 H_{12} H_8 U + 4 H_8^2 U + 2304 H_{12} H_8 S U + 576 H_8^2 S U + 27648 H_{12} H_8 S^2 U \\& + 27648 H_8^2 S^2 U + 442368 H_8^2 S^3 U - H_{12} V - 120 H_{12} S V - 24 H_8 S V - 4608 H_{12} S^2 V - 3456 H_8 S^2 V - 147456 
H_8 S^3 V\\ &- 1769472 H_8 S^4 V + 864 H_{12} U V+ 288 H_8 U V + 13824 H_8 S U V - 12 V^2 - 1152 S V^2 - 27648 S^2 V^2,\end{aligned}$$ $$\begin{aligned} K_{30}&=288 H_{12}^2 H_8 + 48 H_{12} H_8^2 + 4608 H_{12} H_8^2 S - 8 H_{12}^2 S^2 + 2 H_{12} H_8 S^2+ H_8^2 S^2 + 110592 H_{12} H_8^2 S^2 + 256 H_{12} H_8 S^3 + 192 H_8^2 S^3 \\& + 6144 H_{12} H_8 S^4+ 13056 H_8^2 S^4 + 368640 H_8^2 S^5 + 3538944 H_8^2 S^6 + 12 H_{12}^2 U + 2 H_{12} H_8 U + 288 H_{12}^2 S U + 240 H_{12} H_8 S U\\& + 13824 H_{12} H_8 S^2 U + 1152 H_8^2 S^2 U + 221184 H_{12} H_8 S^3 U+ 129024 H_8^2 S^3 U + 4423680 H_8^2 S^4 U + 42467328 H_8^2 S^5 U- 864 H_{12} H_8 U^2\\ & - 144 H_8^2 U^2- 13824 H_8^2 S U^2- 331776 H_8^2 S^2 U^2 + 3456 H_{12} H_8 V + 4 H_{12} S V + 2 H_8 S V + 192 H_{12} S^2 V+ 312 H_8 S^2 V\\& + 15360 H_8 S^3 V + 221184 H_8 S^4 V + 144 H_{12} U V+ 6912 H_{12} S U V+ 1728 H_8 S U V + 165888 H_8 S^2 U V + 2654208 H_8 S^3 U V\\& - 10368 H_8 U^2 V + V^2 + 120 S V^2+ 3456 S^2 V^2 + 864 U V^2 + 41472 S U V^2,\\ L_{30}&=H_{12}^3 + H_{12}^2 H_8 + 48 H_{12}^2 H_8 S + 16 H_{12} H_8^2 S - H_8^3 S + 1152 H_{12}^2 H_8 S^2 + 1344 H_{12} H_8^2 S^2 - 32 H_8^3 S^2 + 36864 H_{12} H_8^2 S^3+ 7168 H_8^3 S^3\\& + 442368 H_{12} H_8^2 S^4 + 368640 H_8^3 S^4 + 7077888 H_8^3 S^5 + 56623104 H_8^3 S^6 + 20 H_8^3 U + 36 H_{12}^2 V + 18 H_{12} H_8 V - H_8^2 V + 1152 H_{12} H_8 S V \\ &+ 96 H_8^2 S V + 27648 H_{12} H_8 S^2 V + 13824 H_8^2 S^2 V + 442368 H_8^2 S^3 V + 5308416 H_8^2 S^4 V + 432 H_{12} V^2 + 72 H_8 V^2 + 6912 H_8 S V^2 \\& + 165888 H_8 S^2 V^2 + 1728 V^3,\\ K_{34}&=-2 H_{12}^3 S - H_{12}^2 H_8 S - 128 H_{12}^2 H_8 S^2 - 24 H_{12} H_8^2 S^2 - 2304 H_{12}^2 H_8 S^3- 2560 H_{12} H_8^2 S^3 - 64 H_8^3 S^3 - 92160 H_{12} H_8^2 S^4\\& - 12288 H_8^3 S^4 - 884736 H_{12} H_8^2 S^5 - 737280 H_8^3 S^5 - 16515072 H_8^3 S^6 - 113246208 H_8^3 S^7 + 24 H_{12}^2 H_8 U + 10 H_{12} H_8^2 U+ H_8^3 U\\& + 768 H_{12} H_8^2 S U + 144 H_8^3 S U + 18432 H_{12} H_8^2 S^2 U + 9216 H_8^3 S^2 U + 294912 H_8^3 S^3 U + 3538944 
H_8^3 S^4 U - H_{12}^2 V - 72 H_{12}^2 S V\\& - 32 H_{12} H_8 S V + 2 H_8^2 S V - 3456 H_{12} H_8 S^2 V - 96 H_8^2 S^2 V - 55296 H_{12} H_8 S^3 V - 27648 H_8^2 S^3 V - 1105920 H_8^2 S^4 V - 10616832 H_8^2 S^5 V\\& + 576 H_{12} H_8 U V + 96 H_8^2 U V + 9216 H_8^2 S U V + 221184 H_8^2 S^2 U V - 18 H_{12} V^2 + H_8 V^2 - 864 H_{12} S V^2 - 144 H_8 S V^2 - 20736 H_8 S^2 V^2\\& - 331776 H_8 S^3 V^2 + 3456 H_8 U V^2\\& - 72 V^3 - 3456 S V^3,\\ H_{36}&=-37 H_{12} H_8^3 + 16 H_{12}^2 H_8 S^2 + 8 H_{12} H_8^2 S^2 + H_8^3 S^2 + 128 H_{12}^2 H_8 S^3+ 704 H_{12} H_8^2 S^3 + 192 H_8^3 S^3 + 17408 H_{12} H_8^2 S^4 + 12800 H_8^3 S^4\\& + 98304 H_{12} H_8^2 S^5 + 352256 H_8^3 S^5 + 4194304 H_8^3 S^6 + 18874368 H_8^3 S^7 + 4 H_{12}^3 U - 8 H_{12}^2 H_8 U - 3 H_{12} H_8^2 U + 192 H_{12}^2 H_8 S U \\& - 176 H_{12} H_8^2 S U - 16 H_8^3 S U+ 4608 H_{12}^2 H_8 S^2 U + 768 H_{12} H_8^2 S^2 U - 1280 H_8^3 S^2 U + 147456 H_{12} H_8^2 S^3 U + 10240 H_8^3 S^3 U \\& + 1769472 H_{12} H_8^2 S^4 U + 1474560 H_8^3 S^4 U + 28311552 H_8^3 S^5 U + 226492416 H_8^3 S^6 U + 112 H_8^3 U^2 + 4 H_{12}^2 S V + 6 H_{12} H_8 S V+ 2 H_8^2 S V\\& + 576 H_{12} H_8 S^2 V + 304 H_8^2 S^2 V + 6144 H_{12} H_8 S^3 V + 14848 H_8^2 S^3 V + 270336 H_8^2 S^4 V + 1769472 H_8^2 S^5 V + 144 H_{12}^2 U V - 72 H_{12} H_8 U V \\& - 16 H_8^2 U V + 4608 H_{12} H_8 S U V- 192 H_8^2 S U V + 110592 H_{12} H_8 S^2 U V + 55296 H_8^2 S^2 U V + 1769472 H_8^2 S^3 U V + 21233664 H_8^2 S^4 U V \\& + H_{12} V^2 + H_8 V^2 + 96 H_{12} S V^2 + 120 H_8 S V^2 + 4608 H_8 S^2 V^2 + 55296 H_8 S^3 V^2 + 1728 H_{12} U V^2 + 288 H_8 U V^2 + 27648 H_8 S U V^2\\& + 663552 H_8 S^2 U V^2 + 8 V^3 + 576 S V^3 + 6912 U V^3,\\ K_{38}&=-96 H_{12}^2 H_8^2 - 16 H_{12} H_8^3 - 1536 H_{12} H_8^3 S + 16 H_{12}^3 S^2 + 12 H_{12}^2 H_8 S^2 + 2 H_{12} H_8^2 S^2 - 36864 H_{12} H_8^3 S^2 + 896 H_{12}^2 H_8 S^3\\& + 384 H_{12} H_8^2 S^3+ 16 H_8^3 S^3 + 18432 H_{12}^2 H_8 S^4 + 26624 H_{12} H_8^2 S^4 + 3328 H_8^3 S^4 + 737280 H_{12} H_8^2 S^5 + 258048 H_8^3 S^5 \\& + 
7077888 H_{12} H_8^2 S^6 + 9240576 H_8^3 S^6 + 150994944 H_8^3 S^7 + 905969664 H_8^3 S^8 - 12 H_{12}^3 U - 8 H_{12}^2 H_8 U - H_{12} H_8^2 U - 528 H_{12}^2 H_8 S U\\& - 176 H_{12} H_8^2 S U - 4 H_8^3 S U - 9216 H_{12}^2 H_8 S^2 U - 11520 H_{12} H_8^2 S^2 U - 960 H_8^3 S^2 U - 258048 H_{12} H_8^2 S^3 U - 73728 H_8^3 S^3 U \\& - 1769472 H_{12} H_8^2 S^4 U - 2211840 H_8^3 S^4 U - 21233664 H_8^3 S^5 U + 288 H_{12} H_8^2 U^2 + 48 H_8^3 U^2 + 4608 H_8^3 S U^2 + 110592 H_8^3 S^2 U^2\\& - 1152 H_{12} H_8^2 V + 4 H_{12}^2 S V + 2 H_{12} H_8 S V + 576 H_{12}^2 S^2 V + 480 H_{12} H_8 S^2 V + 40 H_8^2 S^2 V + 27648 H_{12} H_8 S^3 V + 7168 H_8^2 S^3 V \\& + 442368 H_{12} H_8 S^4 V + 442368 H_8^2 S^4 V + 10616832 H_8^2 S^5 V + 84934656 H_8^2 S^6 V - 288 H_{12}^2 U V \ - 120 H_{12} H_8 U V - 4 H_8^2 U V\\& - 8064 H_{12} H_8 S U V - 1152 H_8^2 S U V - 110592 H_{12} H_8 S^2 U V - 82944 H_8^2 S^2 U V - 1327104 H_8^2 S^3 U V + 3456 H_8^2 U^2 V \\& + H_{12} V^2 + 144 H_{12} S V^2 + 36 H_8 S V^2 + 6912 H_{12} S^2 V^2 + 5184 H_8 S^2 V^2 + 221184 H_8 S^3 V^2 + 2654208 H_8 S^4 V^2 - 1728 H_{12} U V^2 \\& - 432 H_8 U V^2 - 20736 H_8 S U V^2 + 12 V^3 + 1152 S V^3 + 27648 S^2 V^3,\\ H_{40}&=-24 H_{12}^2 H_8^2 - H_{12} H_8^3 - 42 H_{12} H_8^3 S + 3 H_8^4 S + 2 H_{12}^3 S^2 + H_{12}^2 H_8 S^2 + 288 H_8^4 S^2 + 64 H_{12}^2 H_8 S^3 + 8 H_{12} H_8^2 S^3 + 6912 H_8^4 S^3 \\ &+ 768 H_{12}^2 H_8 S^4 + 512 H_{12} H_8^2 S^4 - 64 H_8^3 S^4 - 6144 H_{12} H_8^2 S^5 - 10240 H_8^3 S^5- 294912 H_{12} H_8^2 S^6 - 573440 H_8^3 S^6 - 13369344 H_8^3 S^7\\& - 113246208 H_8^3 S^8 - 2 H_{12}^3 U - H_{12}^2 H_8 U + 216 H_{12} H_8^3 U + 36 H_8^4 U - 24 H_{12}^3 S U - 84 H_{12}^2 H_8 S U - 14 H_{12} H_8^2 S U - H_8^3 S U + 3456 H_8^4 S U \\& - 2304 H_{12}^2 H_8 S^2 U - 1632 H_{12} H_8^2 S^2 U - 176 H_8^3 S^2 U + 82944 H_8^4 S^2 U - 27648 H_{12}^2 H_8 S^3 U - 55296 H_{12} H_8^2 S^3 U - 12032 H_8^3 S^3 U\\& - 1105920 H_{12} H_8^2 S^4 U - 466944 H_8^3 S^4 U - 10616832 H_{12} H_8^2 S^5 U - 12386304 H_8^3 S^5 U - 
198180864 H_8^3 S^6 U - 1358954496 H_8^3 S^7 U\\ & + 72 H_{12} H_8^2 U^2 + 4 H_8^3 U^2 + 192 H_8^3 S U^2 + 3 H_8^3 V + 216 H_8^3 S V + 24 H_{12}^2 S^2 V - 2 H_8^2 S^2 V - 384 H_{12} H_8 S^3 V - 416 H_8^2 S^3 V \\ &- 18432 H_{12} H_8 S^4 V - 30720 H_8^2 S^4 V - 958464 H_8^2 S^5 V - 10616832 H_8^2 S^6 V - 36 H_{12}^2 U V - 12 H_{12} H_8 U V - H_8^2 U V + 2592 H_8^3 U V\\& - 864 H_{12}^2 S U V - 1152 H_{12} H_8 S U V - 168 H_8^2 S U V - 41472 H_{12} H_8 S^2 U V - 12672 H_8^2 S^2 U V - 663552 H_{12} H_8 S^3 U V - 552960 H_8^2 S^3 U V\\& - 13271040 H_8^2 S^4 U V - 127401984 H_8^2 S^5 U V - 6 H_{12} S V^2 - 3 H_8 S V^2 - 288 H_{12} S^2 V^2 - 432 H_8 S^2 V^2 - 20736 H_8 S^3 V^2 \\& - 331776 H_8 S^4 V^2 - 216 H_{12} U V^2 - 36 H_8 U V^2 - 10368 H_{12} S U V^2 - 5184 H_8 S U V^2 - 248832 H_8 S^2 U V^2 - 3981312 H_8 S^3 U V^2 - V^3 \\ &- 120 S V^3 - 3456 S^2 V^3 - 864 U V^3 - 41472 S U V^3,\end{aligned}$$ $$\begin{aligned} I_{40}&=288 H_{12}^3 H_8 + 96 H_{12}^2 H_8^2 + 16 H_{12} H_8^3 + 9216 H_{12}^2 H_8^2 S + 1920 H_{12} H_8^3 S - 8 H_{12}^3 S^2 - 2 H_{12}^2 H_8 S^2+ 221184 H_{12}^2 H_8^2 S^2\\& + 110592 H_{12} H_8^3 S^2 + 96 H_{12} H_8^2 S^3 + 8 H_8^3 S^3 + 3538944 H_{12} H_8^3 S^3 + 3072 H_{12}^2 H_8 S^4 + 11776 H_{12} H_8^2 S^4 + 2048 H_8^3 S^4 \\& + 42467328 H_{12} H_8^3 S^4 + 466944 H_{12} H_8^2 S^5 + 196608 H_8^3 S^5 + 5898240 H_{12} H_8^2 S^6 + 8749056 H_8^3 S^6 + 179306496 H_8^3 S^7\\& + 1358954496 H_8^3 S^8 + 12 H_{12}^3 U + 6 H_{12}^2 H_8 U + H_{12} H_8^2 U + 288 H_{12}^3 S U + 576 H_{12}^2 H_8 S U + 152 H_{12} H_8^2 S U + 4 H_8^3 S U\\& + 23040 H_{12}^2 H_8 S^2 U + 11136 H_{12} H_8^2 S^2 U + 768 H_8^3 S^2 U + 331776 H_{12}^2 H_8 S^3 U + 516096 H_{12} H_8^2 S^3 U + 64512 H_8^3 S^3 U \\ &+ 13271040 H_{12} H_8^2 S^4 U + 3538944 H_8^3 S^4 U + 127401984 H_{12} H_8^2 S^5 U + 127401984 H_8^3 S^5 U + 2378170368 H_8^3 S^6 U\\& + 16307453952 H_8^3 S^7 U - 864 H_{12}^2 H_8 U^2 - 288 H_{12} H_8^2 U^2 - 48 H_8^3 U^2 - 27648 H_{12} H_8^2 S U^2 - 5760 H_8^3 S U^2 - 663552 
H_{12} H_8^2 S^2 U^2 \\& - 331776 H_8^3 S^2 U^2 - 10616832 H_8^3 S^3 U^2 - 127401984 H_8^3 S^4 U^2 + 6912 H_{12}^2 H_8 V + 1152 H_{12} H_8^2 V + 4 H_{12}^2 S V + 2 H_{12} H_8 S V \\:& + 110592 H_{12} H_8^2 S V + 96 H_{12}^2 S^2 V + 336 H_{12} H_8 S^2 V + 32 H_8^2 S^2 V + 2654208 H_{12} H_8^2 S^2 V + 19968 H_{12} H_8 S^3 V + 6272 H_8^2 S^3 V \\& + 368640 H_{12} H_8 S^4 V + 436224 H_8^2 S^4 V + 12681216 H_8^2 S^5 V + 127401984 H_8^2 S^6 V + 288 H_{12}^2 U V + 72 H_{12} H_8 U V + 4 H_8^2 U V\\& + 10368 H_{12}^2 S U V + 9216 H_{12} H_8 S U V + 672 H_8^2 S U V + 497664 H_{12} H_8 S^2 U V + 78336 H_8^2 S^2 U V + 7962624 H_{12} H_8 S^3 U V \\& + 5308416 H_8^2 S^3 U V + 159252480 H_8^2 S^4 U V + 1528823808 H_8^2 S^5 U V - 20736 H_{12} H_8 U^2 V - 3456 H_8^2 U^2 V \\& - 331776 H_8^2 S U^2 V - 7962624 H_8^2 S^2 U^2 V + H_{12} V^2 + 41472 H_{12} H_8 V^2 + 168 H_{12} S V^2 + 36 H_8 S V^2 + 5760 H_{12} S^2 V^2\\& + 5472 H_8 S^2 V^2 + 267264 H_8 S^3 V^2 + 3981312 H_8 S^4 V^2 + 2592 H_{12} U V^2 + 144 H_8 U V^2 + 124416 H_{12} S U V^2 + 41472 H_8 S U V^2\\& + 2985984 H_8 S^2 U V^2 + 47775744 H_8 S^3 U V^2 - 124416 H_8 U^2 V^2 + 12 V^3 + 1440 S V^3 + 41472 S^2 V^3 + 10368 U V^3 + 497664 S U V^3,\\ K_{42}&=-48 H_{12}^3 H_8 + 8 H_{12}^2 H_8^2 + 192 H_{12} H_8^3 S - 2 H_{12}^3 S^2 - H_{12}^2 H_8 S^2 - 18432 H_{12}^2 H_8^2 S^2 + 18432 H_{12} H_8^3 S^2 - 64 H_{12}^3 S^3 - 112 H_{12}^2 H_8 S^3 \\& - 16 H_{12} H_8^2 S^3 + 294912 H_{12} H_8^3 S^3 - 4608 H_{12}^2 H_8 S^4 - 2560 H_{12} H_8^2 S^4 - 128 H_8^3 S^4 - 73728 H_{12}^2 H_8 S^5 - 141312 H_{12} H_8^2 S^5 \\& - 24576 H_8^3 S^5 - 3244032 H_{12} H_8^2 S^6 - 1671168 H_8^3 S^6 - 28311552 H_{12} H_8^2 S^7 - 49545216 H_8^3 S^7 - 679477248 H_8^3 S^8 - 3623878656 H_8^3 S^9 \\& + 2 H_{12}^3 U + H_{12}^2 H_8 U - 2304 H_{12} H_8^3 U + 72 H_{12}^3 S U + 108 H_{12}^2 H_8 S U + 10 H_{12} H_8^2 S U - H_8^3 S U + 4032 H_{12}^2 H_8 S^2 U + 1632 H_{12} H_8^2 S^2 U\\& - 144 H_8^3 S^2 U + 55296 H_{12}^2 H_8 S^3 U + 82944 H_{12} H_8^2 S^3 U - 2304 H_8^3 
S^3 U + 1548288 H_{12} H_8^2 S^4 U + 331776 H_8^3 S^4 U + 10616832 H_{12} H_8^2 S^5 U\\& + 10616832 H_8^3 S^5 U + 84934656 H_8^3 S^6 U - 72 H_{12} H_8^2 U^2 + 12 H_8^3 U^2 - 3456 H_{12} H_8^2 S U^2 - 82944 H_8^3 S^2 U^2 - 1327104 H_8^3 S^3 U^2\\& + 6912 H_8^3 U^3 - 576 H_{12}^2 H_8 V + 192 H_{12} H_8^2 V + 9216 H_{12} H_8^2 S V - 48 H_{12}^2 S^2 V - 24 H_{12} H_8 S^2 V - 2 H_8^2 S^2 V - 2304 H_{12}^2 S^3 V\\& - 3072 H_{12} H_8 S^3 V - 608 H_8^2 S^3 V - 129024 H_{12} H_8 S^4 V - 61440 H_8^2 S^4 V - 1769472 H_{12} H_8 S^5 V - 2654208 H_8^2 S^5 V - 49545216 H_8^2 S^6 V\\& - 339738624 H_8^2 S^7 V + 36 H_{12}^2 U V + 12 H_{12} H_8 U V - H_8^2 U V + 1728 H_{12}^2 S U V + 1440 H_{12} H_8 S U V - 48 H_8^2 S U V + 55296 H_{12} H_8 S^2 U V\\& + 6912 H_8^2 S^2 U V + 663552 H_{12} H_8 S^3 U V + 442368 H_8^2 S^3 U V + 5308416 H_8^2 S^4 U V - 864 H_8^2 U^2 V - 41472 H_8^2 S U^2 V - 6 H_{12} S V^2 \\& - 3 H_8 S V^2 - 864 H_{12} S^2 V^2 - 576 H_8 S^2 V^2 - 27648 H_{12} S^3 V^2 - 39168 H_8 S^3 V^2 - 1105920 H_8 S^4 V^2 - 10616832 H_8 S^5 V^2\\& + 216 H_{12} U V^2 + 36 H_8 U V^2 + 10368 H_{12} S U V^2 + 3456 H_8 S U V^2 + 82944 H_8 S^2 U V^2 - V^3 - 144 S V^3 - 6912 S^2 V^3 - 110592 S^3 V^3,\\ H_{48}&=-162 H_{12}^3 H_8^2 - 63 H_{12}^2 H_8^3 - 4 H_{12} H_8^4 - 5172 H_{12}^2 H_8^3 S - 834 H_{12} H_8^4 S - H_{12}^4 S^2 + 3 H_{12}^3 H_8 S^2 + H_{12}^2 H_8^2 S^2\\& - 124416 H_{12}^2 H_8^3 S^2 - 61632 H_{12} H_8^4 S^2 + 48 H_8^5 S^2 - 64 H_{12}^3 H_8 S^3 + 40 H_{12}^2 H_8^2 S^3 - 12 H_{12} H_8^3 S^3 - 3 H_8^4 S^3 - 1981440 H_{12} H_8^4 S^3\\& + 5376 H_8^5 S^3 - 1536 H_{12}^3 H_8 S^4 - 1152 H_{12}^2 H_8^2 S^4 - 1792 H_{12} H_8^3 S^4 - 624 H_8^4 S^4 - 23887872 H_{12} H_8^4 S^4 + 184320 H_8^5 S^4 - 79872 H_{12}^2 H_8^2 S^5 \\& - 99328 H_{12} H_8^3 S^5 - 51712 H_8^4 S^5 + 1769472 H_8^5 S^5 - 884736 H_{12}^2 H_8^2 S^6 - 2441216 H_{12} H_8^3 S^6 - 2170880 H_8^4 S^6 - 33030144 H_{12} H_8^3 S^7\\& - 48758784 H_8^4 S^7 - 226492416 H_{12} H_8^3 S^8 - 594542592 H_8^4 S^8 - 4529848320 H_8^4 S^9 -
21743271936 H_8^4 S^{10} + H_{12}^4 U - 3 H_{12}^3 H_8 U\\& - H_{12}^2 H_8^2 U - 72 H_{12} H_8^4 U - 12 H_8^5 U + 12 H_{12}^3 H_8 S U - 78 H_{12}^2 H_8^2 S U - H_{12} H_8^3 S U + H_8^4 S U- 1152 H_8^5 S U + 1152 H_{12}^3 H_8 S^2 U \\& - 2496 H_{12}^2 H_8^2 S^2 U- 464 H_{12} H_8^3 S^2 U + 160 H_8^4 S^2 U - 27648 H_8^5 S^2 U - 4608 H_{12}^2 H_8^2 S^3 U - 38912 H_{12} H_8^3 S^3 U + 7552 H_8^4 S^3 U \\& + 442368 H_{12}^2 H_8^2 S^4 U - 1290240 H_{12} H_8^3 S^4 U - 24576 H_8^4 S^4 U - 8847360 H_{12} H_8^3 S^5 U - 11501568 H_8^4 S^5 U + 56623104 H_{12} H_8^3 S^6 U\\& - 297271296 H_8^4 S^6 U - 2038431744 H_8^4 S^7 U + 288 H_{12}^2 H_8^2 U^2 + 124 H_{12} H_8^3 U^2+ 7 H_8^4 U^2 + 9216 H_{12} H_8^3 S U^2 + 1488 H_8^4 S U^2\\& + 221184 H_{12} H_8^3 S^2 U^2 + 110592 H_8^4 S^2 U^2+ 3538944 H_8^4 S^3 U^2 + 42467328 H_8^4 S^4 U^2 - 3888 H_{12}^2 H_8^2 V - 642 H_{12} H_8^3 V\\& - 61920 H_{12} H_8^3 S V+ 72 H_8^4 S V - 48 H_{12}^3 S^2 V + 12 H_{12}^2 H_8 S^2 V - 24 H_{12} H_8^2 S^2 V - 7 H_8^3 S^2 V - 1492992 H_{12} H_8^3 S^2 V \\& + 6912 H_8^4 S^2 V - 2688 H_{12}^2 H_8 S^3 V- 3072 H_{12} H_8^2 S^3 V - 1360 H_8^3 S^3 V + 110592 H_8^4 S^3 V - 55296 H_{12}^2 H_8 S^4 V - 116736 H_{12} H_8^2 S^4 V \\& - 96768 H_8^3 S^4 V - 2211840 H_{12} H_8^2 S^5 V - 3133440 H_8^3 S^5 V - 21233664 H_{12} H_8^2 S^6 V - 48955392 H_8^3 S^6 V\\& - 452984832 H_8^3 S^7 V - 2717908992 H_8^3 S^8 V + 36 H_{12}^3 U V - 48 H_{12}^2 H_8 U V - 3 H_{12} H_8^2 U V + H_8^3 U V - 864 H_8^4 U V - 144 H_{12}^2 H_8 S U V \\& - 480 H_{12} H_8^2 S U V + 180 H_8^3 S U V + 27648 H_{12}^2 H_8 S^2 U V - 48384 H_{12} H_8^2 S^2 U V + 5184 H_8^3 S^2 U V - 552960 H_{12} H_8^2 S^3 U V \\& - 387072 H_8^3 S^3 U V + 5308416 H_{12} H_8^2 S^4 U V - 19906560 H_8^3 S^4 U V - 191102976 H_8^3 S^5 U V + 6912 H_{12} H_8^2 U^2 V + 1152 H_8^3 U^2 V\\& + 110592 H_8^3 S U^2 V + 2654208 H_8^3 S^2 U^2 V - 23328 H_{12} H_8^2 V^2 + 36 H_8^3 V^2 - 6 H_{12}^2 S V^2 - 15 H_{12} H_8 S V^2 - 6 H_8^2 S V^2 + 1728 H_8^3 S V^2\\& - 864 H_{12}^2 S^2 V^2 - 1296 
H_{12} H_8 S^2 V^2 - 1032 H_8^2 S^2 V^2 - 41472 H_{12} H_8 S^3 V^2 - 59136 H_8^2 S^3 V^2 - 663552 H_{12} H_8 S^4 V^2\\& - 1327104 H_8^2 S^4 V^2 - 15925248 H_8^2 S^5 V^2 - 127401984 H_8^2 S^6 V^2 + 432 H_{12}^2 U V^2 - 252 H_{12} H_8 U V^2 + 42 H_8^2 U V^2 \\& - 8640 H_{12} H_8 S U V^2 - 864 H_8^2 S U V^2 + 165888 H_{12} H_8 S^2 U V^2 - 373248 H_8^2 S^2 U V^2 - 5971968 H_8^2 S^3 U V^2 + 41472 H_8^2 U^2 V^2 \\& - H_{12} V^3 - 2 H_8 V^3 - 144 H_{12} S V^3 - 276 H_8 S V^3 - 6912 H_{12} S^2 V^3 - 12096 H_8 S^2 V^3 - 221184 H_8 S^3 V^3 - 2654208 H_8 S^4 V^3 \\& + 1728 H_{12} U V^3 - 1296 H_8 U V^3 - 62208 H_8 S U V^3 - 9 V^4 - 864 S V^4 - 20736 S^2 V^4\end{aligned}$$ $$\begin{aligned} K_{52}&=-876 H_{12}^4 H_8 - 124 H_{12}^3 H_8^2 + H_{12}^2 H_8^3 - 21504 H_{12}^3 H_8^2 S + 384 H_{12}^2 H_8^3 S + 288 H_{12} H_8^4 S \\& - 2 H_{12}^4 S^2 - 7 H_{12}^3 H_8 S^2 - H_{12}^2 H_8^2 S^2 - 672768 H_{12}^3 H_8^2 S^2- 23040 H_{12}^2 H_8^3 S^2 \\& + 53760 H_{12} H_8^4 S^2 - 64 H_{12}^4 S^3 - 512 H_{12}^3 H_8 S^3- 240 H_{12}^2 H_8^2 S^3 - 5750784 H_{12}^2 H_8^3 S^3\\& + 2 H_8^4 S^3 + 3588096 H_{12} H_8^4 S^3 - 6400 H_{12}^3 H_8 S^4 - 18432 H_{12}^2 H_8^2 S^4 - 704 H_{12} H_8^3 S^4 \\& - 129171456 H_{12}^2 H_8^3 S^4 + 480 H_8^4 S^4 + 100270080 H_{12} H_8^4 S^4 - 98304 H_{12}^3 H_8 S^5 - 538624 H_{12}^2 H_8^2 S^5 \\& - 131072 H_{12} H_8^3 S^5 + 43008 H_8^4 S^5+ 962592768 H_{12} H_8^4 S^5 - 6193152 H_{12}^2 H_8^2 S^6 - 8241152 H_{12} H_8^3 S^6\\& + 1630208 H_8^4 S^6 - 56623104 H_{12}^2 H_8^2 S^7 - 208404480 H_{12} H_8^3 S^7 + 10616832 H_8^4 S^7 \\& - 2378170368 H_{12} H_8^3 S^8 - 1019215872 H_8^4 S^8- 14495514624 H_{12} H_8^3 S^9 - 29595009024 H_8^4 S^9 \\& - 318901321728 H_8^4 S^{10} - 1391569403904 H_8^4 S^{11} + 2 H_{12}^4 U + 7 H_{12}^3 H_8 U + H_{12}^2 H_8^2 U\\& - 18432 H_{12}^2 H_8^3 U - 3072 H_{12} H_8^4 U + 72 H_{12}^4 S U + 556 H_{12}^3 H_8 S U + 242 H_{12}^2 H_8^2 S U - H_{12} H_8^3 S U \\& - 2 H_8^4 S U - 294912 H_{12} H_8^4 S U + 6144 H_{12}^3 H_8 S^2 U + 18784 H_{12}^2 H_8^2 
S^2 U + 272 H_{12} H_8^3 S^2 U \\& - 528 H_8^4 S^2 U - 7077888 H_{12} H_8^4 S^2 U + 82944 H_{12}^3 H_8 S^3 U + 516096 H_{12}^2 H_8^2 S^3 U + 89344 H_{12} H_8^3 S^3 U\\& - 52480 H_8^4 S^3 U + 4202496 H_{12}^2 H_8^2 S^4 U + 6316032 H_{12} H_8^3 S^4 U - 2383872 H_8^4 S^4 U \\& + 31850496 H_{12}^2 H_8^2 S^5 U + 136249344 H_{12} H_8^3 S^5 U - 44236800 H_8^4 S^5 U + 934281216 H_{12} H_8^3 S^6 U\\& - 9437184 H_8^4 S^6 U + 4076863488 H_{12} H_8^3 S^7 U + 7247757312 H_8^4 S^7 U + 43486543872 H_8^4 S^8 U\\& + 2304 H_{12}^3 H_8 U^2 + 216 H_{12}^2 H_8^2 U^2 +100 H_{12} H_8^3 U^2 + 20 H_8^4 U^2 + 51840 H_{12}^2 H_8^2 S U^2\\& + 4032 H_{12} H_8^3 S U^2 + 2400 H_8^4 S U^2 + 1769472 H_{12}^2 H_8^2 S^2 U^2 - 165888 H_{12} H_8^3 S^2 U^2 + 4608 H_8^4 S^2 U^2 \\& + 11501568 H_{12} H_8^3 S^3 U^2 - 8699904 H_8^4 S^3 U^2 + 339738624 H_{12} H_8^3 S^4 U^2 - 336199680 H_8^4 S^4 U^2\\& - 3227516928 H_8^4 S^5 U^2 + 55296 H_{12} H_8^3 U^3 + 9216 H_8^4 U^3 + 884736 H_8^4 S U^3 \\& + 21233664 H_8^4 S^2 U^3 - 21024 H_{12}^3 H_8 V - 240 H_{12}^2 H_8^2 V + 288 H_{12} H_8^3 V - 179712 H_{12}^2 H_8^2 S V \\& + 59904 H_{12} H_8^3 S V - 72 H_{12}^3 S^2 V - 176 H_{12}^2 H_8 S^2 V - 6 H_{12} H_8^2 S^2 V- 8073216 H_{12}^2 H_8^2 S^2 V \\& + 2 H_8^3 S^2 V + 3760128 H_{12} H_8^3 S^2 V - 3072 H_{12}^3 S^3 V - 13952 H_{12}^2 H_8 S^3 V - 2656 H_{12} H_8^2 S^3 V \\& + 480 H_8^3 S^3 V + 60162048 H_{12} H_8^3 S^3 V - 239616 H_{12}^2 H_8 S^4 V \\& - 280576 H_{12} H_8^2 S^4 V + 36352 H_8^3 S^4 V - 3538944 H_{12}^2 H_8 S^5 V - 10887168 H_{12} H_8^2 S^5 V + 417792 H_8^3 S^5 V\\& - 166330368 H_{12} H_8^2 S^6 V - 59768832 H_8^3 S^6 V - 1358954496 H_{12} H_8^2 S^7 V - 2378170368 H_8^3 S^7 V \\& - 32614907904 H_8^3 S^8 V - 173946175488 H_8^3 S^9 V + 60 H_{12}^3 U V + 164 H_{12}^2 H_8 U V + 3 H_{12} H_8^2 U V \\& - 2 H_8^3 U V - 221184 H_{12} H_8^3 U V+ 2592 H_{12}^3 S U V + 13248 H_{12}^2 H_8 S U V + 1816 H_{12} H_8^2 S U V - 544 H_8^3 S U V \\ &+ 152064 H_{12}^2 H_8 S^2 U V + 202368 H_{12} H_8^2 S^2 U V - 52992 H_8^3 S^2 
U V + 1990656 H_{12}^2 H_8 S^3 U V\\& + 6967296 H_{12} H_8^2 S^3 U V - 1916928 H_8^3 S^3 U V + 61046784 H_{12} H_8^2 S^4 U V \\& - 7077888 H_8^3 S^4 U V + 382205952 H_{12} H_8^2 S^5 U V + 509607936 H_8^3 S^5 U V \\& + 4076863488 H_8^3 S^6 U V + 55296 H_{12}^2 H_8 U^2 V - 1728 H_{12} H_8^2 U^2 V \\& + 480 H_8^3 U^2 V + 359424 H_{12} H_8^2 S U^2 V - 96768 H_8^3 S U^2 V + 21233664 H_{12} H_8^2 S^2 U^2 V \\& - 12607488 H_8^3 S^2 U^2 V - 201719808 H_8^3 S^3 U^2 V + 663552 H_8^3 U^3 V \\& - 126144 H_{12}^2 H_8 V^2 + 19584 H_{12} H_8^2 V^2 - 6 H_{12}^2 S V^2 - 3 H_{12} H_8 S V^2 + 940032 H_{12} H_8^2 S V^2 - 1440 H_{12}^2 S^2 V^2 \\& - 1776 H_{12} H_8 S^2 V^2 + 56 H_8^2 S^2 V^2 - 55296 H_{12}^2 S^3 V^2 - 143616 H_{12} H_8 S^3 V^2 - 2944 H_8^2 S^3 V^2\\& - 3428352 H_{12} H_8 S^4 V^2 - 1155072 H_8^2 S^4 V^2- 42467328 H_{12} H_8 S^5 V^2 - 63700992 H_8^2 S^5 V^2\\& - 1189085184 H_8^2 S^6 V^2 - 8153726976 H_8^2 S^7 V^2 + 648 H_{12}^2 U V^2 + 1020 H_{12} H_8 U V^2 - 128 H_8^2 U V^2 \\& + 31104 H_{12}^2 S U V^2 + 84672 H_{12} H_8 S U V^2 - 13632 H_8^2 S U V^2 + 1078272 H_{12} H_8 S^2 U V^2 \\& - 193536 H_8^2 S^2 U V^2 + 11943936 H_{12} H_8 S^3 U V^2 + 10616832 H_8^2 S^3 U V^2 + 127401984 H_8^2 S^4 U V^2 \\ &+ 331776 H_{12} H_8 U^2 V^2 - 65664 H_8^2 U^2 V^2 - 3151872 H_8^2 S U^2 V^2\\& - H_{12} V^3 - 216 H_{12} S V^3- 48 H_8 S V^3 - 17280 H_{12} S^2 V^3 - 9216 H_8 S^2 V^3 - 442368 H_{12} S^3 V^3\\& - 626688 H_8 S^3 V^3 - 17694720 H_8 S^4 V^3 - 169869312 H_8 S^5 V^3 + 2592 H_{12} U V^3 + 576 H_8 U V^3\\& + 124416 H_{12} S U V^3 + 55296 H_8 S U V^3 + 1327104 H_8 S^2 U V^3 - 12 V^4 - 1728 S V^4 \\& - 82944 S^2 V^4 - 1327104 S^3 V^4.\end{aligned}$$ Acknowledgement {#acknowledgement .unnumbered} =============== The idea of proof of $h_{15}\in M_{15}^+(\Gamma _0^{(1)}(4),\chi _{-4})$ using the twisting operator is due to Professor S. Böcherer. This makes it possible to prove Lemma \[Lem2\]. The author is supported by JSPS KAKENHI Grant Number JP18K03229. D. Choi, Dohoon, Y. Choie, T. 
Kikuta, Sturm type theorem for Siegel modular forms of genus $2$ modulo $p$, Acta Arith. 158 (2013), no. 2, 129-139.

T. Dern, Hermitesche Modulformen zweiten Grades, Verlag Mainz, Wissenschaftsverlag, Aachen, 2001.

T. Dern, A. Krieg, Graded rings of Hermitian modular forms of degree $2$, Manuscripta Math. 110 (2003), no. 2, 251-272.

J.-I. Igusa, On the ring of modular forms of degree two over ${\mathbb{Z}}$, Amer. J. Math. 101 (1979), no. 1, 149-183.

T. Kikuta, S. Nagaoka, On Hermitian modular forms mod $p$, J. Math. Soc. Japan 63 (2011), no. 1, 211-238.

T. Kikuta, S. Nagaoka, On the theta operator for Hermitian modular forms of degree $2$, Abh. Math. Semin. Univ. Hambg. 87 (2017), no. 1, 145-163.

T. Kikuta, S. Takemori, Sturm bounds for Siegel modular forms of degree $2$ and odd weights, to appear in Math. Z.

S. Nagaoka, S. Takemori, Theta operator on Hermitian modular forms over the Eisenstein field, to appear in Ramanujan J.

A. Krieg, The Maass spaces on the Hermitian half-space of degree $2$, Math. Ann. 289 (1991), no. 4, 663-681.

G. Shimura, Introduction to the arithmetic theory of automorphic functions. Reprint of the 1971 original. Publications of the Mathematical Society of Japan, 11. Kano Memorial Lectures, 1. Princeton University Press, Princeton, NJ, 1994. xiv+271
--- abstract: 'We present a polynomial-time quantum algorithm for obtaining the energy spectrum of a physical system, i.e., the differences between the eigenvalues of the system’s Hamiltonian, provided that the spectrum of interest contains at most a polynomially increasing number of energy levels. A probe qubit is coupled to a quantum register that represents the system of interest such that the probe exhibits a dynamical response only when it is resonant with a transition in the system. By varying the probe’s frequency and the system-probe coupling operator, any desired part of the energy spectrum can be obtained. The algorithm can also be used to deterministically prepare any energy eigenstate. As an example, we have simulated running the algorithm and obtained the energy spectrum of the water molecule.' author: - 'Hefeng Wang$^{1, 2, 3}$, S. Ashhab$^{2, 3}$, and Franco Nori$^{2, 3}$' title: Quantum algorithm for obtaining the energy spectrum of a physical system --- Introduction ============ Obtaining the energy spectrum of a physical system is an important task in a variety of fields. In general, one has to solve the Schrödinger equation of the system, which is a difficult task on a classical computer for large systems, because the dimension of the Hilbert space of the system increases exponentially with the size of the system, which is commonly defined as the number of particles in the system. Thus, the complexity of simulating the quantum system grows exponentially. On a quantum computer, however, the number of qubits required to simulate the system increases linearly with the size of the system. As a result, solving the Schrödinger equation of the system is more efficient on a quantum computer than on a classical computer [@bacon; @mhy; @jybb; @nori1; @nori2].
The standard quantum algorithm for obtaining the eigenvalues and eigenvectors of the Hamiltonian of a quantum system is the phase estimation algorithm (PEA) [@kitaev95; @abrams; @aa; @whf; @whf0; @childs; @nori3; @whf2]. In the PEA, one prepares an initial guess state, and the algorithm randomly selects one of the energy eigenstates in the guess state and produces its energy as the output of the algorithm. It is worth mentioning here that the probability of selecting a given energy eigenstate is equal to the square of its overlap with the guess state. In reality, one is usually most interested in the energy differences between energy levels, instead of the absolute energy of a given energy level. In this paper, we present a quantum algorithm that solves this problem: obtaining the energy differences between energy levels of a quantum system described by a given Hamiltonian. The algorithm can also be used to prepare any energy eigenstate of the system. Our algorithm is motivated by the following observation in simulating the dynamics of an open quantum system [@terhal; @whf1; @sanders]: For an open system interacting with many environment modes, the mode that resonates with a certain transition in the spectrum of the open system contributes the most to the decay dynamics associated with that transition. This property suggests a method to locate the transition frequencies separating the different energy levels of a physical system. The basic idea of the algorithm is as follows: we couple the quantum system to a probe qubit with a certain frequency, set the probe qubit in one of its energy eigenstates (say the excited state), evolve the whole system for some time, then perform a measurement on the probe qubit. When the frequency of the probe qubit matches the transition frequency between two energy levels of the quantum system, one observes a peak in the decay rate of the probe qubit. 
Therefore, by varying the frequency of the probe qubit, we can locate the transition frequencies of the quantum system. We can also set the probe qubit to be in its ground state and measure its excitation dynamics. The difference is that in the former case we obtain the absorption spectroscopy of the system while in the latter case we obtain the emission spectroscopy. This algorithm has the following advantages: $(i)$ There are several adjustable elements (initial state of the system, interaction operator, evolution time and system-probe coupling strength) that can be varied in order to improve the efficiency of the algorithm. $(ii)$ The coupling of the system to the probe qubit can simulate a realistic interaction, and therefore the algorithm can naturally identify transitions that would occur in a realistic setting. $(iii)$ Because of the freedom associated with choosing the coupling operator, the algorithm gives as an additional piece of output the transition matrix elements for any desired operator. $(iv)$ Because the algorithm involves transitions between different energy eigenstates, preparing the system in a good approximation to any particular energy eigenstate is less crucial than in the phase estimation algorithm. The structure of this work is as follows: In Sec. \[alg\], we present an algorithm for obtaining the energy spectrum of a physical system. In Sec. \[example\], we give an example to demonstrate the algorithm for obtaining the energy spectrum of the water molecule. In Sec. \[discuss\], we discuss the efficiency, the accuracy and the resource requirement of the algorithm, and compare our algorithm with the PEA. We close with a conclusion section. The algorithm {#alg} ============= First, we make an initial guess about the range of the energy differences between the energy levels of the system, $\left[ \omega _{\min }\text{, }\omega _{\max }\right] $.
We discretize this frequency range into $j$ intervals, where each interval has a width of $\Delta \omega =\left( \omega _{\max }-\omega _{\min }\right) /j$, and the center frequencies are given by $\omega _{k}=\omega _{\min }+\left( k+1/2\right) \Delta \omega ,k=0,\ldots ,j-1$. We now let a probe qubit couple to the quantum system, and we design the Hamiltonian of the whole system to be of the form$$H=H_{S}+\frac{1}{2}\omega \sigma _{z}+cA\otimes \sigma _{x},$$where the first term is the Hamiltonian of the system, the second term is the Hamiltonian of the probe qubit, and the third term describes the interaction between the system and the probe qubit. Here, $\omega $ is the frequency of the probe qubit (we have set $\hbar=1$), and $c$ is the coupling strength between the probe qubit and the system, while $\sigma _{x}$ and $\sigma _{z}$ are Pauli matrices. The operator $A$ acts in the state space of the system and plays the role of an excitation operator that transfers the initial state of the quantum system to another state. The frequency $\omega $ is taken from the frequency set $\omega _{k}$. For a frequency $\omega _{k}$ of the probe qubit, we let the whole system evolve with the Hamiltonian shown in Eq. ($1$) for a time $\tau $. This evolution is implemented using the procedure of quantum simulation based on the Trotter-Suzuki formula [@nc]. After that, we read out the state of the probe qubit. We repeat the whole procedure many times in order to obtain the decay probability. Then we change the probe frequency and repeat this procedure until we cover all the frequencies in the range $\left[ \omega _{\min }\text{, }\omega _{\max }\right] $. Setting the probe qubit in its excited (ground) state, when the frequency of the probe qubit matches the transition frequency between two energy levels of the quantum system, the probe qubit has the fastest decay (excitation).
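As a concrete illustration of this setup, the following Python sketch (our own helper names, not code from the paper) builds the Hamiltonian of Eq. ($1$) as an explicit matrix for a small system and constructs the frequency grid $\omega_k$:

```python
import numpy as np

# Pauli matrices for the probe qubit.
SX = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
SZ = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def probe_hamiltonian(H_S, A, omega, c):
    """Full Hamiltonian of Eq. (1) with hbar = 1:
    H = H_S (x) I + (omega/2) I (x) sigma_z + c A (x) sigma_x."""
    n = H_S.shape[0]
    return (np.kron(H_S, np.eye(2))
            + 0.5 * omega * np.kron(np.eye(n), SZ)
            + c * np.kron(A, SX))

def frequency_grid(omega_min, omega_max, j):
    """Centers omega_k of the j intervals of width (omega_max - omega_min)/j."""
    d = (omega_max - omega_min) / j
    return omega_min + (np.arange(j) + 0.5) * d
```

For the water-molecule example below, the corresponding grid would be `frequency_grid(0.4, 2.0, 200)`; on a quantum computer the evolution under this $H$ is Trotterized rather than exponentiated exactly.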
For example, in the case where the initial state of the probe qubit is the excited state, the final state of the probe qubit is: $$\rho _{p}(\tau )=\text{Tr}_{S}[U(\tau )\left( |\psi _{s}\rangle \langle \psi _{s}|\otimes |1\rangle \langle 1|\right) U^{\dag }(\tau )],$$where Tr$_{S}[\cdots ]$ means tracing out the system degrees of freedom. The unitary evolution operator is $U(\tau )=\exp \left( -iH\tau \right) $, where $H$ is given in Eq. ($1$), $|\psi _{s}\rangle $ is the initial state of the system, and $|1\rangle $ represents the excited state of the probe qubit, while $|0\rangle $ represents the ground state of the probe qubit. The quantity of interest to us now is the decay probability of the probe qubit $P_{\text{decay}}=\langle 0|\rho _{p}(\tau )|0\rangle $. By plotting $P_{\text{decay}}$ as a function of the probe-qubit frequency, we can obtain the absorption spectrum of the system. If there are no degeneracies in the transition frequencies, at most one transition (denoted by $i\rightarrow j$) in the system will contribute to the decay dynamics of the probe qubit (taking into consideration the possibility of degenerate transitions makes the derivations longer but does not affect our main results). In this case, we obtain the result $$P_{\text{decay}}=\sin ^{2}\left( \frac{\Omega _{ij}\tau }{2}\right) \frac{Q_{ij}^{2}}{Q_{ij}^{2}+\left( E_{j}-E_{i}-\omega _{k}\right) ^{2}}|\langle \varphi _{i}|\psi _{s}\rangle |^{2},$$where $Q_{ij}=2c|\langle \varphi _{j}|A|\varphi _{i}\rangle |$, and $\Omega _{ij}=\sqrt{Q_{ij}^{2}+\left( E_{j}-E_{i}-\omega _{k}\right) ^{2}}$. $|\varphi _{i}\rangle $ is the $i$-th energy eigenstate of the system and $E_{i}$ is the corresponding eigenenergy. Eq. ($3$) describes Rabi-oscillation dynamics, where the system and probe exchange an excitation.
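Eq. ($3$) can be checked directly on a minimal example. The sketch below (our own construction, not from the paper; $\hbar = 1$) couples a two-level system to a resonant probe, with $A$ chosen as the transition operator between the two levels. In this special case the dynamics reduce to an exact two-level Rabi oscillation, so the exact evolution and Eq. ($3$) agree up to numerical round-off:

```python
import numpy as np

# Two-level system with gap E1 - E0 = 1, probe exactly on resonance.
E0, E1 = 0.0, 1.0
c = 0.005                                # weak system-probe coupling
omega = E1 - E0                          # resonant probe frequency
tau = np.pi / (2.0 * c)                  # half a Rabi period, so P_decay -> 1

SX = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
SZ = np.array([[-1.0, 0.0], [0.0, 1.0]], dtype=complex)  # probe |0> ground, |1> excited
H_S = np.diag([E0, E1]).astype(complex)
A = SX                                   # flips the system between its two levels

H = (np.kron(H_S, np.eye(2)) + 0.5 * omega * np.kron(np.eye(2), SZ)
     + c * np.kron(A, SX))

# Initial state: system in phi_0, probe excited; ordering is |system> (x) |probe>.
psi0 = np.kron(np.array([1.0, 0.0]), np.array([0.0, 1.0])).astype(complex)

# Exact propagator from the eigendecomposition of the Hermitian H.
w, U = np.linalg.eigh(H)
psi_tau = U @ (np.exp(-1j * w * tau) * (U.conj().T @ psi0))

# P_decay = probability of finding the probe in its ground state |0>.
probe0 = np.kron(np.eye(2), np.diag([1.0, 0.0]))
P_decay = float(np.real(psi_tau.conj() @ probe0 @ psi_tau))

# Eq. (3) on resonance: Q = 2c, Omega = Q, and the overlap factor equals 1.
P_rabi = float(np.sin(c * tau) ** 2)
```

With these parameters both quantities equal 1 up to machine precision; detuning the probe frequency away from $E_1 - E_0$ suppresses $P_{\text{decay}}$ through the Lorentzian factor $Q_{ij}^2/\Omega_{ij}^2$.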
The second factor on the right-hand side is the maximum oscillation probability, and it depends on the relation between the matrix element for a given transition and the system-probe detuning for that transition. The third factor is the overlap between the initial state of the system and a given energy eigenstate. In general, the interaction between the probe qubit and the quantum system should be weak such that the widths of the peaks are small and one obtains accurate results. The evolution time $\tau $ should ideally be large ($c\tau \sim 1$), such that the change of the system is clear and the peaks in the spectroscopy have high resolution. The procedure of the algorithm is as follows: $\left( i\right) $ prepare a quantum register $R_{S}$, which encodes the state of the system, in state $|\psi _{s}\rangle $, and the probe qubit in state $|1\rangle $; $\left( ii\right) $ implement the unitary operator $U(\tau )=\exp \left( -iH\tau \right) $ where $H$ is given in Eq. ($1$); $\left( iii\right) $ read out the state of the probe qubit; $\left( iv\right) $ perform steps $\left( i\right) $ – $\left( iii\right) $ many times in order to obtain good statistics and calculate the decay probability; $\left( v\right) $ repeat steps $\left( i\right) $ – $\left( iv\right) $ for different frequencies of the probe qubit. From the above procedure, one obtains the absorption spectroscopy of the system. One can also set the probe qubit in its ground state $|0\rangle $, and perform the above steps in order to obtain the emission spectroscopy. The quantum circuit for steps $\left( i\right) $ – $\left( iii\right) $ is shown in Fig. $1$. ![Quantum circuit for obtaining the energy spectrum of a physical system.
The first input register represents a probe qubit, and the second input register represents the system whose spectrum we are trying to obtain.](fig1.eps){width="0.9\columnwidth"} Example: obtaining the energy spectrum of the water molecule {#example} ============================================================ In the following, we present an example that demonstrates how the algorithm would perform in obtaining the energy spectrum of the water molecule. To apply the quantum algorithm presented above, first we have to map the state space of the water molecule to the state space of the qubits. Using the mapping technique introduced in Ref. [@whf], considering the C$_{2V}$ and $^{1}A_{1}$ symmetries known from quantum chemistry of the water molecule, we can minimize the number of qubits needed to represent the water molecule on a quantum register. Note that the symmetries can be used with the PEA in order to optimize the algorithm in the same way that we have used them in the water-molecule example. These symmetries would not lead to an exponential speedup in either algorithm. In other words, knowing the symmetries is not crucial for running the algorithm and for having a polynomial scaling of resources. The Hamiltonian for the water molecule is given in Ref. [@sbo] and shown below. The Hartree-Fock wave function for the ground state of the water molecule is $(1a_{1})^{2}(2a_{1})^{2}(1b_{2})^{2}(3a_{1})^{2}(1b_{1})^{2}$. Using the STO-$3$G basis set [@sbo] and freezing the first two $a_{1}$ orbitals, we construct a model space with $^{1}A_{1}$ symmetry that includes the $3a_{1},4a_{1},1b_{1}$ and $1b_{2}$ orbitals and we consider only single and double excitations to the external space for performing the multi-reference-configuration interaction (MRCI) calculation. The MRCI space is composed of $18$ configuration state functions. Therefore at least $5$ qubits are required to represent the state of the water molecule in this calculation.
In order to optimize the implementation of the algorithm, it is useful to have a priori knowledge of the molecular states and their symmetries. This can be done using quantum-chemistry algorithms on a classical computer. The Hamiltonian of the water molecule in the form of second quantization is $$H=\sum_{p,q}\left\langle p\left\vert T+V_{N}\right\vert q\right\rangle a_{p}^{\dagger }a_{q}-\frac{1}{2}\sum\limits_{p,q,r,s}\left\langle p\left\vert \left\langle q\left\vert V_{e}\right\vert r\right\rangle \right\vert s\right\rangle a_{p}^{\dagger }a_{q}^{\dagger }a_{r}a_{s},$$ where $|p\rangle $ is the one-particle state, $a_{p}^{\dagger }$ is its fermionic creation operator, and $T$, $V_{N}$, and $V_{e}$ are the one-particle kinetic operator, nuclear attraction operator and the two-particle electron repulsion operator, respectively. For the initial state, we prepare the system register in the simple state $|00010\rangle $, which is close to the true ground state. Then we implement the unitary operation $U=\exp \left( -iH\tau \right) $. For the interaction operator $A$, we set $$A=(A_{1}+A_{2}+A_{3}+A_{4}+A_{5})/\sqrt{5},$$where $A_{1}=I\otimes I\otimes I\otimes I\otimes \sigma _{x}$, $A_{2}=I\otimes I\otimes I\otimes \sigma _{x}\otimes I$, $A_{3}=I\otimes I\otimes \sigma _{x}\otimes I\otimes I$, $A_{4}=I\otimes \sigma _{x}\otimes I\otimes I\otimes I$, $A_{5}=\sigma _{x}\otimes I\otimes I\otimes I\otimes I$. We set the coupling strength $c=0.005$ and the evolution time $\tau =500$ (here we measure energies in units of Hartree and time in units of Hartree$^{-1}$). We vary the frequency of the probe qubit in the range $\omega \in \left[ 0.4\text{, }2.0\right] $, which is divided into $200$ intervals, and run the circuit shown in Fig. $1$. We obtain the spectrum shown in Fig. $2$ for the transition frequencies between the ground state and several excited states.
From the figure we can see that the spectroscopy obtained using our algorithm is in very good agreement with the known transition frequency spectrum (in red) of the water molecule. The coupling strength $c$ and the evolution time $\tau $ can be adjusted to improve the resolution of the peaks and the accuracy of the results. In order to demonstrate this point, we now set $c=0.001$ and $\tau =2500$. We focus on the second and the third peaks as shown in the inset of Fig. $2$. We can see that the widths of the peaks are reduced and the resolution of the peaks is now higher. We also observe a small peak at the frequency of the transition between the second and the eighth energy levels. From Fig. $2$, we can see that some transitions between the ground state and the excited states are barely visible. Their decay probabilities can be improved by constructing a different operator $A$. The choice for the operator $A$ in Eq. ($5$) includes single-qubit operators with all the qubits represented. With this choice most of the desired resonance peaks are observed in the simulation. However, as can be seen in Fig. $2$, some peaks are very low. We use two-qubit operators in order to look for any such missing peaks. We have tried a few different choices, and we only show the one that resulted in all the peaks being visible. In principle, even if (as happened in our example) no single operator produces all the resonance peaks, one can still construct the spectrum by putting together the information obtained from the different choices for $A$. We define the operators $A_{6}=I\otimes I\otimes I\otimes \sigma _{x}\otimes \sigma _{x}$ and $A_{7}=I\otimes I\otimes \sigma _{x}\otimes \sigma _{x}\otimes I$, set the interaction operator$$A=(A_{1}+A_{2}+A_{3}+A_{6}+A_{7})/\sqrt{5},$$and run the algorithm. The results are shown in Fig. $3$. We can see that now all the expected peaks are clearly visible.
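Operators such as those in Eqs. ($5$) and ($6$) are sums of Pauli strings and are straightforward to assemble numerically as Kronecker products. A small sketch (our own helper names, not the authors' code) for the five-qubit operator of Eq. ($5$):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
SX = np.array([[0.0, 1.0], [1.0, 0.0]])

def pauli_string(ops):
    """Tensor product of a list of single-qubit operators (leftmost factor first)."""
    return reduce(np.kron, ops)

def single_sx(n, k):
    """sigma_x on qubit k (k = 0 is the rightmost tensor factor), identity elsewhere."""
    ops = [I2] * n
    ops[n - 1 - k] = SX
    return pauli_string(ops)

# Eq. (5): A = (A_1 + ... + A_5)/sqrt(5) on a 5-qubit register.
n = 5
A = sum(single_sx(n, k) for k in range(n)) / np.sqrt(n)
```

Two-qubit strings such as $A_{6}=I\otimes I\otimes I\otimes \sigma _{x}\otimes \sigma _{x}$ in Eq. ($6$) are built the same way by setting two of the factors to `SX` before taking the product.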
![(Color online) Transition frequency spectrum between the ground state, $|\protect\varphi _{0}\rangle $, and the first ten excited states $\left( |\protect\varphi _{i}\rangle ,i=1,2,\ldots ,10\right) $ of the water molecule. The blue solid curve represents the decay probability of the probe qubit at different frequencies with the coupling coefficient in Eq. ($1$) $c=0.005$ and the evolution time $\protect\tau =500$, and the operator $A$ as shown in Eq. ($5$). The red dotted vertical lines represent the known transition frequencies between the ground state and the first ten excited states of the water molecule. In the inset, the second and the third transition frequencies shown in blue were obtained using $c=10^{-3}$ and $\protect\tau =2500$. The green vertical dashed line represents the known transition frequency ($1$-$7$) between the first and the seventh excited states.](fig2.eps){width="0.9\columnwidth"} ![(Color online) Same as in Fig. $2$, except the operator $A$ is set as shown in Eq. ($6$).](fig3.eps){width="0.9\columnwidth"} In our algorithm, we can transfer the initial state of the system to another state through the interaction operator $A$. Therefore the initial state of the system is not of crucial importance to the success of the algorithm. Here we give an example that demonstrates how the PEA can fail when the initial state is not a good approximation of the desired state, but where our algorithm still succeeds. In the PEA, the success probability of the algorithm depends on the overlap of the initial guess state with the desired eigenstate of the system. In the previous example, if the initial state of the water molecule is prepared in state $|11111\rangle $, the overlap of this state with any of the $18$ eigenstates (in our example, the dimension of the state space of the water molecule is $18$) of the water molecule is *zero*. Therefore the PEA will *fail* in such a case. Our algorithm, however, still works.
We set the operator $A$ to be $$A=\frac{1}{3}\sum_{i=1}^{9}B_{i},$$ where $B_{1}=\sigma _{x}\otimes \sigma _{x}\otimes \sigma _{x}\otimes \sigma _{x}\otimes I$, $B_{2}=\sigma _{x}\otimes \sigma _{x}\otimes \sigma _{x}\otimes I\otimes \sigma _{x}$, $B_{3}=\sigma _{x}\otimes \sigma _{x}\otimes I\otimes \sigma _{x}\otimes \sigma _{x}$, $B_{4}=\sigma _{x}\otimes I\otimes \sigma _{x}\otimes \sigma _{x}\otimes \sigma _{x}$, $B_{5}=\sigma _{x}\otimes \sigma _{x}\otimes \sigma _{x}\otimes I\otimes I$, $B_{6}=\sigma _{x}\otimes I\otimes I\otimes \sigma _{x}\otimes \sigma _{x}$, $B_{7}=\sigma _{x}\otimes I\otimes \sigma _{x}\otimes I\otimes I$, $B_{8}=\sigma _{x}\otimes I\otimes I\otimes I\otimes \sigma _{x}$, and $B_{9}=\sigma _{x}\otimes \sigma _{x}\otimes \sigma _{x}\otimes \sigma _{x}\otimes \sigma _{x}$. We use the state $|11111\rangle $ as the initial state of the system, set the coupling coefficient $c=0.002$ and the evolution time $\tau =800$, and run the algorithm. The results are shown in Fig. $4$. From this figure we can see that the algorithm still has a high success probability in obtaining the energy spectrum of the water molecule. ![(Color online) Transition frequency spectrum between the first $11$ eigenstates of the water molecule and the state $|11111\rangle $. The blue solid curve represents the decay probability of the probe qubit at different frequencies with the coupling coefficient shown in Eq. ($1$) set to $c=0.002$, the evolution time $\protect\tau =800$, and the operator $A$ as shown in Eq. ($7$), in simulating the algorithm. The red dotted vertical lines represent the known eigenenergies of the first $11$ eigenstates of the water molecule. ](fig4.eps){width="0.9\columnwidth"} Discussion {#discuss} ========== In the following, we discuss the factors that affect the efficiency, the accuracy, and the resource requirements of the algorithm, and we compare our algorithm with the phase estimation algorithm.
The efficiency of the algorithm is most naturally defined through the number of times that the circuit in Fig. $1$ must be run in order to identify the peaks in the spectrum. This number is proportional to the number of frequency points that need to be used and the number of times that the circuit needs to be run for a single frequency. Most physical systems have typical energy scales that are linear in the system size (for the total energy), while some unusual systems exhibit a polynomial dependence with relatively small exponents. The energy scale for the low-energy spectrum might be even smaller than that scale. The number of frequency points that need to be used in the algorithm, which is proportional to the frequency range, therefore scales polynomially with the system size. The number of times that the circuit needs to be run for a single frequency must be at least proportional to $1/P_{\text{decay}}$ in order to observe a peak. It should also be mentioned here that each single run of the algorithm is essentially a quantum simulation of the dynamics, which scales polynomially with the size of the system [@kitaev2]. We note here that, for large systems, there is an exponentially large number of energy eigenstates, and determining the entire spectrum of a large system exhibits exponential complexity. However, one is usually not interested in all of the energy eigenstates, but rather a very small fraction of them, that is, a polynomial number of energy eigenstates. The energy eigenstates of interest could for example be the low-lying energy levels or the energy levels that are connected with the ground state by strong electric-dipole transitions. Once the criterion for the energy levels of interest is specified, and their number is small (or at least not exponentially large), the complexity of the algorithm does not grow exponentially with the size of the system any more. 
The part of the spectrum of interest will appear naturally in our algorithm, because the system will undergo transitions that mimic those of the simulated system. Since the heights of the peaks depend on the product $\Omega _{ij}\tau $, we assume as a worst-case scenario that $\Omega _{ij}\tau \ll 1$ and find that the decay probability in Eq. ($3$) at the center of a given peak (i.e. at $\omega _{k}=E_{j}-E_{i}$) can be approximated as $$P_{\text{decay}}\approx c^{2}\tau ^{2}|\langle \varphi _{j}|A|\varphi _{i}\rangle |^{2}|\langle \varphi _{i}|\psi _{s}\rangle |^{2}.$$From the above equation, we can see that the decay probability, and therefore the efficiency of the algorithm, depends on the coupling strength, the evolution time, the interaction operator, and the initial state of the system. Note that we can also use a number of qubits in parallel as probe qubits in order to improve the efficiency of the algorithm. We now address the roles of the different factors appearing in Eq. ($6$). The parameters $c$ and $\tau $ define the accuracy of the algorithm. The accuracy is given by the width of the peaks, and this width is given by $\max \left[ c\langle \varphi _{j}|A|\varphi _{i}\rangle \text{, }1/\tau \right] $ [@whf1]. To obtain accurate results, we need to set $c$ to be small, so that the system-probe coupling is weak, and to set the evolution time $\tau $ to be large, so that the change of the system remains observable. Note that the accuracy is set by the experimenter, independently of the size of the system. It is also worth noting here that the size of the frequency intervals $\Delta \omega $ is set by the choice of $c$ and $\tau $: $\Delta \omega $ should be smaller than the width of the peaks in order to avoid missing some of the peaks, but there is no point in reducing $\Delta \omega $ far beyond this point.
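The interplay of these parameters can be sketched numerically. The snippet below (a back-of-the-envelope illustration; the matrix element and overlap values are assumptions, not values from the paper) evaluates the weak-coupling peak height and the peak width for a single transition.

```python
# Illustrative numbers only; A_ji and overlap are assumed values.
c, tau = 0.005, 500.0
A_ji = 0.3       # assumed matrix element |<phi_j|A|phi_i>|
overlap = 0.8    # assumed overlap |<phi_i|psi_s>|

# Weak-coupling peak height, P_decay ~ (c tau)^2 |A_ji|^2 |<phi_i|psi_s>|^2,
# valid when Omega_ij * tau << 1.
P_decay = (c * tau) ** 2 * A_ji ** 2 * overlap ** 2

# Peak width ~ max(c |A_ji|, 1/tau): reducing c and increasing tau
# sharpens the peaks, as in the inset of Fig. 2.
width = max(c * A_ji, 1.0 / tau)

# The frequency step should be smaller than the width so no peak is missed.
d_omega = width / 3.0
print(P_decay, width, d_omega)
```

For these numbers the height is $P_{\rm decay}\approx 0.36$ and the width is set by $1/\tau = 0.002$, so reducing $c$ further would sharpen the peak only at the cost of a smaller signal.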
Since $P_{\text{decay}}$ depends on $\langle \varphi _{j}|A|\varphi _{i}\rangle $, the algorithm can also be used to obtain the matrix element $\langle \varphi _{j}|A|\varphi _{i}\rangle $ for any operator $A$ and any two energy eigenstates, provided that this matrix element is not exponentially small. For this purpose, it would be ideal to set the initial state to one of the states $|\varphi _{i}\rangle $, which can be achieved as will be explained shortly. One can then use the height of the peak to obtain $\langle \varphi _{j}|A|\varphi _{i}\rangle $. In this context it is also worth noting that since the height of the peak depends on $\langle \varphi _{j}|A|\varphi _{i}\rangle $, certain transitions might not result in any peaks if the relevant matrix element vanishes. By designing $A$ to be a physically relevant operator, e.g. the operator that describes the coupling of a molecule to an electric field, one can identify transitions that would occur under electromagnetic irradiation of a molecule. Needless to say, $A$ is not restricted to be a naturally occurring operator. The last factor in Eq. ($6$) is the overlap between the initial state and any given energy eigenstate (which serves as the initial state in a given transition). In principle, preparing an initial state that has a large overlap with any given energy eigenstate can be a difficult task, possibly involving exponential scaling in the size of the system. However, a crucial point here is that once we observe a transition at the end of a given run of the algorithm, we know that the final state of the system is the final energy eigenstate of the relevant transition. We can now use this state as the new initial state and rerun the algorithm. If the new initial state is different from a state that we wish to examine, we can convert the state of the system into the desired state by adding or subtracting from the system the required energy difference, which we would know at least approximately [@nakazato].
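Extracting the matrix element from a measured peak amounts to inverting the weak-coupling peak-height expression. A minimal sketch (all numbers are illustrative assumptions, not values from the paper):

```python
import math

# Inverting P_peak = (c tau)^2 |A_ji|^2 |<phi_i|psi_s>|^2 for |A_ji|.
c, tau = 0.001, 2500.0
overlap = 1.0     # ideal case: the system prepared in |phi_i> itself
P_peak = 0.04     # assumed measured decay probability at the peak center

A_ji = math.sqrt(P_peak) / (c * tau * overlap)
print(A_ji)  # ~0.08
```

This inversion is reliable only in the weak-coupling regime where the quadratic peak-height formula holds; for stronger coupling the full expression for the decay probability would be needed.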
Even if the two states in question were macroscopically different, it should be possible to go from one of them to the other via an at-most polynomially large number of transitions, each of which involves single- or few-body operators. In many cases of practical interest, a relatively small number of transitions are needed to connect the energy levels of interest, e.g. the low-lying energy eigenstates. We note here that, since our algorithm relies on changes in the energy to keep track of the state of the system, one cannot tell whether a given energy level is degenerate or not, and in the case of degeneracy one cannot tell which final state is obtained upon detecting the relevant transition. If one wishes to check for degeneracies, one could add a few small perturbations to the Hamiltonian of the system, and for most physical systems these perturbations will lift the degeneracies in the spectrum. Finally, we compare our algorithm with the PEA. In the PEA, one prepares an initial state that ideally has a large overlap with the desired energy eigenstate, and the algorithm produces the energy of that state. In cases where the desired energy eigenstate has a complicated or unknown form, it can become impossible to prepare a guess state that has any substantial overlap with the desired state. The algorithm would fail in this case. In our algorithm, the initial state does not need to have a large overlap with any particular state. As mentioned above, the observation of a transition in the probe signals a corresponding transition in the system. The post-transition state can now be treated as the initial state for the next step in the algorithm. This way, one can guide the system to any energy eigenstate, including the ground state. The freedom in choosing the operator $A$ allows additional controllability for this purpose.
We note here that no single choice of the operator $A$ is needed in order to obtain a certain energy difference, say between the ground state and the first-excited state. As explained above, it should be possible to go from any state to any other state via a relatively small number of intermediate states. An exception might be glassy and similar frustrated systems with a large number of vastly different low-energy states. However, there is no known efficient algorithm, classical or quantum, for exhaustively identifying the low-energy states of such complex systems. In the PEA, one needs to have a good idea about the form of the energy eigenstates of interest. In our algorithm no such a priori knowledge is needed. If one is interested in the low-energy spectrum, the relevant states would show up naturally in the spectrum. This property is demonstrated in the example presented in Sec. \[example\], showing that our algorithm can work in cases where the PEA fails. In the PEA, one obtains the absolute eigenenergy of the system. For a large system, the absolute eigenenergy could be a large number, much larger than the separation between the energy eigenstates of interest. This large overall energy would appear as part of the output, thus taking up resources such as additional index qubits. In our algorithm, one obtains the energy difference between energy levels, therefore avoiding the unnecessary readout of any overall energy shift. Note that the number of qubits required for implementing our algorithm is the same as that in the optimized version of the PEA [@aa; @griffith]. Conclusion ========== We have presented a hybrid analogue/digital quantum algorithm for obtaining the energy spectrum of a physical system. The algorithm provides more flexibility than the phase estimation algorithm. It can also be used to simulate a realistic interaction, and naturally identify transitions that would occur in a realistic setting.
The algorithm can also be used to prepare any desired energy eigenstate of a physical system. We acknowledge partial support from DARPA, AFOSR, ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS FIRST program. HW is supported by the Fundamental Research Funds for the Central Universities of China. [99]{} D. Bacon and W. van Dam, Communications of the ACM, **53**, 84 (2010). M.-H. Yung, J. D. Whitefield, S. Boixo, D. G. Tempel and A. Aspuru-Guzik, e-print arXiv:1203.1331 (2012). J. Yepez and B. Boghosian, Comp. Phys. Comm. **146**, 280-294 (2002). I. Buluta, F. Nori, Science **326**, 108 (2009). I. Buluta, S. Ashhab, F. Nori, Rep. Prog. Phys. **74**, 104401 (2011). A. Y. Kitaev, e-print arXiv: quant-ph/9707021, 1997. D. S. Abrams and S. Lloyd, Phys. Rev. Lett. **83**, 5162 (1999). A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, and M. Head-Gordon, Science **309**, 1704 (2005). H. Wang, S. Kais, A. Aspuru-Guzik, and M. R. Hoffmann, Phys. Chem. Chem. Phys. **10**, 5388 (2008). H. Wang, L.-A. Wu, Y.-X. Liu, F. Nori, Phys. Rev. A, **82**, 062303 (2010). A. M. Childs and W. van Dam, Rev. Mod. Phys. **82**, 1 (2010). L.F. Wei, F. Nori, J. of Phys. A **37**, 4607 (2004). H. Wang, L.-A. Wu, Y.-X. Liu, F. Nori, Phys. Rev. A **82**, 062303 (2010). B. M. Terhal and D. P. DiVincenzo, Phys. Rev. A, **61**, 022301 (2000). H. Wang, S. Ashhab and F. Nori, Phys. Rev. A, **83**, 062317 (2011). N. Wiebe, D. W. Berry, P. Hoyer, B. C. Sanders, e-print arXiv:1011.3489 (2011). M. Nielsen, I. Chuang, *Quantum Computation and Quantum Information* (Cambridge Univ. Press, Cambridge 2000). A. Szabo, N. Ostlund, [*Modern Quantum Chemistry: Introduction to advanced Electronic Structure Theory*]{} (McGraw-Hill, New York, 1989). A. Kitaev, A. H. Shen, and M. N. Vyalyi. *Classical and quantum computation, Graduate studies in Mathematics* Vol. **47**. 
American Mathematical Society, Providence, RI, 2002. H. Nakazato, T. Takazawa, and K. Yuasa, Phys. Rev. Lett. **90**, 060401 (2003). R. B. Griffiths and C.-S. Niu, Phys. Rev. Lett. **76**, 3228 (1996).
--- abstract: 'We study the effect of noise in the density field, such as would arise from a finite number density of tracers, on reconstruction of the acoustic peak within the context of Lagrangian perturbation theory. Reconstruction performs better when the density field is determined from denser tracers, but the gains saturate at $\bar{n}\sim 10^{-4}\,(h\,{\rm Mpc}^{-1})^3$. For low density tracers it is best to use a large smoothing scale to define the shifts, but the optimum is very broad.' author: - Martin White bibliography: - 'shotnoise.bib' title: Shot noise and reconstruction of the acoustic peak --- Introduction ============ Baryon acoustic oscillations (BAO) in the baryon-photon fluid provide a standard ruler to constrain the expansion of the Universe and have become an integral part of current and next-generation dark energy experiments [@EisReview05]. These sound waves imprint an almost harmonic series of peaks in the power spectrum $P(k)$, corresponding to a feature in the correlation function $\xi(r)$ at $\sim$100 Mpc, with width $\sim 10$% due to Silk damping [@PeeYu70; @SunZel70; @DorZelSun78; @Eis98; @MeiWhiPea99; @ESW]. Non-linear evolution leads to a damping of the oscillations on small scales [@Bha96; @MeiWhiPea99] (and a small shift in their positions [@ESW07; @CroSco08; @Mat08a; @Seo08; @PadWhi09]), $$P_{\rm obs}(k) = b^2 e^{-k^2\Sigma^2/2} P_L(k) + \cdots \cdots \label{eq:processed}$$ where we have assumed a scale-independent bias, $b$, and left all broad band and mode-coupling features implicit in the $\cdots$. The damping of the linear power spectrum (or equivalently the smoothing of the correlation function) reduces the contrast of the feature and the precision with which the size of ruler may be measured and is given by the mean-squared Zel’dovich displacement of particles, $$\Sigma^2 = \frac{1}{3\pi^2} \int dp\ P_L(p) \qquad . 
\label{eq:sigmal}$$ In [@ESSS07] a method was introduced for reducing the damping, sharpening the feature in configuration space or restoring the higher $k$ oscillations in Fourier space. This procedure was studied in [@PadWhiCoh09; @NohWhiPad09] using Lagrangian perturbation theory. In this brief note we generalize these treatments to show how noise in the density field, arising for example from the finite number density of tracers, affects reconstruction. We shall concentrate on the broadening of the peak, and refer the reader to [@PadWhiCoh09; @NohWhiPad09] for details, discussion and notation. Reconstruction with noise ========================= The prescription of [@ESSS07] begins by smoothing the observed density field to filter out high $k$ modes: $\delta({{\bf k}})\rightarrow {{\cal S}}(k)\delta({{\bf k}})$. We shall take ${{\cal S}}$ to be a Gaussian of width $R$. From the smoothed field the negative Zel’dovich displacement is computed, ${{\bf s}}({{\bf k}})\equiv -i({{\bf k}}/k^2){{\cal S}}(k)\delta({{\bf k}})$. Then the objects are shifted by ${{\bf s}}$ to form the “displaced” density field, $\delta_d$, and an initially spatially uniform grid of particles is also shifted to form the “shifted” density field, $\delta_s({{\bf k}})$. The reconstructed density field is defined as ${\delta_{\rm recon}}\equiv\delta_d-\delta_s$, and to lowest order it is equal to the linear density field [@ESSS07; @PadWhiCoh09; @NohWhiPad09]. The non-linear damping is however modified from $\exp[-k^2\Sigma^2/2]$ to [@PadWhiCoh09; @NohWhiPad09] $$\begin{aligned} D(k)&\equiv& {{\cal S}}^{2}(k) e^{-\frac{1}{2} k^{2} \Sigma_{ss}^{2}} + [1-{{\cal S}}(k)]^{2} e^{-\frac{1}{2} k^{2} \Sigma_{dd}^{2}} \nonumber \\ &+& 2 {{\cal S}}(k) [1-{{\cal S}}(k)] e^{-\frac{1}{2} k^{2} \Sigma_{sd}^{2}} \,\,.
\label{eq:damptransform}\end{aligned}$$ with $\Sigma_{ss}$ and $\Sigma_{dd}$ defined as integrals over the linear power spectrum, $P_L$ (see below), and $\Sigma_{sd}^{2}\equiv (1/2)\left(\Sigma_{ss}^{2}+\Sigma_{dd}^{2}\right)$. If we assume there is a contribution, $\delta_N$, from noise we find ${\delta_{\rm recon}}$ is unchanged to lowest order. However the damping scale is modified. Following [@PadWhiCoh09] we find $$\Sigma_{ss}^{2} \to \frac{1}{3\pi^2} \int dp\ {{\cal S}}^2(p) \left[ P_L(p) + P_N(p) \right]$$ where $P_N$ is the power spectrum of $\delta_N$ and $$\Sigma_{dd}^{2} \to \frac{1}{3\pi^2} \int dp\ \left\{ \left[1-{{\cal S}}(p)\right]^2 P_L(p) + {{\cal S}}^2(p) P_N(p) \right\} \, ,$$ which reduce to the expressions of [@PadWhiCoh09; @NohWhiPad09] as $P_N\to0$. For Poisson shot-noise we expect $P_N=b^{-2}\bar{n}^{-1}$ for tracers with number density $\bar{n}$, assuming linear bias $b$. These equations present the generalization of the treatment in [@PadWhiCoh09; @NohWhiPad09] to include shot-noise. Results ======= One method to forecast the effect of this noise on cosmological parameters constrained by BAO is to replace the Gaussian damping of Eq. (\[eq:processed\]) with Eq. (\[eq:damptransform\]) in the computation of the Fisher matrix for the acoustic scale $s$. For example, in spherical geometry [@SeoEis07] $$\sigma^{-2}_{\ln s} = \frac{V_{\rm survey}}{2}\int\frac{d{{\bf k}}}{(2\pi)^3} \left[\frac{\partial P/\partial\ln s}{P+\bar{n}^{-1}}\right]^2 \label{eq:Fisher}$$ with $$P \propto D(k)\frac{\sin ks}{ks} e^{-k^2\Sigma_{\rm Silk}^2/2} + \cdots$$ where $\Sigma_{\rm Silk}$ is the Silk damping scale and $\cdots$ refers to terms independent of $s$ [@SeoEis07]. The effects of shot-noise show up in the increased damping of the higher harmonics of the signal and in the increase in the variance per ${{\bf k}}$ mode (the denominator in Eq. \[eq:Fisher\]).
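The noisy damping scales are simple one-dimensional integrals and can be evaluated directly. The sketch below is a toy illustration, not the paper's calculation: it assumes a Gaussian smoothing ${\cal S}(k)=\exp(-k^2R^2/4)$, linear bias $b=1$, and a schematic linear spectrum (a real forecast would use a CAMB/CLASS $P_L$). It defines an effective damping scale from the value of $D(k)$ at $k=0.2\,h\,{\rm Mpc}^{-1}$ and shows the qualitative trend that lower $\bar{n}$ (larger $P_N$) degrades reconstruction.

```python
import numpy as np

def sigma2_pair(PL, R, PN, pmax=10.0, n=20000):
    """Return (Sigma_ss^2, Sigma_dd^2) for noise power PN (simple Riemann sum)."""
    p = np.linspace(1e-4, pmax, n)
    dp = p[1] - p[0]
    S = np.exp(-p**2 * R**2 / 4.0)          # assumed Gaussian smoothing
    pref = dp / (3.0 * np.pi**2)
    s2_ss = pref * np.sum(S**2 * (PL(p) + PN))
    s2_dd = pref * np.sum((1.0 - S)**2 * PL(p) + S**2 * PN)
    return s2_ss, s2_dd

def D(k, R, s2_ss, s2_dd):
    """Damping factor, with Sigma_sd^2 = (Sigma_ss^2 + Sigma_dd^2)/2."""
    S = np.exp(-k**2 * R**2 / 4.0)
    s2_sd = 0.5 * (s2_ss + s2_dd)
    return (S**2 * np.exp(-0.5 * k**2 * s2_ss)
            + (1.0 - S)**2 * np.exp(-0.5 * k**2 * s2_dd)
            + 2.0 * S * (1.0 - S) * np.exp(-0.5 * k**2 * s2_sd))

PL = lambda p: 2e6 * p / (1.0 + (p / 0.02)**3)   # schematic toy spectrum
k_fid, R = 0.2, 10.0                              # h/Mpc and Mpc/h
for nbar in (1e-3, 1e-4, 1e-5):                   # tracer density, (h/Mpc)^3
    s2_ss, s2_dd = sigma2_pair(PL, R, PN=1.0 / nbar)   # b = 1 assumed
    sig_eff = np.sqrt(-2.0 * np.log(D(k_fid, R, s2_ss, s2_dd))) / k_fid
    print(nbar, sig_eff)
```

The effective $\Sigma$ grows monotonically as $\bar{n}$ decreases, mirroring the saturation behavior discussed in the text.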
However, almost as much intuition can be gained by approximating $D(k)$ as a Gaussian and asking how the effective damping depends on $P_N$. To this end we define an “effective” $\Sigma$ from the value of the damping at $k_{\rm fid}=0.2\,h\,{\rm Mpc}^{-1}$. Figure \[fig:SigvsNR\], top, shows how $\Sigma_{\rm eff}(z=0)$ depends on $\bar{n}$ for a $\Lambda$CDM model with $\Omega_m=0.25$. Note that reconstruction improves for higher number density tracers, but the gains saturate above approximately $10^{-4}\,(h\,{\rm Mpc}^{-1})^3$. For lower number densities, it is advantageous to use a larger smoothing scale to define the shifted field, as expected. For comparison, without reconstruction the full non-linear smearing at $z=0$ leads to $\Sigma\simeq 10\,h^{-1}$Mpc, scaling as the growth factor to higher redshift. The horizontal dot-dashed line indicates the Silk scale, or the intrinsic width of the acoustic peak, for our cosmology; the observed width of the acoustic peak is the quadrature sum of $\Sigma_{\rm Silk}$ and the non-linear $\Sigma$. A different view is given in the lower panel of Figure \[fig:SigvsNR\], which shows how $\Sigma_{\rm eff}(z=0)$ depends on $R$ for different values of $\bar{n}$. Note the existence of an “optimal” smoothing scale, but that the minimum is extremely broad. These results show that, within the context of Lagrangian perturbation theory, it is straightforward to understand the effects of noise in the density field on the efficacy of reconstruction. Reconstruction performs better when the density field is determined from denser tracers, but the gains saturate at $\bar{n}\sim 10^{-4}\,(h\,{\rm Mpc}^{-1})^3$. For low density tracers it is best to use a large smoothing scale to define the shifts, but the optimum is very broad. I would like to thank Joanne Cohn, Daniel Eisenstein, Yookyung Noh and Nikhil Padmanabhan for conversations and collaborations which significantly informed this work. MW is supported by NASA and the Department of Energy.
--- abstract: 'Within the spectator model, we study the reaction ${\gamma}d \to K^-{\Theta}^+p \to K^-K^+np$ in the threshold energy region. We present predictions for the exclusive and inclusive $K^-$-meson angular distributions in the laboratory system for this reaction, calculated for two possible parity states of the ${\Theta}^+$ resonance at 1.5 and $1.75~{\rm GeV}$ beam energies, with and without imposing the relevant kinematical cuts on those parts of the sampled phase space where the contribution from the main background sources, associated with $\phi(1020)$ and $\Lambda(1520)$ production as well as with $K^-p$ rescattering in the final state, is expected to be dominant. We show that under the chosen kinematics these distributions are sensitive to the ${\Theta}^+$ parity and, therefore, can be used as a filter for the determination of its parity.' author: - | E.Ya. Paryev\ [*Institute for Nuclear Research, Russian Academy of Sciences,*]{}\ [*Moscow 117312, Russia*]{} title: 'ANTIKAON ANGULAR DISTRIBUTIONS IN THE REACTION ${\gamma}d \to K^-{\Theta}^+p \to K^-K^+np$ NEAR THE THRESHOLD AND THE PARITY OF THE ${\Theta}^+$ PENTAQUARK' --- 1 Introduction {#introduction .unnumbered} ============== The study of exotic pentaquark baryons has received considerable interest in recent years (see, for example, refs. \[1–7\], which contain a review of the experimental and theoretical works on the issue) and is one of the most exciting topics of nuclear and hadronic physics nowadays. This interest was triggered by the discovery of the narrow baryon resonance $\Theta^+(1540)$ with positive strangeness $S=+1$ by the LEPS Collaboration at SPring-8/Osaka \[8\] and by subsequent other experiments \[9–17\]. The observed state $\Theta^+(1540)$ decays into a kaon and a nucleon and has been interpreted as a $q^4{\bar q}$ pentaquark with quark structure $uudd{\bar s}$.
Evidence for the existence of another exotic pentaquark state $\Xi^{--}(1862)$, with mass $1.86~{\rm GeV}$, width of about $18~{\rm MeV}$ due to detector resolution, strangeness $S=-2$ and quark content $ddss{\bar u}$, has been reported by the NA49 Collaboration at SPS \[18\]. In addition, the signal of a heavy pentaquark $\Theta_c(3099)$, in which the antistrange quark in the $\Theta^+$ is replaced by an anticharm quark, was found in a recent experiment \[19\]. Meanwhile, there have also been several experiments \[20–27\] at high energy in which no signals for those pentaquark baryons have been observed. Moreover, no definite structure in the $K^+n$ invariant mass spectrum from the reaction ${\gamma}p \to {\bar K}^{0}K^+n$ was observed at 1540 MeV in the recent high–statistics and high–resolution experiment \[28\] undertaken by the CLAS Collaboration at JLab. Therefore, the existence of these baryons is still not completely established and more high–statistics experiments with different beams, targets, and energies are needed to obtain a definite result for or against their existence. The mass of about $1.54~{\rm GeV}$ and decay width of less than 20–$25~{\rm MeV}$ of the $\Theta^+$, extracted from the experiments \[8–17\], are compatible with theoretical predictions of the chiral soliton model \[29\]. The observed $\Theta^+$ width reflects the experimental resolutions, and its actual magnitude, as expected \[30–35\] from the analysis of the kaon–nucleon, kaon–deuteron and kaon–nucleus scattering data, is limited to a few MeV. While the isospin of the $\Theta^+$ resonance is probably zero (see, e.g., SAPHIR \[11\] and CLAS \[12\] results concerning the non–existence of the $\Theta(1540)$ in the $K^+p$ channel), the other quantum numbers of this state, including spin and parity, have not yet been determined experimentally.
Theoretically, most models predict that $\Theta^+$ has spin $1/2$ because of its low mass, whereas their predictions on the $\Theta^+$ parity are still controversial. Thus, for example, the positive parity of the $\Theta^+$ is supported by the chiral soliton model \[29, 36, 37\], various correlated quark models \[38–42\], the Skyrme model \[43\], and a lattice calculation \[44\]. On the other hand, such theoretical approaches as the uncorrelated quark model \[45\], the collective stringlike model of pentaquarks \[46\], the QCD sum rules \[47–49\], and lattice QCD \[50, 51\] favor a negative parity for the $\Theta^+(1540)$. It is thus currently unclear which sign of the $\Theta^+$ parity is correct. Knowledge of this sign is important for distinguishing between the different models mentioned above and, hence, for gaining more insight into the dynamics of low–energy QCD \[52\]. To help determine the parity of the $\Theta^+$, a number of studies have been carried out to understand how the unpolarized \[53–66\] and polarized \[59, 62–77\] observables of the $\Theta^+$ production processes, induced by medium energy photons, nucleons, pions, and kaons on nucleon targets, depend on the parity of $\Theta^+(1540)$. Very recently, the authors of \[78\] have explored how the spin observables in the reaction $\pi^{\pm}{\vec D} \to {\vec \Sigma}^{\pm}\Theta^+$ near the threshold can be used to distinguish the parity of $\Theta^+$. Obviously, the use of unpolarized observables that do not depend strongly on theoretical ambiguities (if such observables exist) for the determination of the $\Theta^+$ parity has an advantage over the use of spin observables, since in the first case much simpler experimental setups and beam conditions are required for the measurements. Recently, in refs.
\[58, 59\], the authors discussed a rather model–independent way to discriminate the $\Theta^+$ parity from the ${\gamma}N \to {\bar K}\Theta^+$ reaction by looking at the antikaon angular distribution. In particular, they have demonstrated that the (unpolarized) differential cross section for the reaction ${\gamma}n \to K^-\Theta^+$ close to the production threshold shows a clear distinction between the two opposite parities of the $\Theta^+$ baryon. Namely, near the threshold [^1], this cross section is isotropic in the ${\gamma}n$ c.m.s. frame if the parity of the $\Theta^+$ is positive, and it follows a $\sin^2{\theta_{K^-}^{'}}$ behavior (where $\theta_{K^-}^{'}$ is the $K^-$-meson polar production angle in the c.m.s.) when the parity of $\Theta^+$ is negative. Therefore, measurement of the reaction ${\gamma}n \to K^-\Theta^+$ in the threshold energy region would allow one to determine the parity of the $\Theta^+$ resonance \[58, 59\]. However, such a measurement can be performed only on a neutron bound in a nucleus, because no free neutron target exists. Often, the neutron bound in the deuteron is used as a substitute for the free one. Thus, for instance, at JLab, the $\Theta^+$ baryon was observed with the CLAS detector \[10\] as a narrow peak in the $K^+n$ system produced in the reaction ${\gamma}n \to K^-\Theta^+ \to K^-K^+n$, where the target neutron was bound in the deuteron. Unfortunately, the neutron in the deuteron is not at rest and is moving with a Fermi momentum which has a component along the incident photon direction of about ${\pm}$ 50 MeV/c. Though this is only a few MeV in energy, it has a huge influence on the kinematics, especially if we are investigating threshold phenomena.
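The size of this kinematic effect is easy to see numerically. The sketch below is an illustration under simplifying assumptions (on-shell neutron, binding energy neglected): it evaluates $\sqrt{s}$ of the ${\gamma}n$ system at $E_\gamma=1.75$ GeV for a neutron at rest and for a neutron moving with $\pm 50$ MeV/c along the beam, and compares with the $K^-\Theta^+$ threshold $\sqrt{s_{\rm thr}}=m_K+m_{\Theta^+}$.

```python
import math

# Masses in GeV (PDG values; m_Theta taken as the 1.54 GeV pole mass).
m_n, m_K, m_Theta = 0.9396, 0.4937, 1.54
sqrt_s_thr = m_K + m_Theta          # ~2.034 GeV

def sqrt_s(E_gamma, pz):
    """Invariant gamma-n mass for a neutron with momentum pz along the beam
    (on-shell neutron energy; deuteron binding neglected in this sketch)."""
    E_t = math.sqrt(pz**2 + m_n**2)
    s = (E_gamma + E_t)**2 - (E_gamma + pz)**2
    return math.sqrt(s)

for pz in (-0.05, 0.0, +0.05):
    print(pz, sqrt_s(1.75, pz) - sqrt_s_thr)
```

A neutron moving against the beam raises $\sqrt{s}$ roughly 50 MeV above threshold, while a neutron moving along the beam at the same momentum pushes the elementary reaction below threshold altogether, so the Fermi motion completely reshapes the near-threshold kinematics.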
This raises the question of whether the specific shape of the near-threshold angular distribution of the ${\gamma}n \to K^-\Theta^+$ reaction predicted in \[58, 59\], which depends on the $\Theta^+$ parity, survives when this reaction takes place on a moving neutron in the deuteron. Answering this question is the main goal of the present investigation, since it clarifies the feasibility of experimentally determining the parity of the exotic pentaquark baryon state $\Theta^+$ by measuring this distribution. In doing so, one must also take into account that the two–body reaction ${\gamma}n \to K^-\Theta^+$ is not directly observable, since the $\Theta^+$ can be detected only through its hadronic decays $\Theta^+ \to K^+n$ \[8, 10–12\] and $\Theta^+ \to K^0p$ \[9, 13–17\]. In this paper we perform a detailed analysis of the reaction ${\gamma}d \to K^-{\Theta}^+p \to K^-K^+np$ in the threshold energy region. We present predictions for the exclusive and inclusive antikaon angular distributions in the laboratory system for this reaction, obtained in the framework of a simple spectator model for two possible parity states of the $\Theta^+$ baryon at 1.5 and 1.75 GeV beam energies, with and without imposing the relevant kinematical cuts on those parts of the sampled phase space where the contribution from the main background sources, associated with $\phi(1020)$ and $\Lambda(1520)$ production as well as with $K^-p$ rescattering in the final state, is expected to be dominant. We show that under the chosen kinematics these distributions are still sensitive to the $\Theta^+$ parity and, therefore, can be used as an important tool for identifying its parity.
2 Spectator model {#spectator-model .unnumbered} ================= Due to the high momentum transfer in the elementary process ${\gamma}n \to K^-\Theta^+$ near the threshold [^2] and a large average separation of the neutron and proton in the deuteron we can analyze the reaction ${\gamma}d \to K^-{\Theta}^+p \to K^-K^+np$ of our interest in the Impulse Approximation (IA) regime \[79, 80\]. In this regime the reaction ${\gamma}d \to K^-{\Theta}^+p \to K^-K^+np$ reduces to the $\Theta^+$ photoproduction off the neutron in the deuteron: $$\gamma+n \to K^-+\Theta^+,$$ and its subsequent decay into the $K^+n$ [^3] : $$\Theta^+ \to K^+n,$$ while the recoiling proton acts as a spectator (see, fig. 1). Considering that the width of $\Theta^+$ is very small compared to its mass and using the results given in refs. \[81, 82\], we can represent in the IA the differential cross section for creation of the four–body final state $K^-K^+np$ through the production/decay sequence (1, 2), taking place on a neutron embedded in a deuteron, as follows: $$d\sigma_{{\gamma}d \to K^-K^+np}^{(IA)}(E_{\gamma})=n_d(|{\bf p}_t|) \delta({\bf p}_{t}+{\bf p}_{s})d{\bf p}_{t}d{\bf p}_{s}\times$$ $$\times \frac{\pi}{I_2(s,m_{K},m_{\Theta^+})} \frac{d\sigma_{{\gamma}n \to K^-{\Theta^+}}(s,{\theta}_{K^-}^{'})} {d{\bf {\Omega}}_{K^-}^{'}}\times$$ $$\times \delta({\bf p}_{\gamma}+{\bf p}_{t}-{\bf p}_{K^-}-{\bf p}_{\Theta^+}) \delta(E_{\gamma}+E_{t}-E_{K^-}-E_{\Theta^+})\frac{d{\bf p}_{K^-}}{E_{K^-}} \frac{d{\bf p}_{\Theta^+}}{E_{\Theta^+}}\times$$ $$\times \frac{d\Gamma_{\Theta^+ \to K^+n}(m_{\Theta^+},{\bf p}_{\Theta^+})} {\Gamma_{\Theta^+}(m_{\Theta^+},{\bf p}_{\Theta^+})},$$ where $$I_2(s,m_K,m_{\Theta^+})=\frac{\pi}{\sqrt{s}}p_{K^-}^{'},$$ $$p_{K^-}^{'}=\left|{\bf p}_{K^-}^{'}\right|=\frac{1}{2\sqrt{s}} \lambda(s,m_{K}^{2},m_{\Theta^+}^{2}),$$ $$\lambda(x,y,z)=\sqrt{{\left[x-({\sqrt{y}}+{\sqrt{z}})^2\right]}{\left[x- ({\sqrt{y}}-{\sqrt{z}})^2\right]}},$$ 
$$s=\left(E_{\gamma}+E_{t}\right)^2-\left({\bf p}_{\gamma}+{\bf p}_{t}\right)^2,$$ $$E_t=M_{d}-E_{s},\,\,\,E_s=\sqrt{{\bf p}_{s}^{2}+m_{p}^2},$$ $$E_{K^-}=\sqrt{{\bf p}_{K^-}^{2}+m_{K}^2},\,\, E_{\Theta^+}=\sqrt{{\bf p}_{\Theta^+}^{2}+m_{\Theta^+}^2};$$ and $$d\Gamma_{\Theta^+ \to K^+n}(m_{\Theta^+},{\bf p}_{\Theta^+})= \frac{\left|M_{\Theta^+ \to K^+n}\right|^2}{2E_{\Theta^+}}(2{\pi})^4 \delta({\bf p}_{\Theta^+}-{\bf p}_{K^+}-{\bf p}_{n})\times$$ $$\times \delta(E_{\Theta^+}-E_{K^+}-E_{n})\frac{d{\bf p}_{K^+}}{(2{\pi})^{3}2E_{K^+}} \frac{d{\bf p}_{n}}{(2{\pi})^{3}2E_{n}},$$ $${\Gamma}_{\Theta^+}(m_{\Theta^+},{\bf p}_{\Theta^+})= {\Gamma}_{\Theta^+}(m_{\Theta^+})/{\gamma}_{\Theta^+},\,\, {\gamma}_{\Theta^+}=E_{\Theta^+}/m_{\Theta^+},$$ $$E_{K^+}=\sqrt{{\bf p}_{K^+}^{2}+m_{K}^2},\,\, E_{n}=\sqrt{{\bf p}_{n}^{2}+m_{n}^2}.$$ Here, $(E_{\gamma},{\bf p}_{\gamma})$, $(E_{t},{\bf p}_{t})$, $(E_{\Theta^+},{\bf p}_{\Theta^+})$, $(E_{K^-},{\bf p}_{K^-})$, $(E_{K^+},{\bf p}_{K^+})$, $(E_{n},{\bf p}_{n})$, and $(E_{s},{\bf p}_{s})$ are the four–momenta in the lab (or deuteron rest) frame of the incoming photon, the struck target neutron, the intermediate $\Theta^+$ resonance [^4] , the outgoing $K^-$, $K^+$-mesons and the neutron, and the recoil proton, respectively; $d\sigma_{{\gamma}n \to K^-{\Theta^+}}(s,{\theta}_{K^-}^{'})/ d{\bf {\Omega}}_{K^-}^{'}$ is the off–shell [^5] differential cross section for the production of a $K^-$-meson under the polar angle ${\theta}_{K^-}^{'}$ with the momentum ${\bf p}_{K^-}^{'}$ in reaction (1) in the ${\gamma}n$ c.m.s. 
(${\bf {\Omega}}_{K^-}^{'}={\bf p}_{K^-}^{'}/p_{K^-}^{'}$); $n_d(|{\bf p}_t|)$ is the nucleon momentum distribution in the deuteron normalized to unity; $m_p(m_n)$, $m_K$ and $M_d$ are the masses in free space of a proton (neutron), kaon and deuteron, respectively; $m_{\Theta^+}$ is the pole mass of the $\Theta^+$ baryon ($m_{\Theta^+}=1.54~{\rm GeV}$); $\left|M_{\Theta^+ \to K^+n}\right|^2$ is the spin–averaged matrix element squared describing the decay (2); ${\Gamma}_{\Theta^+}(m_{\Theta^+})$ is the total width of the decay of $\Theta^+$ in its rest frame, taken at the pole of the resonance. Let us now specify the off–shell differential cross section $d\sigma_{{\gamma}n \to K^-{\Theta^+}}(s,{\theta}_{K^-}^{'})/ d{\bf {\Omega}}_{K^-}^{'}$ for $K^-$ production in the elementary process (1), entering into eq. (3). Following refs. \[79–82\], we assume that this cross section is equivalent to the respective on–shell cross section calculated for the off–shell kinematics of the reaction (1). The on–shell differential cross section for the reaction ${\gamma}n \to K^-{\Theta^+}$ has been calculated theoretically in refs. \[58, 59\] using both the respective hadronic model and the CGLN amplitudes. The results of the calculations show that this cross section in the threshold energy region, i.e. at $E_{\gamma} \le 2~{\rm GeV}$, can be approximately parametrized by $$\frac{d\sigma_{{\gamma}n \to K^-{\Theta^+}}(s,{\theta}_{K^-}^{'})} {d{\bf {\Omega}}_{K^-}^{'}}=\left\{ \begin{array}{ll} \frac{1}{4{\pi}}\sigma_{{\gamma}n \to K^-{\Theta^+}}^{(+)}(\sqrt{s}) &\mbox{for the positive $\Theta^+$ parity}, \\ &\\ \frac{3}{8{\pi}}\sin^2{{\theta}_{K^-}^{'}} \sigma_{{\gamma}n \to K^-{\Theta^+}}^{(-)}(\sqrt{s}) &\mbox{for the negative $\Theta^+$ parity}. 
\end{array} \right.$$ Here, $\sigma_{{\gamma}n \to K^-{\Theta^+}}^{(+)}(\sqrt{s})$ and $\sigma_{{\gamma}n \to K^-{\Theta^+}}^{(-)}(\sqrt{s})$ are the on–shell total cross sections of the elementary process ${\gamma}n \to K^-{\Theta^+}$ for the positive and negative $\Theta^+$ parities, respectively. These cross sections have also been calculated in refs. \[58, 59\] and, taking into account that $s$–wave ($p$–wave) antikaon production is expected \[58, 59\] near threshold when the $\Theta^+$ has positive (negative) parity, we have parametrized the results of those calculations as follows: $$\sigma_{{\gamma}n \to K^-{\Theta^+}}^{(+)}(\sqrt{s})=\frac{675p_{K^-}^{'}} {1+2p_{K^-}^{'2}} [{\rm nb}],$$ $$\sigma_{{\gamma}n \to K^-{\Theta^+}}^{(-)}(\sqrt{s})=\frac{595p_{K^-}^{'3}} {1+15p_{K^-}^{'3}} [{\rm nb}],$$ with $p_{K^-}^{'}$ denoting the $K^-$ three–momentum in the ${\gamma}n$ c.m.s. measured in GeV/c. This momentum is defined above by eq. (5). An inspection of formulas (14), (15) leads to the conclusion that the total cross sections for the negative–parity $\Theta^+$ are approximately 10–100 times smaller than those for the positive–parity one in the photon energy range $1.73~{\rm GeV} < E_{\gamma} < 3~{\rm GeV}$. Thus, for example, the positive and negative $\Theta^+$ parity cases give total cross sections of $100~{\rm nb}$ and $2~{\rm nb}$, respectively, at $E_{\gamma}=1.8~{\rm GeV}$, whereas at $E_{\gamma}=2.5~{\rm GeV}$ these cross sections are, correspondingly, $230~{\rm nb}$ and $28~{\rm nb}$ [^6] . The $K^-$-meson production angle ${\theta}_{K^-}^{'}$ in the ${\gamma}n$ c.m.s., entering into eq. (13), is defined by $$\cos{{\theta}_{K^-}^{'}}=\frac{{\bf p}_{\gamma}^{'}{\bf p}_{K^-}^{'}} {p_{\gamma}^{'}p_{K^-}^{'}},$$ where ${\bf p}_{\gamma}^{'}$ denotes the three–momentum of the incident photon in this system. 
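As a numerical illustration (a sketch in Python, not part of the original analysis; the masses and the parametrizations (14), (15) are taken from the text), one can evaluate the c.m. antikaon momentum of eq. (5) for a 1.8 GeV photon incident on a free neutron at rest and reproduce the quoted threshold cross sections of roughly 100 nb and 2 nb:

```python
import math

M_K, M_N, M_THETA = 0.4937, 0.9396, 1.54   # kaon, neutron, Theta+ masses [GeV]

def lam(x, y, z):
    """Kinematical (triangle) function of eq. (7)."""
    return math.sqrt((x - (math.sqrt(y) + math.sqrt(z)) ** 2)
                     * (x - (math.sqrt(y) - math.sqrt(z)) ** 2))

def p_cm(s):
    """K^- c.m. momentum of eq. (5) for squared invariant energy s [GeV^2]."""
    return lam(s, M_K ** 2, M_THETA ** 2) / (2.0 * math.sqrt(s))

def sigma_plus(p):
    """Eq. (14): positive Theta+ parity (s-wave); p in GeV/c, result in nb."""
    return 675.0 * p / (1.0 + 2.0 * p ** 2)

def sigma_minus(p):
    """Eq. (15): negative Theta+ parity (p-wave); p in GeV/c, result in nb."""
    return 595.0 * p ** 3 / (1.0 + 15.0 * p ** 3)

# 1.8 GeV photon on a free neutron at rest: p_t = 0, E_t = m_n in eqs. (8), (9)
E_gamma = 1.8
s = (E_gamma + M_N) ** 2 - E_gamma ** 2
p = p_cm(s)
print(p, sigma_plus(p), sigma_minus(p))   # ~0.155 GeV/c, ~100 nb, ~2 nb
```

The two-orders-of-magnitude gap between the two parity assignments near threshold is what ultimately drives the sensitivity exploited below.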
Writing the relativistic invariant $t=[(E_{\gamma},{\bf p}_{\gamma})-(E_{K^-},{\bf p}_{K^-})]^2$ in the laboratory and in the ${\gamma}n$ c.m. systems and equating the results, we readily obtain $$\cos{{\theta}_{K^-}^{'}}=\frac{p_{\gamma}p_{K^-}\cos{{\theta}_{K^-}}+ (E_{\gamma}^{'}E_{K^-}^{'}-E_{\gamma}E_{K^-})} {p_{\gamma}^{'}p_{K^-}^{'}}.$$ In the above, ${\theta}_{K^-}$ is the angle between the momenta ${\bf p}_{\gamma}$ and ${\bf p}_{K^-}$ in the lab frame, while $E_{\gamma}^{'}$ and $E_{K^-}^{'}$ are the energies of the initial photon and the outgoing antikaon in the ${\gamma}n$ c.m.s., respectively. These energies are given by $$E_{\gamma}^{'}=p_{\gamma}^{'}=\frac{1}{2\sqrt{s}} \lambda(s,0,E_{t}^2-p_{t}^2),$$ $$E_{K^-}^{'}=\sqrt{p_{K^-}^{'2}+m_{K}^2}.$$ Consider now the spin–averaged matrix element squared $\left|M_{\Theta^+ \to K^+n}\right|^2$ describing the decay $\Theta^+ \to K^+n$. By the parity and angular momentum conservation laws, the decay amplitude $M_{\Theta^+ \to K^+n}$ should exhibit a $p$– or $s$–wave behavior (for a spin–$\frac{1}{2}$ $\Theta^+$) in the $\Theta^+$ rest frame when the $\Theta^+$ has positive or negative parity, respectively. However, if the spin state of the outgoing neutron is not fixed, the difference between the angular distributions of the $\Theta^+ \to K^+n$ decay for the positive and negative $\Theta^+$ parities disappears \[65\], which means that the spin–averaged matrix element squared $\left|M_{\Theta^+ \to K^+n}\right|^2$ results in an isotropic angular distribution of this decay for both parities of the $\Theta^+$. Taking this fact into consideration and integrating eq. 
(10) over the momenta ${\bf p}_{K^+}$ and ${\bf p}_{n}$ in the $\Theta^+$ rest frame, we can easily get the following relation between $\left|M_{\Theta^+ \to K^+n}\right|^2$ and the partial width $\Gamma_{\Theta^+ \to K^+n}(m_{\Theta^+})$ of the $\Theta^+ \to K^+n$ decay: $$\frac{\left|M_{\Theta^+ \to K^+n}\right|^2}{(2{\pi})^2}=\frac{2m_{\Theta^+}^2} {{\pi}\stackrel{*}p_{K^+}}\Gamma_{\Theta^+ \to K^+n}(m_{\Theta^+}),$$ where [^7] $$\stackrel{*}p_{K^+}=\frac{1}{2m_{\Theta^+}} \lambda(m_{\Theta^+}^2,m_{K}^2,m_{n}^2).$$ By using the relation (20), one finds that the ratio\ $d\Gamma_{\Theta^+ \to K^+n}(m_{\Theta^+},{\bf p}_{\Theta^+})/ \Gamma_{\Theta^+}(m_{\Theta^+},{\bf p}_{\Theta^+})$, entering into eq. (3), reduces to a simpler form: $$\frac{d\Gamma_{\Theta^+ \to K^+n}(m_{\Theta^+},{\bf p}_{\Theta^+})} {\Gamma_{\Theta^+}(m_{\Theta^+},{\bf p}_{\Theta^+})}=\frac{m_{\Theta^+}} {{\pi}\stackrel{*}p_{K^+}}BR(\Theta^+ \to K^+n) \delta({\bf p}_{\Theta^+}-{\bf p}_{K^+}-{\bf p}_{n})\times$$ $$\times \delta(E_{\Theta^+}-E_{K^+}-E_{n})\frac{d{\bf p}_{K^+}}{2E_{K^+}} \frac{d{\bf p}_{n}}{2E_{n}},$$ where $$BR(\Theta^+ \to K^+n)= \Gamma_{\Theta^+ \to K^+n}(m_{\Theta^+})/\Gamma_{\Theta^+}(m_{\Theta^+}).$$ According to \[21, 83, 84\], $BR(\Theta^+ \to K^+n)=1/2$ for both parities of $\Theta^+$. Before going to the next step, we discuss now the nucleon momentum distribution in the deuteron $n_d(p_t)$ [^8] needed for our calculations. This momentum distribution has been calculated in \[85\], using the Paris potential \[86, 87\], and the results of calculations have been parametrized here by the simple analytical form (A1) (see also formula (23) in ref. \[82\]). This form has been employed in our calculations of the $K^-$ production cross sections in the reaction ${\gamma}d \to K^-{\Theta^+}p \to K^-K^+np$ reported in the paper. In fig. 
2 we present the momentum distribution of the proton–spectator $p_s^2n_d(p_s)$ in this reaction (solid curve) calculated using the parametrization (A1) from \[85\] for $n_d(p_s)$. It is clearly seen that this distribution has a sharp peak with a maximum near $45~{\rm {MeV/c}}$ and a long tail above $150~{\rm {MeV/c}}$. Now, let us proceed to the identification of the kinematic regions where the reaction ${\gamma}d \to K^-K^+np$, going via the production/decay sequence (1, 2), is expected to dominate over the non–resonant background [^9] . It is natural to consider the reaction precisely in these identified kinematic regions. According to \[8, 10, 65, 66\], the main contribution to the non–resonant background in the near–threshold region with $E_{\gamma} \le 2~{\rm GeV}$ comes from the intermediate $\phi$-meson and $\Lambda(1520)$-hyperon photoproduction: ${\gamma}N \to {\phi}N \to K^+K^-N$ and ${\gamma}p \to K^+{\Lambda(1520)} \to K^+K^-p$. Thus, for example, our calculations [^10] show that at $E_{\gamma}=1.8~{\rm GeV}$ the $K^+K^-$ invariant mass $M_{K^+K^-}$ in the process ${\gamma}n \to K^-{\Theta^+} \to K^-K^+n$, taking place on a free neutron at rest, is distributed in the region $1.0~{\rm GeV} \le M_{K^+K^-} \le 1.1~{\rm GeV}$. The narrow mass distribution of the $\phi$ is concentrated largely in the region of $K^+K^-$ invariant masses $1.00~{\rm GeV} < M_{K^+K^-} < 1.04~{\rm GeV}$ \[8, 65\] (the so–called “$\phi$ window”) and, therefore, lies completely within the sampled kinematic region indicated above, which makes the $\phi$-meson contribution to the respective data sample significant \[8, 10, 65, 66\]. To suppress this contribution and enhance the signal–to–background ratio, the $\phi$-mesons have to be removed from the data sample by rejecting events with $1.00~{\rm GeV} < M_{K^+K^-} < 1.04~{\rm GeV}$ \[8, 65\]. 
This means that we have to eliminate the phase space with the $K^+K^-$ invariant mass from 1.00 to $1.04~{\rm GeV}$ in our consideration of the reaction ${\gamma}d \to K^-{\Theta^+}p \to K^-K^+np$. To do this, we will multiply the differential cross section (3) by the “$\phi$ phase space eliminating” factor $Q(M_{K^+K^-})$ defined as: $$Q(M_{K^+K^-})=\left\{ \begin{array}{ll} 0 &\mbox{for $1.00~{\rm GeV} < M_{K^+K^-} < 1.04~{\rm GeV}$}, \\ &\\ 1 &\mbox{otherwise}. \end{array} \right.$$ Before going further, one has to evaluate the invariant mass $M_{K^+K^-}$ of a $K^+K^-$–pair produced in the production/decay sequence (1, 2). To evaluate this quantity it is convenient to work in the ${\gamma}n$ c.m.s. Then, the invariant $M_{K^+K^-}^2$ can be expressed through the energies and momenta of the $K^+$ and the $K^-$, $E_{K^+}^{'}$, ${\bf p}_{K^+}^{'}$ and $E_{K^-}^{'}$, ${\bf p}_{K^-}^{'}$, in this system in the following way: $$M_{K^+K^-}^2=\left(E_{K^+}^{'}+E_{K^-}^{'}\right)^2- \left({\bf p}_{K^+}^{'}+{\bf p}_{K^-}^{'}\right)^2=2m_K^2+ 2E_{K^+}^{'}E_{K^-}^{'}-2{\bf p}_{K^+}^{'}{\bf p}_{K^-}^{'},$$ where $$E_{K^+}^{'}=\sqrt{{\bf p}_{K^+}^{'2}+m_{K}^2},$$ and the quantities $p_{K^-}^{'}$ and $E_{K^-}^{'}$ are defined above by eqs. (5) and (19), respectively. Taking into account that the kaon momentum ${\bf p}_{K^+}^{'}$ can be expressed via its momentum ${\bf {\stackrel{*}p}}_{K^+}$ in the $\Theta^+$ rest frame and the $\Theta^+$ momentum ${\bf p}_{\Theta^+}^{'}$ in the ${\gamma}n$ c.m.s. 
as \[88\] $${\bf p}_{K^+}^{'}=\frac{p_{\Theta^+}^{'}\stackrel{*}E_{K^+}}{m_{\Theta^+}} {\bf n}_{\Theta^+}+\stackrel{*}p_{K^+}\left\{{\bf n}_{K^+}^{*}+ ({\gamma}_{\Theta^+}^{'}-1)\cos{{\theta}_{K^+}^{*}}{\bf n}_{\Theta^+}\right\},$$ where $$\stackrel{*}E_{K^+}=\sqrt{\stackrel{*}p_{K^+}^{2}+m_{K}^2},\,\, {\gamma}_{\Theta^+}^{'}=E_{\Theta^+}^{'}/m_{\Theta^+},\,\, E_{\Theta^+}^{'}=\sqrt{{\bf p}_{\Theta^+}^{'2}+m_{\Theta^+}^2},\,\, p_{\Theta^+}^{'}=p_{K^-}^{'},$$ $${\bf n}_{\Theta^+}={\bf p}_{\Theta^+}^{'}/p_{\Theta^+}^{'},\,\, {\bf n}_{K^+}^{*}={\bf \stackrel{*}p}_{K^+}/\stackrel{*}p_{K^+}, \,\, \cos{{\theta}_{K^+}^{*}}={\bf n}_{K^+}^{*}{\bf n}_{\Theta^+},$$ we easily get: $${\bf p}_{K^+}^{'2}=\left(\frac{p_{\Theta^+}^{'}\stackrel{*}E_{K^+}} {m_{\Theta^+}}\right)^{2}+ \frac{2\stackrel{*}E_{K^+}E_{\Theta^+}^{'}\stackrel{*}p_{K^+}p_{\Theta^+}^{'}} {m_{\Theta^+}^2}\cos{{\theta}_{K^+}^{*}}+$$ $$+ \stackrel{*}p_{K^+}^{2} \left\{1+({\gamma}_{\Theta^+}^{'2}-1)\cos^2{{\theta}_{K^+}^{*}}\right\},$$ $$2{\bf p}_{K^+}^{'}{\bf p}_{K^-}^{'}=-\frac{2p_{K^-}^{'}}{m_{\Theta^+}} \left[p_{\Theta^+}^{'}\stackrel{*}E_{K^+}+ \stackrel{*}p_{K^+}E_{\Theta^+}^{'}\cos{{\theta}_{K^+}^{*}}\right].$$ The $K^+$ momentum $\stackrel{*}p_{K^+}$ in the $\Theta^+$ rest frame, entering into eqs. (27)–(31), is defined above by eq. (21). It should be emphasized that, according to (5), (19), (25)–(31), the invariant mass $M_{K^+K^-}$ of interest depends only on the cosine of the $K^+$ decay angle ${\theta}_{K^+}^{*}$ in the $\Theta^+$ rest system and the squared invariant energy $s$ available in the first–chance ${\gamma}n$–collision, which simplifies the calculations presented below. 
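Since $M_{K^+K^-}$ depends only on $s$ and $\cos{\theta}_{K^+}^{*}$, the chain of eqs. (5), (19), (21), (25)–(31) can be evaluated directly. The sketch below (Python; illustrative only, assuming a free neutron at rest at $E_{\gamma}=1.8$ GeV, with the masses taken from the text) scans $\cos{\theta}_{K^+}^{*}$ and confirms that the resulting $M_{K^+K^-}$ stays within the quoted interval of roughly 1.0 to 1.1 GeV:

```python
import math

m_K, m_n, m_Theta = 0.4937, 0.9396, 1.54   # GeV, as in the text
E_gamma = 1.8

def lam(x, y, z):
    """Kinematical function of eq. (7)."""
    return math.sqrt((x - (math.sqrt(y) + math.sqrt(z)) ** 2)
                     * (x - (math.sqrt(y) - math.sqrt(z)) ** 2))

def m_kk(s, cos_t):
    """M_{K+K-} from eqs. (25)-(31) as a function of s and cos(theta*_{K+})."""
    sqs = math.sqrt(s)
    p_Km = lam(s, m_K ** 2, m_Theta ** 2) / (2.0 * sqs)      # eq. (5)
    E_Km = math.sqrt(p_Km ** 2 + m_K ** 2)                   # eq. (19)
    p_Th = p_Km                                              # Theta+ recoils against K- (eq. (28))
    E_Th = math.sqrt(p_Th ** 2 + m_Theta ** 2)
    g_Th = E_Th / m_Theta
    p_st = lam(m_Theta ** 2, m_K ** 2, m_n ** 2) / (2.0 * m_Theta)   # eq. (21)
    E_st = math.sqrt(p_st ** 2 + m_K ** 2)
    # eq. (30): squared K+ momentum in the gamma-n c.m.s.
    p_Kp2 = ((p_Th * E_st / m_Theta) ** 2
             + 2.0 * E_st * E_Th * p_st * p_Th * cos_t / m_Theta ** 2
             + p_st ** 2 * (1.0 + (g_Th ** 2 - 1.0) * cos_t ** 2))
    E_Kp = math.sqrt(p_Kp2 + m_K ** 2)                       # eq. (26)
    # eq. (31): twice the scalar product of the K+ and K- momenta
    dot2 = -2.0 * p_Km / m_Theta * (p_Th * E_st + p_st * E_Th * cos_t)
    return math.sqrt(2.0 * m_K ** 2 + 2.0 * E_Kp * E_Km - dot2)      # eq. (25)

s = (E_gamma + m_n) ** 2 - E_gamma ** 2   # free neutron at rest
masses = [m_kk(s, -1.0 + 2.0 * i / 200) for i in range(201)]
print(min(masses), max(masses))   # roughly 0.99 and 1.10 GeV
```

Note that the whole interval sits just above the $2m_K$ threshold, which is why the $\phi$ window overlaps the signal region so strongly.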
Similarly, our calculations [^11] show that at $E_{\gamma}=1.8~{\rm GeV}$ the invariant mass $M_{K^-p}$ of the $K^-p$ system in the reaction ${\gamma}d \to K^-{\Theta^+}p$ is distributed in the region $1.432~{\rm GeV} \le M_{K^-p} \le 1.665~{\rm GeV}$, which straddles the $\Lambda(1520)$ mass, since the peak corresponding to the $\Lambda(1520)$ lies basically \[10\] in the region of $K^-p$ invariant masses $1.485~{\rm GeV} < M_{K^-p} < 1.551~{\rm GeV}$. This makes the $\Lambda(1520)$ contribution to the same final state of our interest significant \[8, 10, 66\]. To reduce this contribution and improve the signal–to–background ratio, the $\Lambda(1520)$ resonance has to be removed from the data sample by rejecting events with \[10\] $1.485~{\rm GeV} < M_{K^-p} < 1.551~{\rm GeV}$. This means that we have to eliminate also the phase space with the $K^-p$ invariant mass from 1.485 to 1.551 GeV in our study of the reaction ${\gamma}d \to K^-{\Theta^+}p \to K^-K^+np$. To do this, we will also multiply the differential cross section (3) by the “$\Lambda(1520)$ phase space eliminating” factor $Q(M_{K^-p})$, defined in the following way: $$Q(M_{K^-p})=\left\{ \begin{array}{ll} 0 &\mbox{for $1.485~{\rm GeV} < M_{K^-p} < 1.551~{\rm GeV}$}, \\ &\\ 1 &\mbox{otherwise}. \end{array} \right.$$ The invariant mass squared $M_{K^-p}^2$ can be obtained in a straightforward manner, and the result is $$M_{K^-p}^2=\left(E_{K^-}+\sqrt{(-{\bf p}_t)^2+m_p^2}\right)^2- \left({\bf p}_{K^-}-{\bf p}_{t}\right)^2=$$ $$=m_K^2+m_p^2+ 2E_{K^-}\sqrt{(-{\bf p}_t)^2+m_p^2}+ 2p_{K^-}p_{t}\cos{\theta_{{\bf p}_t{\bf p}_{K^-}}}$$ with $\theta_{{\bf p}_t{\bf p}_{K^-}}$ being the angle between the momenta ${\bf p}_t$ and ${\bf p}_{K^-}$ in the lab frame. 
This angle is related to the angles between ${\bf p}_{\gamma}$ and ${\bf p}_t$ ($\theta_t$), ${\bf p}_{\gamma}$ and ${\bf p}_{K^-}$ ($\theta_{K^-}$) and to the azimuthal angles ${\varphi}_t$ of ${\bf p}_t$ and ${\varphi}_{K^-}$ of ${\bf p}_{K^-}$ by the trigonometric relation $$\cos{\theta_{{\bf p}_t{\bf p}_{K^-}}}=\cos{\theta_{K^-}}\cos{\theta_t}+ \sin{\theta_{K^-}}\sin{\theta_t}\cos{({\varphi}_t-{\varphi}_{K^-})}.$$ There is yet another background source relevant to the antikaon angular distributions from the primary production process (1). This source of background is related to the possible rescattering [^12] of the produced $K^-$-meson on the proton in the final state (see fig. 3). Such rescattering may distort the angular distributions of antikaons produced in ${\gamma}d$ interactions through the primary photon–induced reaction channel ${\gamma}n \to K^-{\Theta^+}$ of interest (see fig. 1). Therefore, we also need to specify the kinematic region, in addition to those specified before, where the $K^-p$–rescattering is expected to be negligible. This region has to be taken into consideration as well in the subsequent calculations of the $K^-$ angular distributions from the primary reaction channel (1). The effects of rescattering on the recoil nucleon in hadron– and photon–deuteron interactions have been discussed previously (see, e.g., refs. \[89–98\] and references therein). Following \[89, 90\], the ratio of the moduli of the amplitudes corresponding to the diagrams of fig. 3 and fig. 1 ($M^{(FSI)}$ and $M^{(IA)}$, respectively) can be estimated using the relation $$\frac{|M^{(FSI)}|}{|M^{(IA)}|} \approx \frac{|f_{K^-p \to K^-p}|}{4{\pi}R_d} \frac{1}{q_{K^-p}R_d}\frac{{\varphi}_d(0)}{{\varphi}_d(p_s)}.$$ Here, $f_{K^-p \to K^-p}$ is the elastic $K^-p$ scattering amplitude normalized to the $K^-$ differential cross section $d\sigma_{K^-p \to K^-p}/d{\bf {\Omega}}_{c.m.s.}$ in the $K^-p$ c.m.s. 
by $|f_{K^-p \to K^-p}|^2=d\sigma_{K^-p \to K^-p}/d{\bf {\Omega}}_{c.m.s.}$; $q_{K^-p}$ is the relative momentum of the intermediate $K^-$-meson and the spectator proton; $R_d$ is the average internucleon distance inside the deuteron; ${\varphi}_d$ is the deuteron wave function in momentum space. The $K^-p$ rescattering plays a significant role when the relative momentum $q_{K^-p}$ falls in the low–momentum region $q_{K^-p} < 100~{\rm {MeV/c}}$, where the $K^-p$ elastic cross section $\sigma_{K^-p \to K^-p}$ is large ($\sigma_{K^-p \to K^-p} > (80-100)~{\rm mb}$ \[99–101\]). Therefore, to reduce this rescattering we will restrict ourselves in the following to relative momenta $q_{K^-p} \ge 100~{\rm {MeV/c}}$ [^13] . Then, employing, e.g., the Hulthen wave function [^14] \[102\] for ${\varphi}_d$ to estimate the ratio (35) and assuming that $|f_{K^-p \to K^-p}| \approx \sqrt{80{\rm mb}/{4\pi}}$ here, one obtains that for $q_{K^-p} \ge 100~{\rm {MeV/c}}$ (or for $M_{K^-p} \ge 1.447~{\rm GeV}$) the contribution from the diagram of fig. 3 is suppressed at least for recoil proton momenta $p_s < 280~{\rm {MeV/c}}$. Hence, the spectator mechanism of fig. 1 gives the dominant contribution to the $\Theta^+$ photoproduction from the neutron in the deuteron at small values of the spectator proton momentum $p_s$. So, the above considerations require that the $K^-p$ invariant mass be greater than $1.447~{\rm GeV}$ and the recoil proton momentum be smaller than $280~{\rm {MeV/c}}$. To fulfil these requirements, we will multiply the differential cross section (3) by one more, “$K^-p$ phase space terminating”, factor $Q(M_{K^-p},p_s)$. This factor is given by: $$Q(M_{K^-p},p_s)={\theta}(M_{K^-p}-M_{cut}){\theta}(p_{cut}-p_s),$$ where $M_{cut}=1.447~{\rm GeV}$, $p_{cut}=280~{\rm {MeV/c}}$ and $\theta(x)$ is the standard step function. 
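The pieces introduced above, the invariant mass (33) with the opening angle (34) and the cut factors (24), (32), (36), are straightforward to code. A minimal illustrative Python sketch (the sample momenta at the bottom are hypothetical, chosen only to exercise the functions):

```python
import math

m_K, m_p = 0.4937, 0.9383   # kaon and proton masses [GeV]

def cos_opening(th_K, th_t, phi_K, phi_t):
    """Eq. (34): angle between p_t and p_K- from lab polar/azimuthal angles [rad]."""
    return (math.cos(th_K) * math.cos(th_t)
            + math.sin(th_K) * math.sin(th_t) * math.cos(phi_t - phi_K))

def m_kp(p_K, p_t, cos_tk):
    """Eq. (33): K-p invariant mass from lab momenta [GeV/c]."""
    E_K = math.sqrt(p_K ** 2 + m_K ** 2)
    E_p = math.sqrt(p_t ** 2 + m_p ** 2)
    return math.sqrt(m_K ** 2 + m_p ** 2 + 2.0 * E_K * E_p
                     + 2.0 * p_K * p_t * cos_tk)

def Q_phi(m_kk):
    """Eq. (24): reject the phi window 1.00 GeV < M_{K+K-} < 1.04 GeV."""
    return 0.0 if 1.00 < m_kk < 1.04 else 1.0

def Q_lambda1520(mkp):
    """Eq. (32): reject the Lambda(1520) band 1.485 GeV < M_{K-p} < 1.551 GeV."""
    return 0.0 if 1.485 < mkp < 1.551 else 1.0

def Q_fsi(mkp, p_s, m_cut=1.447, p_cut=0.280):
    """Eq. (36): keep M_{K-p} >= M_cut and spectator momentum p_s <= p_cut."""
    return 1.0 if (mkp >= m_cut and p_s <= p_cut) else 0.0

# Hypothetical sample kinematics: 0.4 GeV/c kaon, 0.1 GeV/c struck-neutron momentum
c = cos_opening(0.3, 1.2, 0.0, 1.5)
mkp = m_kp(0.4, 0.1, c)
cut = Q_phi(1.08) * Q_lambda1520(mkp) * Q_fsi(mkp, 0.1)
print(mkp, cut)   # this sample point happens to fall inside the Lambda(1520) band
```

The product of the three factors is what multiplies the IA cross section in the combined expression below; an event is kept only if it survives all three windows simultaneously.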
Finally, by combining (3), (24), (32) and (36), we obtain within the IA the following expression for the cross section of the reaction ${\gamma}d \to K^-{\Theta^+}p \to K^-K^+np$, differential in all the physical variables and including the phase space cuts introduced above: $$d\sigma_{{\gamma}d\to K^-K^+np}(E_{\gamma})= d\sigma_{{\gamma}d\to K^-K^+np}^{(IA)}(E_{\gamma})Q(M_{K^+K^-})Q(M_{K^-p}) Q(M_{K^-p},p_s).$$ Integrating eq. (37) over the available phase space and accounting for eq. (22), we obtain after some algebra the exclusive differential cross section of eq. (37) in the laboratory frame of practical interest, where the final antikaon is detected without analyzing its energy for a fixed three–momentum of the proton–spectator: $$\frac{d\sigma_{{\gamma}d\to K^-K^+np}(E_{\gamma})} {d{\bf {\Omega}}_{K^-}dp_sd{\bf {\Omega}}_s}=p_{s}^{2}n_d(p_s) {\theta}(v_{K^-}^{'}-v_c){\theta}(p_{cut}-p_s)\times$$ $$\times \frac{p_{K^-}^{(1)2}}{p_{K^-}^{'} \sqrt{p_{K^-}^{'2}-{\gamma}_{c}^{2}v_{c}^{2}m_{K}^{2} \sin^2{{\theta}_{K^-}^{c}}}} \frac{d\sigma_{{\gamma}n\to K^-{\Theta}^+} [s,{\theta}_{K^-}^{'}(p_{K^-}^{(1)})]} {d{\bf {\Omega}}_{K^-}^{'}}\times$$ $$\times Q[M_{K^-p}(p_{K^-}^{(1)})] {\theta}[M_{K^-p}(p_{K^-}^{(1)})-M_{cut}]\times$$ $$\times \frac{1}{2}BR({\Theta}^{+} \to K^+n) \int\limits_{-1}^{1}Q[M_{K^+K^-}(p_t,{\theta}_t,\cos{{\theta}_{K^+}^{*}})] d\cos{{\theta}_{K^+}^{*}}+$$ $$+ p_{s}^{2}n_d(p_s){\theta}(v_c-v_{K^-}^{'}){\theta}(p_{cut}-p_s)\times$$ $$\times \sum_{i=1}^2\frac{p_{K^-}^{(i)2}}{p_{K^-}^{'} \sqrt{p_{K^-}^{'2}-{\gamma}_{c}^{2}v_{c}^{2}m_{K}^{2} \sin^2{{\theta}_{K^-}^{c}}}} \frac{d\sigma_{{\gamma}n\to K^-{\Theta}^+} [s, {\theta}_{K^-}^{'}(p_{K^-}^{(i)})]} {d{\bf {\Omega}}_{K^-}^{'}}\times$$ $$\times Q[M_{K^-p}(p_{K^-}^{(i)})] {\theta}[M_{K^-p}(p_{K^-}^{(i)})-M_{cut}] \times$$ $$\times \frac{1}{2}BR({\Theta}^{+} \to K^+n) \int\limits_{-1}^{1}Q[M_{K^+K^-}(p_t,{\theta}_t,\cos{{\theta}_{K^+}^{*}})] d\cos{{\theta}_{K^+}^{*}},$$ where $$\begin{aligned} {\bf 
{\Omega}}_{K^-}={\bf p}_{K^-}/p_{K^-},\,\, {\bf {\Omega}}_{s}={\bf p}_{s}/p_{s},\,\, v_{K^-}^{'}=p_{K^-}^{'}/E_{K^-}^{'},\\ \nonumber v_c=|{\bf v}_c|,\,\, {\bf v}_c=\frac{{\bf p}_{\gamma}+{\bf p}_t}{E_{\gamma}+E_t},\,\, {\gamma}_c=\frac{1}{\sqrt{1-v_{c}^{2}}}; \end{aligned}$$ $$\cos{\theta_{K^-}^{c}}=\frac{{\bf p}_{K^-}{\bf v}_c}{p_{K^-}v_c}= \frac{p_{\gamma}\cos{\theta_{K^-}}+p_t\cos{\theta_{{\bf p}_t{\bf p}_{K^-}}}} {v_c(E_{\gamma}+E_t)},\,\, {\bf p}_t=-{\bf p}_s$$ and $$p_{K^-}^{(1)}=\frac{p_{K^-}^{'}(v_c/v_{K^-}^{'})\cos{{\theta}_{K^-}^{c}}+ \sqrt{p_{K^-}^{'2}-\gamma_{c}^{2}v_{c}^{2}m_{K}^{2}\sin^2{\theta_{K^-}^c}}} {{\gamma}_c(1-v_{c}^2\cos^2{{\theta}_{K^-}^c})}\quad \mbox{for} \quad v_{K^-}^{'} > v_c,$$ $$p_{K^-}^{(1, 2)}=\frac{p_{K^-}^{'}(v_c/v_{K^-}^{'})\cos{{\theta}_{K^-}^{c}}\pm \sqrt{p_{K^-}^{'2}-\gamma_{c}^{2}v_{c}^{2}m_{K}^{2}\sin^2{\theta_{K^-}^c}}} {{\gamma}_c(1-v_{c}^2\cos^2{{\theta}_{K^-}^c})}\quad \mbox{for} \quad v_{K^-}^{'} \le v_c.$$ The quantity $\cos{\theta_{{\bf p}_t{\bf p}_{K^-}}}$, entering into eq. (40), is defined above by eq. (34). It is worth noting that, as is evident from eqs. (41), (42), when the $K^-$-meson velocity $v_{K^-}^{'}$ in the ${\gamma}n$ c.m.s. is greater than the velocity $v_c$ of this system in the lab frame ($v_{K^-}^{'} > v_c$), the polar $K^-$ production angle $\theta_{K^-}^{c}$ varies without restriction between 0 and $\pi$; otherwise ($v_{K^-}^{'} \le v_c$) it lies in the interval $(0, \arcsin{(p_{K^-}^{'}/{\gamma_c}{v_c}m_K)})$. Thus, for instance, simple calculations show that the maximal value of the $K^-$-meson production angle $\theta_{K^-}$ in the lab system in the reaction ${\gamma}n \to K^-\Theta^+$ taking place on a free neutron at rest [^15] at $E_{\gamma}=1.8$ GeV amounts approximately to $21^{\circ}$, i.e., in the threshold energy region antikaons are mainly emitted in this reaction in forward directions (see also figs. 4–8 below). 
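The limiting lab angle quoted above follows directly from the bound $\arcsin{(p_{K^-}^{'}/{\gamma_c}{v_c}m_K)}$. A short illustrative check in Python (free neutron at rest at $E_{\gamma}=1.8$ GeV, so that $v_c$ points along the photon and $\theta_{K^-}^{c}=\theta_{K^-}$; masses from the text):

```python
import math

m_K, m_n, m_Theta = 0.4937, 0.9396, 1.54   # GeV, as in the text
E_gamma = 1.8

# c.m. kaon momentum and velocity for a free neutron at rest (eqs. (5), (19))
s = (E_gamma + m_n) ** 2 - E_gamma ** 2
lam = math.sqrt((s - (m_K + m_Theta) ** 2) * (s - (m_K - m_Theta) ** 2))
p_cm = lam / (2.0 * math.sqrt(s))
v_K = p_cm / math.sqrt(p_cm ** 2 + m_K ** 2)

# velocity and Lorentz factor of the gamma-n c.m.s. in the lab (eq. (39), p_t = 0)
v_c = E_gamma / (E_gamma + m_n)
g_c = 1.0 / math.sqrt(1.0 - v_c ** 2)

assert v_K < v_c   # slow kaon: the lab production angle is bounded (case of eq. (42))
theta_max = math.degrees(math.asin(p_cm / (g_c * v_c * m_K)))
print(theta_max)   # about 21 degrees
```

Since $v_{K^-}^{'} < v_c$ here, each lab angle corresponds to the two momentum solutions of eq. (42), both of which enter the sum in the cross-section formula.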
Further observables of interest for the exclusive $d(\gamma, K^-p)K^+n$ and inclusive $d(\gamma, K^-)K^+np$ processes, proceeding through the intermediate $\Theta^+$ state, are the $K^-$-meson angular distributions for fixed and non–fixed solid angle of the proton–spectator, respectively. In terms of eq. (38), they are given by: $$\frac{d\sigma_{{\gamma}d\to K^-K^+np}(E_{\gamma})} {d{\bf {\Omega}}_{K^-}d{\bf {\Omega}}_{s}}= \int{dp_{s}}\frac{d\sigma_{{\gamma}d\to K^-K^+np}(E_{\gamma})} {d{\bf {\Omega}}_{K^-}dp_{s}d{\bf {\Omega}}_s},$$ $$\frac{d\sigma_{{\gamma}d\to K^-K^+np}(E_{\gamma})} {d{\bf {\Omega}}_{K^-}}= \int\int{dp_{s}}{d{\bf {\Omega}}_{s}} \frac{d\sigma_{{\gamma}d\to K^-K^+np}(E_{\gamma})} {d{\bf {\Omega}}_{K^-}dp_{s}d{\bf {\Omega}}_{s}}.$$ Let us discuss now the results of our calculations in the framework of the approach outlined above. 3 Results {#results .unnumbered} ========= First, we consider the exclusive differential cross section (38) for the process $d(\gamma, K^-p)K^+n$ proceeding through the intermediate $\Theta^+$ state. There are many options for displaying the information contained in this cross section. In particular, in fig. 4 we show the exclusive $K^-$-meson differential cross sections in the lab frame for the proton–spectator emerging in the direction of the incoming photon (i.e. at ${\bf {\Omega}_s}={\bf {\Omega}_{\gamma}}$, where ${\bf {\Omega}_{\gamma}}={\bf p_{\gamma}}/p_{\gamma})$ with a momentum of $270~{\rm {MeV/c}}$, calculated by eq. (38) for different assumptions concerning the parity of the $\Theta^+$ state, with and without the limitations (32), (36) on the phase space of the $K^-p$ system, at a beam energy of 1.5 GeV. The same as in fig. 4 but calculated for the photon energy of 1.75 GeV and the proton–spectator momentum of $45~{\rm {MeV/c}}$ is shown in fig. 5. 
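Numerically, eqs. (43) and (44) are one- and three-dimensional quadratures of the fully differential cross section. As a schematic illustration only (Python; the integrand below is a toy stand-in, not the actual eq. (38), and the Hulthen-type momentum density with the parameters shown is an assumed placeholder for the parametrization (A1)):

```python
import math

ALPHA, BETA = 0.0457, 0.2719   # assumed Hulthen-like parameters [GeV/c], placeholders

def n_d(p):
    """Toy stand-in for the deuteron momentum density (unnormalized Hulthen-like form)."""
    phi = 1.0 / (p ** 2 + ALPHA ** 2) - 1.0 / (p ** 2 + BETA ** 2)
    return phi ** 2

def d_sigma(theta_K, p_s):
    """Toy stand-in for the fully differential cross section of eq. (38):
    spectator weight times a smooth angular factor."""
    return p_s ** 2 * n_d(p_s) * (1.0 + math.cos(theta_K) ** 2)

def d_sigma_omega(theta_K, p_max=0.280, n=2000):
    """Analogue of eq. (43): integrate over the spectator momentum p_s
    up to the cut p_cut using the trapezoidal rule."""
    h = p_max / n
    total = 0.5 * (d_sigma(theta_K, 0.0) + d_sigma(theta_K, p_max))
    total += sum(d_sigma(theta_K, i * h) for i in range(1, n))
    return total * h

print(d_sigma_omega(0.1))
```

In the real calculation the angular dependence does not factorize, so the quadrature has to be redone at each $\theta_{K^-}$; the toy merely shows the structure of the integrals.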
A choice of these two options for the incident energy and the three–momentum of the proton–spectator has been motivated mainly by the fact that the excess energies above the $K^-\Theta^+$ threshold corresponding to them (26 and 42 MeV, respectively) fall in the near–threshold energy region of interest ($\le 100~{\rm MeV}$). Under this choice, the respective antikaon velocities in the ${\gamma}n$ c.m.s. turn out to be smaller than those of this system in the lab frame. This means that in the chosen kinematical conditions only the second term in eq. (38) plays a role and, therefore, the $K^-$-meson production angle $\theta_{K^-}$ in the lab system must be limited [^16] . Our calculations show that the maximal value of this angle amounts to $28.7^{\circ}$ and $26.3^{\circ}$ for $E_{\gamma}=1.5$ GeV, $p_s=270~{\rm {MeV/c}}$, ${\bf {\Omega}_s}={\bf {\Omega}_{\gamma}}$ and $E_{\gamma}=1.75$ GeV, $p_s=45~{\rm {MeV/c}}$, ${\bf {\Omega}_s}={\bf {\Omega}_{\gamma}}$, respectively, which is reflected in the results we have exhibited in figs. 4 and 5. Looking at these figures, one can see that there are clear differences between the antikaon angular distributions calculated for different $\Theta^+$ parities under the same assumptions concerning the limitations on the phase space of the $K^-p$ system (between the dashed and solid, and the double–dot–dashed and dot–dashed lines). Namely, in the case of the negative–parity $\Theta^+$, the distributions (dashed and double–dot–dashed curves) are strongly suppressed at forward angles $\theta_{K^-} \le 15^{\circ}$, whereas in the case of the positive–parity $\Theta^+$, they (solid and dot–dashed lines) are flat at these angles. Moreover, although at larger angles the respective differential cross sections calculated for different $\Theta^+$ parities have similar shapes (compare the dashed and solid, and the double–dot–dashed and dot–dashed lines in figs. 
4 and 5), their strengths for the negative–parity $\Theta^+$ are about forty times smaller than those for the positive–parity one. Comparing the curves corresponding to the calculations for the same $\Theta^+$ parities with and without the cuts under consideration on the phase space of the $K^-p$ system (solid and dot–dashed, dashed and double–dot–dashed lines, respectively), one can also see that these cuts only slightly reduce the strengths of the cross sections at practically all allowed angles in the kinematics of fig. 4, while in the kinematical conditions of fig. 5 they decrease the cross sections only in a small region of angles near the maximal $K^-$-meson production angle [^17] . This means that the main strengths of the exclusive $K^-$-meson differential cross sections under consideration concentrate in those parts of the sampled phase space where the contribution from the background sources, associated both with the $\Lambda(1520)$ production and with the $K^-p$–FSI effects, is expected to be negligible in the chosen kinematics. The foregoing thus shows that the observation of exclusive antikaon angular distributions from the process $d(\gamma, K^-p)K^+n$ proceeding via the intermediate $\Theta^+$ state near the threshold, like those just considered, can serve as an important tool to distinguish the parity of the $\Theta^+$ baryon. Let us concentrate now on the exclusive differential cross section (43) for the process $d(\gamma, K^-p)K^+n$ proceeding through the intermediate $\Theta^+$ state. In fig. 6 we show the exclusive $K^-$-meson differential cross sections in the lab frame for the proton–spectator emerging in the direction of the incoming photon, calculated by eq. (43) under the same scenarios for the $\Theta^+$ parity and for the limitations on the phase space of the $K^-p$ system as in the preceding case, at an incident energy of 1.5 GeV. 
It can be seen that here, too, there are distinct differences between the respective negative–parity $\Theta^+$ and positive–parity $\Theta^+$ results, analogous to those observed previously. Namely, the calculations for the negative–parity $\Theta^+$ lie considerably below the positive–parity $\Theta^+$ results and, furthermore, their strengths are substantially suppressed at forward angles $\theta_{K^-} \le 15^{\circ}$, while the positive–parity $\Theta^+$ results are basically constant at these angles. On the other hand, as shown in fig. 6, the differences between the calculations for the same $\Theta^+$ parities with and without the cuts on the phase space of the $K^-p$ system are small (cf. figs. 4 and 5), which means that the main strengths of the exclusive antikaon differential cross sections considered here lie in those parts of the sampled phase space where the contribution from the background sources, associated with the $\Lambda(1520)$ production and the $K^-p$–rescattering in the final state, is expected to be negligible in the chosen kinematics. Therefore, the preceding gives the opportunity to determine the $\Theta^+$ parity experimentally also by measuring the exclusive antikaon angular distribution from the reaction ${\gamma}d \to K^-\Theta^+p \to K^-K^+np$, like that just considered, in the threshold energy region. Finally, let us focus on the inclusive differential cross section (44) for the process $d(\gamma, K^-)K^+np$ proceeding through the intermediate $\Theta^+$ state. In fig. 7 we show the inclusive $K^-$-meson differential cross sections in the lab frame calculated by eq. (44), employing the same scenarios for the parity of the $\Theta^+$ pentaquark and for the limitations on the phase space of the $K^-p$ system as those of figs. 4, 5, 6, at the photon energy of 1.5 GeV. The same as in fig. 7 but calculated for the photon energy of 1.75 GeV is shown in fig. 8. 
One can see that the distinctions between the corresponding negative–parity $\Theta^+$ and positive–parity $\Theta^+$ calculations are quite clear both for 1.5 and 1.75 GeV initial energies and are analogous to those observed before. In particular, the cross sections calculated assuming that the $\Theta^+$ has negative parity are also strongly suppressed at forward angles $\theta_{K^-} \le 15^{\circ}$, while those obtained supposing that the $\Theta^+$ has positive parity, as in the preceding cases, are practically constant at these angles. Furthermore, although at larger angles [^18] the respective antikaon angular distributions calculated for different $\Theta^+$ parities also have similar shapes (compare the dashed and solid, and the double–dot–dashed and dot–dashed curves in figs. 7 and 8), their strengths for the negative–parity $\Theta^+$ are significantly reduced compared to those for the positive–parity one (cf. figs. 4, 5, 6). On the other hand, the differences between the calculations for the same $\Theta^+$ parities with and without the cuts on the phase space of the $K^-p$ system are, as in the preceding cases, largely insignificant, which means that the main strengths of the inclusive $K^-$-meson differential cross sections under consideration also concentrate in those parts of the sampled phase space where the contribution from the background sources, associated with the $\Lambda(1520)$ production and the $K^-p$–FSI effects, is expected to be negligible in the chosen kinematics. Therefore, the foregoing shows that the inclusive antikaon angular distribution from the reaction ${\gamma}d \to K^-{\Theta^+}p \to K^-K^+np$ near the threshold can also be useful in determining the parity of the $\Theta^+$ pentaquark. 
Taking into account the above considerations, we come to the conclusion that both the exclusive and inclusive $K^-$-meson laboratory differential cross sections for the reaction ${\gamma}d \to K^-{\Theta^+}p \to K^-K^+np$ near the threshold may be an important tool for determining the parity of the $\Theta^+$ baryon. These observables might be measured at modern experimental facilities such as SPring–8, JLab, ELSA and ESRF. 4 Conclusions {#conclusions .unnumbered} ============= In this paper we have investigated, in a spectator model, the possibility of determining the parity of the $\Theta^+$ pentaquark from the reaction ${\gamma}d \to K^-{\Theta}^+p \to K^-K^+np$ near the threshold. The elementary $\Theta^+$ production process included in our study is ${\gamma}n \to K^-\Theta^+$. Taking into account the fact, established by the authors of refs. \[58, 59\], that the free c.m.s. differential cross section for this elementary process shows a clear distinction between the two opposite parities of the $\Theta^+$ baryon close to the threshold, and using their predictions for this cross section, we have calculated the exclusive and inclusive laboratory angular distributions of $K^-$-mesons produced through this process, taking place on the moving neutron in the deuteron, for the two possible parity states of the $\Theta^+$ resonance at 1.5 and 1.75 GeV beam energies, with and without placing the relevant kinematical cuts on those parts of the sampled phase space where the contribution from the main background sources, associated with $\phi(1020)$ and $\Lambda(1520)$ production as well as with $K^-p$–FSI effects, is expected to be dominant. 
We have shown that these cuts play an insignificant role in the chosen kinematics, namely, they only slightly reduce the antikaon angular distributions of interest, which means that, in the chosen kinematical conditions, the main strengths of these distributions concentrate in those parts of the sampled phase space where the non–resonant background is expected to be negligible. On the other hand, the calculated $K^-$-meson angular distributions were found to be strongly sensitive to the $\Theta^+$ parity. We, therefore, come to the conclusion that the observation of the exclusive and inclusive antikaon angular distributions from the reaction ${\gamma}d \to K^-{\Theta}^+p \to K^-K^+np$ near the threshold can serve as an important tool to distinguish the parity of the $\Theta^+$ pentaquark. Such observation might be conducted at current experimental facilities. The author is grateful to A.I.Reshetin for interest in the work. [99]{} Q. Zhao, F. E. Close, [*J. Phys.*]{} [**G31**]{}, L1 (2005). M. Karliner, H. J. Lipkin, [*Phys. Lett.*]{} [**B597**]{}, 309 (2004). K. Hicks, [*Int. J. Mod. Phys.*]{} [**A20**]{}, 219 (2005). D. S. Carman, hep–ex/0412074. B. K. Jennings, K. Maltman, [*Phys. Rev.*]{} [**D69**]{}, 094020 (2004). S.–L. Zhu, hep–ph/0406204; hep–ph/0410002. R. L. Jaffe, hep–ph/0409065. T. Nakano et al., LEPS Coll., [*Phys. Rev. Lett.*]{} [**91**]{}, 012002 (2003). V. V. Barmin et al., DIANA Coll., [*Phys. At. Nucl.*]{} [**66**]{}, 1715 (2003). S. Stepanyan et al., CLAS Coll., [*Phys. Rev. Lett.*]{} [**91**]{}, 252001 (2003). J. Barth et al., SAPHIR Coll., [*Phys. Lett.*]{} [**B572**]{}, 127 (2003). V. Kubarovsky et al., CLAS Coll., [*Phys. Rev. Lett.*]{} [**92**]{}, 032001 (2004). M. Abdel–Bary et al., COSY–TOF Coll., [*Phys. Lett.*]{} [**B595**]{}, 127 (2004). A. Airapetian et al., HERMES Coll., [*Phys. Lett.*]{} [**B585**]{}, 213 (2004). S. Chekanov et al., ZEUS Coll., [*Phys. Lett.*]{} [**B591**]{}, 7 (2004). A. Aleev et al., SVD Coll., hep–ex/0401024. A. E. 
Asratyan, A. G. Dolgolenko, M. A. Kubantsev, [*Phys. At. Nucl.*]{} [**67**]{}, 682 (2004). C. Alt et al., NA49 Coll., [*Phys. Rev. Lett.*]{} [**92**]{}, 042003 (2004). A. Aktas et al., H1 Coll., [*Phys. Lett.*]{} [**B588**]{}, 17 (2004). Yu. M. Antipov et al., SPHINX Coll., [*Eur. Phys. J.*]{} [**A21**]{}, 455 (2004). S. Salur, for the STAR Coll., nucl–ex/0403009. C. Pinkenburg, for the PHENIX Coll., [*J. Phys.*]{} [**G30**]{}, s1201 (2004). J. Z. Bai et al., BES Coll., [*Phys. Rev.*]{} [**D70**]{}, 012004 (2004). K. T. Kn$\ddot{\rm o}$pfle, M. Zavertyaev, T. $\check{\rm Z}$ivko, for the HERA–B Coll., [*J. Phys.*]{} [**G30**]{}, s1363 (2004). I. Abt et al., HERA–B Coll., [*Phys. Rev. Lett.*]{} [**93**]{}, 212003 (2004). B. Aubert et al., BABAR Coll., hep–ex/0408064. T. Wengler, hep–ex/0405080; M. J. Longo et al., HyperCP Coll., [*Phys. Rev.*]{} [ **D70**]{}, 111101 (2004). M. Battaglieri et al., CLAS Coll., hep–ex/0510061. D. Diakonov, V. Petrov, M. Polyakov, [*Z. Phys.*]{} [**A359**]{}, 305 (1997). S. Nussinov, hep–ph/0307357. R. A. Arndt, I. I. Strakovsky, R. L. Workman, [*Phys. Rev.*]{} [**C68**]{}, 042201 (2003). J. Haidenbauer, G. Krein, [*Phys. Rev.*]{} [**C68**]{}, 052201 (2003). A. Sibirtsev, J. Haidenbauer, S. Krewald, Ulf–G. Meissner, [*Phys. Lett.*]{} [**B599**]{}, 230 (2004). A. Sibirtsev, J. Haidenbauer, S. Krewald, Ulf–G. Meissner, [*Eur. Phys. J.*]{} [**A23**]{}, 491 (2005). R. N. Cahn, G. H. Trilling, [*Phys. Rev.*]{} [**D69**]{}, 011501 (2004). H. Walliser, V. B. Kopeliovich, [*JETP*]{} [**97**]{}, 433 (2003). J. Ellis, M. Karliner, M. Praszalowicz, [*JHEP*]{} [**0405**]{}, 002 (2004); hep–ph/0401127. F. Stancu, D. O. Riska, [*Phys. Lett.*]{} [**B575**]{}, 242 (2003). R. L. Jaffe, F. Wilczek, [*Phys. Rev. Lett.*]{} [**91**]{}, 232003 (2003). M. Karliner, H. J. Lipkin, hep–ph/0307243. A. Hosaka, [*Phys. Lett.*]{} [**B571**]{}, 55 (2003). C. E. Carlson et al., [*Phys. Lett.*]{} [**B579**]{}, 52 (2004). N. Itzhaki et al., [*Nucl. 
Phys.*]{} [**B684**]{}, 264 (2004). T.–W. Chiu, T.–H. Hsieh, hep–ph/0403020. C. E. Carlson et al., [*Phys. Lett.*]{} [**B573**]{}, 101 (2003). R. Bijker, M. M. Giannini, E. Santopinto, hep–ph/0409022. S.–L. Zhu, [*Phys. Rev. Lett.*]{} [**91**]{}, 232002 (2003). J. Sugiyama, T. Doi, M. Oka, [*Phys. Lett.*]{} [**B581**]{}, 167 (2004). S. H. Lee, H. Kim, Y. Kwon, [*Phys. Lett.*]{} [**B609**]{}, 252 (2005). F. Csikor et al., [*JHEP*]{} [**0311**]{}, 070 (2003). S. Sasaki, [*Phys. Rev. Lett.*]{} [**93**]{}, 152001 (2004). Ulf–G. Meissner, hep–ph/0408029. W. Liu, C. M. Ko, [*Phys. Rev.*]{} [**C68**]{}, 045203 (2003). W. Liu, C. M. Ko, [*Nucl. Phys.*]{} [**A741**]{}, 215 (2004). W. Liu, C. M. Ko, V. Kubarovsky, [*Phys. Rev.*]{} [**C69**]{}, 025202 (2004). C. M. Ko, W. Liu, nucl–th/0410068. Y. Oh, H. Kim, S.–H. Lee, [*Phys. Rev.*]{} [**D69**]{}, 014009 (2004). B. G. Yu, T. K. Choi, C.–R. Ji, [*Phys. Rev.*]{} [**C70**]{}, 045205 (2004). B. G. Yu, T. K. Choi, C.–R. Ji, nucl–th/0408006. T. Mart et al., nucl–th/0412095. T. Mart, [*Phys. Rev.*]{} [**C71**]{}, 022202 (2005). S.–I. Nam, A. Hosaka, H.–C. Kim, nucl–th/0411111. S.–I. Nam, A. Hosaka, H.–C. Kim, nucl–th/0411119. H. W. Barz, M. Zetenyi, nucl–th/0411006. A. I. Titov et al., [*Phys. Rev.*]{} [**C71**]{}, 035203 (2005). Y. Oh, K. Nakayama, T.–S. H. Lee, hep–ph/0412363. A. W. Thomas, K. Hicks, A. Hosaka, [*Prog. Theor. Phys.*]{} [**111**]{}, 291 (2004). T. Hyodo et al., nucl–th/0410013. Q. Zhao, [*Phys. Rev.*]{} [**D69**]{}, 053009 (2004); [**D70**]{}, 039901(E) (2004). Q. Zhao, J. S. Al–Khalili, [*Phys. Lett.*]{} [**B585**]{}, 91 (2004); [**B596**]{}, 317(E) (2004). K. Nakayama, K. Tsushima, [*Phys. Lett.*]{} [**B583**]{}, 269 (2004). Q. Zhao, hep–ph/0502033. C. Hanhart et al., [*Phys. Lett.*]{} [**B590**]{}, 39 (2004). C. Hanhart et al., [*Phys. Lett.*]{} [**B606**]{}, 67 (2005). Yu. N. Uzikov, nucl–th/0411113. M. P. Rekalo, E. Tomasi–Gustafsson, [*J. Phys.*]{} [**G30**]{}, 1459 (2004). K. Nakayama, W. G. Love, [*Phys. 
Rev.*]{} [**C70**]{}, 012201 (2004). A. I. Titov, B. K$\ddot{\rm a}$mpfer, nucl–th/0504073. O. Benhar, nucl–th/0307061. O. Benhar, N. Farina, nucl–th/0407106. Y. Nara et al., [*Nucl. Phys.*]{} [**A614**]{}, 433 (1997). E. Ya. Paryev, [*Eur. Phys. J.*]{} [**A7**]{}, 127 (2000). Ya. I. Azimov, I. I. Strakovsky, [*Phys. Rev.*]{} [**C70**]{}, 035210 (2004). D. Cabrera et al., [*Phys. Lett.*]{} [**B608**]{}, 231 (2005). C. Ciofi degli Atti, S. Simula, [*Phys. Rev.*]{} [**C53**]{}, 1689 (1996). M. Lacombe et al., [*Phys. Rev.*]{} [**C21**]{}, 861 (1980). M. Lacombe et al., [*Phys. Lett.*]{} [**B101**]{}, 139 (1981). P. Filip, E. E. Kolomeitsev [*Phys. Rev.*]{} [**C64**]{}, 054905 (2001). V. M. Kolybasov, I. S. Shapiro, Yu. N. Sokolskikh, [*Phys. Lett.*]{} [**B222**]{}, 135 (1989). V. M. Kolybasov, V. G. Ksenzov, [*Yad. Fizika*]{} [**v.22**]{}, 720 (1975). L. A. Kondratyuk, [*Sov. J. Nucl. Phys.*]{} [**v.24**]{}, 247 (1976). L. A. Kondratyuk, M. Zh. Shmatikov, [*Phys. Lett.*]{} [**B117**]{}, 381 (1982). Eed M. Darwish, [*Prog. Theor. Phys.*]{} [**113**]{}, 169 (2005). Eed M. Darwish, A. Salam, nucl–th/0505002. H. Yamamura et al., [*Phys. Rev.*]{} [**C61**]{}, 014001 (2000). J. M. Laget, [*Phys. Rep.*]{} [**69**]{}, 1 (1981). E. M. Darwish, H. Arenh$\ddot{\rm o}$vel, M. Schwamb, [*Eur. Phys. J.*]{} [**A16**]{}, 111 (2003). V. Lensky et al., nucl–th/0505039. A. Cieply, E. Friedman, A. Gal, J. Mares, [*Nucl. Phys.*]{} [**A696**]{}, 173 (2001). B. Borasoy, R. Nissler, W. Weise, hep–ph/0410305. C. Dover, G. Walker, [*Phys. Rep.*]{} [**89**]{}, 1 (1982). Y. Yamaguchi, [*Phys. Rev.*]{} [**95**]{}, 1628 (1954). [^1]: At excess energies above the $K^-\Theta^+$ threshold less than approximately 100 MeV as may be inferred from the arguments presented in the work \[59\], or, respectively, at the photon energies smaller than about 2 GeV if the reaction ${\gamma}n \to K^-\Theta^+$ takes place on a free target neutron being at rest. 
[^2]: Which amounts to $1.73~{\rm GeV}$ when the target neutron is at rest. [^3]: Because in the photoproduction experiments \[8, 10–12\] the $\Theta^+$ was observed in the $K^+n$ decay mode, it is natural to consider this mode in the present work. [^4]: Which is assumed to be on–shell, since its width is very small compared to its mass. [^5]: The struck target neutron is off–shell, see eq. (8). [^6]: It should be noted that these values are in disagreement with the results of the experiment \[28\]. In the light of these results, the use of eqs. (14), (15) enables us to obtain an upper estimate of the strength of the respective antikaon angular distributions and has no influence on their shape, which is our main interest. [^7]: Note that the $K^+$ momentum $\stackrel{*}p_{K^+}$ in the $\Theta^+$ decay into $K^+n$ in its rest frame, as is easy to calculate, is equal to $269.7~{\rm {MeV/c}}$. [^8]: Or the momentum distribution $n_d(p_s)$ of the nucleon–spectator produced by the spectator mechanism in the reactions off the deuteron target nucleus. [^9]: The reactions which contribute to the same final state $K^-K^+np$ and do not proceed through the virtual ${\Theta^+}$ state. [^10]: Performed in line with the formulas (25)–(31) given below. [^11]: Carried out in line with the following absolute limits for the invariant mass $M_{K^-p}$ of interest: $m_K+m_p \le M_{K^-p} \le \sqrt{(E_{\gamma}+M_d)^2-{\bf p}_{\gamma}^2}- m_{\Theta^+}$. [^12]: Or final–state interaction (FSI). [^13]: Accounting for the relation $q_{K^-p}=\frac{1}{2M_{K^-p}}\lambda^{1/2}(M_{K^-p}^2,m_K^2,m_{p}^2)$, we can easily obtain that this corresponds to the region of $K^-p$ invariant masses $M_{K^-p} \ge 1.447~{\rm GeV}$. [^14]: The quantity $R_d$ for this function is equal to $3.1~{\rm fm}$. [^15]: In this case, as is easy to see, $\theta_{K^-}=\theta_{K^-}^{c}$. [^16]: Since in the chosen kinematics $\theta_{K^-}=\theta_{K^-}^c$ and the angle $\theta_{K^-}^c$, defined above by eq. 
(40), is limited in line with the text given just below eq. (42). [^17]: It is interesting to note that placing the cut (24) on the phase space of the $K^+K^-$ system results in a reduction of the antikaon yield from the process (1) by factors of about 1.7 and 1.5 in the kinematical conditions of fig. 4 and fig. 5, respectively. This means that about 40% and 30% of the total $K^+K^-$ phase space is eliminated by the cut (24) in the former and latter cases, respectively, and, therefore, the larger part of this space is free from the $\phi$-meson background. [^18]: Restricted at $E_{\gamma}=1.5$ GeV as shown in fig. 7.
--- abstract: 'The scaling of the fundamental limits of the second hyperpolarizability is used to define the intrinsic second hyperpolarizability, which aids in identifying material classes with ultralarge nonlinear-optical response per unit of molecular size. The intrinsic nonlinear response is a size-independent metric that we apply to comparing classes of molecular homologues, which are made by adding repeat units to extend their lengths. Several new figures of merit are proposed that quantify not only the intrinsic nonlinear response, but also how the second hyperpolarizability increases with size within a molecular class. Scaling types can be classified into sub-scaling, nominal scaling that follows the theory of limits, and super-scaling behavior. Super-scaling homologues that have large intrinsic nonlinearity are the most promising because they efficiently take advantage of increased size. We apply our approach to data in the literature to identify the best super-scaling molecular paradigms and articulate the important underlying parameters.' author: - 'Javier Perez-Moreno' - Shoresh Shafei - 'Mark G. Kuzyk' bibliography: - '\\bibs.bib' title: 'Applying universal scaling laws to identify the best molecular design paradigms for third-order nonlinear optics' --- Introduction ============ The quest for materials with enhanced third-order nonlinear-optical response has been fueled by the needs of applications in varied fields such as multi-photon biomedical imaging, photodynamic cancer therapies, optical computing, information transmission, laser technology and real-time holography. Since the nonlinear-optical response in organic materials such as dye-doped polymers and van der Waals crystals originates at the molecular level, improving these materials' third-order nonlinear properties requires the design and optimization of the constituent nonlinear-optical chromophores. Marks et al. 
have identified new materials paradigms illustrated in small molecules with huge two-photon absorption cross-sections,[@pati01.01] ultralarge hyperpolarizability,[@kang07.01] and large intensity-dependent refractive index.[@He11.01] Roberts et al. have reported on molecules based on triphenylamine-cored alkynylruthenium dendrimers that have exceptionally large third-order susceptibility.[@rober09.01] Based on the apparent observed improvements from one molecule to another, how can we determine which paradigm has the greatest potential? Is it possible to compare the performance of molecules of vastly different sizes? Which molecules give the largest nonlinearity per unit of size that scales in a favorable way when the molecule is made larger by adding repeat units? Can we tell when we have reached a fundamental ceiling? The theory of quantum limits shows that the strength of the nonlinear optical response of a molecule is bounded. The limit is a function of the number of electrons and the energy difference between the two lowest-energy states, and is reached when the charges are optimally arranged.[@kuzyk00.01; @kuzyk00.02; @kuzyk03.02; @kuzyk03.01] A fair measure of the performance of a molecule is obtained by comparing its response with that of others that have the same number of electrons and energy gap. In practice, molecules of interest have varied energy gaps and electron counts, so a more fruitful strategy is to compare the molecular nonlinearity with the quantum limit for that number of electrons and energy gap, thus, through transitivity, making it possible to compare any two molecules by how well they perform relative to the limits. This is the approach used here. 
The theory of the quantum limits has been used to (1) elucidate the origins of the nonlinear optical response at the molecular level,[@tripa04.01; @tripa06.01; @perez07.02; @zhou08.01; @perez05.01; @perez11.02; @perez11.01; @perez09.02; @van2012dispersion; @de12.01] (2) introduce new paradigms for optimization,[@perez09.01; @perez07.01; @perez06.01; @perez11.02; @Kang05.01; @brown08.01; @He11.01] and (3) establish fundamental scaling laws.[@kuzyk10.01; @kuzyk13.01] This paper recognizes that it is not enough to search for the ideal molecule, using previous ones as stepping stones to the ultimate one. Rather, one must identify a family of molecules that both has a large intrinsic nonlinear-optical response and super-scales, so that the intrinsic nonlinear response grows with size. This will lead to molecules with ultralarge nonlinear-optical response. Super-scaling is desirable because the absolute second hyperpolarizability then grows as a power law with exponent greater than 3; thus, though fewer large molecules will fit within a fixed volume, their aggregate nonlinear response will be greater than that of many more smaller molecules. When a molecular class sub-scales, the larger absolute response of the larger molecules nevertheless produces a smaller bulk response. In this work, we analyze the experimental data in the literature to identify the best molecular candidates for the largest third-order nonlinear optical response. The analysis relies on scaling to determine which molecules are candidates for super-scaling, so that longer homologues become more efficient and approach the quantum limit. This paper is the second part of a pair; the companion paper applies the same principles to the hyperpolarizability.[@perez16beta] The strength of the nonlinear-optical response scales with the size of the quantum system[@kuzyk13.01] according to [*simple scaling*]{} when re-scaling results in a self-similar system, as is found for a particle in a box as the walls are moved further apart. 
The intrinsic nonlinearity removes this effect so that size is removed from consideration, making comparisons between any two molecules possible. Data from the literature confirm that most molecules fall into the sub-scaling class, so most present-day design paradigms are based on homologues that become less efficient when they are made larger. Since most molecules fall far below the fundamental limits, molecules that scale at or below the nominal prediction will become relatively worse as they are made larger. Even when their absolute nonlinear-optical response is large, their electrons are not being used efficiently, and larger homologues will underperform. Only the molecules that super-scale have potential for reaching the fundamental limit, provided that they have a large intrinsic nonlinear-optical response. In this work, we identify existing molecules that super-scale with the goal of identifying structural properties associated with a large nonlinear-optical response that can be applied to making even better materials. This paper is organized as follows. First we introduce limit theory and scaling, and then propose several figures of merit that apply to a group of homologues. These figures of merit quantify the type of scaling, the extrapolated molecule size that would yield the fundamental limit, and the magnitude of the nonlinearity at saturation. We apply the figures of merit to a group of molecular classes to identify the most promising systems for applications in third-order nonlinear-optical materials. Approach ======== The molecular property of interest is the second hyperpolarizability, $\gamma$, a fourth-rank tensor. Typically the largest component is the diagonal one, so we will focus on the largest component and call it $\gamma$ for simplicity. 
The fundamental limit of $\gamma$ is calculated using the sum rules and given by:[@kuzyk00.02] $$\label{eq:gammaMax} \gamma_{max}= 4 \left( \frac{e\hbar}{\sqrt{m}} \right)^{4} \frac{N^{2}}{E_{10}^{5}},$$ where $e$ and $m$ are the charge and mass of the electron, $\hbar$ is the reduced Planck constant, $N$ is the effective number of electrons, and $E_{10}$ is the energy difference between the first excited state and the ground state. Using $esu$ units, we can approximate Equation \[eq:gammaMax\] as: $$\label{eq:gammaMaxUnits} \gamma_{max} \left[\frac{cm^6}{erg}\right] = 29,700 \times 10^{-36} N^2/E_{10}\left[eV \right]^5 ,$$ where the quantities in brackets are the units. The conversion between the energy of a photon in $eV$ and its associated wavelength $\lambda$ in nanometers is $\lambda[nm] = 1240/E[eV]$. The fundamental limit defines an absolute maximum, so the ratio of the measured nonlinearity to the limit is a dimensionless parameter of magnitude less than unity. The intrinsic second hyperpolarizability is defined as the ratio[@watki12.01] $$\label{eq:gammaInt} \gamma_{int} = \frac {\gamma} {\gamma_{max}}.$$ It has been shown that, in general, the second hyperpolarizability scales in the same manner as the fundamental limit, or[@kuzyk13.01] $$\label{eq:gammascales} \gamma \propto \frac{N^{2}}{E_{10}^{5}}.$$ This kind of scaling, which is obeyed by all self-similar structures, is called [*“simple scaling”*]{}. The ratio defined by Equation \[eq:gammaInt\] eliminates simple scaling, and is thus said to be scale invariant or size independent. The Schrödinger Equation is invariant under transformations in which lengths are re-scaled by a factor $\epsilon$ if the energies are simultaneously re-scaled by a factor $1/\epsilon^2$.[@zhou08.01; @kuzyk10.01] Such re-scaling would change the absolute value of the second hyperpolarizability but would leave the intrinsic second hyperpolarizability unchanged. 
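To make the bookkeeping concrete, the sketch below evaluates Equation \[eq:gammaMaxUnits\] and the intrinsic ratio of Equation \[eq:gammaInt\]. The values $N = 16$ and $E_{10} = 3.78$ eV are those tabulated for class $\gamma_7$ in Table \[tab:gamma\]; the measured $\gamma$ used here is a hypothetical illustration, not a literature value.

```python
def gamma_max(n_electrons, e10_ev):
    """Fundamental limit of the second hyperpolarizability in esu:
    gamma_max = 29,700e-36 * N^2 / E10^5, with E10 in eV."""
    return 29700e-36 * n_electrons**2 / e10_ev**5

def gamma_intrinsic(gamma_measured, n_electrons, e10_ev):
    """Scale-invariant ratio of the measured response to the quantum limit."""
    return gamma_measured / gamma_max(n_electrons, e10_ev)

# Class gamma_7 parameters: N = 16 pi-electrons, E10 = 3.78 eV.
g_max = gamma_max(16, 3.78)                 # ~9.85e-33 esu
g_int = gamma_intrinsic(1.2e-33, 16, 3.78)  # hypothetical gamma -> ~0.12
print(f"gamma_max = {g_max:.3e} esu, gamma_int = {g_int:.3f}")
```

Any physically allowed measurement must give an intrinsic value below unity; values approaching one signal a molecule operating near the quantum limit.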
This idea applies to molecules, so we assess the scaling behavior of molecular classes using the change in intrinsic second hyperpolarizability as a function of size as a metric. A molecular [*“class”*]{} is a collection of homologue molecules of varying sizes. When the intrinsic second hyperpolarizability in a class is independent of size, the class is assigned to the scaling type called simple scaling. If the intrinsic second hyperpolarizability increases or decreases with size, the class is assigned to the super-scaling or sub-scaling type, respectively. The target paradigm is a molecular class with a large nonlinear response that super-scales. In this paper we use these concepts to define figures of merit and identify the best molecular paradigms. Results and discussions ======================= ![image](Figure1.eps){width="4in"}\ Figure \[fig:gamma\] shows the molecular classes whose second hyperpolarizabilities are studied. In each case, the base molecule is shown, and the class is defined by varying the number of repeat units, $n$. The calculation of the intrinsic second hyperpolarizability requires as input the measured second hyperpolarizability, the effective number of electrons, $N$, and the energy difference between the ground and first electronic excited states, $E_{10}$, so that Equation \[eq:gammaInt\] can be evaluated using Equation \[eq:gammaMax\]. The absolute second hyperpolarizabilities are determined from measurements reported in the literature, the energy difference $E_{10}$ is determined from the wavelength of maximum absorption, and the effective number of electrons is determined according to: $$\label{eq:gammaEffN} N_{\gamma} = \left( \sum_{i} N_i^2 \right)^{1/2},$$ where the sum is over each contiguous conjugated path.[@kuzyk03.03] For a single conjugated path, there are two electrons per double or triple bond, and the effective number of electrons is simply the total number of $\pi$-electrons in the conjugated path. 
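The effective electron count of Equation \[eq:gammaEffN\] can be sketched as follows; the two-path molecule in the example is hypothetical and only illustrates how independent conjugated paths combine in quadrature.

```python
import math

def n_effective(path_electrons):
    """Effective number of electrons, N_gamma = sqrt(sum_i N_i^2),
    where N_i is the pi-electron count of the i-th contiguous conjugated path."""
    return math.sqrt(sum(n * n for n in path_electrons))

# A single conjugated path contributes all of its pi-electrons:
print(n_effective([14]))     # 14.0
# Two independent paths of 10 and 6 pi-electrons add in quadrature:
print(n_effective([10, 6]))  # sqrt(136), roughly 11.66
```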
The number of effective electrons is calculated using Equation \[eq:gammaEffN\]. The values of $E_{10}$ and $N$ for all the molecular classes are tabulated in Table \[tab:gamma\]. Figure \[fig:GammaIntExp\] plots the intrinsic second hyperpolarizability as a function of the absolute second hyperpolarizability. While the absolute second hyperpolarizability spans 4 orders of magnitude, the intrinsic second hyperpolarizability spans only two orders of magnitude. As in the case of the first hyperpolarizability, this is an indication that most of the measured variations are due to simple scaling. An understanding of the origin of the two-orders-of-magnitude variation of the intrinsic second hyperpolarizability could be used to make molecules with better scaling, which would translate into much bigger absolute nonlinearities. It is interesting to note that the longest molecule in class G5 has a respectable second hyperpolarizability that is larger than most of the other molecules; but its intrinsic nonlinear-optical response is the smallest of all the molecules. ![Plot of the intrinsic second hyperpolarizability $\gamma_{int}$ as a function of the measured absolute second hyperpolarizability $\gamma_{exp}$. The fit is to the function $\gamma_{int}=c \gamma_{exp} + d$ and the fitting parameters are given in Table \[tab:gamma\]. Since the scales for the horizontal and vertical axes are logarithmic, the linear fits appear as curves.[]{data-label="fig:GammaIntExp"}](Figure2.eps "fig:")\ Figure \[Gmerger\] plots the intrinsic second hyperpolarizability (red points) as a function of the number of repeat units for each molecular class. As was found for the first hyperpolarizability, the relationship is approximately linear. The blue lines show the linear fit ($\gamma_{int}= a \cdot n + b$). The slope of the line determines the nature of scaling in each series. In some cases, such as series G8, G9 and G1, the effect of the molecular ends is large for the shortest molecules. 
In these cases, the shorter members in the series, shown as green points, are excluded from the linear fits. The fit parameters $a$ and $b$ are listed in Table \[tab:gamma\], together with the values of $c$ and $d$, which are determined from the fit $\gamma_{int} = c \gamma_{exp} + d$, as shown in Figure \[fig:GammaIntExp\]. Other parameters in Table \[tab:gamma\] are discussed later. Notice that $a$, which quantifies the degree of scaling, is also the incremental intrinsic second hyperpolarizability per repeat unit, or $$\label{eq:a-gamma} a = \frac {\partial \gamma_{int}} {\partial n},$$ and $b$ is the extrapolated value of the intrinsic second hyperpolarizability in the limit of zero repeat units: $$\label{eq:b-gamma} b = \left. \gamma_{int} \right|_{n=0}.$$ ![image](Figure3.eps){width="4.5in"}\

| Class | $E_{10}$ (eV) | $n^{\prime}$ | $N$ | $a$ ($\times 10^{-3}$) | $b$ ($\times 10^{-3}$) | $\gamma_{max}^{int}$ ($\times 10^{-3}$) | $n_{max}$ | $c$ ($\times 10^{30}$ esu$^{-1}$) | $d$ ($\times 10^{-3}$) | $\gamma(n{=}1)$ ($\times 10^{-34}$ esu) | $\gamma_{SAT}$ ($\times 10^{-34}$ esu) | $FOM_{\gamma}$ ($\times 10^{-34}$ esu) | $\Delta \gamma_{exp}$ ($\times 10^{-34}$ esu) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $\gamma_1$ [@thien90.01] | 3.67 | 3 | 4 | $-19 \pm 13$ | $355 \pm 99$ | $271 \pm 55$ | 7 | $-1.0 \pm 0.3$ | $253 \pm 95$ | 9.9 (n=3) | -- | -- | -- |
| $\gamma_2$ [@pucce93.01] | 3.780 | 3 | 2 | $-24 \pm 7$ | $190 \pm 36$ | $123 \pm 14$ | 3 | $-90 \pm 50$ | $124 \pm 32$ | 1.7 (n=3) | -- | -- | -- |
| $\gamma_3$ [@pucce93.01] | 3.039 | 3 | 14 | $9 \pm 1$ | $-2 \pm 13$ | $96 \pm 19$ | 11 | $2.0 \pm 0.4$ | $350 \pm 12$ | 9 (n=3) | 3250 | 29 | $4.5 \pm 1.4$ |
| $\gamma_4$ [@pucce93.01] | 3.229 | 1 | 8 | $2.0 \pm 0.9$ | $34 \pm 4$ | $52 \pm 3$ | 6 | $2.0 \pm 0.8$ | $36 \pm 3$ | 2 | 4820 | 10 | $1.0 \pm 0.9$ |
| $\gamma_5^{T}$ [@guble99.01] | 4.189 | 1 | 6 | $-0.20 \pm 0.07$ | $6.0 \pm 0.5$ | $15 \pm 3$ | 1 | $-0.7 \pm 0.2$ | $6.0 \pm 0.4$ | 0.12 | -- | -- | -- |
| $\gamma_5^{D}$ [@guble99.01] | 4.189 | 1 | 6 | $-0.5 \pm 0.1$ | $1.3 \pm 0.8$ | $10 \pm 2$ | 1 | $-2.0 \pm 0.4$ | $10.0 \pm 0.7$ | 0.09 | -- | -- | -- |
| $\gamma_6$ [@meier01.01] | 3.669 | 1 | 14 | $11 \pm 9$ | $168 \pm 30$ | $226 \pm 29$ | 1 | $0.6 \pm 0.7$ | $185 \pm 23$ | 13.4 | 13583 | 180 | $20 \pm 40$ |
| $\gamma_7$ [@luu05.01] | 3.780 | 1 | 16 | $32 \pm 11$ | $71 \pm 27$ | $210 \pm 13$ | 7 | $2.0 \pm 0.3$ | $116 \pm 6$ | 12 | 4420 | 152 | $16 \pm 8$ |
| $\gamma_8$ [@eisle05.01] | 6.2 | 2 | 2 | $1.0 \pm 0.1$ | $0.2 \pm 0.6$ | $14.0 \pm 0.06$ | 10 | $20 \pm 4$ | $6.0 \pm 0.6$ | 0.028 (n=2) | 4970 | 5 | $0.05 \pm 0.02$ |
| $\gamma_9$ [@meier05.01] | 3.324 | 1 | 8 | $-0.30 \pm 0.07$ | $4.0 \pm 0.2$ | $4.0 \pm 0.8$ | 2 | $-10 \pm 7$ | $5.0 \pm 0.9$ | 0.21 | -- | -- | -- |
| $\gamma_{10}$ [@meier05.01] | 3.324 | 1 | 14 | $74 \pm 17$ | $-63 \pm 45$ | $163 \pm 33$ | 3 | $4.0 \pm 0.6$ | $-35 \pm 22$ | 3.78 | 2587 | 185 | $20 \pm 7$ |
| $\gamma_{11}$ [@May07.01] | 2.857 | 1 | 26 | $1.0 \pm 0.2$ | $2.0 \pm 0.9$ | $10 \pm 6$ | 6 | $20 \pm 300$ | $3.00 \pm 0.05$ | 4 (n=2) | 498 | 0.5 | $0.05 \pm 0.8$ |

A plot of $a$ and $b$ for all the molecular classes is shown in Figure \[fig:a-and-b-gamma\]. On the horizontal axis, the classes are ranked based on the value of $\gamma_{int}^{max}$ (listed in Table \[tab:gamma\]). The inset shows $n_{SAT}$, the number of repeat units required to attain $\gamma_{int}=1$, assuming that scaling remains linear. ![Plot of the incremental intrinsic second hyperpolarizability per repeat unit $a$ and the intrinsic second hyperpolarizability of the base $b$, i.e. when $n=0$. The classes are ranked based on the value of $\gamma_{int}^{max}$ (listed in Table \[tab:gamma\]). 
The inset shows $n_{SAT}$, the number of repeat units required to attain $\gamma_{int}=1$ assuming that scaling remains linear.[]{data-label="fig:a-and-b-gamma"}](Figure4.eps "fig:")\ The scaling type of a molecular class is given by the sign of $a$. Classes G06, G07, and G10 super-scale ($a>0$), while classes G04, G08, and G11 show nominal scaling ($a \approx 0$) within experimental uncertainty. Classes G01, G02, and G03 sub-scale ($a<0$), while classes G05 and G09 sub-scale but are within experimental uncertainty of nominal scaling. Classes G06, G07 and G10 also require the fewest repeat units of all to reach saturation, when $\gamma_{int} = 1$. Class G10 is the best of all, with $n_{SAT}$ between 10 and 20, a synthetically-achievable target. However, G10 has the fewest data points, so extrapolation may be inaccurate. ![Plot of $\gamma_{SAT}$, the absolute value of the second hyperpolarizability at saturation, defined by $\gamma_{int} = 1$, as a function of rank. The classes are ranked based on their highest value of $\gamma_{int}$, which is labelled $\gamma_{int}^{max}$ in Table \[tab:gamma\].[]{data-label="fig:gammaSAT"}](Figure5.eps "fig:"){width="3.4in"}\ A possible figure of merit is the number of repeat units required to attain the quantum limit ($\gamma_{int} \rightarrow 1$), at which the nonlinear response saturates. The number of repeat units required to saturate the absolute second hyperpolarizability, $n_{SAT}$, is obtained by extrapolation of the linear fit $\gamma_{int} = a n + b$: $$n_{SAT}=\frac{1-b}{a}.$$ The smaller the value of $n_{SAT}$, the better the molecular class. Figure \[fig:gammaSAT\] shows $\gamma_{SAT}$, the predicted absolute value of the second hyperpolarizability when the number of repeat units is large enough to attain the quantum limit, as a function of rank. The ranking is based on the highest value of $\gamma_{int}$ in the class, $\gamma_{int}^{max}$ (listed in Table \[tab:gamma\]). 
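The extrapolation to saturation can be checked numerically. A minimal sketch, using the fit values reported for class G10 in Table \[tab:gamma\] ($a = 74 \times 10^{-3}$ and $b = -63 \times 10^{-3}$ per repeat unit):

```python
def n_sat(a, b):
    """Repeat units at which the linear fit gamma_int = a*n + b
    extrapolates to the quantum limit gamma_int = 1."""
    return (1.0 - b) / a

# Class G10 slope and intercept (in units of 1e-3):
a_g10, b_g10 = 74e-3, -63e-3
print(f"n_SAT = {n_sat(a_g10, b_g10):.1f}")  # ~14.4, within the quoted 10-20 range
```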
Interestingly, all classes that super-scale have the same saturating second hyperpolarizability within experimental uncertainty. ![The expected absolute second hyperpolarizability that would be achieved when the class saturates ($\gamma_{SAT}$), as a function of the number of repeat units needed to reach saturation ($n_{SAT}$). The expected absolute hyperpolarizability for classes that sub-scale (not shown in the plot) is zero.[]{data-label="fig:gammaVnSAT"}](Figure6.eps "fig:"){width="3.4in"}\ Figure \[fig:gammaVnSAT\] shows a plot of $\gamma_{SAT}$ as a function of $n_{SAT}$. Since all of the super-scaling series have about the same values of $\gamma_{SAT}$, the ones with the smallest value of $n_{SAT}$ are best. As homologues are made longer, it becomes more unlikely that the molecules will retain the scaling properties due to breaks in conjugation. A more practical measure of the scaling performance of a class takes into account the number of repeat units needed to attain the largest value allowed for the second hyperpolarizability. A figure of merit that accounts for both $\gamma_{SAT}$ and $n_{SAT}$ is, $$FOM_{\gamma} = \frac {\gamma_{SAT}} {n_{SAT}}. \label{eq:gammafom}$$ ![The figure of merit ($FOM_{\gamma}$) defined as the ratio $\gamma_{SAT} / n_{SAT}$ as a function of the number of repeat units needed to reach saturation.[]{data-label="fig:gammaFOM"}](Figure7.eps "fig:"){width="3.4in"}\ Figure \[fig:gammaFOM\] shows the figure of merit ($FOM_{\gamma}$) as a function of the number of repeat units needed to reach saturation, $n_{SAT}$. The super-scaling classes all share a very similar value of $FOM_{\gamma} \approx 170 \times 10^{-34} $ esu. A more telling quantity, when the goal is to make just a few longer molecules, is how much the absolute second hyperpolarizability increases as a new repeat unit is added and is parameterized by $\Delta \gamma_{exp}$, which can be expressed as: $$\Delta \gamma_{exp} = \frac{a}{c}. 
\label{eq:gammarac}$$ The values of $\Delta \gamma_{exp}$ are plotted in Figure \[fig:GammaPer\] and listed in Table \[tab:gamma\]. Within experimental uncertainty, each of the super-scaling molecules has the same incremental contribution ($\Delta \gamma_{exp} \approx 180 \times 10^{-34}$ esu). In turn, the nominal scaling classes all share a similar value ($\Delta \gamma_{exp} \approx 5 \times 10^{-34}$ esu). ![The incremental contribution to the absolute second hyperpolarizability per repeat unit, $\Delta \gamma_{exp}$, as a function of $b$ (the intrinsic hyperpolarizability in the limit of the base molecule, i.e. with $n=0$).[]{data-label="fig:GammaPer"}](Figure8.eps "fig:"){width="3.4in"}\ Classes G07 and G10 have both the largest figure of merit and the highest incremental contribution to the absolute second hyperpolarizability per repeat unit. While class G06 appears to be in line with the others, its experimental uncertainty is high, so its figure of merit could actually be low. The data in Figure \[Gmerger\] shows that the error bars are larger than the slope, so nominal scaling is also consistent with the data. Similarly, its value of $\Delta \gamma_{exp}$ could actually be null or even negative. Thus, not enough data is available to evaluate this class, while classes G07 and G10 deserve further discussion. However, given that only three points were used to determine the nature of scaling in class G10, additional measurements are needed to confirm that this system is in the super-scaling class. The data for class G07 is the most reliable. As was found for the first hyperpolarizability, the simplest bridge forming a linear chain appears to be best. In contrast, classes G01 and G09, with more complex bridges, do not have good scaling properties. 
Also note that the cyclic end groups found on class G07 seem to make the polyyne bridge more effective, which is clear when comparing it to class G08, which shares the same bridge but has non-conjugated end groups. As a result, class G08 scales well but its absolute second hyperpolarizability is small, requiring many more repeat units to reach $\gamma_{SAT}$. Note that classes G03 and G04 share the cyclic end-group theme, leading to a good saturation second hyperpolarizability; however, the polyene bridge does not seem as effective as the polyyne one. We thus conclude that polyyne bridges with cyclic conjugated end groups may be the best paradigm, where the end groups are sources and sinks of charge and the polyyne bridge serves as an efficient conduit between the two. Indeed, materials based on polyynes have been studied by Slepkov and coworkers[@slepk04.01; @luu05.01; @eisle05.01] as examples of systems that scale well. Molecular classes such as those studied by May et al., of the type given by G11[@May05.01; @May07.01], were found to have very large second hyperpolarizabilities for relatively small molecules, though G11 may suffer somewhat in its scaling properties because the chains are not aligned to reinforce the nonlinear response. Approach to analyzing molecular series ====================================== The importance of the present work is not so much in the results that we have presented, which serve as an example, but in the protocols that we define for a methodology that identifies promising series of molecules for further study and optimization for scale-up. Based on the examples above, we propose the following approach. The goal is to find the ideal unit that can be scaled up by linking the units together. The simplest units are ones that connect to form linear chains, but others are possible, including the formation of dendrimers, space-filling structures, or any other novel shapes. 
The units can be stand-alone; used to link two ends together – thus having the end type as an additional degree of freedom; or formed into fractal-like dendrimer units with multiple external units and joints. The evaluation protocol proceeds as follows: 1. \[step-Type\] Identify a structure type that includes repeat units and end/exterior units that is expected to show promise based on semi-empirical calculations or intuition. 2. \[step-ChooseEnds\] Choose end/exterior units, keep them fixed, and synthesize a series of structures of varying length between 1 and a minimum of 4 repeat units. 3. Measure the linear absorption spectrum for each to determine $\gamma_{max}$, and then measure $\gamma$ as a function of the number of repeat units to determine $\gamma_{int}$. 4. From a linear fit of $\gamma_{int}$ versus $\gamma_{Exp}$ and $\gamma_{int}$ versus $n$, determine $n_{SAT}$, $\gamma_{SAT}$ and the figures of merit. 5. If the figure of merit for $\gamma$ exceeds $10^{-32}$ esu, then make structures with greater numbers of repeat units. Otherwise, go to Step \#\[step-ChooseEnds\]. 6. If the scaling law breaks down for longer units, start again at Step \#\[step-Type\]. If not, you have a promising molecule for ultra-large second hyperpolarizability. This procedure identifies useful paradigms that have the potential for ultra-large third-order nonlinear-optical response. Since making larger molecules is a more involved process, the proposed methodology identifies the series that are worth additional synthetic effort. Conclusion ========== Making a direct comparison between the nonlinear-optical response of two molecules is problematic because they may be of differing sizes, so differences may be due solely to simple scaling and not to the intrinsic nonlinear response of the molecule. The size of a molecule is not well defined from the quantum perspective because molecules do not have sharp boundaries. 
However, the difference in energy between the first excited state and the ground state, $E_{10}$, and the effective number of electrons, $N$, defines a size, which is embodied in the fundamental limit of the third-order nonlinear response, $\gamma_{max}$, a function of only $N$ and $E_{10}$. Dividing the nonlinear response by the fundamental limit defines the intrinsic response, a scale-invariant property that can be used to compare molecules of disparate sizes. Indeed, the range of intrinsic nonlinearities is much smaller than that of the absolute nonlinearities because much of the difference is due to size effects. Using the idea of scale invariance, we have defined a figure of merit that can be used to compare a series of molecules that differ mostly in just their length. This figure of merit can be used to identify new paradigms that are scalable; that is, longer versions of the molecule return a nonlinearity that is far larger than one would attain if it were due only to the increased length. We have shown how this method can be used to analyze which material classes are the most promising. In the case of the second hyperpolarizability, we find that the response is optimized by the simple polyyne bridge with simple cyclic end groups. More importantly, our work uses a review of the literature to illustrate a new approach for identifying better molecular classes. Using this type of well-defined procedure may be required to make the next big leap in the design of new molecules. Funding Information =================== We acknowledge the National Science Foundation (ECCS-1128076) for generously supporting this work.
--- abstract: 'Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G communication infrastructure in future smart cities. Hot spots easily appear at road intersections, where effective communication among vehicles is challenging. UAVs may serve as relays, with the advantages of low price, easy deployment, line-of-sight links, and flexible mobility. In this paper, we study a UAV-assisted vehicular network where the UAV jointly adjusts its transmission control (power and channel) and 3D flight to maximize the total throughput. First, we formulate a Markov decision process (MDP) problem by modeling the mobility of the UAV/vehicles and the state transitions. Secondly, since the environment variables are unknown or unmeasurable, especially in 5G, we solve the target problem using a deep reinforcement learning method, the deep deterministic policy gradient (DDPG), and propose three solutions with different control objectives. Moreover, considering the energy consumption of 3D flight, we extend the proposed solutions to maximize the total throughput per energy unit by encouraging or discouraging the UAV’s mobility; to achieve this goal, the DDPG framework is modified. Thirdly, in a simplified model with small state and action spaces, we verify the optimality of the proposed algorithms. By comparing with two baseline schemes, we demonstrate the effectiveness of the proposed algorithms in a realistic model.' author: - 'Ming Zhu$^*$,  Xiao-Yang Liu$^*$,  and Xiaodong Wang [^1] [^2] [^3]' title: 'Deep Reinforcement Learning for Unmanned Aerial Vehicle-Assisted Vehicular Networks' --- Unmanned aerial vehicle, vehicular networks, smart cities, Markov decision process, deep reinforcement learning, power control, channel control. 
Introduction {#Sec:Introduction} ============ Intelligent transportation systems [@chaqfeh2018DataDissemination] [@zhu2016PublicVehicle] [@zhu2018JointTransportationCharging] [@zhu2019PathPlanning] are a key component of smart cities; they employ real-time data communication for traffic monitoring, path planning, entertainment, and advertisement [@li2018CrowdTracking]. High-speed vehicular networks [@cunha2016CommunicationVANET] emerge as a key component of intelligent transportation systems that provide cooperative communications to improve data transmission performance. With the increasing number of vehicles, the current communication infrastructure may not satisfy data transmission requirements, especially when hot spots (e.g., road intersections) appear during rush hours. Unmanned aerial vehicles (UAVs) or drones [@sedjelmaci2017UAV] can complement the 4G/5G communication infrastructure, including vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Qualcomm has received a certification of authorization allowing for UAV testing below 400 feet [@2018UAV_5G]; Huawei will cooperate with China Mobile to build the first cellular test network for regional logistics UAVs [@2018Drone_5G_Huawei]. A UAV-assisted vehicular network, as in Fig. \[Fig:Scenario\], has several advantages. First, the path loss will be much lower, since the UAV can move nearer to vehicles than stationary base stations can. Secondly, the UAV is flexible in adjusting the transmission control [@alzenad2017UAV_Placement] based on the mobility of vehicles. Thirdly, the quality of UAV-to-vehicle links is generally better than that of terrestrial links [@giordani2018mmWave_5G], since they are mostly line-of-sight (LoS). Maximizing the total throughput of UAV-to-vehicle links poses several challenges. First, the communication channels vary with the UAV’s three-dimensional (3D) position. 
Secondly, the joint adjustment of the UAV’s 3D flight and transmission control (e.g., power control) cannot be solved directly using conventional optimization methods, especially when the environment variables are unknown and unmeasurable. Thirdly, the channel conditions are hard to acquire; e.g., the path loss from the UAV to vehicles is closely related to the height/density of buildings and the street width. ![The scenario of a UAV-assisted vehicular network.[]{data-label="Fig:Scenario"}](Fig/Scenario.pdf){height="0.60\linewidth" width="0.90\linewidth"} In this paper, we propose deep reinforcement learning [@wang2018DRL] based algorithms to maximize the total throughput of UAV-to-vehicle communications, which jointly adjust the UAV’s 3D flight and transmission control by learning through interaction with the environment. The main contributions of this paper can be summarized as follows: 1) We formulate the problem as a Markov decision process (MDP) to maximize the total throughput under the constraints of total transmission power and total channels; 2) We apply a deep reinforcement learning method, the deep deterministic policy gradient (DDPG), to solve the problem. DDPG is well suited to MDP problems with continuous states and actions. We propose three solutions with different control objectives to jointly adjust the UAV’s 3D flight and transmission control. Then we extend the proposed solutions to maximize the total throughput per energy unit. To encourage or discourage the UAV’s mobility, we modify the reward function and the DDPG framework; 3) We verify the optimality of the proposed solutions using a simplified model with small state and action spaces. Finally, we provide extensive simulation results to demonstrate the effectiveness of the proposed solutions compared with two baseline schemes. The remainder of the paper is organized as follows. Section \[Sec:RelatedWork\] discusses related works. 
Section \[Sec:SystemModel\] presents the system models and problem formulation. Solutions are proposed in Section \[Sec:Solution\]. Section \[Sec:PerformanceEvaluation\] presents the performance evaluation. Section \[Sec:Conclusion\] concludes this paper. Related Works {#Sec:RelatedWork} ============= The dynamic control of UAV-assisted vehicular networks includes flight control and transmission control. Flight control mainly includes the planning of flight path, time, and direction. Yang [*et al. *]{}[@yang2018UAV_Path] proposed a joint genetic algorithm and ant colony optimization method to obtain the best UAV flight paths to collect sensory data in wireless sensor networks. To further minimize the UAVs’ travel duration under certain constraints (e.g., energy limitations, fairness, and collision), Garraffa [*et al. *]{}[@garraffa2018UAV_Path] proposed a two-dimensional (2D) path planning method based on a column generation approach. Liu [*et al. *]{}[@liu2018UAV_RL] proposed a deep reinforcement learning approach to control a group of UAVs by optimizing the flying directions and distances to achieve the best communication coverage in the long run with limited energy consumption. The transmission control of UAVs mainly concerns resource allocation, e.g., access selection, transmission power, and bandwidth/channel allocation. Wang [*et al. *]{}[@wang2018UAV_power] presented a power allocation strategy for UAVs considering communications, caching, and energy transfer. In a UAV-assisted communication network, Yan [*et al. *]{}[@yan2018UAV_game] studied a UAV access selection and base station bandwidth allocation problem, where the interaction among UAVs and base stations was modeled as a Stackelberg game, and the uniqueness of a Nash equilibrium was obtained. Joint control of both UAVs’ flight and transmission has also been considered. Wu [*et al. 
*]{}[@wu2018UAV_Trajectory] considered maximizing the minimum achievable rate from a UAV to ground users by jointly optimizing the UAV’s 2D trajectory and power allocation. Zeng [*et al. *]{}[@zeng2018UAV_Trajectory] proposed a convex optimization method to optimize the UAV’s 2D trajectory to minimize its mission completion time while ensuring that each ground terminal recovers the file with high probability when the UAV disseminates a common file to them. Zhang [*et al. *]{}[@zhang2018UAV_Trajectory] considered UAV mission completion time minimization by optimizing its 2D trajectory with a constraint on the connectivity quality from base stations to the UAV. However, most existing research works neglected adjusting the UAVs’ height to obtain better link quality by avoiding various obstructions or non-line-of-sight (NLoS) links. Fan [*et al. *]{}[@fan2018UAV_Placement_Resource] optimized the UAV’s 3D flight and transmission control together; however, the 3D position optimization was converted to a 2D position optimization by the LoS link requirement. Existing deep reinforcement learning based methods only handle UAVs’ 2D flight and simple transmission control decisions. For example, Challita [*et al. *]{}[@challita2018UAV_RL] proposed a deep reinforcement learning based method for a cellular UAV network by optimizing the 2D path and cell association to achieve a tradeoff between maximizing energy efficiency and minimizing both wireless latency and the interference on the path. A similar scheme is applied to provide intelligent traffic light control in [@Liu2018NIPS]. In addition, most existing works assumed that the ground terminals are stationary, whereas in reality some ground terminals move with certain patterns; e.g., vehicles move under the control of traffic lights. This work studies a UAV-assisted vehicular network where the UAV’s 3D flight and transmission control can be jointly adjusted, considering the mobility of vehicles in a road intersection. 
System Models and Problem Formulation {#Sec:SystemModel} ===================================== In this section, we first describe the traffic model and communication model, and then formulate the target problem as a Markov decision process. The variables in the communication model are listed in Table \[Tab:VariablesCommunicationModel\] for easy reference. Traffic Model {#Subsec:TrafficModel} ------------- ![A one-way-two-flow road intersection.[]{data-label="Fig:RoadIntersectionInModel"}](Fig/RoadIntersectionInModel.pdf){height="0.61\linewidth" width="0.73\linewidth"} We start with a one-way-two-flow road intersection, as shown in Fig. \[Fig:RoadIntersectionInModel\], while a much more complicated scenario in Fig. \[Fig:RoadIntersection\] will be described in Section \[Subsec:RealisticTrafficModel\]. Five blocks are numbered as 0, 1, 2, 3, and 4, where block 0 is the intersection. We assume that each block contains at most one vehicle, indicated by binary variables $\bm{n} = (n^0, ..., n^4) \in \{ 0, 1 \}^5$. There are two traffic flows in Fig. \[Fig:RoadIntersectionInModel\], - [“Flow 1"]{}: $1 \rightarrow 0 \rightarrow 3$; - [“Flow 2"]{}: $2 \rightarrow 0 \rightarrow 4$. ------------------- --------------------------------------------------------------- $h^i_t, H^i_t$ channel power gain and channel state from the UAV to a vehicle in block $i$ in time slot $t$. $\psi^i_t$ SINR from the UAV to a vehicle in block $i$ in time slot $t$. $d^i_t, D^i_t$ horizontal distance and Euclidean distance between the UAV and a vehicle in block $i$. $P, C, b$ total transmission power, total number of channels, and bandwidth of each channel. $\rho^i_t, c^i_t$ transmission power and number of channels allocated for the vehicle in block $i$ in time slot $t$. 
------------------- --------------------------------------------------------------- : Variables in communication model[]{data-label="Tab:VariablesCommunicationModel"} ![Traffic light states along time.[]{data-label="Fig:TimeSlotTrafficLight"}](Fig/TimeSlotTrafficLight.pdf){height="0.30\linewidth" width="0.83\linewidth"} The traffic light $L$ has four configurations: - [$L\!=\!0$]{}: red light for flow 1 and green light for flow 2; - [$L\!=\!1$]{}: red light for flow 1 and yellow light for flow 2; - [$L\!=\!2$]{}: green light for flow 1 and red light for flow 2; - [$L\!=\!3$]{}: yellow light for flow 1 and red light for flow 2. Time is partitioned into slots of equal duration. A green or red light lasts $N$ time slots, and a yellow light lasts one time slot, as shown in Fig. \[Fig:TimeSlotTrafficLight\]. We assume that each vehicle moves one block per time slot if the traffic light is green. Communication Model {#Subsec:CommunicationModel} ------------------- We focus on the downlink communications (UAV-to-vehicle), since they are directly controlled by the UAV. Each UAV-to-vehicle link has two channel states: line-of-sight (LoS) and non-line-of-sight (NLoS). Let $x$ and $z$ denote the block (horizontal position) and the height of the UAV, respectively, where $x \in \{ 0, 1, 2, 3, 4 \}$ corresponds to the five blocks in Fig. \[Fig:RoadIntersectionInModel\], and $z$ is discretized to multiple values. We assume that the UAV stays above the five blocks, since the UAV tends to get nearer to vehicles. Next, we describe the communication model, including the channel power gain, the signal to interference and noise ratio (SINR), and the total throughput. First, the channel power gain between the UAV and a vehicle in block $i$ in time slot $t$ is $h^i_t$ with a channel state $H^i_t \in \{ \text{NLoS}, \text{LoS} \}$. 
$h^i_t$ is formulated as [@alzenad2017UAV_Placement] [@al2014OptimalAltitude] $$\begin{aligned} && \hspace{-0.20in} h^i_t = \label{Eqn:ChannelPowerGain} \begin{cases} (D^i_t)^{-\beta_1}, ~~\,\, ~\text{if}~ H^i_t = \text{LoS}, \\ \beta_2 (D^i_t)^{-\beta_1}, ~\text{if}~ H^i_t = \text{NLoS}, \end{cases}\end{aligned}$$ where $D^i_t$ is the Euclidean distance between the UAV and the vehicle in block $i$ in time slot $t$, $\beta_1$ is the path loss exponent, and $\beta_2$ is an additional attenuation factor caused by NLoS connections. The probabilities of LoS and NLoS links between the UAV and a vehicle in block $i$ in time slot $t$ are [@mozaffari2016UAV_D2D] $$\begin{aligned} && \hspace{-0.3in} p(H^i_t \! = \! \text{LoS}) \! = \! \frac{1}{1 + \alpha_1 \exp(-\alpha_2 (\frac{180}{\pi}\arctan \frac{z}{d^i_t} - \alpha_1))}, \label{Eqn:LOSProbability} \\ && \hspace{-0.4in} p(H^i_t \! = \! \text{NLoS}) \! = \! 1 - p(H^i_t \! = \! \text{LoS}), ~i \in \{ 0, 1, 2, 3, 4 \}, \label{Eqn:NLOSProbability}\end{aligned}$$ where $\alpha_1$ and $\alpha_2$ are system parameters depending on the environment (height/density of buildings, street width, etc.). We assume that $\alpha_1$, $\alpha_2$, $\beta_1$, and $\beta_2$ have fixed values among all blocks in an intersection. $d^i_t$ is the horizontal distance in time slot $t$. The angle $\frac{180}{\pi} \arctan \frac{z}{d^i_t}$ is measured in “degrees" with the range $0^{\circ} \sim 90^{\circ}$. Both $d^i_t$ and $z_t$ are discrete variables; therefore, $D^i_t = \sqrt{(d^i_t)^2 + z^2_t}$ is also a discrete variable. 
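As a rough sketch of the channel model above, the LoS probability and the expected channel power gain can be computed as follows; the parameter values used here are hypothetical, chosen only for illustration:

```python
import math

# Sketch of Eqs. (LOSProbability)/(ChannelPowerGain): LoS probability
# depends on the elevation angle, and the channel power gain is a power
# law with an extra NLoS attenuation factor beta2.  Parameter values
# below are hypothetical.

def p_los(z, d, alpha1, alpha2):
    """Probability of a LoS link for UAV height z and horizontal distance d."""
    theta = math.degrees(math.atan2(z, d))  # elevation angle in degrees
    return 1.0 / (1.0 + alpha1 * math.exp(-alpha2 * (theta - alpha1)))

def expected_gain(z, d, alpha1, alpha2, beta1, beta2):
    """Expected channel power gain averaged over the LoS/NLoS states."""
    D = math.hypot(d, z)                    # Euclidean distance sqrt(d^2 + z^2)
    p = p_los(z, d, alpha1, alpha2)
    return p * D ** (-beta1) + (1 - p) * beta2 * D ** (-beta1)

# With these illustrative parameters, a higher UAV (steeper elevation
# angle) sees a larger LoS probability at the same horizontal distance.
print(p_los(50, 30, 9.6, 0.28) > p_los(10, 30, 9.6, 0.28))  # -> True
```

This captures the qualitative tradeoff the paper exploits: climbing improves the LoS probability but increases the distance $D^i_t$, which in turn lowers the power-law gain.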
Secondly, the SINR $\psi^i_t$ in time slot $t$ from the UAV to a vehicle in block $i$ is characterized as [@oehmann2015sinr] $$\begin{aligned} && \psi^i_t = \frac{\rho^i_t h^i_t}{b c^i_t \sigma^2}, ~i \in \{ 0, 1, 2, 3, 4 \}, \label{Eqn:SINR}\end{aligned}$$ where $b$ is the bandwidth of each channel, $\rho^i_t$ and $c^i_t$ are the transmission power and number of channels allocated for the vehicle in block $i$ in time slot $t$, respectively, $\sigma^2$ is the additive white Gaussian noise (AWGN) power spectral density, and $h^i_t$ is given by (\[Eqn:ChannelPowerGain\]). We assume that the UAV employs orthogonal frequency division multiple access (OFDMA) [@gupta2016SubcarrierOFDM]; therefore, there is no interference among the channels. Thirdly, the total throughput (reward) of UAV-to-vehicle links is formulated as [@ramezani2017throughput] $$\begin{aligned} && \hspace{-0.45in} \sum_{i \in \{ 0, 1, 2, 3, 4 \}} \! b c^i_t \log (1 \! + \! \psi^i_t ) \! = \! \sum_{i \in \{ 0, 1, 2, 3, 4 \}} \! b c^i_t \log (1 \! + \! \frac{\rho^i_t h^i_t}{b c^i_t \sigma^2} ). \label{Eqn:TotalThroughput}\end{aligned}$$ MDP Formulation {#Sec:MDP} --------------- The UAV aims to maximize the total throughput under the constraints of total transmission power and total channels: $$\begin{aligned} && \sum_{i \in \{ 0, 1, 2, 3, 4 \}} \rho^i_t \leq P, ~\sum_{i \in \{ 0, 1, 2, 3, 4 \}} c^i_t \leq C, \nonumber \\ && 0 \leq \rho^i_t \leq \rho_{\text{max}}, ~~~~~~~~ 0 \leq c^i_t \leq c_{\text{max}}, ~i \in \{ 0, 1, 2, 3, 4 \}, \nonumber\end{aligned}$$ where $P$ is the total transmission power, $C$ is the total number of channels, $\rho_{\text{max}}$ is the maximum power allocated to a vehicle, $c_{\text{max}}$ is the maximum number of channels allocated to a vehicle, $\rho^i_t$ is a discrete variable, and $c^i_t$ is a nonnegative integer variable. The UAV-assisted communication is modeled as a Markov decision process (MDP). 
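A minimal sketch of the total-throughput objective above, evaluated for one hypothetical power/channel allocation (all numerical values below are placeholders, not system parameters from the paper):

```python
import math

# Sketch of the total-throughput expression
# sum_i b*c_i*log(1 + rho_i*h_i/(b*c_i*sigma2)) over the five blocks.
# The allocations, gains, and noise value below are hypothetical.

def total_throughput(rho, c, h, b, sigma2):
    """Sum of per-block Shannon-style rates; blocks with no channels
    contribute nothing (avoiding division by zero for c_i = 0)."""
    total = 0.0
    for rho_i, c_i, h_i in zip(rho, c, h):
        if c_i > 0:
            total += b * c_i * math.log(1 + rho_i * h_i / (b * c_i * sigma2))
    return total

rho = [0.2, 0.2, 0.0, 0.1, 0.0]   # per-block power, sum must stay <= P
c   = [2, 2, 0, 1, 0]             # per-block channels, sum must stay <= C
h   = [1e-6] * 5                  # channel power gains
print(total_throughput(rho, c, h, b=1e5, sigma2=1e-13) > 0)  # -> True
```

A feasible action must additionally satisfy the box and sum constraints above; in the DDPG solutions later in the paper, the learned policy outputs such allocations directly.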
On one hand, from (\[Eqn:LOSProbability\]) and (\[Eqn:NLOSProbability\]), we know that the channel state of UAV-to-vehicle links follows a stochastic process. On the other hand, the arrival of vehicles follows a stochastic process under the control of the traffic light, e.g., (\[Eqn:n1a\]) and (\[Eqn:n1b\]). Under the MDP framework, the state space $\mathcal{S}$, action space $\mathcal{A}$, reward $r$, policy $\pi$, and state transition probability $p(s_{t + 1}|s_t, a_t)$ of our problem are defined as follows. - [State]{} $\mathcal{S} = (L, x, z, \bm{n}, \bm{H})$, where $L$ is the traffic light state, $(x, z)$ is the UAV’s 3D position with $x \in \{ 0, 1, 2, 3, 4 \}$ being the block and $z$ being the height, and $\bm{H} = (H^0, ..., H^4)$ is the channel state from the UAV to each block $i \in \{ 0, 1, 2, 3, 4 \}$ with $H^i \in \{ \text{NLoS}, \text{LoS} \}$. Let $z \in [z_{\text{min}}, z_{\text{max}}]$, where $z_{\text{min}}$ and $z_{\text{max}}$ are the UAV’s minimum and maximum heights, respectively. The block $x$ is the location projected from the UAV’s 3D position onto the road. ![The position state transition diagram when the UAV’s height is fixed.[]{data-label="Fig:LocationStateTransitionDiagramInModel"}](Fig/LocationStateTransitionDiagramInModel.pdf){height="0.70\linewidth" width="0.73\linewidth"} - [Action]{} $\mathcal{A} = (\bm{f}, \bm{\rho}, \bm{c})$ denotes the action set. $f^x$ denotes the horizontal flight, and $f^z$ denotes the vertical flight, both of which constitute the UAV’s 3D flight $\bm{f} = (f^x, f^z)$. With respect to horizontal flight, we assume that the UAV can hover or fly to an adjacent block in a time slot, thus $f^x \in \{ 0, 1, ..., 7 \}$ in Fig. \[Fig:LocationStateTransitionDiagramInModel\]. With respect to vertical flight, we assume $$\begin{aligned} && f^z \in \{ -5, 0, 5 \}, \label{Eqn:UAVVerticalFlight} \end{aligned}$$ which means that the UAV can fly downward 5 meters, keep its height, or fly upward 5 meters in a time slot. 
The UAV’s height changes as $$\begin{aligned} && z_{t + 1} = f^z_t + z_t. \label{Eqn:UAVHeightChange} \end{aligned}$$ $\bm{\rho} = (\rho^0_t, ..., \rho^4_t)$ and $\bm{c} = (c^0_t, ..., c^4_t)$ are the transmission power and channel allocation actions for those five blocks, respectively. At the end of time slot $t$, the UAV moves to a new 3D position according to action $\bm{f}$, and over time slot $t$, the transmission power and number of channels are $\bm{\rho}$ and $\bm{c}$, respectively. <!-- --> - [Reward $r(s_t, a_t) = \sum_{i \in \{ 0, 1, 2, 3, 4 \}} b n^i_t c^i_t \log (1 + \frac{\rho^i_t h^i_t}{b c^i_t \sigma^2} )$]{} is the total throughput after a transition from state $s_t$ to $s_{t + 1}$ taking action $a_t$. Note that the total throughput over the $t$-th time slot is measured at the state $s_t = (L_t, x_t, z_t, \bm{n}_t, \bm{H}_t)$. <!-- --> - [Policy]{} $\pi$ is the strategy for the UAV, which maps states to a probability distribution over the actions $\pi: \mathcal{S} \rightarrow \mathcal{P}(\mathcal{A})$, where $\mathcal{P}(\cdot)$ denotes a probability distribution. In time slot $t$, the UAV’s state is $s_t = (L_t, x_t, z_t, \bm{n}_t, \bm{H}_t)$, and its policy $\pi_t$ outputs the probability distribution over the action $a_t$. We see that the policy indicates the action preference of the UAV. <!-- --> - [State transition probability $p(s_{t + 1}|s_t, a_t)$]{} formulated in (\[Eqn:TransitionProbability\]) is the probability of the UAV entering the new state $s_{t + 1}$ after taking the action $a_t$ at the current state $s_t$. At the current state $s_t = (L_t, x_t, z_t, \bm{n}_t, \bm{H}_t)$, after taking the 3D flight and transmission control $a_t = (\bm{f}, \bm{\rho}, \bm{c})$, the UAV moves to the new 3D position $(x_{t + 1}, z_{t + 1})$, and the channel state changes to $\bm{H}_{t + 1}$, while the traffic light changes to $L_{t + 1}$ and the number of vehicles in each block changes to $\bm{n}_{t + 1}$. 
The state transitions of the traffic light along time are shown in Fig. \[Fig:TimeSlotTrafficLight\]. The transition of the channel state for UAV-to-vehicle links is a stochastic process, which is reflected by (\[Eqn:LOSProbability\]) and (\[Eqn:NLOSProbability\]). Next, we discuss the MDP in three aspects: the state transition probability, the state transitions of the number of vehicles in each block, and the UAV’s 3D position. Note that the transmission power control and channel control do not affect the traffic light, the channel state, the number of vehicles, and the UAV’s 3D position. First, we discuss the state transition probability $p(s_{t + 1}|s_t, a_t)$ $=$ $p((L_{t + 1}, x_{t + 1}, z_{t + 1}, \bm{n}_{t + 1}, \bm{H}_{t + 1})$ $|(L_t, x_t, z_t, \bm{n}_t, \bm{H}_t)$, $(\bm{f}_t, \bm{\rho}_t, \bm{c}_t))$. The UAV’s 3D flight only affects the UAV’s 3D position state and the channel state, the traffic light state of the next time slot relies on the current traffic light state, and the number of vehicles in each block of the next time slot relies on the current number of vehicles and the traffic light state. Therefore, the state transition probability is $$\begin{aligned} && p(s_{t + 1}|s_t, a_t) = p(x_{t + 1}, z_{t + 1}|x_t, z_t, \bm{f}_t) \nonumber \\ && \hspace{0.93in} \times p(\bm{H}_{t + 1}|x_t, z_t, \bm{f}_t) \times p(L_{t + 1}|L_t) \nonumber \\ && \hspace{0.93in} \times p(\bm{n}_{t + 1}|L_t, \bm{n}_t), \label{Eqn:TransitionProbability}\end{aligned}$$ where $p(x_{t + 1}, z_{t + 1}|x_t, z_t, \bm{f}_t)$ is easily obtained by the 3D position state transition based on the UAV’s flight actions in Fig. \[Fig:LocationStateTransitionDiagramInModel\], $p(\bm{H}_{t + 1}|x_t, z_t, \bm{f}_t)$ is easily obtained by (\[Eqn:LOSProbability\]) and (\[Eqn:NLOSProbability\]), $p(L_{t + 1}|L_t)$ is obtained by the traffic light state transition in Fig. \[Fig:TimeSlotTrafficLight\], and $p(\bm{n}_{t + 1}|L_t, \bm{n}_t)$ is easily obtained by (\[Eqn:n0\]) $\sim$ (\[Eqn:n1b\]). 
Secondly, we discuss the state transitions of the number of vehicles in each block, which form a stochastic process. The UAV’s states and actions do not affect the number of vehicles in any block. Let $\lambda_1$ and $\lambda_2$ be the probabilities of the arrivals of new vehicles in flows 1 and 2, respectively. The state transitions for the number of vehicles in blocks 0, 3, and 4 are $$\begin{aligned} && \hspace{-0.098in} n^0_{t + 1} = \label{Eqn:n0} \begin{cases} n^2_t, ~\text{if}~ L_t = 0, \\ n^1_t, ~\text{if}~ L_t = 2, \\ 0, ~\,\, ~\text{otherwise}, \end{cases}\end{aligned}$$ $$\begin{aligned} && \hspace{0in} n^3_{t + 1} = \label{Eqn:n3} \begin{cases} n^0_t, \, ~\text{if}~ L_t = 2, 3, \\ 0, ~\, \, ~\text{otherwise}, \end{cases}\end{aligned}$$ $$\begin{aligned} && \hspace{0in} n^4_{t + 1} = \label{Eqn:n4} \begin{cases} n^0_t, \, ~\text{if}~ L_t = 0, 1, \\ 0, ~\, \, ~\text{otherwise}. \end{cases}\end{aligned}$$ The transition probability is 1 in (\[Eqn:n0\]), (\[Eqn:n3\]), and (\[Eqn:n4\]), since the transitions are deterministic in blocks 0, 3, and 4. The state transition probabilities for the number of vehicles in blocks 1 and 2, however, are nondeterministic; both are affected by the current number of vehicles and the traffic light. Taking block 1 when the traffic light state is $L_t = 2$ as an example, the probability for the number of vehicles is $$\begin{aligned} && p(n^1_{t + 1} = 1|L_t = 2) = \lambda_1, \label{Eqn:n1a} \\ && p(n^1_{t + 1} = 0|L_t = 2) = 1 - \lambda_1. \label{Eqn:n1b}\end{aligned}$$ When $(n^1_t = 0, L_t \neq 2)$ and $(n^1_t = 1, L_t \neq 2)$, the probability for the number of vehicles is obtained in a similar way. **Algorithm 1**: Q-learning-based algorithm ------------------------------------------------------------------------------------------------------------------------------------------------------------------- --  **Input**: the number of episodes $K$, the learning rate $\alpha$, parameter $\epsilon$. 
1: Initialize all states. Initialize $Q(s, a)$ for all state-action pairs randomly.
2: **for** episode $k = 1$ to $K$
3:   Observe the initial state $s_1$.
4:   **for** each slot $t = 1$ to $T$
5:     Select the UAV’s action $a_t$ from state $s_t$ using (\[Eqn:EpsilonGreedy\]).
6:     Execute the UAV’s action $a_t$, receive reward $r_t$, and observe a new state $s_{t+1}$ from the environment.
7:     Update the Q-value function: $Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a_{t + 1}} Q(s_{t + 1}, a_{t + 1}) - Q(s_t, a_t) \right]$.

Thirdly, we discuss the state transition of the UAV’s 3D position, which includes block transitions and height transitions. The UAV’s height transition is formulated in (\[Eqn:UAVHeightChange\]). If the UAV’s height is fixed, the corresponding position state transition diagram is shown in Fig. \[Fig:LocationStateTransitionDiagramInModel\], where $\{ S_i \}_{i \in \{ 0, 1, 2, 3, 4 \} }$ denotes the block of the UAV, and the transition labels denote: $0$ staying in the current block; $\{ 1, 2, 3, 4 \}$ a flight from block 0 to blocks 1, 2, 3, and 4, respectively; $5$ an anticlockwise flight; $6$ a flight from block 1, 2, 3, or 4 to block 0; and $7$ a clockwise flight.

Proposed Solutions {#Sec:Solution}
==================

In this section, we first describe the motivation, then give an overview of Q-learning and the deep deterministic policy gradient algorithm, then propose solutions with different control objectives, and finally present an extension of the solutions that takes the energy consumption of 3D flight into account.

Motivation
----------

Deep reinforcement learning methods are suitable for the target problem since environment variables are unknown and unmeasurable. For example, $\alpha_1$ and $\alpha_2$ are affected by the height and density of buildings, the height and size of vehicles, etc.
$\beta_1$ and $\beta_2$ are time dependent and are affected by the current environment, such as the weather [@agiwal2016_5G]. Although UAVs can detect LoS/NLoS links using equipped cameras, it is very challenging to detect them accurately for several reasons. First, the locations of receivers on vehicles should be labeled for detection. Secondly, it is hard to detect receivers accurately using computer vision technology, since receivers are much smaller than vehicles. Thirdly, it requires automobile manufacturers to label the locations of receivers, which may not be realized for several years. Therefore, measuring these environment variables accurately requires a large amount of labor. Moreover, it is hard to obtain the optimal strategies even if all environment variables are known. Existing works [@wu2018UAV_throughput] [@zhang2018UAV_communication] obtain near-optimal strategies in the 2D flight scenario when users are stationary; however, they are not capable of solving our target problem, since the UAV adjusts its 3D position and vehicles move in patterns governed by the traffic lights.

Q-learning
----------

The state transition probabilities of the MDP are unknown in our problem, since some variables are unknown, e.g., $\alpha_1$, $\alpha_2$, $\lambda_1$, and $\lambda_2$. Our problem cannot be solved directly using conventional MDP solutions, e.g., dynamic programming, policy iteration, and value iteration algorithms. Therefore, we apply the reinforcement learning (RL) approach. The return from a state is defined as the sum of discounted future rewards $\sum^{T}_{i = t} \gamma^{i - t} r(s_i, a_i)$, where $T$ is the total number of time slots, and $\gamma \in (0, 1)$ is a discount factor that diminishes the future reward and ensures that the sum of an infinite number of rewards is still finite.
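As a quick illustration (ours, not the paper's code), the return defined above can be computed for a finite reward sequence:

```python
def discounted_return(rewards, gamma=0.9):
    # sum_{i=t}^{T} gamma^(i-t) * r_i, with index 0 playing the role of t
    g = 0.0
    for i, r in enumerate(rewards):
        g += (gamma ** i) * r
    return g

# geometric discounting keeps the sum finite even for long horizons
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # -> 1.75
```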
Let $Q^{\pi}(s_t, a_t) = \mathbb{E}_{a_i \sim \pi}[\sum^{T}_{i = t} \gamma^{i - t} r(s_i, a_i) |s_t, a_t]$ denote the expected return after taking action $a_t$ in state $s_t$ under policy $\pi$. The Bellman equation gives the optimality condition in conventional MDP solutions [@sutton2018RL]: $$\begin{aligned} Q^{\pi}(s_t, a_t) \! = \! \! \! \sum_{s_{t \! + \! 1}, r_t} p(s_{t \! + \! 1}, r_t|s_t, a_t) \! \left[ r_t \! + \! \gamma \max_{a_{t \! + \! 1}} Q^{\pi} (s_{t \! + \! 1}, a_{t \! + \! 1}) \right]. \nonumber\end{aligned}$$ Q-learning [@watkins1992Q_learning] is a classical model-free RL algorithm [@Wirth2016ModelFree]. Balancing exploration and exploitation, Q-learning aims to maximize the expected return by interacting with the environment. The update of $Q(s_t, a_t)$ is $$\begin{aligned} && \hspace{-0.01in} Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha [ r_t + \gamma \max_{a_{t + 1}} Q(s_{t + 1}, a_{t + 1}) \nonumber \\ && \hspace{1.51in} - Q(s_t, a_t) ], \label{Eqn:UpdateQInQLearning}\end{aligned}$$ where $\alpha$ is a learning rate. Q-learning uses the $\epsilon$-greedy strategy [@van2016DoubleQLearning] to select an action, so that the agent behaves greedily most of the time, but selects randomly among all the actions with a small probability $\epsilon$. The $\epsilon$-greedy strategy is defined as follows: $$\begin{aligned} && \hspace{-0.4in} a_t = \label{Eqn:EpsilonGreedy} \begin{cases} \arg\max_a Q(s_t, a) , ~\text{with probability}~1 - \epsilon, \\ \text{a random action}, ~~\, ~\text{with probability}~\epsilon. \end{cases}\end{aligned}$$ The Q-learning algorithm [@sutton2018RL] is shown in Alg. 1. Line 1 is initialization. In each episode, the inner loop is executed in lines 4 $\sim$ 7. Line 5 selects an action using (\[Eqn:EpsilonGreedy\]), and then the action is executed in line 6. Line 7 updates the Q-value. Q-learning cannot solve our problem because of several limitations.
1) Q-learning can only solve MDP problems with small state and action spaces, whereas the state and action spaces of our problem are very large. 2) Q-learning cannot handle continuous state or action spaces. The UAV’s transmission power control is a continuous action in reality; if we discretize the transmission power allocation actions and use Q-learning, the result may be far from the optimum. 3) Q-learning converges slowly and consumes too many computational resources [@sutton2018RL], which is not practical in our problem. Therefore, we adopt the deep deterministic policy gradient algorithm to solve our problem.

Deep Deterministic Policy Gradient
----------------------------------

The deep deterministic policy gradient (DDPG) method [@lillicrap2016DDPG] uses deep neural networks to approximate both the action policy $\pi$ and the value function $Q(s, a)$. This method has two advantages: 1) it uses neural networks as approximators, essentially compressing the state and action spaces into a much smaller latent parameter space, and 2) the gradient descent method can be used to update the network weights, which greatly speeds up convergence and reduces computation time. Therefore, memory and computational resources are largely saved. In real systems, DDPG exploits the powerful techniques introduced in AlphaGo Zero [@silver2017Game] and Atari game playing [@mnih2013AtariDRL], including the experience replay buffer, the actor-critic approach, soft updates, and exploration noise. **1) Experience replay buffer** $R_b$ stores transitions that will be used to update network parameters. At each time slot $t$, a transition $(s_t, a_t, r_t, s_{t + 1})$ is stored in $R_b$.
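Such a buffer might be sketched as follows (our illustration with a fixed capacity and uniform mini-batch sampling; the paper does not give an implementation):

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        # deque evicts the oldest transition once capacity is reached
        self.transitions = deque(maxlen=capacity)

    def store(self, s, a, r, s_next):
        self.transitions.append((s, a, r, s_next))

    def sample(self, batch_size):
        # uniform sampling decorrelates consecutive transitions
        return random.sample(list(self.transitions), batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(200):
    buf.store(t, "action", 1.0, t + 1)
print(len(buf.transitions))   # -> 100 (the oldest 100 were evicted)
print(len(buf.sample(32)))    # -> 32
```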
After a certain number of time slots, each iteration samples a mini-batch of $M = |\Omega|$ transitions $\{ (s^j_t, a^j_t, r^j_t, s^j_{t + 1}) \}_{j \in \Omega}$ to train the neural networks, where $\Omega$ is a set of indices of transitions sampled from $R_b$. The experience replay buffer has two advantages: 1) enabling the stochastic gradient descent method [@daniely2017SGD]; and 2) removing the correlations between consecutive transitions. ![image](Fig/DDPG_Framework.pdf){height="0.87\linewidth" width="0.79\linewidth"} **2) Actor-critic approach**: the critic approximates the Q-value, and the actor approximates the action policy. The critic has two neural networks: the online Q-network $Q$ with parameters $\theta^Q$ and the target Q-network $Q'$ with parameters $\theta^{Q'}$. The actor has two neural networks: the online policy network $\mu$ with parameters $\theta^{\mu}$ and the target policy network $\mu'$ with parameters $\theta^{\mu'}$. The training of these four neural networks is discussed in the next subsection. **3) Soft update** with a low learning rate $\tau \ll 1$ is introduced to improve the stability of learning. The soft updates of the target Q-network $Q'$ and the target policy network $\mu'$ are as follows: $$\begin{aligned} && \hspace{-0.2in} \theta^{Q'} \leftarrow \tau \theta^Q + (1 - \tau) \theta^{Q'} = \theta^{Q'} + \tau (\theta^Q - \theta^{Q'}), \label{Eqn:QUpdate} \\ && \hspace{-0.2in} \theta^{\mu'} \, \leftarrow \tau \theta^{\mu} + (1 - \tau) \theta^{\mu'} \,\, = \theta^{\mu'} + \tau (\theta^{\mu} - \theta^{\mu'}). \label{Eqn:MuUpdate}\end{aligned}$$ **4) Exploration noise** is added to the actor’s target policy to output a new action: $$\begin{aligned} && a_t = \mu'(s_t|\theta^{\mu'}) + \mathcal{N}_t. \label{Eqn:NewAction}\end{aligned}$$ There is a tradeoff between exploration and exploitation, and the exploration is independent of the learning process.
Adding the exploration noise in (\[Eqn:NewAction\]) ensures that the UAV has a certain probability of exploring new actions besides the one predicted by the current policy $\mu'(s_t|\theta^{\mu'})$, and prevents the UAV from being trapped in a local optimum.

**Algorithm 2**: Channel allocation in time slot $t$
-----------------------------------------------------------------------------------------------------------------------------------------------
 **Input**: the power allocation $\bm{\rho}$, the number of vehicles in all blocks $\bm{n}$, the maximum number of channels allocated to a vehicle $c_{\text{max}}$, the total number of channels $C$.
 **Output**: the channel allocation $\bm{c}_t$ for all blocks.
1: Initialize the remaining total number of channels $C_r \leftarrow C$.
2: Calculate the average allocated power for each vehicle in all blocks $\bar{\bm{\rho}}_t$ by (\[Eqn:AveragePower\]).
3: Sort $\bar{\bm{\rho}}_t$ in descending order, and obtain a sequence of block indices $\bm{J}$.
4:   **for** block $j \in \bm{J}$
5:     $c^j_t \leftarrow \min(C_r, n^j_t c_{\text{max}})$.
6:     $C_r \leftarrow C_r - c^j_t$.
7: Return $\bm{c}_t$.

**Algorithm 3**: DDPG-based algorithms: PowerControl, FlightControl, and JointControl
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 **Input**: the number of episodes $K$, the number of time slots $T$ in an episode, the mini-batch size $M$, the learning rate $\tau$.
1: Initialize all states, including the traffic light state $L$, the UAV’s 3D position $(x, z)$, the number of vehicles $\bm{n}$, and the    channel state $\bm{H}$ in all blocks.
2: Randomly initialize the critic’s online Q-network parameters $\theta^Q$ and the actor’s online policy network parameters $\theta^{\mu}$, and    initialize the critic’s target Q-network parameters $\theta^{Q'} \leftarrow \theta^Q$ and the actor’s target policy network parameters $\theta^{\mu'} \leftarrow \theta^{\mu}$.
3: Allocate an experience replay buffer $R_b$.
4: **for** episode $k = 1$ to $K$
5:   Initialize a random process (a standard normal distribution) $\mathcal{N}$ for the UAV’s action exploration.
6:   Observe the initial state $s_1$.
7:   **for** $t = 1$ to $T$
8:     Select the UAV’s action $\bar{a}_t = \mu'(s_t|\theta^{\mu'}) + \mathcal{N}_t$ according to the policy of $\mu'$ and the exploration noise $\mathcal{N}_t$.
9:     **if** PowerControl
10:       Combine the channel allocation in Alg. 2 and $\bar{a}_t$ as the UAV’s action $a_t$ at a fixed 3D position.
11:     **if** FlightControl
12:       Combine the equal transmission power, equal channel allocation and $\bar{a}_t$ (3D flight) as the UAV’s action $a_t$.
13:     **if** JointControl
14:       Combine the 3D flight action, the channel allocation in Alg. 2 and $\bar{a}_t$ as the UAV’s action $a_t$.
15:     Execute the UAV’s action $a_t$, receive reward $r_t$, and observe the new state $s_{t+1}$ from the environment.
16:     Store the transition $(s_t, a_t, r_t, s_{t+1})$ in the UAV’s experience replay buffer $R_b$.
17:     Sample $R_b$ to obtain a random mini-batch of $M$ transitions $\{ (s^j_t, a^j_t, r^j_t, s^j_{t + 1}) \}_{j \in \Omega} \subseteq R_b$, where $\Omega$ is a set of         indices of sampled transitions with $|\Omega| = M$.
18:     The critic’s target Q-network $Q'$ calculates and outputs $y^j_t = r^j_t + \gamma Q'(s^j_{t + 1}, \mu'(s^j_{t + 1}| \theta^{\mu'})|\theta^{Q'})$ to the critic’s         online Q-network $Q$.
19:     Update the critic’s online Q-network $Q$ to make its Q-value fit $y^j_t$ by minimizing the loss function:         $\nabla_{\theta^Q} \text{Loss}_t(\theta^Q)= \nabla_{\theta^Q} [\frac{1}{M} \sum^{M}_{j = 1} (y^j_t - Q(s^j_t, a^j_t| \theta^Q))^2]$.
20:     Update the actor’s online policy network $\mu$ based on the input $\{ \nabla_a Q(s, a| \theta^Q)|_{s = s^j_t, a = \mu(s^j_t)} \}_{j \in \Omega}$ from $Q$ using the         policy gradient by the chain rule:         $\frac{1}{M} \sum_{j \in \Omega} \mathbb{E}_{s_t} [\nabla_{a} Q(s, a| \theta^Q)|_{s = s_t, a = \mu(s_t)} \nabla_{\theta^{\mu}} \mu(s|\theta^{\mu})|_{s = s_t} ]$.
21:     Soft update the critic’s target Q-network $Q'$ and the actor’s target policy network $\mu'$ to make the evaluation of the         UAV’s actions and the UAV’s policy more stable: $\theta^{Q'} \leftarrow \tau \theta^Q + (1 - \tau) \theta^{Q'}$, $\theta^{\mu'} \leftarrow \tau \theta^{\mu} + (1 - \tau) \theta^{\mu'}$.

Deep Reinforcement Learning-based Solutions
-------------------------------------------

The UAV has two transmission controls: power and channel. We use power allocation as the main control objective for two reasons. 1) Once the power allocation is determined, the channel allocation is easily obtained in OFDMA. According to Theorem 4 of [@wang2015JointEnergyBandwidth], in OFDMA, if all links have equal weights, as in our reward function (\[Eqn:TotalThroughput\]), the transmitter should send messages to the receiver with the strongest channel in each time slot. In our problem, the strongest channel is not determined in advance, since the channel state (LoS or NLoS) is a random process. DDPG tends to allocate more power to the channels that are strongest with large probabilities; therefore, the channel allocation is easily obtained from the power allocation actions. 2) Power allocation is continuous, and DDPG is suitable for handling such actions.
However, if we used DDPG for the channel allocation, the number of action variables would be very large and the convergence would be very slow, since the channel allocation is discrete and the number of channels is generally large (e.g., 200), especially in rush hours. We choose power control and flight as control objectives since controlling power and flight is more efficient than controlling channels. Moreover, the best channel allocation strategy can be obtained indirectly if the power is allocated in OFDMA. Based on the above analysis, we propose three algorithms:

- [PowerControl:]{} the UAV adjusts the transmission power allocation using the actor network at a fixed 3D position, and the channels are allocated to vehicles by Alg. 2 in each time slot.

- [FlightControl:]{} the UAV adjusts its 3D flight using the actor network, and the transmission power and channels are allocated equally to each vehicle in each time slot.

- [JointControl:]{} the UAV adjusts its 3D flight and the transmission power allocation using the actor network, and the channels are allocated to vehicles by Alg. 2 in each time slot.

To allocate channels among blocks, we introduce a variable denoting the average allocated power of a vehicle in block $i$: $$\begin{aligned} && \hspace{-0.098in} \bar{\rho}^i_t = \label{Eqn:AveragePower} \begin{cases} \frac{\rho^i_t}{n^i_t}, ~\text{if}~ n^i_t \neq 0, \\ 0, ~\,\, ~\text{otherwise}. \end{cases}\end{aligned}$$ The channel allocation algorithm is shown in Alg. 2, which is executed after obtaining the power allocation actions. As described above, it achieves the best channel allocation in OFDMA if the power allocation is known [@wang2015JointEnergyBandwidth]. Line 1 is the initialization. Lines 2 $\sim$ 3 calculate and sort $\bar{\bm{\rho}}_t = \{ \bar{\rho}^i_t \}_{i \in \{ 0, 1, 2, 3, 4 \} }$.
Line 5 assigns the maximum possible number of channels to the block with the likely strongest channels, and line 6 updates the remaining total number of channels. The DDPG-based algorithms are given in Alg. 3. The algorithm has two parts: initializations and the main process. First, we describe the initializations in lines 1 $\sim$ 3. In line 1, all states are initialized: the traffic light $L$ is initialized as 0, the number of vehicles $\bm{n}$ in all blocks is 0, the UAV’s block and height are randomized, and the channel state $H^i$ for each block $i$ is set to LoS or NLoS with equal probability. Note that the action space DDPG controls differs among PowerControl, FlightControl, and JointControl. Line 2 initializes the parameters of the critic and the actor. Line 3 allocates an experience replay buffer $R_b$. Secondly, we present the main process. Line 5 initializes a random process for action exploration. Line 6 receives an initial state $s_1$. Let $\bar{a}_t$ be the action DDPG controls, and $a_t$ be the UAV’s complete action. Line 8 selects an action according to $\bar{a}_t$ and an exploration noise $\mathcal{N}_t$. Lines 9 $\sim$ 10 combine the channel allocation actions in Alg. 2 and $\bar{a}_t$ as $a_t$ at a fixed 3D position in PowerControl. Lines 11 $\sim$ 12 combine the equal transmission power, equal channel allocation actions and $\bar{a}_t$ (3D flight) as $a_t$ in FlightControl. Lines 13 $\sim$ 14 combine the 3D flight action, the channel allocation actions in Alg. 2 and $\bar{a}_t$ as $a_t$ in JointControl. Line 15 executes the UAV’s action $a_t$, and then the UAV receives a reward and all states are updated. Line 16 stores a transition into $R_b$. In line 17, a random mini-batch of transitions is sampled from $R_b$. Line 18 sets the value of $y^j_t$ for the critic’s online Q-network. Lines 19 $\sim$ 21 update all network parameters. The DDPG-based algorithms in Alg. 3 are in essence an approximation of the Q-learning method in Alg. 1.
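For concreteness, the greedy channel allocation of Alg. 2 walked through above can be sketched as follows (a sketch under our reading of the algorithm, not the authors' code):

```python
def allocate_channels(rho, n, c_max, C):
    # average allocated power per vehicle in each block (Eqn:AveragePower)
    avg = [rho[i] / n[i] if n[i] > 0 else 0.0 for i in range(len(n))]
    # visit blocks in descending order of average per-vehicle power
    order = sorted(range(len(n)), key=lambda i: avg[i], reverse=True)
    c, remaining = [0] * len(n), C
    for j in order:
        c[j] = min(remaining, n[j] * c_max)  # assign as many as allowed
        remaining -= c[j]                    # shrink the channel budget
    return c

# e.g. 3 blocks, 10 channels in total, at most 5 channels per vehicle
print(allocate_channels(rho=[3.0, 2.0, 0.0], n=[1, 2, 0], c_max=5, C=10))
# -> [5, 5, 0]
```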
The exploration noise in line 8 approximates the second case of (\[Eqn:EpsilonGreedy\]) in Q-learning. Lines 18 $\sim$ 19 of Alg. 3 make $\left[ r_t + \gamma \max_{a_{t + 1}} Q(s_{t + 1}, a_{t + 1}) - Q(s_t, a_t) \right]$ in line 7 of Alg. 1 converge. Line 20 of Alg. 3 approximates the first case of (\[Eqn:EpsilonGreedy\]) in Q-learning, since both aim to obtain the policy of the maximum Q-value. The soft update of $Q'$ in line 21 of Alg. 3 takes exactly the form of (\[Eqn:UpdateQInQLearning\]) in Q-learning, where $\tau$ and $\alpha$ are the respective learning rates. Next, we discuss the training and test stages of the proposed solutions. **1)** In the training stage, we train the actor and the critic, and store the parameters of their neural networks. Fig. \[Fig:DDPG\_Framework\] illustrates the data flow and the parameter update process. The training stage has two parts. First, $Q$ and $\mu$ are trained through a random mini-batch of transitions sampled from the experience replay buffer $R_b$. Secondly, $Q'$ and $\mu'$ are trained through soft updates. The training process is as follows. A mini-batch of $M$ transitions $\{ (s^j_t, a^j_t, r^j_t, s^j_{t + 1}) \}_{j \in \Omega}$ is sampled from $R_b$, where $\Omega$ is a set of indices of sampled transitions from $R_b$ with $|\Omega| = M$. Then two data flows are output from $R_b$: $\{ r^j_t, s^j_{t + 1} \}_{j \in \Omega} \rightarrow \mu'$, and $\{ s^j_t, a^j_t \}_{j \in \Omega} \rightarrow Q$. $\mu'$ outputs $\{ r^j_t, s^j_{t + 1}, \mu'(s^j_{t + 1}|\theta^{\mu'}) \}_{j \in \Omega}$ to $Q'$ to calculate $\{ y^j_t \}_{j \in \Omega}$. Then $Q$ calculates and outputs $\{ \nabla_{a} Q(s, a| \theta^Q)|_{s = s^j_t, a = \mu(s^j_t)} \}_{j \in \Omega}$ to $\mu$. $\mu$ updates its parameters by (\[Eqn:NablaQ\]). Then two soft updates are executed for $Q'$ and $\mu'$ by (\[Eqn:QUpdate\]) and (\[Eqn:MuUpdate\]), respectively. The data flows of the critic’s target Q-network $Q'$ and online Q-network $Q$ are as follows.
$Q'$ takes $\{ (r^j_t, s^j_{t + 1}, \mu'(s^j_{t + 1}|\theta^{\mu'})) \}_{j \in \Omega}$ as the input and outputs $\{ y^j_t \}_{j \in \Omega}$ to $Q$. $y^j_t$ is calculated by $$\begin{aligned} && y^j_t = r^j_t + \gamma Q'(s^j_{t + 1}, \mu'(s^j_{t + 1}| \theta^{\mu'})|\theta^{Q'}). \label{Eqn:y}\end{aligned}$$ $Q$ takes $\{ (s^j_t, a^j_t) \}_{j \in \Omega}$ as the input and outputs $\{ \nabla_{a} Q(s, a| \theta^Q)|_{s = s^j_t, a = \mu(s^j_t)} \}_{j \in \Omega}$ to $\mu$ for updating parameters by (\[Eqn:NablaQ\]), where $\{ s^j_t \}_{j \in \Omega}$ are sampled from $R_b$, and $\mu(s^j_t) = \arg\max_a Q(s^j_t, a)$. The data flows of the actor’s online policy network $\mu$ and target policy network $\mu'$ are as follows. After $Q$ outputs $\{ \nabla_{a} Q(s, a| \theta^Q)|_{s = s^j_t, a = \mu(s^j_t)} \}_{j \in \Omega}$ to $\mu$, $\mu$ updates its parameters by (\[Eqn:NablaQ\]). $\mu'$ takes $\{ r^j_t, s^j_{t + 1} \}_{j \in \Omega}$ as the input and outputs $\{ r^j_t, s^j_{t + 1}, \mu'(s^j_{t + 1}|\theta^{\mu'}) \}_{j \in \Omega}$ to $Q'$ for calculating $\{ y^j_t \}_{j \in \Omega}$ in (\[Eqn:y\]), where $\{ r^j_t, s^j_{t + 1} \}_{j \in \Omega}$ are sampled from $R_b$. The parameter updates of the four neural networks ($Q$, $Q'$, $\mu$, and $\mu'$) are as follows. The online Q-network $Q$ updates its parameters by minimizing the $L_2$-norm loss function $\text{Loss}_t(\theta^Q)$ to make its Q-value fit $y^j_t$: $$\begin{aligned} && \hspace{-0.4in} \nabla_{\theta^Q} \text{Loss}_t(\theta^Q)= \nabla_{\theta^Q} [\frac{1}{M} \sum^{M}_{j = 1} (y^j_t - Q(s^j_t, a^j_t| \theta^Q))^2]. \label{Eqn:Loss}\end{aligned}$$ The target Q-network $Q'$ updates its parameters $\theta^{Q'}$ by (\[Eqn:QUpdate\]).
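As a toy illustration of the critic-side computations just described, the sketch below (ours, not the paper's implementation) uses scalar stand-ins for the networks: targets from (\[Eqn:y\]), the loss of (\[Eqn:Loss\]), and the soft update of (\[Eqn:QUpdate\]). The concrete numbers and lambdas are illustrative assumptions.

```python
def critic_targets(batch, q_target, mu_target, gamma=0.9):
    # y^j = r^j + gamma * Q'(s^j_{t+1}, mu'(s^j_{t+1}))
    return [r + gamma * q_target(s_next, mu_target(s_next))
            for (_s, _a, r, s_next) in batch]

def mse_loss(ys, qs):
    # the L2 loss minimized by the online Q-network
    return sum((y - q) ** 2 for y, q in zip(ys, qs)) / len(ys)

def soft_update(theta_target, theta_online, tau=0.001):
    # theta' <- tau * theta + (1 - tau) * theta'
    return [(1 - tau) * tp + tau * t
            for tp, t in zip(theta_target, theta_online)]

batch = [(0, 0, 1.0, 1), (1, 0, 0.5, 2)]
ys = critic_targets(batch, q_target=lambda s, a: 2.0,
                    mu_target=lambda s: 0, gamma=0.5)
print(ys)                                  # -> [2.0, 1.5]
print(soft_update([0.0], [1.0], tau=0.5))  # -> [0.5]
```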
The online policy network $\mu$ updates its parameters following the chain rule with respect to $\theta^{\mu}$: $$\begin{aligned} && \hspace{-0.46in} \mathbb{E}_{s_t} [\nabla_{\theta^{\mu}} Q(s, a| \theta^Q)|_{s = s_t, a = \mu(s_t|\theta^{\mu})}] \nonumber \\ && \hspace{-0.29in} = \mathbb{E}_{s_t} [\nabla_{a} Q(s, a| \theta^Q)|_{s = s_t, a = \mu(s_t)} \nabla_{\theta^{\mu}} \mu(s|\theta^{\mu})|_{s = s_t} ]. \label{Eqn:NablaQ}\end{aligned}$$ The target policy network $\mu'$ updates its parameters $\theta^{\mu'}$ by (\[Eqn:MuUpdate\]). In each time slot $t$, the current state $s_t$ from the environment is delivered to $\mu'$, and $\mu'$ calculates the UAV’s target policy $\mu'(s_t|\theta^{\mu'})$. Finally, an exploration noise $\mathcal{N}_t$ is added to $\mu'(s_t|\theta^{\mu'})$ to get the UAV’s action in (\[Eqn:NewAction\]). **2)** In the test stage, we restore the actor’s target policy network $\mu'$ from the stored parameters. This way, there is no need to store transitions in the experience replay buffer $R_b$. Given the current state $s_t$, we use $\mu'$ to obtain the UAV’s optimal action $\mu'(s_t|\theta^{\mu'})$. Note that no noise is added to $\mu'(s_t|\theta^{\mu'})$, since all neural networks have been trained and the UAV obtains the optimal action through $\mu'$. Finally, the UAV executes the action $\mu'(s_t|\theta^{\mu'})$.

Extension on Energy Consumption of 3D Flight
--------------------------------------------

The UAV’s energy is used in two parts: communication and 3D flight. The solutions proposed above in Alg. 3 do not consider the energy consumption of 3D flight. In this subsection, we discuss how to incorporate the energy consumption of 3D flight into Alg. 3. To encourage or discourage the UAV’s 3D flight actions in different directions with different amounts of energy consumption, we modify the reward function and the DDPG framework.
The UAV aims to maximize the total throughput per energy unit, since the UAV’s battery has a limited capacity. For example, the UAV DJI Mavic Air [@2019UAV_DJI_MavicAir] with full energy can only fly for 21 minutes. Given that the UAV’s energy consumption for 3D flight is much larger than that for communication, we use only the former as the total energy consumption. Thus, the reward function (\[Eqn:TotalThroughput\]) is modified as follows: $$\begin{aligned} && \hspace{-0.45in} \bar{r}(s_t, a_t) = \frac{1}{e(a_t)} \sum_{i \in \{ 0, 1, 2, 3, 4 \}} b n^i_t c^i_t \log (1 + \frac{\rho^i_t h^i_t}{b c^i_t \sigma^2} ), \label{Eqn:RewardExtension}\end{aligned}$$ where $e(a_t)$ is the energy consumption of taking action $a_t$ in time slot $t$. Our energy consumption setup follows the UAV DJI Mavic Air [@2019UAV_DJI_MavicAir]. The UAV has three vertical flight actions per time slot, as in (\[Eqn:UAVVerticalFlight\]). If the UAV keeps moving downward, horizontally, or upward until the energy for 3D flight is used up, the flight time is assumed to be 27, 21, and 17 minutes, respectively. Since the duration of a time slot is set to 6 seconds, the UAV can fly 270, 210, and 170 time slots, respectively. Therefore, the formulation of $e(a_t)$ is given by $$\begin{aligned} && \hspace{-0.098in} e(a_t) = \label{Eqn:EnergyConsumption} \begin{cases} \frac{1}{270} E_{\text{full}}, ~\text{if~moving downward 5 meters}, \\ \frac{1}{210} E_{\text{full}}, ~\text{if moving horizontally}, \\ \frac{1}{170} E_{\text{full}}, ~\text{if moving upward 5 meters}, \end{cases}\end{aligned}$$ where $E_{\text{full}}$ is the total energy when the UAV’s battery is full. Let $\delta(t)$ be a prediction error as follows: $$\begin{aligned} && \delta(t) = \bar{r}(s_t, a_t) - Q(s_t, a_t), \label{Eqn:Delta}\end{aligned}$$ where $\delta(t)$ evaluates the difference between the actual reward $\bar{r}(s_t, a_t)$ and the expected return $Q(s_t, a_t)$.
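The per-slot flight energy and the energy-aware reward above can be sketched as follows (our illustration; the normalized battery capacity and the direction labels are assumptions):

```python
E_FULL = 1.0  # full-battery energy, normalized (assumed unit)

def flight_energy(direction):
    # flight times of 270 / 210 / 170 slots until the battery is
    # drained, for downward / horizontal / upward motion
    slots_until_empty = {"down": 270, "horizontal": 210, "up": 170}
    return E_FULL / slots_until_empty[direction]

def energy_aware_reward(throughput, direction):
    # throughput per unit of flight energy, as in the modified reward
    return throughput / flight_energy(direction)

# climbing drains the battery fastest, so for the same raw throughput
# it earns the least throughput per energy unit
print(energy_aware_reward(1.0, "down") > energy_aware_reward(1.0, "up"))  # -> True
```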
To make the UAV learn from the prediction error $\delta(t)$, rather than from the difference between the new and old Q-values in (\[Eqn:UpdateQInQLearning\]), the Q-value is updated by the following rule: $$\begin{aligned} && \hspace{-0.2in} Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \delta(t) \Leftrightarrow \nonumber \\ && \hspace{-0.2in} Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha (\bar{r}(s_t, a_t) - Q(s_t, a_t)), \label{Eqn:UpdateQInExtension}\end{aligned}$$ where $\alpha$ is a learning rate similar to (\[Eqn:UpdateQInQLearning\]). We introduce $\alpha^+$ and $\alpha^-$ to represent the learning rates when $\delta(t) \geq 0$ and $\delta(t) < 0$, respectively. Therefore, the UAV can choose to be active or inactive by properly setting the values of $\alpha^+$ and $\alpha^-$. Inspired by [@lefebvre2017optimisticRL], the update of the Q-value in Q-learning is modified as follows: $$\begin{aligned} && \hspace{-0.4in} Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \begin{cases} \alpha^+ \delta(t), ~\text{if}~\delta(t) \geq 0, \\ \alpha^- \delta(t), ~\text{if}~\delta(t) < 0. \end{cases}\end{aligned}$$ We define the prediction error $\delta(t)$ as the difference between the actual reward and the output of the critic’s online Q-network $Q$: $$\begin{aligned} && \delta(t) = \bar{r}(s_t, a_t) - Q(s_t, a_t|\theta^Q). \label{Eqn:Delta_DDPG}\end{aligned}$$ We use $\tau^+$ and $\tau^-$ to denote the weights when $\delta(t) \geq 0$ and $\delta(t) < 0$, respectively. The update of the critic’s target Q-network $Q'$ is $$\begin{aligned} && \hspace{-0.4in} \theta^{Q'} \leftarrow \label{Eqn:QUpdateExtension} \begin{cases} \tau^+ \theta^Q + (1 - \tau^+) \theta^{Q'} , ~\text{if}~\delta(t) \geq 0, \\ \tau^- \theta^Q + (1 - \tau^-) \theta^{Q'} , ~\text{if}~\delta(t) < 0.
\end{cases}\end{aligned}$$ The update of the actor’s target policy network $\mu'$ is $$\begin{aligned} && \hspace{-0.4in} \theta^{\mu'} \leftarrow \label{Eqn:MuUpdateExtension} \begin{cases} \tau^+ \theta^{\mu} + (1 - \tau^+) \theta^{\mu'} , ~\text{if}~\delta(t) \geq 0, \\ \tau^- \theta^{\mu} + (1 - \tau^-) \theta^{\mu'} , ~\text{if}~\delta(t) < 0. \end{cases}\end{aligned}$$ If $\tau^+ > \tau^-$, the UAV is active and prefers to move. If $\tau^+ < \tau^-$, the UAV is inactive and prefers to stay. If $\tau^+ = \tau^-$, the UAV is neither active nor inactive. To approximate the Q-value, we introduce $\bar{y}^j_t$, similar to (\[Eqn:y\]), and then make the critic’s online Q-network $Q$ fit it. We optimize the loss function $$\begin{aligned} && \hspace{-0.4in} \nabla_{\theta^Q} \text{Loss}_t(\theta^Q)= \nabla_{\theta^Q} [\frac{1}{M} \sum^{M}_{j = 1} (\bar{y}^j_t - Q(s^j_t, a^j_t| \theta^Q))^2 ],\end{aligned}$$ where $\bar{y}^j_t = \bar{r}^j_t$. We modify the MDP, the DDPG framework, and the DDPG-based algorithms by considering the energy consumption of 3D flight:

- The MDP is modified as follows. The state space is $\mathcal{S} = (L, x, z, \bm{n}, \bm{H}, E)$, where $E$ is the energy in the UAV’s battery. The energy changes as follows: $$\begin{aligned} && E_{t + 1} = \max \{ E_t - e(a_t), 0 \}. \label{Eqn:EnergyChange} \end{aligned}$$ The other parts of the MDP formulation and the state transitions are the same as in Section \[Sec:MDP\].

- There are three modifications in the DDPG framework: a) The critic’s target Q-network $Q'$ feeds $\bar{y}^j = \bar{r}^j$ to the critic’s online Q-network $Q$ instead of $y^j$ in (\[Eqn:y\]). b) The update of the critic’s target Q-network $Q'$ is (\[Eqn:QUpdateExtension\]) instead of (\[Eqn:QUpdate\]). c) The update of the actor’s target policy network $\mu'$ is (\[Eqn:MuUpdateExtension\]) instead of (\[Eqn:MuUpdate\]).

- The DDPG-based algorithms are modified from Alg. 3.
The energy state of the UAV is initialized as full at the start of each episode. In each time step of an episode, the energy state is updated by (\[Eqn:EnergyChange\]), and the episode terminates if the energy state $E_t \leq 0$. The reward function is replaced by (\[Eqn:RewardExtension\]).

Performance Evaluation {#Sec:PerformanceEvaluation}
======================

For the one-way-two-flow road intersection in Fig. \[Fig:RoadIntersectionInModel\], we present the optimality verification of the deep reinforcement learning algorithms. Then, we study a more realistic road intersection, shown in Fig. \[Fig:RoadIntersection\], and present our simulation results. Our simulations are executed on a server with Linux OS, 200 GB memory, two Intel(R) Xeon(R) Gold 5118 CPUs@2.30 GHz, and a Tesla V100-PCIE GPU. The implementation of Alg. 3 includes two parts: building the environment (including the traffic and communication models) for our scenarios, and using the DDPG algorithm in TensorFlow [@abadi2016tensorflow].

Optimality Verification of Deep Reinforcement Learning {#Subsec:OptimalityVerification}
------------------------------------------------------

The parameter settings are summarized in Table \[Tab:ValuesParameters\]. In the simulations, there are three types of parameters: DDPG algorithm parameters, communication parameters, and UAV/vehicle parameters. First, we describe the DDPG algorithm parameters. The number of episodes is 256, and the number of time slots in an episode is 256, so the total number of time slots is 65,536. The experience replay buffer capacity is 10,000, and the learning rate of the target networks $\tau$ is 0.001. The mini-batch size $M$ is $512$. The training data set becomes full at the $10,000^{th}$ time slot, and is updated in each of the following $256 \times 256 - 10,000 = 55,536$ time slots. The test data set is real-time among all the $256 \times 256 = 65,536$ time slots. Secondly, we describe the communication parameters.
$\alpha_1$ and $\alpha_2$ are set to 9.6 and 0.28, which are common values in urban areas [@mozaffari2015UAV_LOS]. $\beta_1$ is 3, and $\beta_2$ is 0.01, which are widely used in path loss modeling. The duration of a time slot is set to 6 seconds, and the number of time slots occupied by a red or green traffic light $N$ is 10, i.e., 60 seconds constitute a red/green duration, which is common in cities and ensures that the vehicles in the blocks can reach the next block within a time slot. The noise power spectral density $\sigma^2$ is set to -130 dBm/Hz. The total UAV transmission power $P$ is set to $6$ W in consideration of the UAV’s limited communication ability. The total number of channels $C$ is 10. The bandwidth of each channel $b$ is 100 KHz. Therefore, the total bandwidth of all channels is 1 MHz. The maximum power allocated to a vehicle $\rho_{\text{max}}$ is 3 W, and the maximum number of channels allocated to a vehicle $c_{\text{max}}$ is 5. We assume that the power control for each vehicle has 4 discrete values (0, 1, 2, 3). Thirdly, we describe the UAV/vehicle parameters. The length of a road block $\widehat{d}$ is set to 3 meters. The distance between blocks is easily calculated; for example, $D(1, 0) = \widehat{d}$ and $D(1, 3) = 2 \widehat{d}$, where $D(i, j)$ is the Euclidean distance from block $i$ to block $j$. We assume the arrivals of vehicles in blocks 1 and 2 follow a binomial distribution with the same parameter $\lambda$, which is set in the range $0.1 \sim 0.7$. The discount factor $\gamma$ is 0.9. The assumptions of the simplified scenario in Fig. \[Fig:RoadIntersectionInModel\] are as follows. To keep the state space small for verification purposes, we assume the channel states of all communication links are LoS, and the UAV’s height is fixed at 150 meters, so that the UAV can only adjust its horizontal flight control and transmission control. The traffic light state is assumed to have two values (red or green).
The configuration of the neural networks in the proposed solutions is based on the configuration of the DDPG action space. A neural network consists of an input layer, fully-connected layers, and an output layer. The number of fully-connected layers in the actor network is set to 4.

------------ --------------------- ------------------ ----------- ------------------ ------------------
$\alpha_1$   $\alpha_2$            $\beta_1$          $\beta_2$   $\sigma^2$         $\widehat{d}$
9.6          0.28                  3                  0.01        -130 dBm/Hz        3
$P$          $C$                   $N$                $\gamma$    $z_{\text{min}}$   $z_{\text{max}}$
1 $\sim$ 6   10                    10                 0.9         10                 200
$M$          $\lambda$             $g^s_i$            $g^l_i$     $g^r_i$            $b$
512          0.1 $\sim$ 0.7        0.4                0.3         0.3                100 kHz
$\tau$       $\rho_{\text{max}}$   $c_{\text{max}}$
0.001        3 W                   5
------------ --------------------- ------------------ ----------- ------------------ ------------------

: Values of parameters in simulation settings[]{data-label="Tab:ValuesParameters"}

Theoretically, it is well known that deep reinforcement learning algorithms (including DDPG algorithms) can solve MDP problems and approach the optimal results with much less memory and fewer computational resources. We verify the optimality of the DDPG-based algorithms in Alg. 3 on the one-way-two-flow road intersection in Fig. \[Fig:RoadIntersectionInModel\] for the following reasons: (i) the MDP problem in such a simplified scenario is explicitly defined, so the theoretically optimal policy can be obtained using the Python MDP Toolbox [@Python_MDP]; and (ii) this optimality verification also serves as a useful code-debugging step before we apply the DDPG algorithm in TensorFlow [@abadi2016tensorflow] to the more realistic road intersection scenario in Fig. \[Fig:RoadIntersection\]. ![Total throughput vs.
vehicle arrival probability $\lambda$ in optimality verification.[]{data-label="Fig:ThroughputWithLambdaDDPGAndOptimal"}](Fig/ThroughputWithLambdaDDPGAndOptimal.pdf){height="0.84\linewidth" width="0.93\linewidth"} The result of the DDPG-based algorithms matches that of the policy iteration algorithm implemented with the Python MDP Toolbox [@Python_MDP] (serving as the optimal policy). The total throughput obtained by the policy iteration algorithm and by the DDPG-based algorithms is shown as dashed lines and solid lines, respectively, in Fig. \[Fig:ThroughputWithLambdaDDPGAndOptimal\]. Therefore, the DDPG-based algorithms achieve near-optimal policies. We see that the total throughput of JointControl is the largest, much higher than that of PowerControl and FlightControl. This is consistent with our expectation that the joint control of power allocation and flight outperforms the control of either one alone. The performance of PowerControl is better than that of FlightControl. The throughput increases with the vehicle arrival probability $\lambda$ in all algorithms, and it saturates when $\lambda \geq 0.6$ due to traffic congestion.

More Realistic Traffic Model {#Subsec:RealisticTrafficModel}
----------------------------

![Realistic road intersection model.[]{data-label="Fig:RoadIntersection"}](Fig/RoadIntersection.pdf){height="0.99\linewidth" width="0.99\linewidth"}

We consider a more realistic road intersection model in Fig. \[Fig:RoadIntersection\]. There are 33 blocks in total, with four entrances (blocks 26, 28, 30, and 32) and four exits (blocks 25, 27, 29, and 31). Vehicles in block $i \in \{2, 4, 6, 8\}$ go straight, turn left, or turn right with probabilities $g^s_i$, $g^l_i$, and $g^r_i$, respectively, such that $g^s_i + g^l_i + g^r_i = 1$. We assume vehicles can turn right when the traffic light is green. Now, we describe the settings that differ from the last subsection. The discount factor $\gamma$ is $0.4 \sim 0.9$. The total UAV transmission power $P$ is set to $1 \sim 6$ W.
The total number of channels $C$ is 100 $\sim$ 200, which is much larger than in subsection \[Subsec:OptimalityVerification\] since there are more vehicles in the realistic model. The bandwidth of each channel $b$ is 5 kHz; therefore, the total bandwidth of all channels is $0.5 \sim 1$ MHz. The maximum power allocated to a vehicle $\rho_{\text{max}}$ is 0.9 W, and the maximum number of channels allocated to a vehicle $c_{\text{max}}$ is 50. The minimum and maximum heights of the UAV are 10 meters and 200 meters, respectively. The probabilities of a vehicle going straight, turning left, and turning right ($g^s_i$, $g^l_i$, and $g^r_i$) are set to 0.4, 0.3, and 0.3, respectively, and are assumed to be the same in blocks 2, 4, 6, and 8. We assume the arrival of vehicles in blocks 26, 28, 30, and 32 follows a binomial distribution with the same parameter $\lambda$ in the range $0.1 \sim 0.7$. The UAV's horizontal and vertical flight actions are as follows. We restrict the UAV to blocks 0 $\sim$ 8, since the number of vehicles in the intersection block 0 is generally the largest and the UAV will not move to blocks far from the intersection block. Moreover, within a time slot, we assume that the UAV can stay in place or move only to an adjacent block. The UAV's vertical flight action is set by (\[Eqn:UAVVerticalFlight\]). In PowerControl, the UAV stays at block 0 at a height of 150 meters.

Baseline Schemes
----------------

We compare with two baseline schemes. Equal allocation of transmission power and channels is common in communication systems for fairness, so it is used in both baseline schemes. The first baseline scheme is Cycle, i.e., the UAV circles anticlockwise at a fixed height (e.g., 150 meters) and allocates the transmission power and channels equally to each vehicle in each time slot. The UAV moves along the fixed trajectory periodically, without considering the vehicle flows.
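The Cycle baseline just described can be sketched as follows. This is a hedged sketch: the anticlockwise ring of blocks below is a hypothetical ordering chosen for illustration (the actual trajectory over the blocks in Fig. \[Fig:RoadIntersection\] is fixed by the scheme), and the budgets correspond to the $P = 6$ W, $C = 200$ setting.

```python
from itertools import cycle

# Hedged sketch of the Cycle baseline: the UAV visits a fixed anticlockwise
# ring of blocks at constant height and splits power and channels equally
# among the vehicles currently served. RING is a hypothetical block ordering,
# not the paper's exact trajectory.
RING = [1, 3, 5, 7]            # hypothetical anticlockwise order of blocks
P_TOTAL, C_TOTAL = 6.0, 200    # total transmission power (W) and channels

def equal_allocation(num_vehicles):
    """Equal per-vehicle power and (integer) channel shares."""
    if num_vehicles == 0:
        return 0.0, 0
    return P_TOTAL / num_vehicles, C_TOTAL // num_vehicles

_positions = cycle(RING)

def cycle_step(num_vehicles):
    """One time slot: the next block on the ring plus the equal allocation."""
    rho, ch = equal_allocation(num_vehicles)
    return next(_positions), rho, ch
```

The Greedy baseline differs only in the position update: instead of `next(_positions)`, it would pick the block currently holding the most vehicles (via block 0 if that block is not adjacent), keeping the same equal allocation.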
The second baseline scheme is Greedy, i.e., at a fixed height (e.g., 150 meters), the UAV greedily moves to the block with the largest number of vehicles. If a nonadjacent block has the largest number of vehicles, the UAV has to move to block 0 first and then to that block. The UAV also allocates the transmission power and the channels equally to each vehicle in each time slot. Thus, Greedy tries to serve the block with the largest number of vehicles by moving nearer to it.

Simulation Results
------------------

The training time is about 4 hours, and the test runs in nearly real time, since it only uses the well-trained target policy network. Next, we first show the convergence of the loss functions; then we show the total throughput vs. the discount factor, total transmission power, total number of channels, and vehicle arrival probability; and finally we present the total throughput and the UAV's flight time vs. the energy percentage for 3D flight.

![Convergence of loss functions in training stage.[]{data-label="Fig:Loss"}](Fig/Loss.pdf){height="0.73\linewidth" width="0.99\linewidth"}

The convergence of the loss functions in the training stage for PowerControl, FlightControl, and JointControl indicates that the neural networks are well trained. It is shown in Fig. \[Fig:Loss\] for $P = 6$, $C = 200$, $\lambda = 0.5$, and $\gamma = 0.9$ during time slots 10,000 $\sim$ 11,000. The first 10,000 time slots are not shown, since during time slots 0 $\sim$ 10,000 the experience replay buffer has not yet reached its capacity. We see that the loss functions of all three algorithms converge after time slot 11,000. The other metrics in the paper are measured in the test stage by default.

![Throughput vs. discount factor $\gamma$.[]{data-label="Fig:ThroughputWithDiscount"}](Fig/ThroughputWithDiscount.pdf){height="0.84\linewidth" width="0.93\linewidth"}

Total throughput vs. discount factor $\gamma$ is drawn in Fig. \[Fig:ThroughputWithDiscount\] for $P = 6$, $C = 200$, and $\lambda = 0.5$.
We can see that the throughput of the three algorithms remains steady as $\gamma$ changes, and that JointControl achieves higher total throughput than both PowerControl and FlightControl. PowerControl achieves higher throughput than FlightControl: PowerControl allocates power and channels to the strongest channels, while FlightControl only adjusts the UAV's 3D position to enhance the strongest channel, and equal power and channel allocation is far from the best strategy in OFDMA.

![Total throughput vs. total transmission power (C = 200).[]{data-label="Fig:ThroughputWithPower"}](Fig/ThroughputWithPower.pdf){height="0.84\linewidth" width="0.93\linewidth"}

![Total throughput vs. total number of channels (P = 6).[]{data-label="Fig:ThroughputWithChannel"}](Fig/ThroughputWithChannel.pdf){height="0.84\linewidth" width="0.93\linewidth"}

Total throughput vs. total transmission power ($P = 1 \sim 6$) and total number of channels ($C = 100 \sim 200$) are shown in Fig. \[Fig:ThroughputWithPower\] and Fig. \[Fig:ThroughputWithChannel\], where we set $\lambda = 0.5$ and $\gamma = 0.9$. We see that JointControl achieves the best performance under different transmission power and channel budgets. Moreover, the total throughput of all algorithms increases as the total transmission power or the total number of channels increases. PowerControl and FlightControl adjust only the transmission power or the 3D flight, while JointControl jointly adjusts both, so its performance is the best. The total throughput of the DDPG-based algorithms is much higher than that of Cycle and Greedy. The performance of Greedy is slightly better than that of Cycle, since Greedy tries to get nearer to the block with the largest number of vehicles. ![Total throughput vs.
vehicle arrival probability $\lambda$ is shown in Fig. \[Fig:ThroughputWithLambda\]. Note that the road intersection has a capacity of $2$ units, i.e., it can serve at most two traffic flows at the same time; therefore, it cannot serve traffic flows where $\lambda$ is very high, e.g., $\lambda = 0.8$ or $\lambda = 0.9$. We see that as $\lambda$ increases, i.e., more vehicles arrive at the intersection, the total throughput increases. However, when $\lambda$ gets higher, e.g., $\lambda = 0.6$, the total throughput saturates due to traffic congestion.

![Total throughput vs. energy percent for 3D flight in JointControl (P = 6, C = 200).[]{data-label="Fig:ThroughputWithEnergyPercent"}](Fig/ThroughputWithEnergyPercent.pdf){height="0.84\linewidth" width="0.93\linewidth"}

![UAV's flight time vs. energy percent for 3D flight in JointControl (P = 6, C = 200).[]{data-label="Fig:FlightTimeWithEnergyPercent"}](Fig/FlightTimeWithEnergyPercent.pdf){height="0.84\linewidth" width="0.93\linewidth"}

Next, we test the metrics considering the energy consumption of 3D flight. The total throughput vs. the energy percentage for 3D flight in JointControl is shown in Fig. \[Fig:ThroughputWithEnergyPercent\]. When $\tau^+$ increases, the total throughput generally increases; that is, a UAV that is more active in 3D flight helps improve the total throughput. However, the improvement is not very pronounced, since the UAV also has to account for the energy consumption in the new reward function (\[Eqn:RewardExtension\]). In addition, when $\tau^+$ is higher, the total throughput has more variance, since the UAV prefers to pursue higher reward through riskier maneuvers. The UAV's flight time vs. the energy percentage for 3D flight in JointControl is shown in Fig. \[Fig:FlightTimeWithEnergyPercent\]. When $\tau^- = 0.001$ and $\tau^+ = 0.0008$, the UAV's flight time is the longest, since the UAV is inactive.
When $\tau^- = 0.001$ and $\tau^+ = 0.0012$, the UAV's flight time is the shortest, since the UAV is active and prefers to fly. When $\tau^- = \tau^+ = 0.001$, the UAV's flight time is between the other two cases. If the energy percentage for 3D flight increases, the UAV's flight time increases linearly in all three cases.

Conclusions {#Sec:Conclusion}
===========

We studied a UAV-assisted vehicular network where the UAV acts as a relay to maximize the total throughput between the UAV and vehicles. We focused on the downlink communication, where the UAV can adjust its transmission control (power and channel) under 3D flight. We formulated our problem as an MDP problem, explored the state transitions of the UAV and vehicles under different actions, proposed three deep reinforcement learning schemes based on the DDPG algorithm, and finally extended them to account for the energy consumption of the UAV's 3D flight by modifying the reward function and the DDPG framework. In a simplified scenario with a small state space and action space, we verified the optimality of the DDPG-based algorithms. Through simulation results, we demonstrated the superior performance of the algorithms in a more realistic traffic scenario compared with two baseline schemes. In the future, we will consider the scenario where multiple UAVs constitute a relay network to assist vehicular networks, and study the coverage overlap/probability, relay selection, energy harvesting communications, and UAV cooperative communication protocols. We pre-trained the proposed solutions on servers; we expect the UAV itself to train the neural networks in the future once lightweight, low-power GPUs become available at the edge.

[10]{} M. Chaqfeh, H. El-Sayed, and A. Lakas, “Efficient data dissemination for urban vehicular environments,” [*IEEE Transactions on Intelligent Transportation Systems (TITS)*]{}, no. 99, pp. 1–11, 2018. M. Zhu, X.-Y. Liu, F. Tang, M. Qiu, R. Shen, W. Shu, and M.-Y.
Wu, “Public vehicles for future urban transportation,” [*IEEE Transactions on Intelligent Transportation Systems (TITS)*]{}, vol. 17, no. 12, pp. 3344–3353, 2016. M. Zhu, X.-Y. Liu, and X. Wang, “Joint transportation and charging scheduling in public vehicle systems - a game theoretic approach,” [*IEEE Transactions on Intelligent Transportation Systems (TITS)*]{}, vol. 19, no. 8, pp. 2407–2419, 2018. M. Zhu, X.-Y. Liu, and X. Wang, “An online ride-sharing path-planning strategy for public vehicle systems,” [*IEEE Transactions on Intelligent Transportation Systems (TITS)*]{}, vol. 20, no. 2, pp. 616–627, 2019. K. Li, C. Yuen, S. S. Kanhere, K. Hu, W. Zhang, F. Jiang, and X. Liu, “An experimental study for tracking crowd in smart cities,” [*IEEE Systems Journal*]{}, 2018. F. Cunha, L. Villas, A. Boukerche, G. Maia, A. Viana, R. A. Mini, and A. A. Loureiro, “Data communication in [VANETs]{}: protocols, applications and challenges,” [*Elsevier Ad Hoc Networks*]{}, vol. 44, pp. 90–103, 2016. H. Sedjelmaci, S. M. Senouci, and N. Ansari, “Intrusion detection and ejection framework against lethal attacks in [UAV]{}-aided networks: a bayesian game-theoretic methodology,” [*IEEE Transactions on Intelligent Transportation Systems (TITS)*]{}, vol. 18, no. 5, pp. 1143–1153, 2017. “Paving the path to [5G]{}: optimizing commercial [LTE]{} networks for drone communication (2018).”\ <https://www.qualcomm.cn/videos/paving-path-5g-optimizing-commercial-lte-networks-drone-communication>. “Huawei signs [MoU]{} with [China]{} [Mobile]{} [Sichuan]{} and [Fonair]{} aviation to build cellular test networks for logistics drones (2018).”\ <https://www.huawei.com/en/press-events/news/2018/3/MoU-ChinaMobile-FonairAviation-Logistics>. M. Alzenad, A. El-Keyi, F. Lagum, and H.
Yanikomeroglu, “[3-D]{} placement of an unmanned aerial vehicle base station ([UAV-BS]{}) for energy-efficient maximal coverage,” [*IEEE Wireless Communications Letters (WCL)*]{}, vol. 6, no. 4, pp. 434–437, 2017. M. Giordani, M. Mezzavilla, S. Rangan, and M. Zorzi, “An efficient uplink multi-connectivity scheme for [5G]{} [mmWave]{} control plane applications,” [*IEEE Transactions on Wireless Communications (TWC)*]{}, 2018. S. Wang, H. Liu, P. H. Gomes, and B. Krishnamachari, “Deep reinforcement learning for dynamic multichannel access in wireless networks,” [*IEEE Transactions on Cognitive Communications and Networking (TCCN)*]{}, vol. 4, no. 2, pp. 257–265, 2018. Q. Yang and S.-J. Yoo, “Optimal [UAV]{} path planning: sensing data acquisition over [IoT]{} sensor networks using multi-objective bio-inspired algorithms,” [*IEEE Access*]{}, vol. 6, pp. 13671–13684, 2018. M. Garraffa, M. Bekhti, L. L[é]{}tocart, N. Achir, and K. Boussetta, “Drones path planning for [WSN]{} data gathering: a column generation heuristic approach,” in [*IEEE Wireless Communications and Networking Conference (WCNC)*]{}, pp. 1–6, 2018. C. H. Liu, Z. Chen, J. Tang, J. Xu, and C. Piao, “Energy-efficient [UAV]{} control for effective and fair communication coverage: A deep reinforcement learning approach,” [*IEEE Journal on Selected Areas in Communications (JSAC)*]{}, vol. 36, no. 9, pp. 2059–2070, 2018. H. Wang, G. Ding, F. Gao, J. Chen, J. Wang, and L. Wang, “Power control in [UAV]{}-supported ultra dense networks: communications, caching, and energy transfer,” [*IEEE Communications Magazine*]{}, vol. 56, no. 6, pp. 28–34, 2018. S. Yan, M. Peng, and X. Cao, “A game theory approach for joint access selection and resource allocation in [UAV]{} assisted [IoT]{} communication networks,” [*IEEE Internet of Things Journal (IOTJ)*]{}, 2018. Y. Wu, J. Xu, L. Qiu, and R. 
Zhang, “Capacity of [UAV]{}-enabled multicast channel: joint trajectory design and power allocation,” in [*IEEE International Conference on Communications (ICC)*]{}, pp. 1–7, 2018. Y. Zeng, X. Xu, and R. Zhang, “Trajectory design for completion time minimization in [UAV]{}-enabled multicasting,” [*IEEE Transactions on Wireless Communications (TWC)*]{}, vol. 17, no. 4, pp. 2233–2246, 2018. S. Zhang, Y. Zeng, and R. Zhang, “Cellular-enabled [UAV]{} communication: trajectory optimization under connectivity constraint,” in [*IEEE International Conference on Communications (ICC)*]{}, pp. 1–6, 2018. R. Fan, J. Cui, S. Jin, K. Yang, and J. An, “Optimal node placement and resource allocation for [UAV]{} relaying network,” [*IEEE Communications Letters*]{}, vol. 22, no. 4, pp. 808–811, 2018. U. Challita, W. Saad, and C. Bettstetter, “Deep reinforcement learning for interference-aware path planning of cellular-connected [UAVs]{},” in [*IEEE International Conference on Communications (ICC)*]{}, 2018. X.-Y. Liu, Z. Ding, S. Borst, and A. Walid, “Deep reinforcement learning for intelligent transportation systems,” in [*NeurIPS Workshop on Machine Learning for Intelligent Transportation Systems*]{}, 2018. A. Al-Hourani, S. Kandeepan, and S. Lardner, “Optimal [LAP]{} altitude for maximum coverage,” [*IEEE Wireless Communications Letters (WCL)*]{}, vol. 3, no. 6, pp. 569–572, 2014. M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Unmanned aerial vehicle with underlaid device-to-device communications: Performance and tradeoffs,” [ *IEEE Transactions on Wireless Communications (TWC)*]{}, vol. 15, no. 6, pp. 3949–3963, 2016. D. Oehmann, A. Awada, I. Viering, M. Simsek, and G. P. Fettweis, “[SINR]{} model with best server association for high availability studies of wireless networks,” [*IEEE Wireless Communications Letters (WCL)*]{}, vol. 5, no. 1, pp. 60–63, 2015. N. Gupta and V. A. 
Bohara, “An adaptive subcarrier sharing scheme for ofdm-based cooperative cognitive radios,” [*IEEE Transactions on Cognitive Communications and Networking (TCCN)*]{}, vol. 2, no. 4, pp. 370–380, 2016. P. Ramezani and A. Jamalipour, “Throughput maximization in dual-hop wireless powered communication networks,” [*IEEE Transactions on Vehicular Technology (TVT)*]{}, vol. 66, no. 10, pp. 9304–9312, 2017. M. Agiwal, A. Roy, and N. Saxena, “Next generation [5G]{} wireless networks: A comprehensive survey,” [*IEEE Communications Surveys & Tutorials*]{}, vol. 18, no. 3, pp. 1617–1655, 2016. Q. Wu and R. Zhang, “Common throughput maximization in uav-enabled ofdma systems with delay consideration,” [*IEEE Transactions on Communications (TOC)*]{}, vol. 66, no. 12, pp. 6614–6627, 2018. S. Zhang, Y. Zeng, and R. Zhang, “Cellular-enabled uav communication: Trajectory optimization under connectivity constraint,” in [*IEEE International Conference on Communications (ICC)*]{}, pp. 1–6, 2018. R. S. Sutton and A. G. Barto, [*Reinforcement learning: An introduction*]{}. MIT press, 2018. C. J. Watkins and P. Dayan, “Q-learning,” [*Springer Machine learning*]{}, vol. 8, no. 3-4, pp. 279–292, 1992. C. Wirth and G. Neumann, “Model-free preference-based reinforcement learning,” in [*AAAI Conference on Artificial Intelligence*]{}, 2016. H. Van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double q-learning,” in [*AAAI Conference on Artificial Intelligence*]{}, 2016. T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” in [*International Conference on Learning Representations (ICLR)*]{}, 2016. D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, [*et al.*]{}, “Mastering the game of go without human knowledge,” [*Nature*]{}, vol. 550, no. 7676, p. 354, 2017. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. 
Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning,” [ *https://arxiv.org/pdf/1312.5602*]{}, 2013. A. Daniely, “[SGD]{} learns the conjugate kernel class of the network,” in [ *Advances in Neural Information Processing Systems (NIPS)*]{}, pp. 2422–2430, 2017. Z. Wang, V. Aggarwal, and X. Wang, “Joint energy-bandwidth allocation in multiple broadcast channels with energy harvesting,” [*IEEE Transactions on Communications (TOC)*]{}, vol. 63, no. 10, pp. 3842–3855, 2015. “Homepage of [DJI Mavic Air]{} (2019).”\ <https://www.dji.com/cn/mavic-air?site=brandsite&from=nav>. G. Lefebvre, M. Lebreton, F. Meyniel, S. Bourgeois-Gironde, and S. Palminteri, “Behavioural and neural characterization of optimistic reinforcement learning,” [*Nature Human Behaviour*]{}, vol. 1, no. 4, p. 0067, 2017. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, [*et al.*]{}, “Tensorflow: a system for large-scale machine learning,” in [*USENIX Symposium on Operating Systems Design and Implementation (OSDI)*]{}, pp. 265–283, 2016. M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Drone small cells in the clouds: design, deployment and performance analysis,” in [*IEEE Global Communications Conference (GLOBECOM)*]{}, pp. 1–6, 2015. “[Python Markov decision process (MDP) Toolbox]{} (2019).”\ <https://pymdptoolbox.readthedocs.io/en/latest/api/mdptoolbox.html>. \   [**Ming Zhu**]{} received the Ph.D. degree in Computer Science and Engineering from Shanghai Jiao Tong University, Shanghai, China. He is now a Post-Doctoral Researcher and an Assistant Professor in the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. His research interests are in the areas of big data, intelligent transportation systems, smart cities, and artificial intelligence. [Xiao-Yang Liu]{} received the B.Eng.
degree in Computer Science from Huazhong University of Science and Technology, and the Ph.D. degree from the Department of Computer Science and Engineering, Shanghai Jiao Tong University, China. He is currently a Ph.D. student in the Department of Electrical Engineering, Columbia University. His research interests include tensor theory, deep learning, non-convex optimization, big data analysis, and IoT applications. [Xiaodong Wang]{} (S’98-M’98-SM’04-F’08) received the Ph.D. degree in electrical engineering from Princeton University. He is currently a Professor of electrical engineering with Columbia University, New York, NY, USA. His research interests fall in the general areas of computing, signal processing, and communications. He has published extensively in these areas. He has authored the book entitled Wireless Communication Systems: Advanced Techniques for Signal Reception (Prentice Hall, 2003). His current research interests include wireless communications, statistical signal processing, and genomic signal processing. He has served as an Associate Editor of the IEEE TRANSACTIONS ON COMMUNICATIONS, IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, IEEE TRANSACTIONS ON SIGNAL PROCESSING, and IEEE TRANSACTIONS ON INFORMATION THEORY. He is an ISI Highly Cited Author. He received the 1999 NSF CAREER Award, the 2001 IEEE Communications Society and Information Theory Society Joint Paper Award, and the 2011 IEEE Communication Society Award for Outstanding Paper on New Communication Topics. [^1]: $^*$Equal contribution. [^2]: M. Zhu is with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China. E-mail: . [^3]: X.-Y. Liu and X. Wang are with the Department of Electrical Engineering, Columbia University, New York, NY 10027, USA E-mail: {xiaoyang, wangx}@ee.columbia.edu.
---
author:
- |
    Yongge Wang\
    Certicom Research, Certicom Corporation\
    5520 Explorer Dr., 4th floor, L4W 5L1, Canada\
    [ywang@certicom.com]{}
date:
title: 'Using Mobile Agent Results to Create Hard-To-Detect Computer Viruses'
---

\[section\] \[thm\][Lemma]{} \[thm\][Corollary]{} \[thm\][Proposition]{} \[thm\][Example]{} \[thm\][Definition]{} \[thm\][Question]{} \[thm\][Problem]{}

Introduction
============

The term [*computer virus*]{} is often used to indicate any software that can cause harm to systems or networks. People often do not include certain malicious software, such as Trojan horses and network worms, in the computer virus family. In our discussion, a [*computer virus*]{} refers to any code that causes computer or network systems to behave in a manner different from the desired one. A Trojan horse program is a useful or apparently useful program or shell script containing hidden code that performs some unwanted function. A simple example of a Trojan horse program might be a [**telnet**]{} program. When a user invokes the program, it appears to perform a telnet session and nothing more; however, it may also be quietly changing file access permissions. Some Trojan horse programs are difficult to detect; for example, a compiler on a multi-user system may have been modified to insert additional code into certain programs when they are compiled (see Thompson [@thompson]). The hidden code of a Trojan horse program is deliberately placed by the program's author. Generally, the hidden code in a computer virus program is added by another program, and that program itself is a computer virus. Thus one of the typical characteristics of a computer virus is that it copies its hidden code to other programs, thereby [*infecting*]{} them. Generally, a computer virus exhibits three characteristics (see, e.g., [@pb; @wca; @wc]): a [*replication*]{} mechanism, an [*activation*]{} mechanism, and an [*objective*]{}.
The replication mechanism performs the following functions: it searches for other programs to infect; when it finds a program, it [*possibly*]{} determines whether the program has been previously infected; it inserts the hidden instructions somewhere in the program; it modifies the execution sequence of the program's instructions so that the hidden code is executed whenever the program is invoked; and it sets a flag indicating that the program has been infected. The flag may be necessary because without it, programs could be repeatedly infected and grow noticeably large. The activation mechanism checks for the occurrence of some event. When the event occurs, the computer virus executes its objective, which is generally some unwanted, harmful action. Anti-virus tools perform three basic functions: they [*detect, identify,*]{} or [*remove viruses*]{}. Detection tools perform proactive detection, active detection, or reactive detection. That is, they detect a virus before it executes, during execution, or after execution. Identification and removal tools are more straightforward in their application. For the detection of viruses, there are five classes of techniques (see, e.g., [@pb]): signature scanning and algorithmic detection, general purpose monitors, access control shells, checksums for change detection, and heuristic binary analysis. Note that since the paper [@pb], a new technique, [*emulation detection*]{}, has been extensively used to detect polymorphic viruses. In this paper, we will concentrate on designing viruses which are signature-free, and we will not discuss the techniques for identification and removal. A common class of anti-virus tools employs the complementary techniques of signature scanning and algorithmic detection. This class of tools is known as scanners. Scanners are intrinsically limited to the detection of known viruses. In signature scanning, an executable is searched for a selected binary code sequence, called a [*virus signature*]{}.
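Signature scanning as described above reduces to a substring search over an executable's byte stream. A minimal sketch; the signatures below are made-up placeholders, not real virus byte sequences:

```python
# Minimal sketch of signature scanning: search an executable's byte stream
# for each known virus signature. The signatures here are made-up
# placeholders, not real virus byte sequences.
SIGNATURES = {
    "example-virus-a": bytes.fromhex("deadbeef0042"),
    "example-virus-b": bytes.fromhex("cafebabe1337"),
}

def scan(executable: bytes):
    """Return the names of all known signatures found in the byte stream."""
    return [name for name, sig in SIGNATURES.items() if sig in executable]

infected = b"\x00" * 16 + bytes.fromhex("deadbeef0042") + b"\x90" * 8
clean = b"\x90" * 32
```

A plain byte-sequence match of this kind is exactly what the static-signature-free constructions in this paper are designed to evade, which is why scanners pair it with algorithmic detection.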
General purpose monitors protect a system from the replication of viruses by actively intercepting malicious actions. Access control shells function as part of the operating system, much like monitoring tools. Rather than monitoring for virus-like behavior, the shell attempts to enforce an access control policy for the system. Change detection works on the theory that executables are static objects; therefore, modification of an executable implies a possible virus infection. However, this theory has a basic flaw: some executables are self-modifying. Heuristic binary analysis is a method whereby the analyzer traces through an executable looking for suspicious, virus-like behavior. If a program appears to perform virus-like actions, a warning is displayed. Indeed, the signature scanner is the most widely used technique to detect viruses. Note that at the beginning of this section we mentioned that at the end of the replication mechanism, the virus will generally put a flag in the infected program (this flag can often be included in the signature of the virus). Otherwise the program might be repeatedly infected, and its size might increase geometrically; hence the virus would be detected easily. The authors of computer viruses always try to design viruses that are immune to the anti-virus software on the market. A [*polymorphic virus*]{} (see, e.g., [@pb; @fs]) creates copies during replication that are functionally equivalent but have distinctly different byte streams. To achieve this, the virus may randomly insert superfluous instructions, interchange the order of independent instructions, or choose from a number of different encryption schemes. This variable quality makes the virus difficult to locate, identify, and remove. Such viruses can be thought of as static signature-free viruses. However, to the best of our knowledge, all known polymorphic viruses can be detected by algorithmic detection (or emulation detection) due to their dynamic signatures.
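Of the techniques above, change detection is the easiest to make concrete: record a cryptographic checksum of each executable while it is known to be clean, then flag any executable whose recomputed checksum differs. A minimal sketch using SHA-256 (the specific hash function is our choice, not prescribed by the text):

```python
import hashlib

# Minimal sketch of checksum-based change detection: store a hash of each
# clean executable and flag any executable whose recomputed hash differs.
# (As noted above, self-modifying executables defeat this check.)
def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

baseline = {}  # path -> checksum recorded while the executable was clean

def record(path: str, data: bytes):
    baseline[path] = checksum(data)

def changed(path: str, data: bytes) -> bool:
    """True if the executable differs from its recorded clean state
    (or was never recorded)."""
    return baseline.get(path) != checksum(data)

record("/bin/example", b"original code")
```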
In this paper, we will introduce a new concept, the [*dynamic signature*]{} of a virus, and present a method to design viruses which are static signature-free and whose dynamic signatures are hard to determine unless some cryptographic assumption fails. This answers a long-standing open question in this area.

Notations
=========

Turing machines are the basic model for computation. We will present our construction in terms of Turing machines. Our notations are standard, e.g., as those in Hopcroft and Ullman [@hu]. We will use two-way infinite tape, multi-track, and multi-tape Turing machines as our model of computation. Formally, a Turing machine (TM) is denoted $$M=(Q,\Sigma,\Gamma,\delta,q_0,B,F)$$ where $Q$ is the finite set of [*states*]{}, $\Gamma$ is the finite set of allowable [*tape symbols*]{}, $B$ is the [*blank symbol*]{} which belongs to $\Gamma$, $\Sigma$ is the set of [*input symbols*]{}, a subset of $\Gamma$ not including $B$, $\delta$ is the [*next move function*]{}, a mapping from $Q\times \Gamma$ to $Q\times \Gamma\times\{L,R\}$ ($\delta$ may, however, be undefined for some arguments), $q_0\in Q$ is the [*start state*]{}, and $F\subseteq Q$ is the set of [*final states*]{}. For each TM $M$, there is a read-only input tape and a write-only output tape. Without loss of generality, we will assume that $\Sigma = \{0,1\}$ in this paper unless specified otherwise. The set of strings over $\Sigma$ is denoted $\Sigma^*$, and the set of length-$n$ strings is denoted $\Sigma^n$. A subset of $\Sigma^*$ is called a set, a problem, or just a language. The characteristic function of a set $A$ is defined by letting $A(x) = 1$ if $x\in A$ and $A(x)=0$ otherwise.

Generating undetectable signatures
==================================

From our analysis in the previous sections, in order for a virus to be undetectable, the virus must have some mechanism to check whether a program has already been infected by it.
For this purpose, most viruses will put some flags in infected programs. These flags can generally be considered as signatures. Formally, the [*signature*]{} of a virus is a byte sequence by which the virus can be distinguished from other programs or viruses. That is, the virus code or the programs infected by it contain this byte sequence, but all other programs or viruses generally do not. This definition of a virus signature is a static one (and is the one generally used in the literature). In this paper, we will introduce a new concept: a virus can have a [*dynamic signature*]{}. Formally, a [*dynamic signature*]{} of a virus is a function which can be used to distinguish it from other programs or viruses. From the definition, it is clear that dynamic signatures of viruses can only be determined by simulating the execution of the virus or of the programs infected by it. It is clear that each non-trivial virus must have either a static signature or a dynamic signature. Polymorphic viruses do not have static signatures, but generally they have dynamic signatures. The reason why all previous polymorphic viruses can be detected by current anti-virus software on the market is that their dynamic signatures are generally easy to determine. Obviously, viruses which have static signatures, or whose dynamic signatures are easy to determine, can easily be detected. In order for a virus to be undetectable, it should have the following properties: - it leaves no static signature in infected programs, - its dynamic signature is undetectable; for example, hard to detect unless some cryptographic assumption fails. In this section, we present a method to design undetectable dynamic signatures. In the next section, we will present a method for a virus to infect programs so that it leaves no static signature and at the same time keeps its dynamic signature undetectable. 
Let $F:N\rightarrow N$ be a secure function with the following properties (we identify a positive integer with its binary representation): - $|F(x)| = |x|$ for all $x\in \Sigma^*$, - Given the values of $F$ on some given set $A\subset N$, it is hard for the enemy to compute any value of $F$ on $x\notin A$, where $N$ is the set of positive integers. Functions of this kind can be chosen from any variant of digital signature schemes or from some encryption schemes; for example, a possible candidate is the encryption scheme of Cramer and Shoup [@sc], which is secure against adaptive chosen ciphertext attack. The reader is referred to [@mov] for more details on digital signature schemes and encryption schemes. A set $D$ is called [*super sparse*]{} if $|D\cap \Sigma^{n}|\le 1$ for all $n$ and $|D|=\infty$. For each virus we will assign a super sparse set $D$ as its dynamic signature. Note that only the virus creator knows the dynamic signature of the virus. It should also be the case that it is computationally hard to guess the dynamic signature of any virus. Let $n_0>100$ be a large enough integer and $F$ be a secure function as mentioned above. Then the dynamic signature for a virus is assigned as follows: $$D_v=\{x\in\Sigma^n: x\mbox{ is the binary representation of } F(n), n>n_0\}$$ Our method to insert the dynamic signature into a virus is as follows: the virus will modify the behavior of the host program such that the program will output random values of $0$ or $1$ on inputs from $D_v$. Here we assume that the host program is a Turing machine $M$ which computes the characteristic function of a set $A$ (that is, $A$ represents the normal behavior of the program). Then the infected program will be a Turing machine $M^{\prime}$ with the properties that $M^{\prime}(x)=A(x)$ for all $x\notin D_v$ and $M^{\prime}(x)$ is a randomly chosen element of $\Sigma$ ($=\{0,1\}$) for $x\in D_v$. By the property of randomness, $M^{\prime}(x)= 1$ for approximately half of the inputs $x$ from $D_v$. 
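As a concrete toy illustration (not part of the construction above), the secure function $F$ can be instantiated with a keyed pseudorandom function; here HMAC-SHA256 with a secret key is an assumed stand-in, and `dynamic_signature_element` is a hypothetical helper returning the unique length-$n$ member of $D_v$:

```python
import hashlib
import hmac

def F(key: bytes, n: int) -> str:
    """Toy stand-in for the secure function F: derive an n-bit string
    from a keyed PRF (HMAC-SHA256), so that |F(n)| = n."""
    out = b""
    counter = 0
    while len(out) * 8 < n:
        msg = n.to_bytes(8, "big") + counter.to_bytes(4, "big")
        out += hmac.new(key, msg, hashlib.sha256).digest()
        counter += 1
    bits = "".join(f"{byte:08b}" for byte in out)
    return bits[:n]

def dynamic_signature_element(key: bytes, n: int, n0: int = 100):
    """Unique length-n element of D_v (None for n <= n0).
    By construction D_v is super sparse: at most one element per length."""
    if n <= n0:
        return None
    return F(key, n)
```

Without the key, predicting further elements of $D_v$ from observed ones is as hard as distinguishing the underlying PRF from random, which mirrors the second property required of $F$.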
Note that if $n_0$ is large enough, then the set $D_v$ is a super sparse subset of $\Sigma^*$, whence the terminal user will not notice the existence of the virus. Moreover, no anti-virus program will be able to detect the virus unless one of the following conditions holds: 1. the anti-virus program can search a space of size at least $2^{2n_0}$; 2. the function $F$ can be broken, that is, the anti-virus program can compute the value $F(n)$ for sufficiently many $n>n_0$. (Indeed, in practice, the anti-virus program does not even know the function $F$.) Let $M_{D_v}$ be the Turing machine that models a virus with a dynamic signature $D_v$. That is, $M_{D_v}$ has the following properties: - $M_{D_v}$ computes the desired virus function (or behavior), - $M_{D_v}$ outputs randomly chosen elements of $\Sigma$ for inputs from $D_v$. Checking infected programs {#ccheck} ========================== Each time a virus finds a chance to infect a program, it will first decide whether it has already infected the target program or not. For a virus with a static signature, this is straightforward. For a virus with a dynamic signature (but without a static signature), this can be done by virtually simulating the execution of the target program on some inputs. The following algorithm can be used for this purpose. [**Checking algorithm:**]{} 1. Let $c=0, j=n_0+1$, and let $K_0$ be a specific, reasonably large integer (e.g., $K_0=100000$); 2. \[loopbg\] Let $x_j$ be the binary representation of $F(j)$. We distinguish the following two cases: [**Case 1**]{}: $M(x_j) = 1$. Let $c=c+1$ and $j=j+1$. [**Case 2**]{}: $M(x_j) =0$. Let $j=j+1$. 3. If $j < K_0$ then go to (\[loopbg\]) else go to (\[looped\]). 4. \[looped\] If $c\approx \frac{K_0-n_0}{2}$ then $M$ has been infected. Otherwise, $M$ is not infected. Note that the above checking algorithm may sometimes output wrong information. That is, it may occasionally consider a non-infected program as infected. 
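The checking algorithm translates directly into code. The sketch below is a minimal illustration under toy assumptions: `toy_F` is an insecure stand-in for $F$, and the probed program answers either randomly (infected) or all-zero (clean):

```python
import random

def is_infected(M, F, n0, K0, tolerance=0.05):
    """The checking algorithm: probe M on x_j = F(j) for n0 < j < K0.
    An infected program answers randomly on D_v, so roughly
    (K0 - n0)/2 of the probes should return 1."""
    c = 0
    j = n0 + 1
    while j < K0:
        x_j = F(j)          # binary representation of F(j)
        if M(x_j) == 1:
            c += 1
        j += 1
    expected = (K0 - n0) / 2
    return abs(c - expected) <= tolerance * (K0 - n0)

# Toy demonstration (this F is NOT secure, just a stand-in):
random.seed(1)
toy_F = lambda j: format(j, "b")
infected = lambda x: random.getrandbits(1)   # random answers on the probes
clean = lambda x: 0                          # never answers 1 off D_v

infected_caught = is_infected(infected, toy_F, n0=100, K0=2000)
clean_passed = not is_infected(clean, toy_F, n0=100, K0=2000)
```

As the text notes, a clean program that happens to answer $1$ on about half the probes would be misclassified as infected; the tolerance parameter controls that trade-off.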
However, this is not a big problem for a virus: a virus only needs to infect many programs, not all of them. It should also be noted that if one can observe one execution of the above checking algorithm, then one can easily determine an initial part of the dynamic signature $D_v$, which would help the anti-virus software detect the virus. In order to solve this problem, we can use a protocol for computation on encrypted data (see [@af]). Abadi and Feigenbaum [@af] described a protocol for securely evaluating a Boolean circuit on encrypted data. They further reduced the problem of evaluating encrypted functions to the problem of processing encrypted data, by representing the Boolean circuit that is to be hidden as data fed to a universal Boolean circuit. Sander and Tschudin ([@st; @st1]) have considered similar problems in the context of protecting mobile agents against malicious hosts. They have shown that it is possible for a mobile agent to actively protect itself against an execution environment that tries to extract information from the agent for some malicious goal. In particular, they identified a special class of functions – polynomials and rational functions – together with encryption schemes that lead to a non-trivial example of cryptographically hiding a function such that it can nevertheless be executed with a non-interactive protocol. That is, mobile agents executing this class of functions can protect themselves against tampering by a malicious host and can conceal the programs they want to have executed. Obviously, our above checking algorithm can be considered as a Boolean function in the sense of Abadi and Feigenbaum [@af], though generally it does not belong to the function classes identified by Sander and Tschudin ([@st; @st1]). 
In practice, a virus writer may find some trade-off between the results of Abadi and Feigenbaum [@af] and those of Sander and Tschudin [@st; @st1], and achieve the following goal: the virus can check whether a program has already been infected without leaking any information about its dynamic signature. From the discussion in this section, we can model a virus with a dynamic signature $D_v$ as a Turing machine $M_{D_v}$ with the following properties: - $M_{D_v}$ computes the desired virus function (or behavior), - $M_{D_v}$ outputs randomly chosen elements of $\Sigma$ for inputs from $D_v$. - $M_{D_v}$ can check whether a given program has already been infected without leaking any information about its dynamic signature $D_v$. In the next section, we will give a method to embed this virus $M_{D_v}$ into any host program without leaving any static signature. Inserting a virus into a program ================================ Assume that we have a Turing machine $M$ (that is, a program) which computes the characteristic function of a set $A$. That is, for each $x\in \Sigma^*$, $M(x) = A(x)$. In order to infect $M$ with a virus $M_{D_v}$, we will combine the two Turing machines into one Turing machine and introduce randomness into the code so that no static signature is left in the combined Turing machine. For simplicity, we will assume that the virus Turing machine $M_{D_v}$ only computes the following function $$M_{D_v}(x)=\left\{ \begin{array}{ll} ? & x\notin D_v\\ 0\mbox{ or } 1 & x\in D_v \end{array} \right.$$ Of course, a virus will do many other harmful things in practice. Our results can easily be extended to such harmful viruses, since we can regard the host program Turing machine $M$ and the harmful part $M_v$ of the virus as one single Turing machine $\overline{M}$ and then apply our method (note that it is trivial to combine two Turing machines into one single Turing machine). 
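Semantically, the combination constructed in the next section computes nothing more than the following dispatch (a minimal sketch with hypothetical toy host and virus; the whole difficulty, addressed below, is realizing it so that the two parts cannot be separated):

```python
def infect(host, virus):
    """Combine the host program A and the virus part M_Dv:
    the virus's output '?' means 'defer to the host'."""
    def infected(x):
        v = virus(x)
        return host(x) if v == "?" else v
    return infected

# Toy host and virus (hypothetical examples):
host = lambda x: 1 if x.count("1") % 2 == 0 else 0   # parity program A
virus = lambda x: 1 if x == "10110" else "?"         # answers only on D_v
M_prime = infect(host, virus)
```

Off $D_v$ the infected program `M_prime` is indistinguishable from the host, which is exactly why only behavior on $D_v$ can betray the infection.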
Now the following problem remains to be addressed: - For each virus Turing machine $M_{D_v}$ and host program Turing machine $M$, how do we construct a Turing machine $M^\prime$ which computes the function $$\mbox{vcomb}\{A(x), M_{D_v}(x)\}=\left\{ \begin{array}{ll} A(x) & \mbox{if } M_{D_v}(x) = ?\\ M_{D_v}(x) & \mbox{otherwise} \end{array} \right.$$ such that from $M^\prime$ it is computationally hard to get any information about $D_v$? That is, how do we construct $M^\prime$ such that the anti-virus software cannot construct sufficiently many elements of $D_v$? Note that the anti-virus software may have no time to run the $M^\prime$s on all inputs to get information about the dynamic signature $D_v$. One way vcomb-combination of Turing machines {#one-way-vcomb-combination-of-turing-machines .unnumbered} -------------------------------------------- A Turing machine $M$ is called a [*[vcomb]{}-combination*]{} of TMs $M_1$ and $M_2$ if $M(x) =\mbox{vcomb}\{M_1(x), M_2(x)\}$ for all $x\in \Sigma^*$. A Turing machine $M$ is called a [*composition*]{} of TMs $M_1$ and $M_2$ if $M(x) = M_2(M_1(x))$ for all $x\in \Sigma^*$. It is widely believed that in practice it is hard to decompose a Turing machine into two parts. For example, a cryptosystem based on the hardness of decomposing (and inverting) finite automata is suggested in Tao and Chen [@tao]. However, if $M$ is just a simple composition of two TMs $M_1$ and $M_2$, then obviously one can construct $M_1$ (and $M_2$) from $M$ easily. In the following, we will present a method for constructing the vcomb-combination $M$ of two TMs $M_1$ and $M_2$ with the property that it is hard for the adversary to construct a TM $M_2^\prime$ which is almost equivalent to $M_2$ in semantics, that is, $M_2^\prime(x) = M_2(x)$ for sufficiently many (the exact number depends on the application) $x\in \Sigma^*$. The procedure is as follows: Input: TMs $M_1$ and $M_2$ which compute $A(x)$ and $M_{D_v}(x)$ respectively. 
Without loss of generality, we may assume that $M_1=(Q_1,\Sigma,\Gamma,\delta_1,q_{1,0},B,F)$ and $M_2=(Q_2,\Sigma,\Gamma,\delta_2,q_{2,0},B,F)$ satisfy the following properties. 1. $|Q_1|=|Q_2|$ and $|\delta_1|=|\delta_2|$; 2. Both $M_1$ and $M_2$ are $3$-tape Turing machines. That is, $M_i$ has one read-only input tape, one working tape, and one write-only output tape for $i=1,2$. 3. There is a number $k>0$ such that both $M_1$’s and $M_2$’s working tapes have $k$ tracks[^1]. Now construct $M$ as follows: 1. Construct a 7-tape Turing machine $M_a$ with the following properties: - The first tape is the input tape; - The second tape corresponds to the input tape of $M_1$; - The third tape corresponds to the $k$-track working tape of $M_1$; - The fourth tape corresponds to the output tape of $M_1$; - The fifth tape corresponds to the $k$-track working tape of $M_2$; - The sixth tape corresponds to the output tape of $M_2$; - The seventh tape is the output tape; - On input $x$, $M_a$ first copies $x$ from the first tape to the second tape, then simulates the computations of $M_1(x)$ and $M_2(x)$ in parallel (note that $M_2$ uses the input from the second tape) on the corresponding working tapes. In the end, if the content of tape $6$ is $?$, then $M_a$ copies the content of tape $4$ to tape $7$; otherwise $M_a$ copies the content of tape $6$ to tape $7$. Then $M_a$ halts. 2. 
As in the proof of [@hu Theorem 7.2, pp.161], convert $M_a$ into a 3-tape Turing machine $M_b$ with the following properties: - The first tape is the input tape (the same as $M_a$’s input tape); - The second tape has $2k+8$ tracks, where the first two tracks are for the second tape of $M_a$ (one to record the tape contents and one to record the second tape’s control head position), tracks $3$ through $k+3$ are for the third tape of $M_a$ ($k$ tracks to record the tape contents and one track to record the third tape’s control head position), tracks $k+4$ through $k+5$ are for the fourth tape of $M_a$, tracks $k+6$ through $2k+6$ are for the fifth tape of $M_a$, and tracks $2k+7$ through $2k+8$ are for the sixth tape of $M_a$. - The third tape is the output tape (the same as $M_a$’s output tape). 3. \[onewayfunction\] Choose a random permutation $R:\Sigma^{2k+8}\rightarrow \Sigma^{2k+8}$. Use the permutation $R$ to convert $M_b$ into a 3-tape Turing machine $M$ with the following properties: - The second tape of $M$ has only one track. That is, the second tape of $M$ is divided into tuple-blocks of size $2k+8$. Each tuple-block on the second tape of $M$ corresponds to one block of $M_b$’s second tape. - Each possible block on $M_b$’s second tape can be denoted by a $(2k+8)$-tuple of symbols from $\Sigma$. The next move function $\delta_{M_b}$ of $M_b$ is converted to the next move function $\delta_M$ in such a way that each instantaneous description block $x$ on $M_b$’s second tape (that is, a $(2k+8)$-tuple) is mapped to an instantaneous description tuple-block $R(x)$ of $M$. Note that generally the anti-virus software can get a copy of $M_1$ and $M$, and wants to find a description of $M_2$. 
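Step 3's scrambling can be illustrated at toy scale: a random permutation $R$ of all $(2k+8)$-tuples over $\Sigma$ (the full construction applies $R$ inside the next-move function; this sketch only builds $R$ itself):

```python
import itertools
import random

def block_permutation(k, rng):
    """Random permutation R over all (2k+8)-tuples of binary symbols.
    There are (2^(2k+8))! such permutations, which is what makes
    guessing R infeasible; here we build one explicitly (toy k only)."""
    width = 2 * k + 8
    blocks = ["".join(t) for t in itertools.product("01", repeat=width)]
    image = blocks[:]
    rng.shuffle(image)
    return dict(zip(blocks, image))
```

For realistic $k$ the explicit table is astronomically large, so an implementation would substitute a keyed pseudorandom permutation; that substitution is an assumption beyond the construction described in the text.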
In order for our construction to be robust against this situation, we can make the following change to the vcomb-combination construction: first make a random permutation of $M_1$ (similar to what we have done for $M_b$ in the construction), then apply our above vcomb-combination construction. Now it is clear that if one can recover $M_2$ from $M$, then one can compute the permutation $R$. \[9870\] Let $f$ be an algorithm to construct $M_2$ from $M$; then $f$ can be used to compute the permutation $R$. [**Proof**]{}. It is straightforward. $\Box$ \[thisscor\] The probability that one can decompose $M$ into $M_1$ and $M_2$ equals $\frac{1}{\left(2^{2k+8}\right)!}$. \[thissthm\] There is an efficient process to construct, from any two given Turing machines $M_1$ and $M_2$, a [vcomb]{}-combination Turing machine $M$ with the following property: given $M$ and $M_1$, with extremely high probability, one cannot get any information about $M_2$. [**Proof**]{}. This follows from Theorem \[9870\], Corollary \[thisscor\], and our above discussion. $\Box$ Signature undetectability {#signature-undetectability .unnumbered} ------------------------- Due to the random permutation $R$ in the vcomb-combination process, it is straightforward that our virus will not have any static signature. The dynamic signature of our virus is also hard to detect. First, by Corollary \[thisscor\] and Theorem \[thissthm\], with extremely high probability the anti-virus software cannot get a description of the Turing machine $M_{D_v}$ for the dynamic signature. Hence, from a static analysis of the infected program, the anti-virus software cannot get any information about the dynamic signature of the virus. Secondly, when the anti-virus software simulates a virtual execution of the virus (or the infected program), it can monitor the process by which the virus checks whether a program has already been infected and the process by which the virus inserts the hidden virus code into a program. 
However, as mentioned in section \[ccheck\], special techniques can be used to avoid the leakage of the dynamic signature. Conclusions =========== Even though our results show that a virus can be written in such a way that its dynamic signature is hard to detect, each virus is still detectable. For example, when we notice the existence of a virus, we can write a simple program and let it be infected. Then we can turn this infected program into a virus detector as follows: run the program in a protected environment and activate its virus infection code. If this program decides not to infect the target code, then with high probability the target code is already infected by this virus. This implies that theoretically all known viruses can be detected. However, this method is infeasible in practice. The main difficulty is that there may be millions of different viruses. If we include all of these viruses in the anti-virus software package, the package will have a huge size, and the detection process will be extremely slow. In this paper, we have used some results from mobile agents to improve the quality of viruses. Indeed, there is a close relationship between viruses and mobile agents. In some sense, network worms can be considered as the prototype of mobile agents. Mobile agents have received considerable attention recently. We are sure that any future breakthrough in mobile agent protection will also be a breakthrough for the design of undetectable viruses. As has been noticed by the computer-virus research community, the best way to protect computer and network systems against viruses is to use digital signatures. That is, each time a computer application package is developed, a digital signature of that software should also be made available to the users. This would also defeat the viruses we have designed in this paper. However, this is difficult to achieve in practice. People like to download shareware (e.g., games) from the Internet and run it. 
For this kind of software, you have to trust the author. Even if you trust the shareware author, it is practically difficult to include the signature keys of all shareware writers in the virus scanner. Hence a virus may be written in such a way that it will only infect this kind of shareware. [99]{} M. Abadi and J. Feigenbaum. Secure circuit evaluation. [*Journal of Cryptology*]{}, [**2**]{}(1):1–12, 1990. F. Cohen. Computer viruses, theory and experiments. In: [*Proc. of 7th Security Conference*]{}, DOD/NBS, Sept 1984. F. Cohen. Current best practices against computer viruses with examples from the DOS operating system. In: [*Proc. of the 5th International Computer Virus & Security Conference*]{}, 1992. R. Cramer and V. Shoup. A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. In: [*Advances in Cryptology, Proc. of Crypto ’98*]{}, Springer-Verlag. P. Denning. Computer viruses. [*American Scientist*]{}, Vol 76, May–June 1988. J. Dvorak. Virus wars: a serious warning. [*PC Magazine*]{}, Feb 29, 1988. J. Hopcroft and J. Ullman. [*Introduction to Automata Theory, Languages, and Computation*]{}. Addison-Wesley Publishing Company, 1979. A. Menezes, P. van Oorschot, and S. Vanstone. [*Handbook of Applied Cryptography*]{}. CRC Press, 1996. W. Polk and L. Bassham. A guide to the selection of anti-virus techniques. National Institute of Standards and Technology, Computer Security Division, 1992. T. Sander and C. Tschudin. Protecting mobile agents against malicious hosts. In: [*Mobile Agents and Security*]{}, Lecture Notes in Computer Science 1419. Springer-Verlag, 1998. T. Sander and C. Tschudin. Towards mobile cryptography. In: [*Proc. of the 1998 IEEE Symposium on Security and Privacy*]{}. IEEE Press, 1998. F. Skulason. The mutation engine – the final nail? [*Virus Bulletin*]{}, pp. 11–12, April 1992. R. Tao and S. Chen. On finite automaton public-key cryptosystem. [*Theoretical Computer Science*]{}, (226)1-2:143–172, 1999. K. 
Thompson. Reflections on trusting trust. [*Commun. ACM*]{}, [**27**]{}(8):761–763, 1984. J. Wack and L. Carnahan. Computer viruses and related threats: a management guide. NIST Special Publication 500-166, 1989. S. White and D. Chess. Coping with computer viruses and related problems. IBM Research Report RC 14405 (\#64367), Jan 1989. [^1]: For any Turing machine $M$, it is easy to change it to a $3$-tape (one input tape, one $k$-track working tape, and one output tape) Turing machine.
--- abstract: | It is shown that photon shot noise and radiation-pressure back-action noise are the sole forms of quantum noise in interferometric gravitational wave detectors that operate near or below the standard quantum limit, if one filters the interferometer output appropriately. No additional noise arises from the test masses’ initial quantum state or from reduction of the test-mass state due to measurement of the interferometer output or from the uncertainty principle associated with the test-mass state. Two features of interferometers are central to these conclusions: (i) The interferometer output (the photon number flux $\hat {\cal N}(t)$ entering the final photodetector) commutes with itself at different times in the Heisenberg Picture, $[\hat {\cal N}(t),\hat {\cal N}(t')] = 0$ and thus can be regarded as classical. (ii) This number flux is linear to high accuracy in the test-mass initial position and momentum operators $\hat x_o$ and $\hat p_o$, and those operators influence the measured photon flux $\hat {\cal N}(t)$ in manners that can easily be removed by filtering. For example, in most interferometers $\hat x_o$ and $\hat p_o$ appear in $\hat{\cal N}(t)$ only at the test masses’ $\sim 1$ Hz pendular swinging frequency and their influence is removed when the output data are high-pass filtered to get rid of noise below $\sim 10$ Hz. The test-mass operators $\hat x_o$ and $\hat p_o$ contained in the unfiltered output $\hat {\cal N}(t)$ make a nonzero contribution to the commutator $[\hat {\cal N}(t), \hat {\cal N}(t')]$. That contribution is precisely cancelled by a nonzero commutation of the photon shot noise and radiation-pressure noise, which also are contained in $\hat {\cal N}(t)$. This cancellation of commutators is responsible for the fact that it is possible to derive an interferometer’s standard quantum limit from test-mass considerations, and independently from photon-noise considerations, and get identically the same result. 
These conclusions are all true for a far wider class of measurements than just gravitational-wave interferometers. To elucidate them, this paper presents a series of idealized thought experiments that are free from the complexities of real measuring systems. address: - '$^1$Physics Faculty, Moscow State University, Moscow Russia' - '$^2$Department of Physics, Texas A&M University, College Station, TX 77843-4242' - '$^3$Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125' author: - 'Vladimir B. Braginsky$^1$, Mikhail L. Gorodetsky$^1$, Farid Ya. Khalili$^1$, Andrey B. Matsko$^2$, Kip S. Thorne$^3$, and Sergey P. Vyatchanin$^1$' date: 'Received 2 September 2001; Revised 4 July 2002 and 1 December 2002' title: 'The noise in gravitational-wave detectors and other classical-force measurements is not influenced by test-mass quantization ' --- Questions to be analyzed and summary of answers {#sec:Questions} =============================================== It has long been known that the Heisenberg uncertainty principle imposes a “standard quantum limit” (SQL) on high-precision measurements [@sql; @qnd1; @QuantumMeasurement]. This SQL can be circumvented by using “quantum nondemolition” (QND) techniques [@qnd1; @QuantumMeasurement; @qnd2; @toys; @VMZ; @VM1; @VM2; @VL]. For broad-band interferometric gravitational-wave detectors the SQL is a limiting (single-sided) spectral density $$S_h(f) = {8\hbar\over m(2\pi f)^2 L^2}\; \label{sql}$$ for the gravitational-wave field $h(t)$ [@caves1; @300years]. Here $\hbar$ is Planck’s constant divided by $2\pi$, $m$ is the mass of each of the interferometer’s four test masses, $L$ is the interferometer’s arm length, and $f$ is frequency. This SQL firmly constrains the sensitivity of all conventional interferometers (interferometers with the same optical topology as LIGO’s first-generation gravitational-wave detectors) [@pace; @kimble]. LIGO’s second-generation interferometers (LIGO-II; ca. 
2008) are expected to reach this SQL for their $m= 40$ kg test masses in the vicinity of $f\sim 100$ Hz [@lsc], and may even beat it by a modest amount thanks to a “signal recycling mirror” that converts them from conventional interferometers into QND devices [@BC1; @BC2; @BC3]. LIGO-III interferometers are likely to beat the SQL by a factor $\sim 4$ or more; see, e.g., [@kimble]. In the research and development for LIGO-II interferometers [@lsc; @BC1; @BC2; @BC3] and in the attempts to invent strongly QND LIGO-III interferometers [@unruh; @OpticalBar; @SymphotonicState; @DualResonator; @LNRigidity; @FDRigidity; @SpeedMeter; @kimble], it is important to understand clearly the physical nature of the quantum noise which imposes the SQL, and to be able to compute with confidence the spectral density of this quantum noise for various interferometer designs. These issues are the subject of this paper. There are two standard ways to derive the gravitational-wave SQL (\[sql\]), and correspondingly two different viewpoints on it. The first derivation [@caves1; @hollenhorst] focuses on the quantum mechanics of the interferometer’s test masses and ignores the interferometer’s other details. In the simplest version of this derivation, one imagines a sequence of instantaneous measurements of the difference $$\hat x \equiv (\hat x_1 - \hat x_2) - (\hat x_3 - \hat x_4) \label{xDef}$$ of the center-of-mass positions of the four test masses, and from this measurement sequence one infers the changes of $x$ and thence the time varying gravitational-wave field $h(t) = x(t)/L$. At time $t$ immediately after one of the measurements, the test masses’ reduced state has position variance $[\Delta x(t)]^2$ no smaller than the measurement’s accuracy. 
During the time interval $\tau = t'-t$ between this measurement and the next, the test masses are free, so $\hat x(t)$ evolves as the position of a free particle with mass $$\mu = m/4 \label{muDef}$$ \[the reduced mass of the four-body system with relative position (\[xDef\])\]. The Heisenberg-Picture commutation relations for a free particle $$[\hat x(t), \hat x({t'})] = {i\hbar (t'-t)\over \mu} = {4i\hbar \tau\over m} \label{TMCommutator}$$ imply that, whatever may be the state of the test masses, the variance $[\Delta x({t'})]^2$ of $\hat x$ just before the next measurement must satisfy the Heisenberg uncertainty relation $$\Delta x(t) \Delta x(t') \ge {\hbar |t-t'|\over 2\mu} = {2\hbar \tau\over m}\;. \label{xUncertainty}$$ The accuracy with which the change of $x$ between $t$ and $t'$ can be measured is no better than the value obtained by setting $\Delta x(t) = \Delta x(t')$, and in classical language that accuracy is related to the minimum possible spectral density of the noise at frequency $f\simeq 1/\pi\tau$ by $\Delta x(t) = \Delta x(t') \simeq L\sqrt{S_h(f) /\tau}$. Simple algebra then gives expression (\[sql\]) for the SQL of $S_h(f)$. A more sophisticated analysis [@caves1], based on measurements that are continuous rather than discrete and on a nonunitary Feynman-path-integral evolution of the test-mass state [@caves2; @mensky2], gives precisely the SQL (\[sql\]). 
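For completeness, the “simple algebra” can be spelled out. Writing $\Delta x \simeq L\sqrt{S_h(f)/\tau}$ (the factor $L$ converts the dimensionless strain spectral density into a displacement), the uncertainty relation (\[xUncertainty\]) with $\Delta x(t)=\Delta x(t')=\Delta x$ gives $$\frac{L^2 S_h}{\tau} = [\Delta x]^2 \ge \frac{2\hbar\tau}{m} \quad\Longrightarrow\quad S_h \ge \frac{2\hbar\tau^2}{m L^2} = \frac{8\hbar}{m(2\pi f)^2 L^2}\;,$$ where the last equality uses $\tau \simeq 1/(\pi f)$; this is precisely the SQL (\[sql\]).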
In an ideal, SQL-limited interferometer, both noises — shot and radiation-pressure — arise from quantum electrodynamic vacuum fluctuations that enter the interferometer through its dark port and superpose on the highly classical laser light [@caves3; @caves4]. The radiation-pressure spectral density is proportional to the laser-light power $P$, the shot-noise spectral density is proportional to $1/P$, and their product is independent of $P$ and is constrained by the uncertainty principle for light (or equivalently by the electromagnetic field commutation relations) to be no smaller than $$S_x S_F = \hbar^2 \label{LightUncertainty}$$ \[cf. Eqs. (6.7) and (6.17) of [@QuantumMeasurement] in which there is a factor 1/4 on the right side because Ref. [@QuantumMeasurement] uses a double-sided spectral density, while the present paper uses the gravity-wave community’s single-sided convention\]. In Eq.(\[LightUncertainty\]) $S_x(f)$ is the spectral density of the shot noise that is superposed on the interferometer’s output position signal $x(t)$, $S_F(f)$ is the spectral density of the radiation-pressure force that acts on the test-mass center-of-mass degree of freedom $x$, and we have assumed that the shot noise and radiation-pressure force are uncorrelated as is the case for conventional (LIGO-I type) interferometers [@kimble; @BC1; @BC2; @BC3]. At frequency $f$ the test mass responds to the Fourier component $\tilde F(f)$ of the force with a position change $\tilde x(f) = - \tilde F(f)/[\mu(2\pi f)^2]$, and correspondingly the net gravitational-wave noise is $$S_h(f) = {1\over L^2}\left(S_x + {S_F\over \mu^2(2\pi f)^4}\right)\;. \label{NetLightNoise}$$ By combining Eqs. (\[LightUncertainty\]), (\[NetLightNoise\]) and (\[muDef\]), we obtain the SQL (\[sql\]) for a conventional interferometer, e.g. LIGO-I. 
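The combination of Eqs. (\[LightUncertainty\]) and (\[NetLightNoise\]) is a one-line application of the arithmetic-geometric mean inequality: $$S_h = \frac{1}{L^2}\left(S_x + \frac{S_F}{\mu^2(2\pi f)^4}\right) \ge \frac{2}{L^2}\sqrt{\frac{S_x S_F}{\mu^2(2\pi f)^4}} = \frac{2\hbar}{\mu(2\pi f)^2 L^2} = \frac{8\hbar}{m(2\pi f)^2 L^2}\;,$$ with equality when $S_x = \hbar/[\mu(2\pi f)^2]$, i.e., at the optimal laser power $P$; the last equality uses $\mu = m/4$ from Eq. (\[muDef\]).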
In view of these two very different derivations of the SQL, test-mass quantization and light quantization, three questions arise: (i) Are the test-mass quantization and the light quantization just two different viewpoints on the same physics?— in which case the correct SQL is Eq.(\[sql\]). Or are they fully or partially independent effects? — in which case we would expect their noises to add, causing the true SQL for $S_h$ to be larger by, perhaps, a factor 2 (and thence the event rate in an SQL-limited interferometer to be reduced by a factor $\sim(\sqrt2)^3 \simeq 3$). (ii) How should one compute the quantum noise in candidate designs for the QND LIGO-II and LIGO-III interferometers? One inevitably must pay close attention to the behavior of the light (and thus also its quantization), since the optical configuration will differ markedly from one candidate design to another. Must one also pay close attention to the quantum mechanics of the test masses, including their commutation relation (\[TMCommutator\]) and the continual reduction of their state as information about them is continually put onto the light’s modulations and then measured? (iii) Similarly, how should one design a QND interferometer? Need one adjust one’s design so as to drive both the light’s noise and the test-mass noise below the SQL? As we shall show, the answers are these: (ii) The test-mass quantization is irrelevant to the interferometer’s noise and correspondingly test-mass state reduction is irrelevant, if one filters the output data appropriately. (For interferometers with conventional optical topology such as LIGO-I, it is sufficient to discard all data near the test masses’ $\sim 1$ Hz swinging frequency.) Therefore, one can ignore test-mass quantization and state reduction when computing the noise of a candidate interferometer. (iii) Similarly, one can ignore the test mass’s quantum noise when designing a QND interferometer that beats the SQL. 
One need only pay attention to the light’s quantum noise, and in principle, by manipulating the light appropriately (and filtering the output data appropriately), one can circumvent the SQL completely. (i) Correspondingly, the SQL (\[sql\]) as derived from light quantization is precisely correct; there is no extra factor 2 caused by test-mass quantization. \[The fact that one can also derive the SQL from test-mass quantization is a result of an intimate connection between the uncertainty principles for a measured system (the test masses in our case) and the system that makes the measurement (the light). We shall elucidate this intimate connection from one viewpoint at the end of Sec. \[ClassicalSQL\]. From another viewpoint, it is due to the fact that the commutator $[\hat x(t), \hat x(t')]$, which underlies the test-mass derivation (\[TMCommutator\]), (\[xUncertainty\]) of the SQL, also underlies the derivation of the measuring light’s uncertainty relation (\[LightUncertainty\]); see the role of the generalized susceptibility $\chi(t,t') = (1/i\hbar)[\hat x(t'),\hat x(t)]$ in Sec. 6.3 of Ref.[@QuantumMeasurement].\] Central to our answers (i), (ii) and (iii) is the fact that an interferometric gravitational-wave detector does [*not*]{} monitor the time-evolving test-mass position $\hat x(t)$. Rather, it only monitors [*classical changes*]{} in $\hat x(t)$ induced by the classical gravitational-wave field $h(t)$ and other classical[^1] forces (thermal, seismic, ...) acting on the test masses, and it does so without extracting information about the actual quantized position $\hat x(t)$. The detector has a classical input \[$h(t)$\] and a classical output \[$h(t)$ contaminated by noise that (as we shall see) commutes with itself at different times and that therefore can be regarded as a time-evolving c-number\]. The quantum properties of the test masses and the light are merely intermediaries through which the classical signal must pass. 
This would not be the case for a device designed to make a sequence of absolute measurements of the quantum mechanical position $\hat x(t)$. Our answers (i), (ii), (iii) hold true for a far wider range of measuring devices than just interferometric gravitational-wave detectors. They hold quite generally for any well-designed device that measures a classical force acting on any quantum mechanical system. In particular, they remain true if the device makes measurements that are [*linear*]{} in the sense of Appendix \[app:LinearMeasurements\], and one filters the device’s output to remove all information at the natural frequencies of the quantum system’s dynamics (e.g. at its eigenfrequency if the quantum system is a harmonic oscillator). In Sec. \[sec:Pedagogy\] we will elucidate these answers by considering pedagogical examples of idealized devices that make discrete, quick measurements on a test mass. These examples will reveal two central underpinnings of our answers: (a) the vanishing of the measurement’s “output commutators” — i.e., the commutators of the observables (Hermitian operators) that represent the entries in the output data stream, and (b) a data-processing procedure that removes from the data all influence of the test-mass quantum observables (initial position $\hat x_o$ and initial momentum $\hat p_o$). Our examples will also elucidate two strategies for beating the SQL: (A) put the measuring apparatus (“meters”) into specially chosen initial states (the analog of squeezed states), and (B) measure a wisely chosen linear combination of position and momentum for the test mass and thereby remove the effects of the meters’ back action from the output data (make a “quantum variational measurement”). Our examples are the following: We will begin in Sec. \[sec:PositionMeasurement\] with a simple, idealized, instantaneous single measurement of the position of a single test mass. 
This example will demonstrate that the noise associated with test-mass quantization and the noise associated with the meter’s quantization are truly independent (though closely linked), and will illustrate how under some circumstances they can add, producing a doubling of the noise power. Then, in Sec.\[sec:VonNeumann\], we will analyze the use of a sequence of these idealized, instantaneous position measurements to monitor a classical force that acts on the test mass. This example will illustrate the vanishing self-commutator of the output data samples, which arises from a cancellation of the test-mass-position commutator by the measurement-noise commutator; it also will illustrate how signal processing can remove all influence of test-mass quantization and test-mass state reduction from the output data stream. Our third example (Sec.  \[sec:PulsedLightMeasurements\]) will be a Heisenberg-microscope-like realization of these instantaneous, idealized position measurements, in which a pulse of near-monochromatic light is reflected off the test mass, thereby encoding the test-mass position in a phase shift of the light. This example will give reality to the idealized examples in Secs. \[sec:PositionMeasurement\] and \[sec:VonNeumann\], and will help connect them to the subsequent discussion of interferometric gravitational-wave detectors. In Sec. \[sec:IFOs\] we will use the insights from our pedagogical examples to prove and elucidate our three answers \[(i), (ii), (iii) above\] for gravitational-wave interferometers, and also for a wide range of other classical force measurements. 
The underpinnings for our answers will be: (a) a proof that for a quantized electromagnetic wave, such as that entering the final photodetector of an interferometer, the photon number flux operator commutes with itself at different times (this flux is the output data stream), and (b) a proof that all influence of the test-mass quantum observables can be removed from the output data stream by appropriate filtering; for conventional interferometers it is sufficient to remove all data near the test masses’ $\sim 1$ Hz swinging frequency, e.g. by the kind of high-pass filtering that is routinely used in gravitational-wave detectors. Our analysis will also elucidate QND interferometer designs based on (A) squeezed-input states for light and (B) variational-output measurements. The issues studied in this paper are most efficiently analyzed in the Heisenberg picture, which gives particularly clear insights into them; for this reason, we will use the Heisenberg picture throughout the body of this paper. Readers who are uncomfortable with the Heisenberg picture may find Appendix \[app:TripleMeasurement\] reassuring; there we will give a detailed Schroedinger-picture analysis of the most important of our pedagogical examples, that of Sec. \[sec:VonNeumann\]. Pedagogical Examples {#sec:Pedagogy} ==================== A single position measurement:\ “double” uncertainty relation {#sec:PositionMeasurement} ------------------------------- We begin with a simple pedagogical example of a single measurement of the position of a single test mass. The Heisenberg microscope is a famous realization of this example; see Sec. \[sec:PulsedLightMeasurements\]. The measurement is idealized as instantaneous and as occurring at time $t=0$.
At times arbitrarily close to $t=0$, the Hamiltonian for the test mass (with position and momentum $\hat x$ and $\hat p$) and the measuring device (the [*meter*]{}, with generalized position $\hat Q$ and generalized momentum $\hat P$) is $$H = {\hat p^2\over2\mu} - \delta(t)\hat x \hat P +{\hat P^2\over2M}\;.$$ Here $\delta(t)$ is the Dirac delta function, $\mu$ is the test mass’s mass and $M$ is the generalized mass of the meter. For pedagogical simplicity we make $M$ arbitrarily large so $\hat Q$ and $\hat P$ do not evolve in the Heisenberg Picture except at the moment of interaction, and correspondingly we rewrite the Hamiltonian as $$H = {\hat p^2\over2\mu} - \delta(t)\hat x \hat P\;. \label{HSimple}$$ A simple calculation in the Heisenberg picture gives the following expressions for the positions and momenta immediately after the measurement, in terms of those immediately before: $$\begin{aligned} \hat P_{\rm after} &=& \hat P_{\rm before} \;, \label{BeforeAfterA} \\ \hat x_{\rm after} &=& \hat x_{\rm before} \;, \label{BeforeAfterB}\\ \hat Q_{\rm after} &=& \hat Q_{\rm before} - \hat x_{\rm before} \; \label{BeforeAfterC}\\ \hat p_{\rm after} &=& \hat p_{\rm before} + \hat P_{\rm before}\;. \label{BeforeAfterD}\end{aligned}$$ \[BeforeAfter\] The meter’s generalized position $\hat Q_{\rm after}$ is amplified and read out classically immediately after the interaction, to determine the test-mass position. The resulting measured position, expressed as an operator, is $\hat x_{\rm meas} \equiv - \hat Q_{\rm after} = \hat x_{\rm before} - \hat Q_{\rm before}$ \[Eq. (\[BeforeAfterC\])\], and the measurement leaves the actual test-mass position operator unperturbed \[Eq. (\[BeforeAfterB\])\] but it perturbs the test-mass momentum \[Eq. (\[BeforeAfterD\])\]. It is instructive to rewrite Eqs. 
(\[BeforeAfterC\]) and (\[BeforeAfterD\]) in the form $$\begin{aligned} \hat x_{\rm meas} & = & \hat x_{\rm before} + \delta \hat x_{\rm meas}\;, \label{SimpleEqsA}\\ \hat p_{\rm after} & = & \hat p_{\rm before} + \delta \hat p_{\rm BA}\;, \label{SimpleEqsB}\end{aligned}$$ \[SimpleEqs\] with $$\delta \hat x_{\rm meas} = - \hat Q_{\rm before}\;, \quad \delta \hat p_{\rm BA} = + \hat P_{\rm before}\;. \label{MeasBADef}$$ The simple equations (\[SimpleEqs\]) embody the measurement result and its back action; $\hat x_{\rm meas}$ is the measured value of $\hat x_{\rm before} = \hat x_{\rm after}$, $\delta \hat x_{\rm meas}$ is the noise superposed on that measured value by the meter, and $\delta \hat p_{\rm BA}$ is the back-action impulse given to the test mass by the meter. Equations (\[SimpleEqs\]) are actually much more general than our simple example; they apply to any sufficiently quick,[^2] “linear” measurement; see Eqs. (5.2), (5.14) and (5.23) of Ref. [@QuantumMeasurement], and see Appendix \[app:LinearMeasurements\] below. The initial test-mass position and momentum and the initial meter position and momentum have the usual commutation relations, $$[\hat x_{\rm before},\hat p_{\rm before}] = i\hbar = [\hat Q_{\rm before}, \hat P_{\rm before}]\;. \label{UsualCommutator}$$ The second of these and Eqs. (\[MeasBADef\]) imply that the measurement noise $\delta \hat x_{\rm meas}$ and the back-action impulse $\delta \hat p_{\rm BA}$ have this same standard commutator, but with the sign reversed $$[\delta\hat x_{\rm meas}, \delta\hat p_{\rm BA}] = -i\hbar\;. \label{OppositeCommutator}$$ This has an important implication: The measured value of the test-mass position and the final value of the test-mass momentum commute: $$[\hat x_{\rm meas},\hat p_{\rm after}] = 0\;. 
\label{VanishingCommutator}$$ This result, like the simple measurement and back-action equations (\[SimpleEqs\]), is true not only for this pedagogical example, but also for any other sufficiently quick, linear measurement; see, e.g., Sec.  \[sec:PulsedLightMeasurements\] below. It is evident from Eqs. (\[SimpleEqs\]) and (\[MeasBADef\]) that the variances of $\hat x_{\rm meas}$ and $\hat p_{\rm after}$ are influenced by the initial states of both the meter and the test mass: $$\begin{aligned} (\Delta x_{\rm meas})^2 & = & (\Delta x_{\rm before})^2 + (\Delta Q_{\rm before})^2 \;, \label{PositionVariance}\\ (\Delta p_{\rm after})^2 & = & (\Delta p_{\rm before})^2 + (\Delta P_{\rm before})^2\;. \label{MomentumVariance}\end{aligned}$$ Here we have assumed, as is easy to arrange, that the initial states of the meter and the test mass are uncorrelated. Now, the initial states of the test mass and meter are constrained by the uncertainty relations $$\begin{aligned} \Delta x_{\rm before} \cdot \Delta p_{\rm before} &\ge& \frac{\hbar}{2}\;, \label{TMUncertainty}\\ \Delta Q_{\rm before} \cdot \Delta P_{\rm before} &\ge& \frac{\hbar}{2}\;, \label{MeterUncertainty}\end{aligned}$$ which follow from the commutators (\[UsualCommutator\]). From the viewpoint of the measurement equations (\[SimpleEqs\]), the meter equation (\[MeterUncertainty\]) is an uncertainty relation between the noise $\delta \hat x_{\rm meas} = - \hat Q_{\rm before}$ that the meter superimposes on the output signal, and the back-action impulse $\delta \hat p_{\rm BA} = \hat P_{\rm before}$ that the meter gives to the test mass. In the Heisenberg microscope, $\delta \hat x_{\rm meas}$ would be photon shot noise and $\delta \hat p_{\rm BA}$ would be radiation-pressure impulse. 
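All of the operators in this example are linear in the canonical pairs $(\hat x, \hat p)$ and $(\hat Q, \hat P)$, so commutators such as (\[OppositeCommutator\]) and (\[VanishingCommutator\]) follow from bilinearity plus the basic table (\[UsualCommutator\]), and can be checked mechanically. A minimal sketch, in units with $\hbar = 1$:

```python
# Operators as linear forms over the basis {x, p, Q, P}; commutators of such
# forms follow by bilinearity from the table [x, p] = [Q, P] = i*hbar.
HBAR = 1.0
TABLE = {("x", "p"): 1j * HBAR, ("Q", "P"): 1j * HBAR}

def comm_basis(a, b):
    if (a, b) in TABLE:
        return TABLE[(a, b)]
    if (b, a) in TABLE:
        return -TABLE[(b, a)]
    return 0.0

def comm(A, B):
    """[A, B] for linear forms given as {basis symbol: coefficient} dicts."""
    return sum(ca * cb * comm_basis(a, b)
               for a, ca in A.items() for b, cb in B.items())

x_meas  = {"x": 1.0, "Q": -1.0}   # x_before + delta x_meas = x_before - Q_before
p_after = {"p": 1.0, "P": 1.0}    # p_before + delta p_BA   = p_before + P_before
dx_meas = {"Q": -1.0}             # measurement noise from the meter
dp_BA   = {"P": 1.0}              # back-action impulse on the test mass

assert comm(dx_meas, dp_BA) == -1j * HBAR   # reversed-sign commutator
assert comm(x_meas, p_after) == 0           # measured position and final momentum commute
print("commutators check out")
```

The vanishing of the last commutator is exactly the cancellation between $[\hat x_{\rm before}, \hat p_{\rm before}] = i\hbar$ and $[\delta\hat x_{\rm meas}, \delta\hat p_{\rm BA}] = -i\hbar$.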
The test-mass uncertainty relation (\[TMUncertainty\]) and meter uncertainty relation (\[MeterUncertainty\]) both constrain the product of the measurement error (\[PositionVariance\]) and the final momentum uncertainty (\[MomentumVariance\]), and by equal amounts. The result is a “doubling” of the uncertainty relation, so $$\Delta x_{\rm meas} \cdot \Delta p_{\rm after} \ge 2\cdot\frac{\hbar}{2}\;. \label{DoubledUR}$$ This doubling of the uncertainty relation relies crucially on our assumption that the initial states of the test mass and meter are uncorrelated. Correlations can produce a violation of the uncertainty relation (\[DoubledUR\]). For example, initial correlations can be arranged so as to produce (in principle) a vanishing total measurement error $\Delta x_{\rm meas} = 0$ and a finite $\Delta p_{\rm after}$ so the product $\Delta x_{\rm meas} \cdot \Delta p_{\rm after}$ vanishes — a result permitted by the vanishing commutator (\[VanishingCommutator\]). Monitoring a classical force:\ “single” uncertainty relation {#sec:VonNeumann} ------------------------------ As we emphasized in Sec. \[sec:Questions\], the goal of LIGO-type detectors is [*not*]{} to measure any observables of a test mass, but rather to monitor an external force that acts on it. Correspondingly, it is desirable to design the measurement so the output is devoid of any information about the test mass’s initial state. As we shall see, this is readily done in a way that removes the initial-state information during data processing. The result is a “single” uncertainty relation: the measurement result is influenced only by the quantum properties of the meter and not by those of the test mass. ### Von Neumann’s thought experiment {#sec:VNThought} We illustrate this by a variant of a thought experiment devised by von Neumann [@vonneumann] and often used to illustrate issues in the quantum theory of measurement; see, e.g., [@cavesmilburn] and references therein. 
We analyze this thought experiment using the Heisenberg picture in the body of this paper, and we give a Schroedinger-picture analysis in Appendix \[app:TripleMeasurement\]. Our von Neumann thought experiment is a simple generalization of the position measurement described above. Specifically, we consider a free test mass, with mass $\mu$, position $\hat x$ and momentum $\hat p$, on which acts a classical force $F(t)$. To monitor $F(t)$, we probe the test mass instantaneously at times $t=0$, $\tau$, $\ldots$, $(N-1)\tau$ using $N$ independent meters labeled $r=0,1,...,N-1$. Each meter is prepared in a carefully chosen state; it then interacts with the test mass and is then measured. We filter the measurement results to deduce $F(t)$. Meter $r$ has generalized coordinate and momentum $\hat {Q}_r$ and $\hat {P}_r$, and its free Hamiltonian is vanishingly small, so $\hat{Q}_r$ and $\hat{P}_r$ do not evolve except at the moment of interaction. The total Hamiltonian for test mass plus classical force plus meters is $$\hat H = {\hat p^2\over 2 \mu} - F(t) \hat x - \sum_{r=0}^{N-1} \delta(t-r\tau) \hat x \hat {P}_r\;. \label{Hamiltonian}$$ We denote by $\hat x_o$ and $\hat p_o$ the test-mass position and momentum at time $t=0$ when the experiment begins, and by $\hat x_r$ and $\hat p_r$ their values immediately [*after*]{} interacting with meter $r$, at time $t=r\tau$. The momentum of meter $r$ is a constant of the motion, so we denote it by $\hat{P}_r$ at all times. The meter coordinate changes due to the interaction; we denote its value before the interaction by $\hat {Q}_r^{\rm before}$ and after the interaction by $\hat {Q}_r$. It is easy to show, from the Heisenberg equations for the Hamiltonian (\[Hamiltonian\]), that the test-mass position immediately after its $r$’th interaction is $$\hat x_r = \hat x_o + {\hat p_o\over \mu}r\tau + \sum_{s=0}^{r}\hat{P}_s {(r-s)\tau\over \mu} + \xi_r\;.
\label{x_r}$$ Here the first two terms are the free evolution of the test mass, the third (with the sum) is the influence of the meters’ back-action forces (analog of radiation-pressure force in an interferometer), and the fourth, $$\xi_r \equiv \frac{1}{\mu}\int_0^{r\tau} \int_0^t F(t') dt'dt = \frac{1}{\mu}\int_0^{r\tau} (r\tau-t')F(t') dt'\;, \label{xirDef}$$ is the effect of the classical force. The force $F(t)$ is encoded in the sequence of classical displacements $\{\xi_1,\xi_2, ... , \xi_N\}$. It is also easy to show from the Heisenberg equations that the meter’s generalized coordinate after interaction with the test mass is $$\begin{aligned} \hat{Q}_r & = & \hat{Q}_r^{\rm before} - \hat x_r \nonumber\\ & = & {\hat Q}_r^{\rm before} - \hat x_o - {\hat p_o\over \mu}r\tau - \sum_{s=0}^{r} \hat{P}_s {(r-s)\tau\over \mu} - \xi_r\;. \label{Qr}\end{aligned}$$ ### Vanishing of the output’s self commutator The set of final meter coordinates $\vec Q \equiv \{\hat{Q}_0, \hat{Q}_1, ... ,$ $\hat{Q}_{N-1}\}$ forms the final data string for data analysis. It has vanishing self commutator, $$[\hat{Q}_s, \hat{Q}_r] = 0 \quad \hbox{for all $s$ and $r$} \label{CommutatorVN}$$ — a result that can be deduced from the vanishing single-measurement commutator $[\hat x_{\rm meas}, \hat p_{\rm after}] = 0$ \[Eq.(\[VanishingCommutator\])\] for the earlier of the two measurements. It is instructive to see explicitly how this vanishing commutator arises, without explicit reference to our single-measurement analysis. The test-mass contributions to the $Q$’s \[$\hat x_o$ and $\hat p_o$ in Eq. (\[Qr\])\] produce $$\begin{aligned} [\hat Q_s,\hat Q_r]_{\rm test-mass} &=& \left[-x_o-{p_o\over\mu}s\tau,\; -x_o-{p_o\over\mu}r\tau\right] \nonumber\\ &=&\frac{i\hbar (r-s)\tau}{\mu},\end{aligned}$$ which is the analog of Eq. (\[TMCommutator\]) for an interferometer test mass. This must be cancelled by a contribution from the meters. Indeed it is. 
If (for concreteness) $r>s$, then the cancelling contribution comes from a commutator of (i) the $\hat{Q}_s^{\rm before}$ piece of $\hat{Q}_s$ (the noise superposed on the output signal $s$ by meter $s$) and (ii) the $\hat P_s$ term in $\hat Q_r$ (the noise in the later measurement produced by the back-action of the earlier measurement): $$\begin{aligned} [\hat Q_s,\hat Q_r]_{\rm meter} &=& \left[ \hat Q_s^{\rm before}, - \hat P_s{(r-s)\tau\over \mu}\right] \nonumber \\ &=& {-i \hbar (r-s)\tau\over\mu}\;.\end{aligned}$$ In this example, one can trace these cancellations to the bilinear form $\hat x \hat P_s$ and $\hat x \hat P_r$ of each piece of the interaction Hamiltonian. However, this type of cancellation is far more general than just bilinear Hamiltonians: In [*every sequence of measurements on any kind of system*]{}, by the time a human looks at the output data stream, its entries have all been amplified to classical size, and therefore they must all be classical quantities and must commute, $[\hat Q_s, \hat Q_r] = [Q_s,Q_r] = 0$. Remarkably, quantum mechanics is so constructed that, for a wide variety of measurements, the measured values (regarded as Hermitian observables) commute even before the amplification to classical size. This is true in the above example. It is true in a realistic variant of this example involving pulsed-light measurements (Sec. \[sec:PulsedLightMeasurements\]). It is true in a variant of this example involving continuous measurements by an electromagnetic wave in an idealized transmission line [@gardiner]. And, as we shall see in Sec. \[sec:PhotonFluxCommutator\] and Appendix \[app:VanishingCommutator\], it is also true for gravitational-wave interferometers — and indeed for all measurements in which the measured results are encoded in the photon number flux of a (quantized) electromagnetic wave; i.e., all measurements based on photodetection. 
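This cancellation can be checked for all pairs $(s,r)$ at once by the same kind of bilinear bookkeeping: write each $\hat Q_r$ of Eq. (\[Qr\]) as a linear form in $\hat x_o$, $\hat p_o$, and the meter variables (the c-number $\xi_r$ commutes with everything and may be dropped). A minimal sketch, in units with $\hbar = \mu = \tau = 1$:

```python
# Verify Eq. (CommutatorVN): all output samples Q_r commute with one another.
HBAR, MU, TAU, N = 1.0, 1.0, 1.0, 6

PAIRS = {("xo", "po"): 1j * HBAR}
PAIRS.update({(f"Q{r}b", f"P{r}"): 1j * HBAR for r in range(N)})

def comm_basis(a, b):
    if (a, b) in PAIRS:
        return PAIRS[(a, b)]
    if (b, a) in PAIRS:
        return -PAIRS[(b, a)]
    return 0.0

def comm(A, B):
    return sum(ca * cb * comm_basis(a, b)
               for a, ca in A.items() for b, cb in B.items())

def Q_out(r):
    """Eq. (Qr) as a linear form, dropping xi_r; kicks s < r (the s = r term vanishes)."""
    op = {f"Q{r}b": 1.0, "xo": -1.0, "po": -r * TAU / MU}
    for s in range(r):
        op[f"P{s}"] = -(r - s) * TAU / MU
    return op

assert all(comm(Q_out(s), Q_out(r)) == 0 for s in range(N) for r in range(N))
print("[Q_s, Q_r] = 0 for all s, r")
```

For every pair, the $i\hbar(r-s)\tau/\mu$ test-mass contribution is cancelled by the meter contribution, exactly as in the explicit $r>s$ computation above.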
More generally, it is true for any [*linear measurement*]{} (Appendix \[app:LinearMeasurements\] below, Ref. [@QuantumMeasurement], and Eq. (2.34) of Ref. [@BC3]); and, in fact, all the measurements discussed above, including gravitational-wave measurements, are linear. The classical nature of the output signal (the commutation of the data entries) guarantees that, when a human looks at one data entry, the resulting reduction of the state of the measured system cannot have any influence on the observed values of the other data entries. Correspondingly, we can carry out any data processing procedures we wish on the $\hat Q_r$, without fear of introducing new quantum noise. ### Removal of test-mass influence from the output Our goal is to measure the classical force $F(t)$ that acted on the test mass, without any contamination from the test mass’s quantum properties — more specifically, without any contamination from uncertainty-principle aspects of the test mass’s initial state. The initial state [*does*]{} influence the measured values $\tilde Q_r$ of the output observables $\hat Q_r$, since in the Heisenberg Picture the $\hat Q_r$ contain the test mass’s initial position $\hat x_o$ and momentum $\hat p_o$ \[Eq. (\[Qr\])\]. Therefore, our goal translates into finding a data analysis procedure that will remove from the output data set $\{\tilde Q_1, \tilde Q_2, \ldots\}$ all influence of the test-mass initial state (or equivalently all influence of $\hat x_o$ and $\hat p_o$), while retaining the influence of $F(t)$. In fact, we can do so rather easily, regardless of what the test-mass initial state might have been. As we shall see, our ability to do so relies crucially on the [*linearity*]{} of our measurements; in particular, on the fact that the output observables $\hat Q_r$ are linear in $\hat x_o$ and $\hat p_o$. To bring out the essence, we shall restrict ourselves to just three meters, $N=3$. The generalization to large $N$ is straightforward. 
The measured data sample $\hat Q_r$ is equal to the freely evolving test-mass position at time $r\tau$, $\hat x_{\rm free}(t=r \tau ) = \hat x_o + (\hat p_o/\mu)r\tau$ (which is linear in $\hat x_o$, $\hat p_o$), plus noise. Since the free evolution satisfies the equation of motion $d^2\hat x_{\rm free}/dt^2 = 0$, it is a reasonable guess that we can remove the influence of $\hat x_o$ and $\hat p_o$ from the data $\tilde Q_r$ by applying to them the discrete version of a second time derivative[^3] (which is a linear signal processing procedure). Accordingly, from the measured values $\{\tilde Q_0, \tilde Q_1, \tilde Q_2\}$ of $\{\hat Q_0, \hat Q_1, \hat Q_2\}$ in a representative experiment, we construct the discrete second time derivative $$\tilde R = (\tilde Q_2 - \tilde Q_1) - (\tilde Q_1 - \tilde Q_0) = \tilde Q_0 - 2\tilde Q_1 + \tilde Q_2 \label{R}\;.$$ The following argument shows that all the statistical properties of this quantity, in a large series of experiments (in which the initial states $|{\rm in}\rangle$ of the test mass and meters are always the same) are, indeed, devoid of any influence of $\hat x_o$ and $\hat p_o$, and thus are unaffected by the test-mass initial state.[^4] These statistical properties are embodied in the means, over all the experiments, of arbitrary functions $G(\tilde R)$. The theory of measurement tells us that, because the $\hat Q$’s all commute, the computed mean of $G(\tilde R)$ is given by $$[\hbox{computed mean of } G(\tilde R)] = \langle {\rm in} | G( \hat R) | {\rm in} \rangle\;, \label{GExpectation}$$ where $\hat R$ is the operator corresponding to $\tilde R$ $$\begin{aligned} \hat R &=& \hat Q_0 - 2\hat Q_1 + \hat Q_2 \ = - ( \xi_0 -2\xi_1 + \xi_2 ) + \nonumber\\ &&\quad + \left[ \hat Q_0^{\rm before}-2\hat Q_1^{\rm before}-{\hat P_1\tau\over \mu}+\hat Q_2^{\rm before} \right]\; \label{hatR}\end{aligned}$$ cf. Eq. (\[Qr\]). 
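The claim that the discrete second difference eliminates $\hat x_o$ and $\hat p_o$ is pure coefficient bookkeeping on Eq. (\[Qr\]), and can be sketched mechanically (the classical $\xi_r$ terms, which commute with everything, are dropped; units with $\mu = \tau = 1$):

```python
MU, TAU = 1.0, 1.0

def Q_out(r):
    """Eq. (Qr) as a linear form over {x_o, p_o, Q_r^before, P_s}, minus xi_r."""
    op = {f"Q{r}b": 1.0, "xo": -1.0, "po": -r * TAU / MU}
    for s in range(r):                       # back-action kicks from meters s < r
        op[f"P{s}"] = -(r - s) * TAU / MU
    return op

def combine(*terms):
    """Linear combination of linear forms; zero coefficients are pruned."""
    out = {}
    for coeff, op in terms:
        for key, val in op.items():
            out[key] = out.get(key, 0.0) + coeff * val
    return {k: v for k, v in out.items() if v != 0.0}

# R = Q_0 - 2 Q_1 + Q_2, the discrete second time derivative
R = combine((1.0, Q_out(0)), (-2.0, Q_out(1)), (1.0, Q_out(2)))
print(R)
```

The surviving entries reproduce the bracket of Eq. (\[hatR\]): pure meter noise $\hat Q_0^{\rm before} - 2\hat Q_1^{\rm before} + \hat Q_2^{\rm before}$ plus the single back-action term $-\hat P_1\tau/\mu$; the test-mass variables $\hat x_o$, $\hat p_o$ (and meter 0’s back action $\hat P_0$) cancel identically.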
Because $\hat R$ is independent of $\hat x_o$ and $\hat p_o$, [*the computed mean (\[GExpectation\]) and thence all the measurement statistics of $\tilde R$ will be completely independent of the test-mass quantum mechanics, and in particular independent of the test mass’s initial state.*]{} Moreover, Eq. (\[GExpectation\]) implies that, so far as measurement results and statistics are concerned, measuring the $\hat Q$’s and then computing $\tilde R$ is completely equivalent to measuring $\hat R$ directly. Although $\hat R$ is independent of $\hat x_o$ and $\hat p_o$, it contains $$\xi_0 - 2\xi_1 + \xi_2 = \frac{1}{\mu}\int_0^{2\tau}(\tau-|t-\tau|)F(t)dt \equiv {\tau^2\over \mu}\bar F\;, \label{xiVal}$$ where $\bar F$ is a weighted mean of the classical force $F$ over the time interval $0<t<2\tau$; cf. Eq. (\[xirDef\]).[^5] Thus, [*this measurement of $\hat R$ is actually a measurement of $\bar F$, and is contaminated by quantum noise from the meters but [**not**]{} by quantum noise from the test mass.*]{} The only role of the quantum mechanical test mass is to feed the classical signal $\bar F$ and the meter back-action noise $\hat P_1 \tau/\mu$ into the output. For those readers who are uncomfortable with our use of the Heisenberg picture to derive this very important result, we present a Schroedinger-picture derivation in Appendix \[app:TripleMeasurement\]. This three-meter thought experiment is a prototype for our discussion of gravitational-wave interferometers in Sec. \[sec:Filter\]. There as here, the [*linearity of the output*]{} in the test-mass initial positions and momenta will enable us to find a linear signal processing procedure that removes the initial-state influence. Here that procedure was a discrete second time derivative.
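The weighted-mean identity (\[xiVal\]) can be checked for an arbitrary force profile by direct quadrature; a minimal numerical sketch (the force profile and the values of $\mu$, $\tau$ are arbitrary illustrations):

```python
import numpy as np

MU, TAU = 2.0, 0.5
F = lambda t: np.sin(3.0 * t) + 0.4 * t   # an arbitrary smooth classical force

def integrate(y, t):
    """Composite trapezoid rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def xi(r, n=200001):
    """xi_r of Eq. (xirDef), via its single-integral form."""
    if r == 0:
        return 0.0
    t = np.linspace(0.0, r * TAU, n)
    return integrate((r * TAU - t) * F(t), t) / MU

lhs = xi(0) - 2.0 * xi(1) + xi(2)          # discrete second difference of the force response
t = np.linspace(0.0, 2.0 * TAU, 200001)
rhs = integrate((TAU - np.abs(t - TAU)) * F(t), t) / MU   # (tau^2/mu) * Fbar
assert abs(lhs - rhs) < 1e-9
print(rhs * MU / TAU**2)                   # the weighted mean force Fbar
```

The triangular weight $\tau - |t-\tau|$ is exactly what the discrete second difference produces when applied to the double time integral of $F$.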
For an interferometer it will be a discrete Fourier transform of the measured photon flux (the output), and a discarding of Fourier components at the test masses’ natural frequencies (the 1 Hz pendular swinging frequency in the case of conventional interferometers). For an elegant path-integral analysis of the removal of test-mass initial conditions from the output of measurements of any harmonic oscillator on which a classical force acts, see the last portion of Sec. III$\;$C of Caves [@caves2]. ### The SQL for the classical-force measurement {#ClassicalSQL} How small can the test-mass noise be? A “naive” optimization of the meters leads to the standard quantum limit on the measured force, in the same way as a “naive” optimization of a gravitational-wave interferometer’s design (forcing it to retain the conventional LIGO-I optical topology but optimizing its laser power) leads to the gravitational-wave SQL. Specifically: Let the three meters all be prepared in initial states that are “naive” in the sense that they have no correlations between their coordinates and momenta. Then Eqs. (\[hatR\]) and (\[xiVal\]) imply that the variance of the measured mean force is $$\begin{aligned} (\Delta \bar F)^2 &=& {\mu^2\over \tau^4} \Big[ (\Delta Q_0^{\rm before})^2 + (2\Delta Q_1^{\rm before})^2 + \left({\Delta P_1 \tau\over\mu}\right)^2 \nonumber\\ && + (\Delta Q_2^{\rm before})^2 \Big]\;. \nonumber\\ \label{DeltabarF}\end{aligned}$$ Obviously, this variance is minimized by putting meters 0 and 2 into (near) eigenstates of their coordinates, so $\Delta Q_0^{\rm before} = \Delta Q_2^{\rm before} = 0$. To minimize the noise from meter 1, we require that it have the smallest variances compatible with its uncertainty relation, $$\Delta Q_{1}^{\rm before}\Delta P_{1} = \frac{\hbar}{2} \; ,$$ and we adjust the ratio $\Delta Q_1^{\rm before}/\Delta P_1$ so as to minimize $(\Delta \bar F)^2$. 
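With $\Delta Q_0^{\rm before} = \Delta Q_2^{\rm before} = 0$ and meter 1 in a minimum-uncertainty state, Eq. (\[DeltabarF\]) becomes a one-parameter function of $\Delta Q_1^{\rm before}$, and its minimization is easy to sketch numerically (units with $\hbar = \mu = \tau = 1$):

```python
import numpy as np

HBAR, MU, TAU = 1.0, 1.0, 1.0

def varF(dQ1):
    """(Delta Fbar)^2 of Eq. (DeltabarF) with dQ0 = dQ2 = 0 and dQ1 * dP1 = hbar/2."""
    dP1 = HBAR / (2.0 * dQ1)
    return (MU**2 / TAU**4) * ((2.0 * dQ1)**2 + (dP1 * TAU / MU)**2)

dQ1 = np.logspace(-3.0, 3.0, 600001)   # sweep the ratio of meter 1's spreads
numeric = varF(dQ1).min()
print(numeric)                          # in these units, 2*mu*hbar/tau^3 = 2
```

The minimum sits at $\Delta Q_1^{\rm before} = \sqrt{\hbar\tau/4\mu}$, where meter 1’s intrinsic noise and its back-action contribute equally.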
The result is $$(\Delta \bar F)^2 = {2\mu \hbar \over \tau^3}\;, \label{barFSQL}$$ which is the SQL for measuring a classical force, up to a factor of order unity; cf. Sec. 8.1 of Ref. [@QuantumMeasurement]. It is evident from this analysis that [*the true physical origin of the SQL in classical force measurements is the meter’s noise*]{}, not the test-mass noise. On the other hand, the quantum properties of the meter and of the test mass are intimately coupled through the requirement that the meter commutators cancel the test-mass commutator in the measurement output, so that $[\hat Q_r, \hat Q_s] = 0$ \[Eq. (\[CommutatorVN\])\]. This intimate coupling — which, as we have discussed, has enormous generality — ensures that the SQL can be derived equally well from test-mass considerations and from meter considerations. We saw this explicitly in Sec. \[sec:Questions\] for an interferometric gravitational-wave detector. ### Beating the SQL {#sec:BeatSQL} Equation (\[hatR\]) suggests a way to beat the classical-force SQL and, in fact, achieve arbitrarily high accuracy: As in our “naive” optimization, before the measurement we place meters 0 and 2 in (near) eigenstates of their coordinates, so $\Delta Q_0 = \Delta Q_2 = 0$, but instead of putting meter 1 in a “naive” state with uncorrelated coordinate and momentum, we place it in a (near) eigenstate of $$\hat Q_1^{\rm squeeze} \equiv \hat Q_1^{\rm before}+\hat P_1\tau/2\mu\;. \label{IdealSqueezed}$$ (This meter-1 state is analogous to the squeezed-vacuum state, which Unruh [@unruh] has proposed be inserted into a conventional interferometer’s dark port in order to beat the gravitational-wave SQL; see Sec. \[sec:PulsedLightMeasurements\] below.) These initial meter states, together with Eqs. (\[hatR\]) and (\[GExpectation\]), guarantee that the variance of the computed quantity $\tilde R$ vanishes, $\Delta \tilde R = 0$, and thence \[via Eqs.
(\[xiVal\]) and (\[hatR\])\] that the variance of the measured mean force vanishes, $\Delta \bar F = 0$. Thus, by putting the initial state of meter 1 into the analog of a squeezed vacuum state, we can achieve an arbitrarily accurate measurement of $\bar F$. The SQL can also be evaded by modifying the meters’ measured quantities instead of modifying their initial states. Specifically, measure $\hat Q_0$ and $\hat Q_2$ as before, but on meter 1, instead of measuring the coordinate $\hat Q_1$, measure the following linear combination of the coordinate and momentum (with the coefficient $\alpha$ to be chosen below): $$\begin{aligned} \hat Q^{\rm var}_1 &=& \hat Q_1-\alpha \hat P_1 \nonumber \\ & = & \hat Q_1^{\rm before} - \hat x_o - \frac{\hat p_o}{\mu}\,\tau - \frac{\hat P_0}{\mu}\tau - \alpha \hat P_1 - \xi_1\;. \label{calQ1}\end{aligned}$$ From Eqs. (\[calQ1\]), (\[Qr\]) and (\[CommutatorVN\]), we see that the output observables $\{\hat Q_0,\hat Q^{\rm var}_1, \hat Q_2\}$ all commute with each other. Therefore, when we combine their measured values into the discrete second time derivative $$\tilde R_{\rm var} \equiv \tilde Q_0 - 2 \tilde Q^{\rm var}_1 + \tilde Q_2\;,$$ its statistics will be the same as if we had directly measured the corresponding operator $$\begin{aligned} & &\hat R_{\rm var} = \hat Q_0 - 2 \hat Q^{\rm var}_1 + \hat Q_2 = -(\xi_0-2\xi_1+\xi_2) \nonumber \\ & & \quad + \left[ \hat Q_0^{\rm before} - 2\hat Q_1^{\rm before} - \frac{\hat P_1}{\mu}\tau + 2\alpha \hat P_1 + \hat Q_{2}^{\rm before} \right]\;. \label{hatRVar}\end{aligned}$$ Evidently, we should choose $2\alpha=\tau/\mu$, so the quantity measured is $$\hat Q^{\rm var}_1 = \hat Q_1 -{\hat P_1}{\tau\over2\mu}\;. \label{IdealVar}$$ Then Eqs. (\[hatRVar\]) and (\[xiVal\]) imply that $$\hat R_{\rm var} = - {\tau^2\over \mu}\bar F + \hat Q_0^{\rm before} - 2 \hat Q_1^{\rm before} + \hat Q_2^{\rm before}\;.
\label{Rvar1}$$ Therefore, [*by measuring our chosen linear combination of meter 1’s coordinate and momentum, and then computing the discrete second time derivative, we have succeeded in removing from our output observable $\hat R_{\rm var}$ not only the test-mass variables $\hat x_o$, $\hat p_o$, but also the back-action influence of the meters on the measurement (all three $\hat P_r$’s)!*]{} Correspondingly, by putting the meters into “naive” initial states (states with no position-momentum correlations) that are near eigenstates of their coordinates (so $\Delta Q_0$, $\Delta Q_1$, $\Delta Q_2$ are arbitrarily small and the back-action fluctuations $\Delta P_0$, $\Delta P_1$, $\Delta P_2$ are arbitrarily large), we can infer the mean force $\bar F$ from the computed quantity $\tilde R_{\rm var}$ with arbitrarily good precision. This strategy was devised, in the context of optical measurements of test masses, by Vyatchanin, Matsko and Zubova [@VMZ; @VM1; @VM2; @VL], and is called a [*Quantum Variational Measurement*]{}. A gravitational-wave interferometer that utilizes it (and can beat the SQL) is called a [*Variational Output Interferometer*]{} [@kimble]. Of course, one can also beat the SQL for force measurements by a combination of putting the meters into initially squeezed states and performing a quantum variational measurement on their outputs. A gravitational-wave detector based on this mixed strategy is called a [*Squeezed Variational Interferometer*]{}, and may have practical advantages over squeezed-input and variational-output interferometers [@kimble]. Pulsed-light measurements of test-mass position {#sec:PulsedLightMeasurements} ----------------------------------------------- Our two pedagogical examples (single position measurement, Sec. \[sec:PositionMeasurement\], and classical force measurement, Sec. \[sec:VonNeumann\]) can be realized using pulsed-light measurements of the test-mass position.
We exhibit this realization in part to lend reality to our highly idealized examples, and in part as a bridge from those simple examples to gravitational-wave interferometers with their far greater complexity (Sec. \[sec:IFOs\] below). In each pulsed-light measurement we reflect a laser light pulse, with carrier frequency $\omega_o$ and Gaussian-profile duration $\tau_o$, off a mirror on the front face of the test mass, and from the light’s phase change we deduce the test-mass position $\hat x$ averaged over the pulse. This is a concrete realization not only of the pulsed measurements of our pedagogical examples, but also of a Heisenberg microscope. We presume that the pulse duration $\tau_o$ is long compared to the light’s period $2\pi/\omega_o$, but short compared to the time $\tau$ between measurements. We shall analyze in detail one such pulsed measurement. The electric field of the reflected wave, at some fiducial location, is $$\begin{aligned} && \hat E(t) = \sqrt{\frac{2\pi\,\hbar \omega_0}{cS} }\Bigg( e^{-i\omega_0 t}\times \nonumber \\ &&\times \left[ A_0 e^{-t^2/2\tau_0^2}\left( 1+\frac{2i\omega_0}{c}\ \hat x(t)\right) + \hat a(t) \right] +\mbox{h.c.}\Bigg), \label{ElectricField}\end{aligned}$$ where $A_0$ is the pulse’s amplitude, $S$ is its cross sectional area, $c$ is the speed of light, $2(\omega_0/c)\hat x(t)$ is the phase shift induced by the test-mass displacement $\hat x(t)$, “h.c.” means Hermitian conjugate, and $\hat a(t)$ is the electric field’s amplitude operator. 
Because we are concerned only about timescales of order the pulse duration $\tau_0$ or longer, which means side-band frequencies $\alt 1/\tau_0 \ll \omega_0$, we can use the [*quasimonochromatic*]{} approximation to the commutation relation for $\hat a(t)$ [@gardiner1]: $$\left[ \hat a(t),\hat a^\dag(t') \right] = \delta(t-t')\;.$$ Note that, when decomposed into quadratures with respect to the carrier frequency, this electric field is $$\hat E(t) = \hat E_A(t) \cos\omega_o t + \hat E_\phi(t) \sin\omega_o t\;, \label{EDecompose}$$ where $\hat E_A$ and $\hat E_\phi$, the amplitude and phase quadratures (i.e., the quadrature components oriented along and perpendicular to the amplitude direction in the quadrature plane) are given by $$\begin{aligned} \hat E_A &=& 2\sqrt{2\pi \hbar\omega_o\over cS}\left[ A_o e^{-t^2/2\tau_o^2} + \left({\hat a(t) + \hat a^\dag(t)}\over 2\right) \right]\;, \label{EA} \\ \hat E_\phi &=& 2\sqrt{2\pi \hbar\omega_o\over cS}\Big[ 2A_o {\omega_o\over c}e^{-t^2/2\tau_o^2} \hat x(t) \nonumber \\ &&\quad\quad\quad\quad\quad + \left({\hat a(t) - \hat a^\dag (t)}\over {2i}\right) \Big]\;. \label{Ephi}\end{aligned}$$ \[EQuadratures\] The power $\hat W(t)$ in the incident wave can be written as the sum of a mean power $\langle W(t)\rangle$ and a fluctuating (noise) part $\tilde W(t)$: $$\begin{aligned} \hat W(t)&=& S c \frac{\overline{\hat E^2(t)}}{4\pi} =\langle W(t)\rangle + \tilde W(t), \\ \langle W(t)\rangle &=& \hbar \omega_0\, A_0^2\,e^{-t^2/\tau_0^2},\\ \tilde W(t) & = & 2\hbar \omega_0\, A_0\,e^{-t^2/2\tau_0^2} \left( \frac{\hat a(t)+ \hat a^\dag(t)}{2} \right)\;.\end{aligned}$$ Here the over bar means “average over the carrier period”. The light-pressure force on the mirror is $\hat F(t) = 2 \hat W(t)/c$. 
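As a cross-check of the decomposition (\[EDecompose\]): drop the noise operator $\hat a$, set the Gaussian envelope to unity, and strip the common prefactor $\sqrt{2\pi\hbar\omega_0/cS}$; the carrier term of Eq. (\[ElectricField\]) then reduces to exactly the claimed amplitude and phase quadratures. A short symbolic sketch of this (added here as a sanity check, with $x$ a real stand-in for the displacement):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)          # x stands in for the displacement
w0, A0, c = sp.symbols('omega_0 A_0 c', positive=True)

# carrier part of Eq. (ElectricField), envelope set to 1, prefactor stripped
carrier = sp.exp(-sp.I*w0*t)*A0*(1 + 2*sp.I*w0*x/c)
E = carrier + sp.conjugate(carrier)           # "+ h.c."

E_A   = 2*A0            # amplitude quadrature (classical part of Eq. EA)
E_phi = 4*A0*w0*x/c     # phase quadrature: carries the displacement signal

# E(t) = E_A cos(w0 t) + E_phi sin(w0 t), as claimed in Eq. (EDecompose)
target = E_A*sp.cos(w0*t) + E_phi*sp.sin(w0*t)
assert sp.expand((E - target).rewrite(sp.exp)) == 0
```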
The fluctuating part of this, $\tilde F(t) = 2\tilde W(t)/c$, is the back-action of the measurement on the test mass, and it produces the back-action momentum change $$\begin{aligned} \delta \hat p_{\rm BA} &=&\int_{-\infty}^\infty dt\, \frac{2\tilde W(t)}{c}= \nonumber \\ &=& \frac{4\hbar \omega_0}{c}\, A_0\, \int_{-\infty}^\infty dt\,e^{-t^2/2\tau_0^2}\left( \frac{\hat a(t)+ \hat a^\dag(t)}{2}\right)\;. \label{deltapBA}\end{aligned}$$ The test-mass momentum before and after the pulsed measurement are related by $$\hat p_{\rm after} = \hat p_{\rm before} + \delta \hat p_{\rm BA}\;. \label{pafterP}$$ The experimenter deduces the phase shift $(2\omega_o/c)\hat x(t)$ and thence the test-mass displacement $\hat x(t)$ by measuring the electric field’s phase quadrature $\hat E_\phi$ (e.g., via interferometry or homodyne detection). More precisely, the experimenter measures the phase quadrature integrated over the pulse, obtaining a result proportional to $$\begin{aligned} \hat x_{\rm meas} &=& \sqrt{cS\over 2\hbar\omega_o} {c\over 4 \pi\omega_0\tau_0 A_0}\int_{-\infty}^{+\infty} e^{-t^2/2\tau_0^2} \hat E_\phi(t) dt \nonumber\\ &=& \hat x + \delta \hat x_{\rm meas}\;; \label{xafterQ}\end{aligned}$$ cf. Eq. (\[Ephi\]). Here $\hat x$ is the mirror position averaged over the short pulse, $\hat x_{\rm meas}$ is the measured value of $\hat x$, and $\delta \hat x_{\rm meas}$ is the measurement noise superposed on the output by the light pulse $$\delta \hat x_{\rm meas} = \frac{c}{2\sqrt{\pi}\, \omega_0\tau_0\, A_0} \int_{-\infty}^\infty dt\, e^{-t^2/2\tau_0^2}\, \left(\frac{\hat a(t)- \hat a^\dag(t)}{2i}\right)\;. \label{deltaxmeas}$$ It is straightforward, from the commutator $[\hat a(t),\hat a^\dag(t')] = \delta(t-t')$, to show that the measurement noise and the back-action impulse have the same commutator $$\left[\delta \hat x_{\rm meas},\delta \hat p_{\rm BA}\right] = -i \hbar \label{SameCommutator}$$ as for the idealized single measurement of Sec. 
\[sec:PositionMeasurement\] \[Eq. (\[OppositeCommutator\])\], and correspondingly the mirror’s measured position and its final momentum commute, $$[\hat x_{\rm meas}, \hat p_{\rm after}] = 0\;. \label{FinalCommutator}$$ The fundamental equations (\[xafterQ\]), (\[pafterP\]), (\[SameCommutator\]) and (\[FinalCommutator\]) for this pulsed-light measurement are the same as those (\[SimpleEqs\]), (\[OppositeCommutator\]), (\[VanishingCommutator\]) for our idealized single measurement, and this measurement is thus a realistic variant of the idealized one. Similarly, a sequence of pulsed-light measurements can be used to monitor a classical force acting on a mirror, and the fundamental equations for such measurements are the same as for the idealized example of Sec.  \[sec:VonNeumann\]. In such pulsed-light experiments, the measurement noise $\delta \hat x_{\rm meas}$ is proportional to the fluctuations of the light’s phase quadrature $\hat E_\phi$ \[Eqs. (\[Ephi\]) and (\[deltaxmeas\])\], and the back-action impulse $\delta\hat p_{\rm BA}$ is proportional to the fluctuations of its amplitude quadrature $\hat E_A$ \[Eqs. (\[EA\]) and (\[deltapBA\])\]. Of course, experimenters can measure any quadrature of the reflected light pulse that they wish. To achieve a QND [*quantum variational*]{} measurement of a classical force acting on the test mass [@VMZ; @VM1; @VM2; @VL], the experimenter should measure $\hat Q^{\rm var}_1 = \hat Q_1 + \hat P_1\tau/2\mu$ in the language of our idealized thought experiment \[Eq. (\[IdealVar\])\], which \[by Eqs.(\[MeasBADef\])\] translates into $-\delta \hat x_{\rm meas} + \delta \hat p_{\rm BA}\tau/2\mu$ plus the light’s signal and carrier, which in turn is a specific linear combination of the light’s amplitude and phase quadratures $\hat E_A$ and $\hat E_\phi$ \[Eqs. (\[EQuadratures\]), (\[deltaxmeas\]), (\[deltapBA\])\]. 
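The commutator (\[SameCommutator\]) amounts to one Gaussian integral: the quadrature commutator $[(\hat a(t)-\hat a^\dagger(t))/2i,\,(\hat a(t')+\hat a^\dagger(t'))/2]=\delta(t-t')/2i$ collapses the double time integral, and the prefactors of Eqs. (\[deltaxmeas\]) and (\[deltapBA\]) do the rest. A symbolic sketch of that bookkeeping (added as a cross-check, not part of the original derivation):

```python
import sympy as sp

t = sp.Symbol('t', real=True)
hbar, omega0, tau0, A0, c = sp.symbols('hbar omega_0 tau_0 A_0 c', positive=True)

# prefactors of delta-x_meas [Eq. deltaxmeas] and delta-p_BA [Eq. deltapBA]
Cx = c/(2*sp.sqrt(sp.pi)*omega0*tau0*A0)
Cp = 4*hbar*omega0*A0/c

# the delta-function quadrature commutator reduces the double time integral
# to a single integral over the product of the two Gaussian envelopes
comm = Cx*Cp*sp.integrate(sp.exp(-t**2/tau0**2), (t, -sp.oo, sp.oo))/(2*sp.I)

assert sp.simplify(comm + sp.I*hbar) == 0   # i.e. [dx_meas, dp_BA] = -i*hbar
```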
The experimenter can also prepare the incident pulse in a [*squeezed state*]{}, in the manner required for an Unruh-type [@unruh] QND measurement of the classical force. In the language of our idealized thought experiment, the desired squeezed state is a (near) eigenstate of $\hat Q_1^{\rm squeeze} = \hat Q_1 - \hat P_1\tau/2\mu$ \[Eq. (\[IdealSqueezed\])\], which translates into a near eigenstate of $\delta \hat x_{\rm meas} + \delta \hat p_{\rm BA}\tau/2\mu$ \[cf. Eqs. (\[MeasBADef\])\], or equivalently a near eigenstate of a specific linear combination of $\hat E_A$ and $\hat E_\phi$. Gravitational-Wave Interferometers and Other Photodetection-Based Devices {#sec:IFOs} ========================================================================= We now turn our attention to gravitational-wave interferometers and other real, high-precision devices for monitoring classical forces that act on test masses. Our goal is to prove that for these devices, as for our idealized examples, the force-measurement precision can be made completely independent of the test mass’s quantum properties, including its initial state, and that this can be achieved by an appropriate filtering of the output data stream. As in our examples, this conclusion relies on the vanishing commutator of the observables that constitute the output data stream. We shall now discuss the nature of the output data stream and show that its commutator does, indeed, vanish. Vanishing commutator of the output {#sec:PhotonFluxCommutator} ---------------------------------- For interferometers and many other force-monitoring devices, the data stream, shortly before amplification to classical size, is encoded in an output light beam, and that beam is sent into a photodetector which monitors its photon number flux $\hat {\cal N}(t)$. 
The photodetector and associated electronics integrate up $\hat {\cal N}(t)$ over time intervals with duration $\tau$ long compared to the light beam’s carrier period, $\tau \gg 2\pi/\omega_o \sim 10^{-15}$ s, but short compared to the shortest timescales on which the classical force changes ($\tau \ll \tau_{\rm GW} \sim 10^{-3}$ s for the gravitational waves sought by interferometers). For LIGO-I interferometers, the integration time has been chosen to be $\tau = 5\times 10^{-5}$ s. The result is a discretized output data stream, whose Hermitian observables are the numbers of photons in the successive data samples, $$\hat N_j = \int_{-\infty}^\infty s(t-t_j) \hat{\cal N}(t) dt\;. \label{Nj}$$ Here $t_j = j \tau$ is the time of sample $j$, and $s(t)$ is a sampling function approximately equal to unity during a time interval $\Delta t = \tau$ centered on $t_j$ and zero outside that time interval. The photon number samples $\hat N_j$ are the analogs, for an interferometer or other force-monitoring device, of the meter coordinates $\hat Q_j$ in the idealized example of Sec. \[sec:VonNeumann\]. In Appendix \[app:VanishingCommutator\] we show that [*for any free light beam, the number flux operator, evaluated at a fixed plane orthogonal to the optic axis (e.g. at the entrance to the photodetector), self-commutes*]{}, $$[\hat {\cal N}(t), \hat {\cal N}(t')] = 0\;. \label{calNcommute}$$ This guarantees, in turn, that all the output photon-number data samples (\[Nj\]) commute with each other: $$[\hat N_j, \hat N_k] = 0\;. \label{NjCommute}$$ As we shall see below \[Eq. (\[NTestMass\])\], the initial position and momentum of the test mass, $\hat x_o$ and $\hat p_o$, appear linearly in the output variables $\hat{\cal N}(t)$ and $\hat N_j$. They obviously will produce nonzero contributions to the output commutators. As in our simple examples (Sec. 
\[sec:Pedagogy\]), these nonzero test-mass contributions must be cancelled by equal and opposite nonzero contributions from noncommutation of the measurement noise (photon shot noise) and the back-action noise (radiation-pressure noise). Devising a filter to remove test-mass quantum noise {#sec:Filter} --------------------------------------------------- The vanishing output commutators constitute our first underpinning for freeing the measurements from the influence of test-mass quantization. As in the idealized measurements of Sec. \[sec:VonNeumann\], the vanishing commutators guarantee a key property of the data analysis: If, from each specific realization of the output data stream $\{\tilde N_1, \tilde N_2, \ldots \}$, our data analysis produces a new set of quantities (the “filtered output variables”) $$\tilde R_J (\tilde N_1, \tilde N_2, \ldots)\;, \label{tildeRN}$$ then the statistics of these $\tilde R_J$ will be identically the same as if we had directly measured the corresponding observables $$\hat R_J (\hat N_1, \hat N_2, \ldots)\;, \label{hatRN}$$ rather than computing them from the measured $\tilde N_j$’s. Therefore, we can regard our interferometer (or other device) as measuring the filtered output observables $\{\hat R_1, \hat R_2, \ldots\}$, whatever those observables may be. By analyzing the test-mass dynamics of the interferometer (or other measuring device) in the Heisenberg picture, one can learn how the test-mass initial position $\hat x_o$ and momentum $\hat p_o$ influence the operators $\{\hat N_1, \hat N_2, \ldots\}$. One can then deduce a set of filtered observables $\{\hat R_1, \hat R_2, \ldots\}$ in which $\hat x_o$ and $\hat p_o$ do not appear but the gravitational-wave or other classical force information is retained. (These will be the analogs of $\hat R = \hat Q_0-2\hat Q_1+\hat Q_2$ \[Eq.  (\[hatR\])\] in our simple model problem). 
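The cancellation such a filter must achieve can be illustrated on the simple model of Sec. \[sec:VonNeumann\]. The sketch below (a cross-check added here, not part of the original derivation) rebuilds the Heisenberg-picture meter outputs with commuting symbols standing in for the operators, which suffices because only the linear bookkeeping is at issue. The back-action signs are reconstructed from Eq. (\[hatRVar\]), and the displacements $\xi_r = \bar F (r\tau)^2/2\mu$ are the constant-force case, for which $\xi_0 - 2\xi_1 + \xi_2 = (\tau^2/\mu)\bar F$:

```python
import sympy as sp

tau, mu, Fbar = sp.symbols('tau mu Fbar', positive=True)
x0, p0 = sp.symbols('x0 p0')                 # test-mass initial position/momentum
P0, P1 = sp.symbols('P0 P1')                 # meters' back-action momenta
Q0b, Q1b, Q2b = sp.symbols('Q0b Q1b Q2b')    # meter coordinates "before"

# force-induced displacements at t = 0, tau, 2*tau (constant-force case)
xi = [Fbar*(r*tau)**2/(2*mu) for r in range(3)]

# Heisenberg-picture meter outputs (signs reconstructed from Eq. hatRVar)
Q0 = Q0b - x0 - xi[0]
Q1 = Q1b - x0 - (p0 + P0)*tau/mu - xi[1]
Q2 = Q2b - x0 - 2*(p0 + P0)*tau/mu + P1*tau/mu - xi[2]

Q1var = Q1 + (tau/(2*mu))*P1                 # Eq. (IdealVar)
Rvar = sp.expand(Q0 - 2*Q1var + Q2)

# all test-mass and back-action variables cancel ...
assert all(Rvar.diff(s) == 0 for s in (x0, p0, P0, P1))
# ... leaving only the signal and the meters' intrinsic noise [Eq. (Rvar1)]
assert sp.simplify(Rvar - (-(tau**2/mu)*Fbar + Q0b - 2*Q1b + Q2b)) == 0
```

Running the script raises no assertion errors: the second difference plus the momentum correction removes $x_0$, $p_0$ and both back-action momenta.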
[*The filter that leads from $\{\hat N_1, \hat N_2, \ldots\}$ to $\{\hat R_1, \hat R_2, \ldots\}$, when applied to the output (c-number) data $\{\tilde N_1, \tilde N_2, \ldots\}$ to produce $\{\tilde R_1, \tilde R_2, \ldots\}$, is guaranteed to remove all influence of $\hat x_o$ and $\hat p_o$, and thence all influence of the test-mass initial state.*]{} ### Influence of $\hat x_o$ and $\hat p_o$ on the output data To make this more specific, let us explore how $\hat x_o$ and $\hat p_o$ influence the output data train. To very high accuracy (sufficient for our purposes), interferometers (and most other force-measuring devices) are [*linear*]{}. The inputs are: (i) the test-mass position $\hat x(t)$ \[actually, the difference between four test-mass positions in the case of an interferometer; Eq. (\[xDef\])\], and (ii) the electric field operators $\hat E_a(t)$, $a=1,2,\ldots$ for the field fluctuations that enter the interferometer at the bright port, at the dark port, and at all light-dissipation locations (e.g., at mirrors where bits of light scatter out of the optical train and reciprocally new bits of field fluctuations scatter into it); see, e.g., the detailed analysis of interferometers in Ref.[@kimble]. The output photon flux is a linear functional of these inputs, $$\hat {\cal N}(t) = \int_{-\infty}^t \left[ K_x(t-t') \hat x(t') + \sum_a K_a(t-t') \hat E_a(t') \right] dt'\;; \label{LinearOutput}$$ cf. the discussion in Appendix \[app:VanishingCommutator\]. The $\hat E_a$ terms constitute the photon shot noise (analogs of $\hat Q_r^{\rm before}$ in our idealized example, Sec. \[sec:VonNeumann\]). The test-mass initial observables $\hat x_o$ and $\hat p_o$ enter $\hat {\cal N}(t)$ and thence $\{\hat N_1, \hat N_2, \ldots\}$, through $\hat x(t)$ in a manner governed by the test masses’ free dynamics. The nature of that free dynamics depends on the interferometer design. 
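A toy numerical illustration of such filtering, under the assumption (borne out for the pendular dynamics treated below) that the initial-state contribution enters the output only at a low frequency while the signal lives at higher frequencies; all sampling parameters and amplitudes here are illustrative, not instrument values:

```python
import numpy as np

fs = 4096.0                               # sampling rate in Hz (illustrative)
t = np.arange(0, 8.0, 1.0/fs)             # 8 s of output data
pendular = 3.0*np.cos(2*np.pi*1.0*t)      # stand-in for x_o, p_o motion at ~1 Hz
signal   = 1e-3*np.sin(2*np.pi*100.0*t)   # stand-in for a 100 Hz signal
data = pendular + signal

# crude high-pass filter: zero every Fourier component below 10 Hz
freqs = np.fft.rfftfreq(len(data), 1.0/fs)
spec = np.fft.rfft(data)
spec[freqs < 10.0] = 0.0
filtered = np.fft.irfft(spec, n=len(data))

def amplitude(series, f):
    """Amplitude of the Fourier component at frequency f (exact bin assumed)."""
    s = np.fft.rfft(series)
    return 2.0*np.abs(s[np.argmin(np.abs(freqs - f))])/len(series)

# the 1 Hz (initial-state) component is gone; the 100 Hz signal survives
assert amplitude(filtered, 1.0) < 1e-9
assert abs(amplitude(filtered, 100.0) - 1e-3) < 1e-9
```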
We shall consider two examples in turn: interferometers with pendular dynamics, and signal-recycled interferometers. These examples should be easily extendable to any other type of interferometer that might be conceived in the future. ### Interferometers with pendular dynamics In conventional gravitational-wave interferometers (e.g. LIGO-I, VIRGO and TAMA) and in the QND interferometers analyzed by Kimble et al. [@kimble], the test masses swing sinusoidally at $\sim 1$ Hz frequency in response to their suspensions’ pendular restoring force (as modified slightly by the optical cavities’ radiation-pressure force): $$\hat x_{\rm free}(t) = \hat x_o \cos \omega_m t + {\hat p_o\over \mu\omega_m}\sin\omega_m t\;. \label{xFree}$$ Here $\mu$ is the reduced mass (1/4 the actual mass of one test mass in the case of an interferometer) and $\omega_m \sim 2\pi \times 1$ Hz is the pendular swinging frequency. There is no significant damping of the free motion (\[xFree\]) because the experimenters take great pains to liberate the test masses from all damping; the typical damping times in LIGO-I are of order a day, and in advanced interferometers (LIGO-II and beyond) will be of order a year or more [@lsc; @damping], which is far longer than the data segments used in the data analysis. Superimposed on the free test-mass dynamics (\[xFree\]) are (i) the influence $\xi_{\rm GW}(t)$ of the gravitational-wave signal, (ii) the “back-action” influence $\hat x_{\rm BA}(t)$ of the light’s fluctuating radiation pressure (which is linear in the input fields $\hat E_a$ and is the analog of the $\hat P_r$ and $\delta p_{\rm BA}$ of our discrete model problems), and (iii) the influence $\xi_{\rm other}(t)$ of a variety of other forces — low-frequency feedback forces from servo systems, thermal-noise forces, seismic vibration forces, etc: $$\hat x(t) = \hat x_{\rm free}(t) + \xi_{\rm GW}(t) + \hat x_{\rm BA}(t) + \xi_{\rm other}(t)\;. \label{xTrue}$$ Inserting Eq. 
(\[xFree\]) into (\[xTrue\]) and then (\[xTrue\]) into (\[LinearOutput\]) we see that, for a test-mass with pendular dynamics, the initial test-mass position and momentum operators appear in the output flux operator in the form $$\begin{aligned} \hat {\cal N}(t) &=& \int_{-\infty}^t K_x(t-t') \left[ \hat x_o \cos\omega_m t' + {\hat p_o\over\mu\omega_m}\sin\omega_m t' \right] dt' \nonumber\\ && + \hbox{(other contributions)}\;. \label{NTestMass}\end{aligned}$$ The interferometer’s transfer function $K_x(t-t')$ is independent of absolute time and thus transforms frequency-$\omega_m$ inputs into frequency-$\omega_m$ outputs. Therefore, $\hat x_o$ and $\hat p_o$ appear in the output solely at frequency $\omega_m/2\pi \sim$ 1 Hz. Now, because the output data generally have large noise (seismic and other) at frequencies below $\sim 10$ Hz, it is routine, in interferometers, to high-pass filter the output data so as to remove frequencies below $\sim 10$ Hz. When one does so, [*one automatically removes all influence of $\hat x_o$ and $\hat p_o$ from the filtered data $\tilde R_J$*]{} \[Eq. (\[tildeRN\])\]. This is a precise analog of applying the discrete second time derivative to the output data in our simple example (Sec. \[sec:VonNeumann\]) so as to remove $\hat x_o$ and $\hat p_o$ from the data; and it is a realization of a general class of measurement procedures, for a harmonic oscillator on which a classical force acts, that is analyzed by Caves using his path integral formalism (last part of Sec. III$\;$C of Ref. [@caves2]). ### Signal-recycled interferometers A signal-recycling mirror, placed at an interferometer’s output port, sends information about the test-mass position $\hat x(t)$ back into the interferometer as part of the back-action (radiation-pressure) force, and thereby alters the free test-mass dynamics. 
The altered free dynamics have been analyzed in detail by Buonanno and Chen [@BC3]; they find that the test masses and the interferometer’s side-band light form a coupled system with four degrees of freedom, so $\hat x_o$ and $\hat p_o$ appear in $\hat x_{\rm free}(t)$, and thence in $\hat x(t)$ and thence in $\hat {\cal N}(t)$ at four discrete frequencies $\omega_A$ ($A=1,2,3,4$). Correspondingly, in the output data train, the influence of the test-mass initial state is confined to the Fourier components at the frequencies $\omega_A$. If these frequencies were real, then one could remove the influence of the test-mass initial state from the data by filtering out the data’s Fourier components at these four frequencies. However, as Buonanno and Chen [@BC3] discuss, such filtering is not necessary: The frequencies are actually complex with imaginary parts that produce damping on timescales $\alt 1$ second (when a servo is introduced to control an instability). Therefore, the influence of $\hat x_o$ and $\hat p_o$ on the output flux operator $\hat{\cal N}(t)$ damps out quickly, and correspondingly (see the end of Sec. \[sec:PhotonFluxCommutator\]), the influence of the test-mass initial state on the output data train damps out quickly without any filtering. Conclusions {#sec:Conclusions} =========== To reiterate: In an interferometer (and many other force-measuring devices), the output signal is encoded in the photon number flux operator $\hat {\cal N}(t)$ of a light beam, which is converted into discrete photon number samples $\hat N_j$ by a photodetector and electronics. These outputs have vanishing commutators $[\hat {\cal N}(t), \hat {\cal N}(t')] = 0$ and $[\hat N_j,\hat N_k] = 0$ and thus can be thought of as classical quantities. These outputs are linear in the initial test-mass position $\hat x_o$ and momentum $\hat p_o$ and involve no other test-mass variables. 
The output commutators manage to vanish because the photon back-action noise and photon shot noise have commutators that cancel those of $\hat x_o$ and $\hat p_o$. In the output $\hat {\cal N}(t)$ of any interferometer with pendular dynamics, $\hat x_o$ and $\hat p_o$ appear only at the pendular frequency $\omega_m/2\pi \sim 1$ Hz, and all influences of $\hat x_o$ and $\hat p_o$ (including all influences of the test-mass initial state) are removed completely from the data by the high-pass filtering that is routine for interferometers. For other types of interferometers, with different test-mass dynamics, other data filtering procedures will remove the influence of $\hat x_o$ and $\hat p_o$ and the test-mass initial state — and in some cases (e.g., a signal-recycled interferometer) no filtering is needed at all. This complete removal of all influence of $\hat x_o$ and $\hat p_o$ from the filtered data implies the answers to the three questions posed in the introduction of this paper (Sec. \[sec:Questions\]): (i) The test-mass quantum mechanics has no influence on the interferometer’s noise; the only quantum noise is that arising from the light. (ii) Therefore, when analyzing a candidate interferometer design, one need not worry about the test-mass quantum mechanics, except for using it to feed the gravity-wave signal and the back-action noise through the test mass to the photon-flux output. (iii) Similarly, when conceiving new designs for interferometers, one need not worry about the test-mass quantum mechanics — except for devising appropriate data filters to remove $\hat x_o$ and $\hat p_o$ from the data. 
Acknowledgments {#acknowledgments .unnumbered} =============== For helpful advice or email correspondence, we thank Orly Alter, Alessandra Buonanno, Carlton Caves, Yanbei Chen, Crispin Gardiner, William Unruh, Yoshihisa Yamamoto, and the members of the 1998–99 Caltech QND Reading Group, most especially Constantin Brif, Bill Kells, Jeff Kimble, Yuri Levin and John Preskill. This research was supported in part by NSF grants PHY–9503642, PHY–9900776, PHY-0098715, and PHY–0099568, by the Russian Foundation for Fundamental Research grants \#96-02-16319a and \#97-02-0421g, and (for VBB, FYaK and SPV) by the NSF through Caltech’s Institute for Quantum Information. Triple Measurement in the Schroedinger Picture {#app:TripleMeasurement} ============================================== In this appendix we present a Schroedinger-picture analysis of the most important of this paper’s pedagogical thought experiments (Sec. \[sec:VonNeumann\]): a triple measurement of the position of a free test mass, using three independent meters, with the goal of determining the mean classical force $\bar F$ acting on the test mass without any contaminating noise whatsoever from the test mass’s initial state. Our analysis will proceed in three steps: (i) an analysis of one of the position measurements (any one of the three), Sec. \[sec:single\_measurement\]; (ii) \[relying on step (i)\] a derivation of the probability density $W(\tilde Q_0, \tilde Q_1, \tilde Q_2)$ for the outcome of the triple measurement procedure, Sec. \[sec:triple\_measurement\]; and (iii) a use of this probability density to show that the combination $\tilde R \equiv \tilde Q_0 - 2 \tilde Q_1 + \tilde Q_2$ of the measurement results contains the desired information about $\bar F$ uncontaminated by any noise from the test-mass initial state, Sec. \[sec:statistics\]. 
Single position measurement {#sec:single_measurement} --------------------------- Let $|{\Psi}\rangle$ be the state of the test mass before the measurement and $$|{\psi}\rangle = \displaystyle\int_{-\infty}^{\infty} \psi(Q)|{Q}\rangle\,dQ \label{psiQ}$$ be the initial state of the meter, where the meter’s eigenstates are normalized by $$\langle Q' | Q \rangle = \delta(Q-Q')\;. \label{Qnormalize}$$ We leave the test-mass state $|{\Psi}\rangle$ completely unspecified since our goal is to show that it has no influence at all on the measurement outcome. For concreteness we specify the meter’s initial wave function $\psi(Q)$ to be Gaussian: $$\label{psi} \psi(Q) = \frac{1}{\sqrt{\sqrt{2\pi}\,\Delta_Q}}\,\exp\left[ -\frac{Q^2}{2\Delta_Q^2}\left(\frac12-\frac{i\Delta_{QP}}{\hbar}\right) \right] \;.$$ Here $\Delta_Q$ (denoted $\Delta Q^{\rm before}$ in the text) is the initial standard deviation of $Q$ and $$\Delta_{QP} = \frac{\langle{\hat Q\hat P + \hat P\hat Q}\rangle}2$$ is the initial cross correlation of the meter’s position and momentum. For this Gaussian initial state, the standard deviation $\Delta_P$ of the meter’s momentum (denoted $\Delta P^{\rm before}$ in the text) is given by the minimum-uncertainty relation $$\Delta_Q^2\Delta_P^2 - \Delta_{QP}^2 = \frac{\hbar^2}{4} \;.$$ The first stage of the measurement process is the interaction of the test mass and the meter. In the Schroedinger Picture this interaction puts the meter and test mass into the entangled state $$\hat U|{\psi}\rangle |{\Psi}\rangle \;,$$ where $$\label{U} \hat U = \exp{\left(\frac{i\hat x\hat P}{\hbar}\right)}$$ is the evolution operator associated with the interaction (delta function) part of the Hamiltonian (\[Hamiltonian\]). The next stage is a precise measurement of the meter’s generalized position $\hat Q$. 
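The stated properties of the Gaussian state (\[psi\]) are easily verified: $|\psi(Q)|^2$ is normalized with second moment $\Delta_Q^2$, and the implied momentum variance saturates the minimum-uncertainty relation. A symbolic check (added here as a sanity test, not part of the original derivation):

```python
import sympy as sp

Q = sp.Symbol('Q', real=True)
hbar, DQ = sp.symbols('hbar Delta_Q', positive=True)
DQP = sp.Symbol('Delta_QP', real=True)

# the meter's initial wave function, Eq. (psi)
psi = sp.exp(-Q**2/(2*DQ**2)*(sp.Rational(1, 2) - sp.I*DQP/hbar)) \
      / sp.sqrt(sp.sqrt(2*sp.pi)*DQ)

rho = sp.simplify(psi*sp.conjugate(psi))            # |psi|^2
norm = sp.integrate(rho, (Q, -sp.oo, sp.oo))
varQ = sp.integrate(Q**2*rho, (Q, -sp.oo, sp.oo))   # <Q> = 0 by symmetry
varP = sp.integrate(sp.simplify(
    hbar**2*sp.conjugate(sp.diff(psi, Q))*sp.diff(psi, Q)), (Q, -sp.oo, sp.oo))

assert sp.simplify(norm - 1) == 0
assert sp.simplify(varQ - DQ**2) == 0
# minimum-uncertainty relation:  DQ^2 DP^2 - DQP^2 = hbar^2/4
assert sp.simplify(varQ*varP - DQP**2 - hbar**2/4) == 0
```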
This measurement disentangles the quantum states of the test mass and meter: the meter gets reduced to the eigenstate $|{\tilde Q }\rangle$ of $\hat Q$, where $\tilde Q$ is the $c$-number obtained as the result of this measurement, and the test mass gets reduced to the state $$\label{redstate} \frac{\langle{\tilde Q}|\hat U|{\psi}\rangle |{\Psi}\rangle}{\sqrt{W(\tilde Q)}} = \frac{\hat\Omega(\tilde Q)|{\Psi}\rangle}{\sqrt{W(\tilde Q)}} \;,$$ where $$\label{Omega} \hat\Omega(\tilde Q) = \langle{\tilde Q}|\hat U|{\psi}\rangle$$ is the reduction operator describing the entire two-stage measurement procedure, and $$\label{W} W(\tilde Q) = \langle{\Psi}|\hat\Omega^\dagger(\tilde Q) \hat\Omega(\tilde Q)|{\Psi}\rangle$$ is the probability density for obtaining the result $\tilde Q$. An explicit form for the reduction operator can be obtained by substituting Eqs.  (\[psiQ\]), (\[psi\]) and (\[U\]) into Eq. (\[Omega\]); the result is: $$\begin{aligned} \hat\Omega(\tilde Q) &=& \langle{\tilde Q}| \exp{\left(\frac{i\hat x\hat P}{\hbar}\right)} \displaystyle\int_{-\infty}^{\infty} \psi(Q)|{Q}\rangle\,dQ \nonumber\\ &=& \langle{\tilde Q}| \displaystyle\int_{-\infty}^{\infty} |{x}\rangle\langle{x}|\,\psi(Q)|{Q-x}\rangle\,dx\,dQ \nonumber\\ &=& \displaystyle\int_{-\infty}^{\infty} |{x}\rangle\langle{x}|\psi(\tilde Q+x)\,dx \nonumber\\ &=& \frac{1}{\sqrt{\sqrt{2\pi}\,\Delta_Q}}\,\exp\left[ -\frac{(\tilde Q + \hat x)^2}{2\Delta_Q^2} \left(\frac12-\frac{i\Delta_{QP}}{\hbar}\right) \right] \;, \nonumber\\ \label{OmegaNorm}\end{aligned}$$ where we have used the shift-operator relation $e^{i\hat x \hat P / \hbar} |Q\rangle = |Q-\hat x\rangle = \int_{-\infty}^{\infty} dx\, |x\rangle\langle x|\,|Q-x\rangle$ and the relation $\langle \tilde Q | Q-x\rangle = \delta(Q-x-\tilde Q)$. 
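Because $\hat\Omega(\tilde Q)$ is a function of $\hat x$ alone, its purely positional moment integrals can be checked by replacing $\hat x$ with a c-number $x$ and doing ordinary Gaussian integrals. A short sketch of such a check for the first three of the formulae listed next:

```python
import sympy as sp

Qt, x = sp.symbols('Qtilde x', real=True)
DQ = sp.Symbol('Delta_Q', positive=True)

# |Omega(Qtilde)|^2, with the operator x-hat replaced by a c-number x
# (legitimate for these moments, since Omega is a function of x-hat alone)
rho = sp.exp(-(Qt + x)**2/(2*DQ**2))/(sp.sqrt(2*sp.pi)*DQ)

m0 = sp.integrate(rho, (Qt, -sp.oo, sp.oo))
m1 = sp.integrate(Qt*rho, (Qt, -sp.oo, sp.oo))
m2 = sp.integrate(Qt**2*rho, (Qt, -sp.oo, sp.oo))

assert sp.simplify(m0 - 1) == 0               # Eq. (int_none)
assert sp.simplify(m1 + x) == 0               # Eq. (int_Q):  integral = -x
assert sp.simplify(m2 - x**2 - DQ**2) == 0    # Eq. (int_Q2)
```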
We will need below the following formulae (some are evident, and for the others we provide outlines of the proofs): $$\label{int_none} \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat\Omega(\tilde Q)\,d\tilde Q = 1 \;,$$ $$\label{int_Q} \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat\Omega(\tilde Q)\, \tilde Q\,d\tilde Q = - \hat x \;,$$ $$\label{int_Q2} \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat\Omega(\tilde Q)\, \tilde Q^2\,d\tilde Q = \hat x^2 + \Delta_Q^2 \;,$$ $$\label{int_x} \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat x^n\hat\Omega(\tilde Q)\, d\tilde Q = \hat x^n \qquad (n=0,1,\dots) \;,$$ $$\label{int_x0} \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat x\hat\Omega(\tilde Q)\, \tilde Q\,d\tilde Q = -\hat x^2 \;,$$ $$\begin{aligned} \label{int_p} \lefteqn{ \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat p\hat\Omega(\tilde Q)\,d\tilde Q}\quad \nonumber\\ &&= \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\left( \hat\Omega(\tilde Q)\hat p + \left[\hat p,\hat\Omega(\tilde Q)\right] \right)\,d\tilde Q \nonumber\\ &&= \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat\Omega(\tilde Q)\, d\tilde Q\,\hat p - i\hbar \displaystyle\int_{-\infty}^{\infty} \hat\Omega(\tilde Q) \frac{d\hat\Omega^\dagger(\tilde Q)}{d\hat x}\,d\tilde Q\nonumber\\ &&= \hat p \;,\end{aligned}$$ $$\begin{aligned} \lefteqn{\displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat p^2\hat\Omega(\tilde Q)\,d\tilde Q} \nonumber\\ &=& \displaystyle\int_{-\infty}^{\infty} \left( \hat p\hat\Omega^\dagger(\tilde Q) + \left[\hat\Omega^\dagger(\tilde Q),\hat p\right] \right)\left( \hat\Omega(\tilde Q)\hat p + \left[\hat p,\hat\Omega(\tilde Q)\right] \right)\,d\tilde Q \nonumber\\ &=& \hat p \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat\Omega(\tilde Q)\, d\tilde Q\,\hat p + \hbar^2 
\displaystyle\int_{-\infty}^{\infty} \frac{d\hat\Omega^\dagger(\tilde Q)}{d\hat x} \frac{d\hat\Omega(\tilde Q)}{d\hat x}\,d\tilde Q \nonumber\\ &=& \hat p^2 + \frac1{\Delta_Q^2}\left(\frac{\hbar^2}4+\Delta_{QP}^2\right) = \hat p^2 + \Delta_P^2 \;, \label{int_1}\end{aligned}$$ $$\begin{aligned} \lefteqn{\displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat p\hat\Omega(\tilde Q)\, \tilde Q\,d\tilde Q} \nonumber\\ &=& \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\left( \hat\Omega(\tilde Q)\hat p + \left[\hat p,\hat\Omega(\tilde Q)\right] \right)\,\tilde Q\,d\tilde Q \nonumber\\ &=& \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)\hat\Omega(\tilde Q)\, \tilde Q\,d\tilde Q\,\hat p - i\hbar \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q) \frac{d\hat\Omega(\tilde Q)}{d\hat x}\,\tilde Q\,d\tilde Q \nonumber\\ &=& -\hat x\hat p + i\hbar\left(\frac12 - \frac{i\Delta_{QP}}{\hbar}\right) = - \frac{\hat x\hat p + \hat p\hat x}2 + \Delta_{QP} \;,\end{aligned}$$ $$\label{int_xp} \displaystyle\int_{-\infty}^{\infty} \hat\Omega^\dagger(\tilde Q)(\hat x\hat p+\hat p\hat x) \hat\Omega(\tilde Q)\,d\tilde Q = \hat x\hat p+\hat p\hat x \;. \label{int_pQ}$$ The triple measurement procedure {#sec:triple_measurement} -------------------------------- The triple measurement procedure described in Sec. \[sec:VNThought\] of the text consists of the following five stages. 1. An initial position measurement of the type we have just analyzed, using meter number 0. This measurement reduces the test mass’s wave function to $$\frac{\hat\Omega_0(\tilde Q_0)|{\Psi}\rangle}{\sqrt{W_0(\tilde Q_0)}}\;$$ \[Eq. (\[redstate\])\], where $\hat\Omega_0(\tilde Q_0)$ is the reduction operator \[Eq. (\[Omega\])\], and $\tilde Q_0$ is the result of this measurement. The probability density for obtaining this result is equal to $$W_0(\tilde Q_0) = \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0) \hat\Omega_0(\tilde Q_0) |{\Psi}\rangle \;$$ \[Eq. 
(\[W\])\]. 2. Free evolution of the test mass during the time $\tau$. Denoting the corresponding evolution operator by $\hat{\cal U}_0$, the test-mass wave function after this stage is given by $$\frac{\hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle}{\sqrt{W_0(\tilde Q_0)}}\;.$$ 3. Second position measurement of the same type as in the first stage, but using a new meter, number 1. The measurement result is denoted $\tilde Q_1$, the reduction operator is $\hat\Omega_1(\tilde Q_1)$, and the measurement reduces the test-mass state to $$\frac{\hat\Omega_1(\tilde Q_1)\hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle}{\sqrt{W_1(\tilde Q_0,\tilde Q_1)}}\;,$$ where $$\begin{aligned} && W_1(\tilde Q_0,\tilde Q_1) \nonumber\\ &&= \langle {\Psi}| \hat\Omega_0^\dagger(\tilde Q_0)\hat{\cal U}_0^\dagger \hat\Omega_1^\dagger(\tilde Q_1)\hat\Omega_1(\tilde Q_1) \hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\end{aligned}$$ is the joint probability distribution for the first two measurement results, $\tilde Q_0$ and $\tilde Q_1$. 4. Second free evolution of the test mass with the evolution operator $\hat{\cal U}_1$. After this stage the test-mass wave function is $$\frac{\hat{\cal U}_1\hat\Omega_1(\tilde Q_1) \hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle}{\sqrt{W_1(\tilde Q_0,\tilde Q_1)}}\;.$$ 5. Finally, a third position measurement using a new meter, number 2, with the result $\tilde Q_2$. 
After this measurement the test-mass state is $$\frac{\hat\Omega_2(\tilde Q_2) \hat{\cal U}_1\hat\Omega_1(\tilde Q_1) \hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle}{\sqrt{W_2(\tilde Q_0,\tilde Q_1,\tilde Q_2)}}\;,$$ where $$\begin{aligned} \lefteqn{ W_2(\tilde Q_0,\tilde Q_1,\tilde Q_2) = \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0)\hat{\cal U}_0^\dagger \hat\Omega_1^\dagger(\tilde Q_1)\hat{\cal U}_1^\dagger \hat\Omega_2^\dagger(\tilde Q_2) } \quad\quad\quad \nonumber\\ &&\times \hat\Omega_2(\tilde Q_2) \hat{\cal U}_1\hat\Omega_1(\tilde Q_1) \hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle \label{W_2}\end{aligned}$$ is the joint probability distribution for all three measurement outcomes. Equation (\[W\_2\]) is the principal result of this subsection. We shall use it to study the statistics of the measurement outcomes. In that study we shall need the following expression for each of the three reduction operators \[Eq. (\[OmegaNorm\])\]: $$\begin{aligned} \lefteqn{ \hat\Omega_s(\tilde Q_s) } \nonumber \\ && = \frac{1}{\sqrt{\sqrt{2\pi}\,\Delta_{Q\,s}}}\, \exp\left[ -\frac{(\tilde Q_s + \hat x)^2}{2\Delta_{Q\,s}^2} \left(\frac12-\frac{i\Delta_{QP\,s}}{\hbar}\right) \right] ,\end{aligned}$$ where $s=0,1,2$. Statistics of the measurement results {#sec:statistics} ------------------------------------- If an explicit form for the initial wave function $|{\Psi}\rangle$ were specified, then the probability density (\[W\_2\]) could be calculated directly. However, that calculation would be very cumbersome, the final result would be quite complicated, and we have no need for it. Our final goal is not to study $W_2$, but rather to analyze the statistics of the quantity $\tilde R = \tilde Q_0 - 2 \tilde Q_1 + \tilde Q_2$, which the experimenter computes from the three measurement outcomes $\tilde Q_s$ after the triple measurement procedure is complete.
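As a quick numerical sanity check on the Gaussian reduction operators above, note that $\hat\Omega_s(\tilde Q_s)$ depends on the test mass only through $\hat x$, so for this purpose $\hat x$ can be treated as a $c$-number. The sketch below (all parameter values are illustrative, with $\hbar=1$) verifies that the measurement outcomes exhaust probability, $\int\hat\Omega_s^\dagger\hat\Omega_s\,d\tilde Q_s = 1$, and that the first moment obeys $\int\hat\Omega_s^\dagger\hat\Omega_s\,\tilde Q_s\,d\tilde Q_s = -\hat x$ \[cf. Eq. (\[int\_Q\])\]:

```python
import numpy as np

hbar = 1.0            # work in units with hbar = 1
x = 0.3               # treat x-hat as a c-number for this check
dQ, dQP = 0.7, 0.2    # illustrative meter variances Delta_Q, Delta_QP

# Integration grid for the measurement outcome Q-tilde
Q = np.linspace(-20.0, 20.0, 40001)

# Gaussian reduction operator Omega_s(Q-tilde), in the form quoted above
norm = 1.0 / np.sqrt(np.sqrt(2.0 * np.pi) * dQ)
Omega = norm * np.exp(-(Q + x) ** 2 / (2.0 * dQ ** 2) * (0.5 - 1j * dQP / hbar))

# The imaginary parts of the exponent cancel in |Omega|^2, leaving a
# normalized Gaussian centered at Q-tilde = -x:
total = np.trapz(np.abs(Omega) ** 2, Q)             # -> 1.0
first_moment = np.trapz(Q * np.abs(Omega) ** 2, Q)  # -> -x
print(total, first_moment)
```

The position-momentum correlation $\Delta_{QP}$ enters $\hat\Omega_s$ only through a phase, so it drops out of both moments, exactly as in the analytic integrals.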
Specifically, we wish to verify the results of the text’s Heisenberg-picture analysis: (i) That the mean value of $\tilde R$ over a large number of experiments is $\langle{\tilde R}\rangle = (-\tau^2/\mu)\bar F$, where $\tau$ is the time between successive measurements, $\mu$ is the mass of the test mass, and $\bar F$ is the mean force that acts on the test mass \[Eqs. (\[xiVal\]) and (\[hatR\]) of the text\]. (ii) That the variance of $\tilde R$ (and thence of the measured value of $\bar F$) [*is independent of the test-mass initial state*]{} $|{\Psi}\rangle$, and is given by Eq. (\[DeltabarF\]) when the meters’ individual initial states have no position-momentum correlations, $\Delta_{QP\,s}=0$, and can be made to vanish by a clever, “squeezed” choice of the meters’ initial states. #### Mean value. The mean value of $\tilde R$ over a large number of experiments is determined by the joint probability distribution $W_2$ for the measurement outcomes: $$\begin{aligned} \langle{\tilde R}\rangle &=& \langle{\tilde Q_0-2\tilde Q_1+\tilde Q_2}\rangle \nonumber\\ &=& \displaystyle\int_{-\infty}^{\infty} (\tilde Q_0-2\tilde Q_1+\tilde Q_2) W_2(\tilde Q_0,\tilde Q_1,\tilde Q_2)\, d\tilde Q_0d\tilde Q_1d\tilde Q_2 . \nonumber\\ \label{mean_0}\end{aligned}$$ Using Eqs. (\[int\_none\]), (\[int\_Q\]), we bring this into the form $$\begin{aligned} \lefteqn{ \langle{\tilde R}\rangle = \displaystyle\int_{-\infty}^{\infty} \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0)\hat{\cal U}_0^\dagger \hat\Omega_1^\dagger(\tilde Q_1)\hat{\cal U}_1^\dagger } \nonumber\\ &&\quad \times \left(\tilde Q_0-2\tilde Q_1-\hat x\right) \hat{\cal U}_1\hat\Omega_1(\tilde Q_1) \hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\,d\tilde Q_0d\tilde Q_1 \;.
\nonumber\\ \label{mean_1}\end{aligned}$$ Taking into account that $$\begin{aligned} \hat{\cal U}_1^\dagger\hat{\cal U}_1 &=& 1 \;, \label{U_1_none} \\ \hat{\cal U}_1^\dagger\hat x\hat{\cal U}_1 &=& \hat x + \frac{\hat p\tau}\mu + x_{F\,1}\;, \label{U_1_x}\end{aligned}$$ where $\mu$ is the mass of the test mass and $$x_{F\,1} = \frac1\mu\,\int_\tau^{2\tau}(2\tau-t)F(t)\,dt$$ is the displacement of the test mass during stage 4 (the second interval of free evolution) caused by the external force $F(t)$, expression (\[mean\_1\]) can be further reduced to the form $$\begin{aligned} \langle{\tilde R}\rangle &=& \displaystyle\int_{-\infty}^{\infty} \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0)\hat{\cal U}_0^\dagger \hat\Omega_1^\dagger(\tilde Q_1) \left( \tilde Q_0-2\tilde Q_1 \right. \nonumber\\ &&\left. \quad -\hat x - \frac{\hat p\tau}\mu - x_{F\,1} \right) \hat\Omega_1(\tilde Q_1)\hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\,d\tilde Q_0d\tilde Q_1 \;. \nonumber\\ \label{mean_2}\end{aligned}$$ The next calculations are just a repetition of the previous ones, with only the addition of Eqs.
(\[int\_x\]), (\[int\_p\]) and $$\begin{aligned} \hat{\cal U}_0^\dagger\hat x\hat{\cal U}_0 = \hat x + \frac{\hat p\tau}\mu + x_{F\,0} \;, \label{U_0_x} \\ \hat{\cal U}_0^\dagger\hat p\hat{\cal U}_0 = \hat p + p_{F\,0}\;, \label{U_0_p}\end{aligned}$$ where $$\begin{aligned} x_{F\,0} = \frac1\mu\,\int_0^\tau(\tau-t)F(t)\,dt \;, \\ p_{F\,0} = \int_0^\tau F(t)\,dt \;.\end{aligned}$$ These give: $$\begin{aligned} \lefteqn{ \langle{\tilde R}\rangle = \displaystyle\int_{-\infty}^{\infty} \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0)\hat{\cal U}_0^\dagger \left( \tilde Q_0 + 2\hat x - \hat x - \frac{\hat p\tau}\mu - x_{F\,1} \right) } \nonumber\\ &&\quad\times \hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\,d\tilde Q_0 \nonumber\\ &=& \displaystyle\int_{-\infty}^{\infty} \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0) \nonumber\\ &&\quad\times \left( \tilde Q_0 + \hat x + x_{F\,0} - \frac{p_{F\,0}\tau}\mu - x_{F\,1} \right) \hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\,d\tilde Q_0 \nonumber\\ &=& \langle{\Psi}|\left( x_{F\,0} - \frac{p_{F\,0}\tau}\mu - x_{F\,1} \right)|{\Psi}\rangle = x_{F\,0} - \frac{p_{F\,0}\tau}\mu - x_{F\,1} \nonumber\\ &=& -\frac1\mu\int_0^{2\tau}(\tau-|t-\tau|)F(t) \,dt \equiv -\frac{\tau^2}\mu \bar F\;. \label{mean_4}\end{aligned}$$ This agrees with the Heisenberg-picture prediction \[Eqs. (\[xiVal\]) and (\[hatR\]) of the text, where we must note that the meters’ initial states have $\langle{Q_s}\rangle = \langle{P_s}\rangle=0$\]. #### Variance. The mean-square value of the quantity $\tilde R$ over a large number of experiments is given by $$\begin{aligned} \lefteqn{ \langle{\tilde R^2}\rangle = \langle{(\tilde Q_0-2\tilde Q_1+\tilde Q_2)^2}\rangle } \nonumber\\ && \;\;\;= \displaystyle\int_{-\infty}^{\infty} (\tilde Q_0-2\tilde Q_1+\tilde Q_2)^2 W_2(\tilde Q_0,\tilde Q_1,\tilde Q_2)\, d\tilde Q_0d\tilde Q_1d\tilde Q_2 \;. \nonumber\\ \label{sqear_0}\end{aligned}$$ Using Eqs.
(\[int\_none\])–(\[int\_xp\]), (\[U\_1\_none\]), (\[U\_1\_x\]), (\[U\_0\_x\]), and (\[U\_0\_p\]), we obtain: $$\begin{aligned} \label{sqear_1} \lefteqn{ \langle{\tilde R^2}\rangle = \langle{(\tilde Q_0-2\tilde Q_1+\tilde Q_2)^2}\rangle } \nonumber\\ &&= \displaystyle\int_{-\infty}^{\infty} \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0)\hat{\cal U}_0^\dagger \hat\Omega_1^\dagger(\tilde Q_1)\hat{\cal U}_1^\dagger \left[ (\tilde Q_0-2\tilde Q_1-\hat x)^2 + \Delta_{Q\,2}^2 \right] \nonumber\\ &&\times \hat{\cal U}_1\hat\Omega_1(\tilde Q_1) \hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\,d\tilde Q_0d\tilde Q_1 \nonumber\\ &&= \displaystyle\int_{-\infty}^{\infty} \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0)\hat{\cal U}_0^\dagger \hat\Omega_1^\dagger(\tilde Q_1) \nonumber\\ &&\times \left[ \left( \tilde Q_0 - 2\tilde Q_1 - \hat x - \frac{\hat p\tau}\mu - x_{F\,1} \right)^2 + \Delta_{Q\,2}^2 \right] \nonumber\\ &&\times \hat\Omega_1(\tilde Q_1)\hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\,d\tilde Q_0d\tilde Q_1 \nonumber\\ &&= \displaystyle\int_{-\infty}^{\infty} \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0)\hat{\cal U}_0^\dagger \left[ \left(\tilde Q_0 + \hat x - \frac{\hat p\tau}\mu - x_{F\,1}\right)^2 + 4\Delta_{Q\,1}^2 \right. \nonumber\\ && \left. + \frac{4\Delta_{QP\,1}\tau}\mu + \left({\Delta_{P\,1} \tau\over\mu}\right)^2 + \Delta_{Q\,2}^2 \right] \hat{\cal U}_0\hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\,d\tilde Q_0 \nonumber\\ &&= \displaystyle\int_{-\infty}^{\infty} \langle{\Psi}| \hat\Omega_0^\dagger(\tilde Q_0) \left[ \left( \tilde Q_0 + \hat x + x_{F\,0} - \frac{p_{F\,0}\tau}\mu - x_{F\,1} \right)^2 \right. \nonumber\\ && \left. + 4\Delta_{Q\,1}^2 + \frac{4\Delta_{QP\,1}\tau}\mu + \left({\Delta_{P\,1} \tau\over\mu}\right)^2 + \Delta_{Q\,2}^2 \right] \hat\Omega_0(\tilde Q_0) |{\Psi}\rangle\,d\tilde Q_0 \nonumber\\ &&= \langle{\Psi}| \left[ \left(x_{F\,0} - \frac{p_{F\,0}\tau}\mu - x_{F\,1}\right)^2 + \Delta_{Q\,0}^2 \right. \nonumber\\ && \left.
+ 4\Delta_{Q\,1}^2 + \frac{4\Delta_{QP\,1}\tau}\mu + \left({\Delta_{P\,1} \tau\over\mu}\right)^2 + \Delta_{Q\,2}^2 \right] |{\Psi}\rangle \nonumber\\ &&= \langle{\tilde Q_0-2\tilde Q_1+\tilde Q_2}\rangle^2 + \Delta_{Q\,0}^2 \nonumber\\ && + 4\Delta_{Q\,1}^2 + \frac{4\Delta_{QP\,1}\tau}\mu + \left({\Delta_{P\,1} \tau\over\mu}\right)^2 + \Delta_{Q\,2}^2 \;.\end{aligned}$$ Subtracting off the square of the mean, $\langle{\tilde R}\rangle^2 = \langle{\tilde Q_0-2\tilde Q_1+\tilde Q_2}\rangle^2$, we obtain for the variance of the computed quantity $\tilde R$, over many experiments, $$\begin{aligned} {\tau^4\over\mu^2}(\Delta \bar F)^2 &=& (\Delta\tilde R)^2 = \langle \tilde R^2\rangle - \langle \tilde R\rangle^2 \nonumber\\ &=& \Delta_{Q\,0}^2 + 4\Delta_{Q\,1}^2 + \frac{4\Delta_{QP\,1}\tau}\mu + \left({\Delta_{P\,1} \tau\over\mu}\right)^2 + \Delta_{Q\,2}^2 \;; \nonumber\\ \label{DeltaR}\end{aligned}$$ see Eq. (\[mean\_4\]) for the first equality. [*This variance is independent of the test-mass initial state*]{} $|{\Psi}\rangle$, in accord with the prediction of the Heisenberg-picture analysis \[passage following Eq. (\[GExpectation\]) of the text\]. When the three meters are all prepared in “naive” initial states, i.e., in states with uncorrelated generalized position $\hat Q_s$ and momentum $\hat P_s$ (so that $\Delta_{QP\,s} = 0$), then the variance (\[DeltaR\]) has the form that we deduced using the Heisenberg picture \[Eq. (\[DeltabarF\])\]. When the meters are prepared in the more clever “squeezed” manner, i.e., in near eigenstates of $\hat Q_0$, $\hat Q_1^{\rm squeeze} = \hat Q_1 - \hat P_1\tau/2\mu$, and $\hat Q_2$, then the variance (\[DeltaR\]) vanishes, in accord with the Heisenberg-picture prediction \[passage following Eq. (\[IdealSqueezed\])\]. Linear measurements {#app:LinearMeasurements} =================== An important feature of our pedagogical examples (Sec.
\[sec:Pedagogy\]), and of measurements performed by interferometric gravitational-wave detectors, is that they are all [*linear measurements*]{} in the sense of Ref. [@QuantumMeasurement]; i.e., they all satisfy the following two conditions: \(i) [*Linearity of the output:*]{} The meter’s output can be written as the sum of the operator for the test object’s measured variable and the operator for the meter’s additive noise \[cf. Eq. (\[SimpleEqsA\])\], and the additive noise does not depend on the initial state of the test object. Formally this sum is an operator, but it can be treated as a classical variable because it turns out to commute with itself at different times. \(ii) [*Linearity of the back action:*]{} The measurement-induced perturbations of all the test-object observables that are involved in the measurement procedure can be described by linear formulas similar to Eq. (\[SimpleEqsB\]), and the perturbations \[e.g. the second term on the right side of (\[SimpleEqsB\])\] do not depend on the initial state of the test object. This second condition requires discussion: The perturbations’ independence of the test-object initial state is particularly important when several test-object variables are measured consecutively — for example, if the same Heisenberg-picture variable is measured quickly and repetitively at different moments of time as in our pedagogical examples (Sec. \[sec:Pedagogy\]), or if a variable is measured continuously as in a gravitational-wave detector (Sec. \[sec:IFOs\]). Suppose, for example, that the variable $\hat x_1$ is measured with precision $\Delta x_1^{\rm meas}$, thereby perturbing, via back-action, some other variable $\hat x_2$.
Then the accuracy of a subsequent measurement of $\hat x_2$ will be constrained by the perturbation $$\Delta x_2^{\rm pert} = \frac{\hbar}{2\Delta x_1^{\rm meas}} |\langle [\hat x_1,\hat x_2]\rangle | \,.$$ Our condition (ii) of back-action linearity requires that this perturbation not depend on the initial state of the test object. A sufficient condition for this is that the commutator $[\hat x_1,\hat x_2]$ be a $c$-number, and that this requirement be fulfilled for all the operators involved in the measurement.[^6] Linear measurements are closely related to linear systems (those for which the equations of motion for the generalized coordinates and momenta are linear; for example, a free mass and a harmonic oscillator) because the commutators of such systems’ coordinates and momenta are $c$-numbers. In [*non*]{}linear measurements (e.g. measurements of a particle in a double-well potential), some very strange phenomena can arise, for example the quantum Zeno effect. Strictly speaking, all real meters are nonlinear. However, in most cases they can be regarded as linear to high accuracy. For example, if one measures displacements of a mirror of a Fabry-Perot cavity by monitoring the phase of light that passes through the cavity (as is done in LIGO), then the measurements are linear so long as the displacements are much smaller than the width of a cavity resonance, i.e. much smaller than $\lambda/{\cal F}$, where $\lambda$ is the wavelength of the light and $\cal F$ is the cavity finesse. If, by contrast, the displacements are comparable to or much larger than $\lambda/{\cal F}$, then the measurements are strongly nonlinear. An example is a proposed [*null-detector*]{} technique [@NullDetector] for measuring the phase of a mechanical oscillator, in which the oscillating mass is an end mirror of a Fabry-Perot cavity, and the times at which the mirror passes through cavity-resonant positions are measured with high accuracy by the cavity’s momentary transmissivity.
These measurements are highly nonlinear because, in the proposed design, not only are the mirror displacements large compared to the cavity’s linearity regime, $\lambda/{\cal F}$; the mechanical oscillator’s amplitude of zero-point oscillations $\delta x_{\rm zp}$ is also large compared to $\lambda/{\cal F}$. State reduction plays an important role in this null detector’s measurements: it drives the mechanical oscillator into a squeezed-phase state, thereby facilitating a high-precision monitoring of the oscillator’s phase [@NullDetector]. It would be instructive to analyze the use of this highly nonlinear meter to monitor a classical force that acts on the oscillator’s mass. Does the oscillator’s initial quantum state influence the accuracy of the monitoring? Three properties of an interferometric gravitational-wave detector (interferometric position meter) allow one to consider it as linear with sufficiently high precision to justify the linear analysis given in this paper. [*First*]{}, its test-mass mirrors can be regarded as free masses (or as harmonic oscillators if significant electromagnetic rigidity exists in the system). [*Second*]{}, its linearity range $\lambda/{\cal F}\sim 10^{-6}\,$cm is much greater than the wave-induced displacements of the test masses ($\alt 10^{-15}$ cm). Hence, the signal phase shift of the output optical beam depends linearly on the displacement. [*Third*]{}, the measurement of the photon flux out of the dark port is virtually equivalent to the measurement of the phase of the output beam because (i) the signal phase shift is much less than one radian and (ii) the mean value of the amplitude of the optical pumping field is much larger than the quantum uncertainties of its quadrature amplitudes. For a detailed presentation of the theory of linear measurements see Chaps. 5 and 6 of Ref. [@QuantumMeasurement]. For a detailed application of this theory to interferometric gravitational-wave detectors see Ref. [@BC3].
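To put numbers to the back-action constraint above: for a free mass $\mu$, the variables $\hat x_1 = \hat x(0)$ and $\hat x_2 = \hat x(t) = \hat x(0) + \hat p(0)t/\mu$ have the $c$-number commutator $[\hat x_1,\hat x_2] = i\hbar t/\mu$, so the perturbation $\Delta x_2^{\rm pert} = \hbar t/(2\mu\,\Delta x_1^{\rm meas})$ is manifestly independent of the test-object state. A minimal sketch (the numerical values are illustrative only, not actual detector parameters):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J s

def x2_perturbation(dx1_meas, t, mu):
    """State-independent back-action perturbation of x(t) caused by
    measuring x(0) with precision dx1_meas, for a free mass mu.
    For a free mass |<[x(0), x(t)]>| = hbar * t / mu, a c-number,
    which is what makes the measurement linear."""
    commutator_mag = HBAR * t / mu
    return commutator_mag / (2.0 * dx1_meas)

# Illustrative numbers: 40 kg test mass, 0.01 s delay, 1e-18 m precision
print(x2_perturbation(1e-18, 0.01, 40.0))  # ~ 1.3e-20 m
```

The trade-off is visible directly: halving $\Delta x_1^{\rm meas}$ doubles the perturbation of $\hat x_2$, the usual measurement/back-action balance for linear systems.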
Vanishing self commutator of the photon number flux {#app:VanishingCommutator} =================================================== For any light beam (or other electromagnetic wave with confined cross section), the number flux operator at some chosen transverse plane (e.g. the entry to a photodetector) is $$\hat{\cal N}(t) = \int_0^\infty {d\omega\over2\pi} \int_0^\infty {d\omega'\over2\pi} \; \hat a_\omega^\dag \hat a_{\omega'} \; e^{i(\omega-\omega')t}\;. \label{calNa}$$ Here $\hat a_\omega^\dag$ is the creation operator and $\hat a_\omega$ the annihilation operator for photons of frequency $\omega$, and their commutators are $$[\hat a_\omega,\hat a_{\omega'}] = [\hat a_\omega^\dag,\hat a_{\omega'}^\dag] = 0\;,\quad [\hat a_\omega,\hat a_{\omega'}^\dag] = 2\pi \delta(\omega-\omega')\;. \label{aCommutators}$$ It is straightforward to verify from Eqs. (\[calNa\]) and (\[aCommutators\]) that $$[\hat {\cal N}(t),\hat {\cal N}(t')] = 0\;. \label{calNCommute}$$ Although this result is completely general, it is instructive to derive the vanishing self commutator for the specialized type of light beam that is used in interferometers and other force-measuring devices: a beam consisting of a monochromatic carrier with frequency $\omega_o$ plus sidebands embodied in $\hat a_\omega$ and $\hat a_\omega^\dag$. In this case, to high accuracy, we can linearize in the product of the carrier field and the side-band fields, obtaining for the relevant (side-band) photon flux $$\hat{\cal N}_1 (t) = \sqrt{{\cal N}_0} \left[ \hat a(t) + \hat a^{\dag}(t)\right]\;. \label{calN2}$$ Here \[in the notation of Eqs.
(\[ElectricField\])–(\[EQuadratures\])\] ${\cal N}_0={A_0}^2$ is the carrier’s photon flux and $\hat a(t)$, $\hat a^{\dag}(t)$ are the time-domain side-band annihilation and creation operators with commutation relations \[time-domain versions of (\[aCommutators\])\] $$\begin{aligned} [\hat a(t), && \hat a(t')] = 0 \;, \quad [\hat a^{\dag}(t),\hat a^{\dag}(t')] =0\;, \nonumber\\ &&[ \hat a(t), \hat a^{\dag} (t')] = \delta(t-t')\;. \label{calaTimeCommutator}\end{aligned}$$ It is straightforward, using these commutation relations, to verify that $$[\hat{\cal N}_1(t),\hat{\cal N}_1(t')] = 0\;. \label{CalNCommuteZero}$$ It is interesting to note that, although the photon number flux self-commutes, the energy flux (energy passing a fixed transverse surface per unit time) $$\hat {\cal E}(t) = \hbar \int_0^\infty {d\omega\over2\pi} \int_0^\infty {d\omega'\over2\pi} \; \sqrt{\omega\omega'}\; \hat a_\omega^\dag \hat a_{\omega'} \; e^{i(\omega-\omega')t} \label{CalE}$$ does [*not*]{} self-commute, $$[\hat {\cal E}(t), \hat {\cal E}(t')] \ne 0\;. \label{calENotCommute}$$ This can be thought of as due to the energy-time uncertainty relation for photons. On the other hand, when (as in gravitational-wave interferometers) the light consists of a monochromatic carrier plus signals encoded in side bands with frequency $\Omega = \omega-\omega_o \ll \omega_o$, then for all practical purposes, $\hat {\cal E}(t)$ [*does*]{} self-commute. [99]{} V. B. Braginsky, Sov. Phys.—JETP, [**26**]{}, 831 (1968); V. B. Braginsky, in [*Physical Experiments with Test Bodies*]{}, NASA Technical Translation TT F-672 (U.S. Technical Information Service, Springfield, VA, 1972). V. B. Braginsky and Yu. I. Vorontsov, Sov. Phys.—Uspekhi, [**17**]{}, 644 (1975). V. B. Braginsky and F. Ya. Khalili, [*Quantum Measurement*]{} (Cambridge University Press, Cambridge, 1992). V. B. Braginsky, Yu. I. Vorontsov, and F. Ya. Khalili, Sov. Phys.—JETP, [**46**]{}, 705 (1977). V. B. Braginsky and F. Ya. Khalili, Rev. Mod. Phys., [**68**]{}, 1 (1996). S. P.
Vyatchanin and E. A. Zubova, Phys. Lett. A [**203**]{}, 269 (1995); S. P. Vyatchanin and A. B. Matsko, JETP [**82**]{}, 1007 (1996). S. P. Vyatchanin and A. B. Matsko, JETP [**83**]{}, 690 (1996). S. P. Vyatchanin and A. Yu. Lavrenov, Phys. Lett. A, [**231**]{}, 38. C. M. Caves, in [*Quantum Measurement and Chaos*]{}, ed. E. R. Pike (Plenum, New York, 1987). K. S. Thorne, in [*300 Years of Gravitation*]{}, eds. S. W. Hawking and W. Israel (Cambridge U. Press, 1987). A. F. Pace, M. J. Collett and D. F. Walls, Phys. Rev. A [**47**]{}, 3173 (1993). H. J. Kimble, Yu. Levin, A. B. Matsko, K. S. Thorne and S. P. Vyatchanin, Phys. Rev. D [**65**]{}, 022002 (2002). E. Gustafson, D. Shoemaker, K. Strain and R. Weiss, [*LSC White Paper on Detector Research and Development*]{}, LIGO document T990080-00-D (1999); available along with other relevant information at http://www.ligo.caltech.edu/$\sim$ligo2/. A. Buonanno and Y. Chen, Class. Quant. Grav. [**18**]{}, L1 (2001). A. Buonanno and Y. Chen, Phys. Rev. D [**64**]{}, 042006 (2001). A. Buonanno and Y. Chen, Phys. Rev. D [**65**]{}, 042001 (2002). W. G. Unruh, in [*Quantum Optics, Experimental Gravitation, and Measurement Theory*]{}, eds. P. Meystre and M. O. Scully (Plenum, 1982), p. 647. V. B. Braginsky, M. L. Gorodetsky and F. Ya. Khalili, Phys. Lett. A [**232**]{}, 340 (1997). V. B. Braginsky, M. L. Gorodetsky and F. Ya. Khalili, Phys. Lett. A [**246**]{}, 485 (1998). V. B. Braginsky, M. L. Gorodetsky, F. Ya. Khalili and K. S. Thorne, Phys. Rev. D [**61**]{}, 044002 (2000). V. B. Braginsky and F. Ya. Khalili, Phys. Lett. A [**257**]{}, 241 (1999). F. Ya. Khalili, Phys. Lett. A, submitted; gr-qc/9906108. P. Purdue, Phys. Rev. D, in preparation. For this type of derivation applied to a harmonic oscillator see, e.g., K. S. Thorne, R. W. P. Drever, C. M. Caves, M. Zimmermann, and V. D. Sandberg, Phys. Rev. Lett. [**40**]{}, 667 (1978); also J. N. Hollenhorst, Phys. Rev. D [**19**]{}, 1669 (1979). C. M. Caves, Phys. Rev.
D, [**33**]{}, 1643 (1986) and [**35**]{}, 1815 (1987). M. B. Mensky, Phys. Rev. D [**20**]{}, 384 (1979); M. B. Mensky, Sov. Phys. JETP [**50**]{}, 667 (1979); M. B. Mensky, [*Quantum Measurement and Decoherence*]{} (Kluwer Academic Publishers, Dordrecht, 2000). C. M. Caves, Phys. Rev. Lett., [**45**]{}, 75 (1980). C. M. Caves, Phys. Rev. D, [**23**]{}, 1693 (1981). J. von Neumann, [*Mathematische Grundlagen der Quantenmechanik*]{} (Springer, Berlin, 1932), especially Chap. 6. \[English translation: [*Mathematical Foundations of Quantum Mechanics*]{} (Princeton University Press, Princeton, NJ, 1955).\] C. M. Caves and G. J. Milburn, Phys. Rev. A, [**36**]{}, 5543 (1987). See the Collett-Gardiner model problem presented in Sec. 3.2 of C. W. Gardiner, [*Quantum Noise*]{} (Springer-Verlag, Berlin, 1991). By combining Eqs. (3.2.27) and (3.2.28) of this reference, one deduces that $[\hat A_{\rm out}(t), \hat A_{\rm out}(t')]=0$ for all $t$, $t'$ — a result not explicitly discussed by Gardiner but closely related to issues that he does discuss. We thank Gardiner for calling our attention to this example. O. Alter and Y. Yamamoto, Phys. Lett. A [**263**]{}, 226 (1999); O. Alter and Y. Yamamoto, [*Quantum Measurement of a Single System*]{} (Wiley, New York, 2001), Chapter 7. Equation (8.1.44) of Gardiner [@gardiner]. V. B. Braginsky, V. P. Mitrofanov and K. V. Tokmakov, Phys. Lett. A [**218**]{}, 164 (1996); K. V. Tokmakov, V. P. Mitrofanov, V. B. Braginsky, S. Rowan and J. Hough, in [*Gravitational Waves: Proceedings of the Third Edoardo Amaldi Conference*]{}, ed. S. Meshkov (Amer. Inst. Phys., Melville, NY, 2000), p. 445. S. P. Vyatchanin, Vestnik Moscow University, Ser. 3 Phys. Astron., 103 (1979); V. B. Braginsky, F. Ya. Khalili and A. A. Kulaga, Phys. Lett. A [**202**]{}, 1 (1995); A. A. Kulaga, Phys. Lett. A [**202**]{}, 7 (1995). [^1]: All these forces — gravitational-wave, thermal, seismic, etc.
— actually do have a quantum component, but in practice their levels of excitation are so large that we can regard them as classical. [^2]: i.e., quick compared to the evolution of the wave function of the measured quantity, so it can be regarded as constant during the measurement. [^3]: In Sec. III$\;$C of Ref. [@caves2], Caves uses his path-integral formulation of measurement theory to analyze measurements of the discrete second time derivative of the position of a free particle on which a classical force acts. His analysis reveals the same conclusion as we obtain in our pedagogical example: the measured quantity contains information about the force and is devoid of any influence from the particle’s initial state. [^4]: The crucial idea of avoiding the influence of the test-mass initial state by monitoring differences of observables \[$(\hat Q_2 - \hat Q_1) - (\hat Q_1 - \hat Q_0)$ in our case\] is contained in a paper and book by Alter and Yamamoto[@alter_yamamoto; @alter_yamamoto_book]. Alter and Yamamoto point out that, for a test mass on which a classical force acts, the momentum $\hat p(t)$ at time $t$ and the momentum $\hat p(0)$ at time 0 are correlated in that $\hat p(t) = \hat p(0) + \int_0^t dt' F(t')$; so, if one measures $\hat p(t) - \hat p(0) = \int_0^t dt' F(t')$, one thereby can get information about the force without any contaminating influence of the test-mass initial state. They say (page 96 of [@alter_yamamoto_book]) that this is so not only when one measures directly the difference $\hat p(t) - \hat p(0)$ (as in Sec. 7.2.2 of their [@alter_yamamoto_book]), but also when the difference is determined computationally from the results of measurements of $\hat p(t)$ and $\hat p(0)$ \[an analog of our way of monitoring $(\hat Q_2 - \hat Q_1) - (\hat Q_1 - \hat Q_0)$\]. 
When going on to discuss position measurements, Alter and Yamamoto note that $\hat x(t) - \hat x(0) = \hat p(0) t/m + \int_0^t dt' \int_0^{t'} dt'' F(t'')/m$, so a measurement of $\hat x(t) - \hat x(0)$ [*is*]{} contaminated \[via $\hat p(0)t/m$\] by noise from the test-mass initial state. Examining this contamination, they conclude that “force detection via position monitoring of a free mass is limited by ... the SQL” [@alter_yamamoto]. While this conclusion is correct when one monitors $\hat x(t) - \hat x(0)$ in the manner envisioned by Alter and Yamamoto, it is incorrect for the alternative strategy embodied in our model problem. Instead of monitoring $\hat x(t) - \hat x(0)$, one should monitor $\hat x(0) - 2 \hat x(t) + \hat x(2t)$, which for a free mass is independent of both $\hat x_o \equiv \hat x(0)$ and $\hat p_o \equiv \hat p(0)$. Then the measurement output contains information about the force $F(t)$, uncontaminated by any influence of the test-mass initial state. [^5]: Notice that, aside from meter noise, $\xi_r$ is equal to $\hat x(t_r) - \hat p(0) t_r /\mu$ \[Eq. (\[x\_r\])\], which is a QND observable (as M.B. Mensky pointed out long ago). Therefore, the quantity $\hat R$ that we measure can be regarded as a discrete second time derivative of a QND observable — which suggests that it can be the foundation for a QND measurement; see Sec. \[sec:BeatSQL\] below. [^6]: It can be shown that a slightly weaker condition is sufficient: second-order commutation of all these operators, $[\hat x_i,[\hat x_j,\hat x_k]] = 0$ for all $i,j,k$.
--- abstract: 'We report the discovery of a high mass-ratio planet $q=0.012$, i.e., 13 times higher than the Jupiter/Sun ratio. The host mass is not presently measured but can be determined or strongly constrained from adaptive optics imaging. The planet was discovered in a small archival study of high-magnification events in pure-survey microlensing data, which was unbiased by the presence of anomalies. The fact that it was previously unnoticed may indicate that more such planets lie in archival data and could be discovered by similar systematic study. In order to understand the transition from predominantly survey+followup to predominantly survey-only planet detections, we conduct the first analysis of these detections in the observational $(s,q)$ plane. Here $s$ is the projected separation in units of the Einstein radius. We find some evidence that survey+followup is relatively more sensitive to planets near the Einstein ring, but that there is no statistical difference in sensitivity by mass ratio.' author: - | P. Mr[ó]{}z$^{1}$, C. Han$^{2,3}$,\ and\ A. Udalski$^{1}$, R. Poleski$^{1,4}$, J. Skowron$^{1}$, M.K. Szyma[ń]{}ski$^{1}$, I. Soszy[ń]{}ski$^{1}$, P. Pietrukowicz$^{1}$, S. Koz[ł]{}owski$^{1}$, K. Ulaczyk$^{1,5}$, [Ł]{}. Wyrzykowski$^{1}$, M. Pawlak$^{1}$\ (OGLE group)\ M. D. Albrow$^{6}$, S.-M. Cha$^{7,8}$, S.-J. Chung$^{7}$, Y. K. Jung$^{9}$, D.-J. Kim$^{7}$, S.-L. Kim$^{7,10}$, C.-U. Lee$^{7,10}$, Y. Lee$^{7,8}$, B.-G. Park$^{7,10}$, R. W. Pogge$^{4}$, Y.-H. Ryu$^{7}$, I.-G. Shin$^{9}$, J. C. Yee$^{9,11}$, W. Zhu$^{4}$, A. Gould$^{7,12,4}$\ (KMTNet group)\ title: 'OGLE-2016-BLG-0596L: High-Mass Planet From High-Magnification Pure-Survey Microlensing Event' --- Introduction ============ For the first decade of microlens planet detections, beginning with OGLE-2003-BLG-235Lb [@ob03235], the great majority of detections required a combination of survey and followup data. This is a consequence of two effects.
First, the survey coverage was generally too sparse to characterize the planetary anomalies in the detected events [@gouldloeb]. Second, thanks to aggressive alert capability, pioneered by the Optical Gravitational Lensing Experiment (OGLE) Early Warning System (EWS, @ews1 [@ews2]), it became possible to organize intensive followup of planet-sensitive events – or even ongoing planetary anomalies – and so obtain sufficient time resolution to detect and characterize planets. However, as surveys have become more powerful over the past decade, they have become increasingly capable of detecting planets without followup observations. That is, making use of larger cameras, the surveys are able to monitor fairly wide areas at cadences of up to several times per hour. While these cadences are still substantially lower than those of the followup observations devoted to the handful of events monitored by followup groups, they are adequate to detect most planets (provided that the anomalies occur while the survey is observing). Very simple reasoning given below, which is supported by detailed simulations [@Zhu:2014], leads one to expect that the transition from survey+followup to survey-only mode implies a corresponding transition from planets detected primarily in high-magnification events via central and resonant caustics to planets detected primarily in lower-magnification events via planetary caustics. High-magnification events are intrinsically sensitive to planets because they probe the so-called “central caustic” that lies close to (or overlays) the position of the host [@griest98]. Planets that are separated from the hosts by substantially more (less) than the Einstein radius generate one (two) other caustics that are typically much larger than the central caustic and thus have a higher cross section for anomalous deviations from a point-lens light curve due to a random source trajectory.
However, for high-magnification events, the source is by definition passing close to the host and hence close to or over the central caustic. For planet-host separations that are comparable to the Einstein radius, the two sets of caustics merge into a single (and larger) “resonant caustic”, which is even more likely to generate anomalous deviations of a high-magnification event. For many years, the Microlensing Follow Up Network ($\mu$FUN) employed a strategy based on this high planet sensitivity of high-magnification events. They made detailed analyses of alerts of ongoing events from the OGLE and the Microlensing Observations in Astrophysics (MOA) teams to predict high-magnification events and then mobilized followup observations over the predicted peak. @gould10 showed that $\mu$FUN was able to get substantial data over peak for about 50% of all identified events with maximum magnification $A_\max>200$, but that its success rate dropped off dramatically at lower magnification, i.e., even for $100<A_\max<200$. The reason for this drop-off was fundamentally limited observing resources: there are twice as many events with $A_\max>100$ as with $A_\max>200$, and monitoring each such event over its full width at half-maximum requires twice as much observing time. Hence, the required observing time grows quadratically as the effective magnification cutoff is lowered. By contrast, because planetary caustics are typically much larger than central caustics, most planets detected in survey-only mode are expected to be from anomalies generated by the former, which occur primarily in garden-variety (rather than high-magnification) events [@Zhu:2014]. For example, @ob120406 detected a large planetary caustic in OGLE-2012-BLG-0406 based purely upon OGLE data, while @moabin1 detected one in MOA-bin-1 based mostly on MOA data. In the latter case it would have been completely impossible to discover the planet in survey+followup mode because the “primary event” (due to the host) was so weak that it was never detected in the data.
Nevertheless, there has been a steady stream of survey-only detections of planets in high-magnification events as well. The first of these was MOA-2007-BLG-192Lb, a magnification $A_\max> 200$ event, which required a combination of MOA and OGLE data [@mb07192]. The first planet detected by combining three surveys (MOA, OGLE, Wise), MOA-2011-BLG-322Lb, was also found via a central caustic, although in this case the caustic was very large, so the magnification did not have to be extremely high $(A_\max\sim 20)$ [@mb11322]. Similarly, @ob150954 detected a large central caustic due to the large planet OGLE-2015-BLG-0954Lb despite the modest peak magnification of the underlying event $(A_\max\sim 20)$. This case was notable because high-cadence data from the Korea Microlensing Telescope Network (KMTNet) captured the caustic entrance despite the extremely short source self-crossing time, $t_*=16\,$min. There are also two planets, MOA-2008-BLG-379Lb [@Suzuki2014] and OGLE-2012-BLG-0724Lb [@Hirao2016], that were detected by the OGLE+MOA surveys through the high-magnification channel. KMTNet is still in the process of testing its reduction pipeline. Motivated by the above experience, the KMTNet team focused its tests on high-magnification events identified as such on the OGLE web page. In addition to exposing the reduction algorithms to a wide range of brightnesses, this testing has the added advantage that there is a high probability of finding planets. Here we report on the first planet found by these tests from among the first seven high-mag events that were examined: OGLE-2016-BLG-(0261,0353,0471,0528,0572,0596,0612). These events were chosen to have model point-lens magnifications $A>20$ and modeled peak times $2457439<t_0<2457492$. The lower limit was set by the beginning of the KMTNet observing season and the upper limit was the time of the last OGLE update when the seven events were selected. 
Observations ============ On 2016 April 8 UT 12:15 (HJD$^\prime=$ HJD$-2450000=7487.0$), OGLE alerted the community to a new microlensing event OGLE-2016-BLG-0596 based on observations with the 1.4 deg$^2$ camera on its 1.3m Warsaw Telescope at the Las Campanas Observatory in Chile [@ogleiv] using its EWS real-time event detection software [@ews1; @ews2]. Most observations were in $I$ band, but with some $V$ band observations that are, in general, taken for source characterization. These $V$-band data are not used in the modeling. At equatorial coordinates $(17^{\rm h} 51^{\rm m} 12^{\rm s}\hskip-2pt .81, -30^\circ 50' 59''\hskip-2pt.4)$ and Galactic coordinates $(-1.01^\circ,-2.03^\circ)$, this event lies in OGLE field BLG534, with an observing cadence of roughly 0.4 per hour during the period of the anomaly[^1]. KMTNet employs three $4.0\,\rm deg^2$ cameras mounted on 1.6m telescopes at CTIO/Chile, SAAO/South Africa, and SSO/Australia [@kmtnet]. In 2015 KMTNet had concentrated observations on 4 fields. However, in 2016, this strategy was radically modified to cover (12, 40, 80) $\rm deg^2$ at cadences of $(4,\ \geq 1,\ \geq 0.4)\,\rm hr^{-1}$. For the three highest-cadence fields, KMTNet observations are alternately offset by about $6^\prime$ in order to ensure coverage of events in gaps between chips. As a result, OGLE-2016-BLG-0596 lies in two slightly offset fields, BLG01 and BLG41, which are each observed at a cadence of 2 per hour[^2]. KMTNet observes primarily in $I$ band, but 1/11 observations from CTIO and 1/21 observations from SAAO are in $V$-band. Reductions of the primary data were made using difference image analysis (DIA) [@alard98]. However, due to issues discussed in Section 3, special variants of DIA were developed specifically for this event. See below. KMT CTIO $V$ and $I$ images were, in addition, reduced using DoPHOT [@dophot], solely for the purpose of determining the source color. ![ Correction of light curve variability. 
Top panel shows OGLE online reductions. Variability is roughly periodic, $P\simeq 126.5$ days, with a semi-amplitude of $\sim 8.7\%$ of the baseline flux, but 57% of the source flux. Second panel shows the result of a simultaneous fit to (1) the microlensed source, (2) the bright variable at $1.5^{\prime\prime}$, and (3) a nearby blended star. Periodic variability is removed but an annual trend remains. Mean flux drops by 0.5 mag due to fitting out the third star. Bottom panel shows the measured flux (from the second panel) but as a function of sidereal time. This is well fit by a straight line. Note that the full range of this fit to the variation is a factor $\sim 1.5$ larger than the source flux. Final photometry (third panel) is obtained by subtracting this straight-line fit from all flux measurements (not just at baseline). []{data-label="fig:one"}](fig1.eps){width="\columnwidth"} Light Curve Variability ======================= Evidence of Variability ----------------------- The OGLE-2016-BLG-0596 light curve shows clear variability over the course of the 6 seasons of OGLE-IV data prior to 2016. This variability is roughly consistent with being sinusoidal at period $P=126.5\,$ days. See Figure \[fig:one\]. While the semi-amplitude of the variability is only $8.7\%$ of the baseline flux $(I_{\rm base}\sim 19)$, this semi-amplitude turns out to be roughly equal to the source flux derived from the model. Importance of Variability ------------------------- Assuming (as proves to be the case) that it is not the source itself that is variable, such low-level variability cannot significantly impact characterization of the anomalous features of the lightcurve because they occur at relatively high magnification and take place on much shorter timescales. 
However, if not properly accounted for, such variability can seriously impact the estimate of the source flux and, as a direct consequence of this, the Einstein timescale $$t_\e = {\theta_\e\over\mu}; \qquad \theta_\e^2\equiv \kappa M \pi_\rel; \qquad \kappa\equiv {4 G\over c^2\au}\simeq 8.14{{\rm mas}\over M_\odot}. \label{eqn:tedef}$$ Here $\theta_\e$ is the angular Einstein radius, $\pi_\rel = \au(D_L^{-1}-D_S^{-1})$ is the lens-source relative parallax, and $\mu$ is the lens-source relative proper motion in the Earth frame. Errors in these quantities would propagate into the estimates of the planet-star mass ratio $q$, the Einstein radius $\theta_\e$, and the proper motion $\mu$, all of which are important for assessing the physical implications of the detection. The reason that $t_\e$ is potentially impacted by unmodeled variability is that it is determined primarily from the wings of the light curve, where the amplified source flux is comparable in amplitude to the variability. Hence, it is important to track down the source of this variability and correct for it to the extent possible. See, e.g., @mb11293. Removal of Variability I: Variations of Neighbor ------------------------------------------------ In principle, tracking down such a low level of variability in such a crowded field could have been very difficult. However, in the present case, it turns out to be due to a star $1.5^{\prime\prime}$ to the southeast, which is quite bright ($I\sim 14.5$) and shows variability with the same period and phase. Within the framework of standard DIA, it is natural that this variable should impact the microlensing light curve because the difference image contains residuals from the variable that overlap the point spread function (PSF) of the microlensed star. Hence, when the difference image is dot-multiplied by the PSF to estimate the flux, it includes a contribution from the residual flux of the variable. 
It is straightforward to simultaneously fit for two (or $n$) variables with possibly overlapping PSFs. After constructing difference images in the standard way, one simply generalizes the normal procedure by calculating the $n$ flux-difference values $$F_i = \sum_{j=1}^n c_{ij} d_j; \qquad c\equiv b^{-1}, \label{eqn:multifit1}$$ where $$b_{ij} = \sum_k {P_{i,k}P_{j,k}\over \sigma_k^2}; \qquad d_i = \sum_k {P_{i,k}f_k\over \sigma_k^2}, \label{eqn:multifit2}$$ $P_{i,k}$ is the (unit normalized: $\sum_k P_{i,k}=1$) amplitude of the $i$th PSF in the $k$th pixel and $(f_k,\sigma_k)$ are the value and error of the difference flux in the $k$th pixel. We use a variant of this formalism to reduce the OGLE data with $n=3$ stars, including the microlensed source, the bright variable, and one other very nearby (but non-variable) blended star. The result is shown in the second panel of Figure \[fig:one\]. First note that the fluxes have decreased by about 0.5 mag because the non-variable neighboring blend (the third star in the fit) has been removed from the baseline flux. The semi-periodic variations are removed. However, there remains an annual trend. Removal of Variability II: Annual Variations -------------------------------------------- The bottom panel of Figure \[fig:one\] shows that this annual trend is due to variations with sidereal time, almost certainly caused by the impact of the bright red neighbor (even if it were constant) via differential refraction. We fit this variation to polynomials of order $n$, but find that there is no significant improvement beyond $n=1$, for which $f(t) = 0.2415 + 0.1490(t-0.5)$, where $f$ is flux in units of $I=18$. Note that the variation from 0.3 to 0.7 (on the figure) is $0.4\times 0.149\sim 0.06$, which is 1.5 times larger than the source flux derived below. The third panel shows the results of applying this sidereal-time correction. As expected, the annual trend is gone. 
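The $n$-star solution of Equations (\[eqn:multifit1\]) and (\[eqn:multifit2\]) amounts to a small linear solve per difference image. The sketch below uses synthetic 1-D Gaussian PSFs, not the actual OGLE PSF model, purely to illustrate the idea:

```python
import numpy as np

def multi_star_fluxes(psfs, diff_image, sigma):
    """Solve F = b^{-1} d for n stars with overlapping PSFs on one
    difference image (Equations multifit1 and multifit2).

    psfs       : (n, npix) array of unit-normalized PSFs
    diff_image : (npix,) difference-image fluxes f_k
    sigma      : (npix,) per-pixel flux errors sigma_k
    """
    w = 1.0 / sigma**2
    b = (psfs * w) @ psfs.T        # b_ij = sum_k P_ik P_jk / sigma_k^2
    d = (psfs * w) @ diff_image    # d_i  = sum_k P_ik f_k  / sigma_k^2
    return np.linalg.solve(b, d)   # F = b^{-1} d

# Demo with two overlapping synthetic Gaussian PSFs on a 1-D pixel grid.
x = np.arange(64.0)

def gauss_psf(center, fwhm=4.0):
    s = fwhm / 2.3548              # FWHM = 2 sqrt(2 ln 2) sigma
    p = np.exp(-0.5 * ((x - center) / s) ** 2)
    return p / p.sum()             # unit normalization: sum_k P_k = 1

psfs = np.vstack([gauss_psf(30.0), gauss_psf(33.0)])
true_f = np.array([120.0, 800.0])  # e.g. faint source + bright variable
image = true_f @ psfs              # noiseless difference image
sigma = np.ones_like(x)
print(multi_star_fluxes(psfs, image, sigma))  # recovers [120. 800.]
```

In this noiseless demo the solve recovers the input fluxes exactly; with real data, the residual cross-talk is set by the noise and the PSF overlap.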
We apply this flux correction to all data, not just the baseline data shown in this figure. We find (as expected) that this corrects the slope of the rising part of the light curve, which indeed impacts the estimate of $t_\e$, though by less than 10%. Note that while, in most cases, it is possible to derive an accurate estimate of the photometric error bars from those reported by the photometric pipeline [@skowron16], in this case we did not apply this simple prescription because the data were reduced using a special customized pipeline. Correction of KMTNet Data ------------------------- We apply the same formalism given by Equations (\[eqn:multifit1\]) and (\[eqn:multifit2\]) to the KMTNet data, but with only two stars, i.e., the microlensed source and the neighboring variable. Note that this difference between the OGLE and KMTNet reductions plays no role in the final result because the third star incorporated into the OGLE fit is not variable, and the KMTNet flux scale is ultimately aligned to OGLE through the microlens fit. Thus, in particular, we retain the advantage of resolving out this blend, thus placing better limits on flux from the lens. We note that it is difficult to correct for the annual variation in the KMTNet data. Reliable measurement of the annual variation would require baseline data, which do not exist because the photometry system changed between 2015 and 2016. In principle, we could have applied the OGLE-based correction to the KMTNet data, but this type of correction is observatory-specific, so this would not have been a reliable approach and could easily have caused more problems than it corrected. However, from checking the impact of correcting only the OGLE data for variable contamination (but not annual variation), we found essentially no change: the scatter (and hence the renormalized error bars) is very slightly smaller, but the parameters are unchanged. The same would be the case for the KMTNet data. 
![ Light curve and geometry of OGLE-2016-BLG-0596. The event is primarily characterized by a strong caustic entrance at HJD$^\prime\sim 7486$ superposed on an otherwise slightly asymmetric point-lens-like light curve. There is a weak caustic exit at HJD$^\prime\sim 7502.5$ which is well covered by KMT SAAO data. This morphology, together with the $\sim 16$ day interval from caustic entrance to exit, is indicative of a resonant caustic (top panel) due to a high-mass planet or low-mass brown dwarf. []{data-label="fig:two"}](fig2.eps){width="\columnwidth"} Guideline for Assessing the Need of Multi-star Fitting ------------------------------------------------------ The formalism introduced in Section 3.3 can also be used to gain intuition about the impact of uncorrected variability, which can then be used to assess whether such corrections are necessary in specific cases. First note that for $n=1$, Equations (\[eqn:multifit1\]) and (\[eqn:multifit2\]) reduce to the standard formula: $$F_1 = {d_1\over b_{11}}; \qquad b_{11} = \sum_k {P_{1,k}^2\over \sigma_k^2}; \qquad d_1 = \sum_k {P_{1,k}f_k\over \sigma_k^2}. \label{eqn:singlefit}$$ Equation (\[eqn:singlefit\]) then allows us to express the properly corrected “true” photometry in terms of the “naive” single-source photometry that ignores neighbors. We first “infer” the value $d_{i,\rm inferred} = b_{ii}F_{i,\rm naive}$, which then yields $$F_{i,\rm true} = \sum_{j=1}^2 c_{ij}d_{j,\rm inferred}= {F_{i,\rm naive} - (b_{12}/b_{ii})F_{(3-i),\rm naive}\over 1 - b_{12}^2/(b_{11}b_{22})}. \label{eqn:backout}$$ Hence, the correction is governed by the ratio of the PSF overlap integral $b_{12}$ to the integral of the PSF squared, $b_{11}$. 
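Equation (\[eqn:backout\]) can be checked numerically against the full two-star solve; the Gaussian PSFs below are synthetic stand-ins used only for this consistency check:

```python
import numpy as np

# Synthetic check of Equation (backout): two overlapping unit-normalized
# 1-D Gaussian PSFs (stand-ins for the real PSFs) and constant errors.
x = np.arange(64.0)

def psf(center, fwhm=4.0):
    s = fwhm / 2.3548
    p = np.exp(-0.5 * ((x - center) / s) ** 2)
    return p / p.sum()

P = np.vstack([psf(30.0), psf(33.0)])
sig2 = np.ones_like(x)                 # constant (below-sky) variance
f = np.array([100.0, 500.0]) @ P       # noiseless difference image

b = (P / sig2) @ P.T                   # b_ij = sum_k P_ik P_jk / sigma_k^2
d = (P / sig2) @ f                     # d_i  = sum_k P_ik f_k  / sigma_k^2

F_true = np.linalg.solve(b, d)         # full two-star solution
F_naive = d / np.diag(b)               # single-star fits ignoring the neighbor

# Back out the true fluxes from the naive ones (Equation backout):
denom = 1.0 - b[0, 1] ** 2 / (b[0, 0] * b[1, 1])
F0 = (F_naive[0] - (b[0, 1] / b[0, 0]) * F_naive[1]) / denom
F1 = (F_naive[1] - (b[0, 1] / b[1, 1]) * F_naive[0]) / denom
print(F_true, F0, F1)  # all consistent with the input fluxes 100 and 500
```

The back-out expression is algebraically identical to inverting the $2\times 2$ matrix $b$, which is why the two routes agree to machine precision.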
We can evaluate this explicitly for the special case of a Gaussian PSF and below-sky sources ($\sigma_k =$const) $${b_{12}\over b_{11}} = 4^{-(\Delta \theta/{\rm FWHM})^2} \qquad (\rm below-sky\ Gaussian), \label{eqn:gaussbelow}$$ where $\Delta\theta$ is the separation between the two sources. If the two sources are reasonably well separated, $\Delta\theta\ga {\rm FWHM}$, and (as in the present case) the target (1) is below sky while the contaminating variable (2) is well above sky, then the effect is roughly half of that given by Equation (\[eqn:gaussbelow\]). This is because the squared PSF integral is basically unaffected, while the half of the contribution to the overlap integral that is closer to the contaminant is heavily suppressed by the higher flux errors per pixel. We close by re-emphasizing that these order-of-magnitude estimates are not used in the present analysis but are intended as guidance for future cases. ![ $\Delta\chi^2$ map of the MCMC chain in the $\log s$–$\log q$ parameter space obtained from the preliminary grid search. The lower panel shows the entire range over which the grid search is conducted. The upper panel shows an enlarged view around the best-fit solution. Color coding represents MCMC points within $1n\sigma$ (red), $2n\sigma$ (yellow), $3n\sigma$ (green), $4n\sigma$ (cyan), $5n\sigma$ (blue), and $6n\sigma$ (magenta) of the best fit, where $n=10$. []{data-label="fig:three"}](fig3.eps){width="\columnwidth"} Light Curve Analysis ==================== The lightcurve, presented in Figure \[fig:two\], has two principal features: a strong caustic entrance near peak at HJD$^\prime = 7486.4$ and a weak caustic exit at HJD$^\prime = 7502.6$. Apart from these caustic crossings, the morphology is that of a slightly distorted point-lens event. This morphology points to a binary lens with very unequal mass ratio $q\ll 1$, i.e., in the brown-dwarf or planetary regime. 
The long duration of the caustic (16 days) then points to a resonant caustic, and so a projected separation (in units of $\theta_\e$) of $s\sim 1$. A thorough search of the parameter space spanning $-1.0\leq \log s \leq 1.0$ and $-5.0\leq \log q \leq 1.0$ leads to only one viable solution, which confirms the above naive reasoning. The uniqueness of the solution is shown in Figure \[fig:three\], where we present the $\Delta\chi^2$ map of the MCMC chain in the $\log s$–$\log q$ parameter space obtained from the preliminary grid search. In fact, initial modeling based on data taken up through HJD$^\prime = 7500.8$ (so, before the caustic exit) already led to essentially this same solution (although the predicted caustic exit was 2.6 days later than the one subsequently observed). The model is described by seven parameters. These include three that are analogous to a point-lens event $(t_0,u_0,t_\e)$, i.e., the time of closest approach to the center of magnification, the impact parameter normalized to $\theta_\e$, and the Einstein crossing time; three to describe the binary companion $(s,q,\alpha)$, where $\alpha$ is the angle of the binary axis relative to the source trajectory; and $\rho\equiv \theta_*/\theta_\e$, where $\theta_*$ is the angular radius of the source. ![ Caustic geometry of OGLE-2016-BLG-0596. Top panel shows the caustic (red) with the positions of the host (left) and planet (right) represented as blue circles. The zoom in the lower panel shows that the source passed close to, but did not cross (because $u_0>\rho$), the small central cusp. In this region, the caustic is very strong, accounting for the sharp jump at HJD$^\prime\sim 7486.4$. On the other hand, the caustic exit to the right is very weak, which accounts for the smallness of the corresponding bump in the light curve at HJD$^\prime\sim 7502.6$. 
[]{data-label="fig:four"}](fig4.eps){width="\columnwidth"}

  Parameter     Unit        Value      Error
  ------------- ----------- ---------- -------
  $t_0$         day         7486.464   0.010
  $u_0$         $10^{-2}$   1.112      0.031
  $t_{\rm E}$   day         81.694     2.195
  $s$                       1.075      0.003
  $q$           $10^{-2}$   1.168      0.040
  $\alpha$      radian      5.886      0.009
  $\rho$        $10^{-2}$   0.060      0.008
  $I_s$                     21.510     0.028
  $I_b$                     19.739     0.028
  ------------- ----------- ---------- -------

  : Best-fit model parameters[]{data-label="table:one"}

The best-fit parameters and errors (determined from a Markov Chain) are given in Table \[table:one\]. We present the model light curve superposed on the data points in Figure \[fig:two\], and the lens geometry is shown in Figure \[fig:four\]. We also fit the lightcurve for the microlens parallax effect, but found no improvement. We note that compared to other planetary and binary events with well-covered caustic crossings, the parameter $\rho=(6.0\pm 0.8)\times 10^{-4}$ (and the parameter combination $t_* = \rho t_\e = 0.049 \pm 0.007\,$days) have relatively large errors. These parameters are usually better measured because caustic crossings tend to be bright (since the caustic itself is a contour of formally infinite magnification), which means that the photometry over the caustic crossing is relatively precise. Since $t_*$ depends almost entirely on the duration of this crossing, with only weak dependence on other model parameters, it can then be determined quite precisely. In the present case, however, the first crossing was entirely missed simply because it was not visible from any of the five survey telescopes currently in operation (OGLE, MOA, and three from KMTNet: CTIO, SAAO, SSO). The caustic exit was captured by KMTNet SAAO, with 12 points taken over 3.63 hours (i.e., 20 minute cadence). However, since this caustic was quite weak, peaking at $I\sim 18.1$, the photometry has much larger errors than the SAAO photometry near peak. See upper two panels of Figure \[fig:two\]. 
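As a consistency check, the quoted $t_*$ and its uncertainty follow from the tabulated $\rho$ and $t_{\rm E}$ by standard error propagation (assuming, as a simplification, uncorrelated errors):

```python
from math import sqrt

# Best-fit values from the parameter table, with 1-sigma errors.
t_E, sig_tE = 81.694, 2.195          # days
rho, sig_rho = 0.060e-2, 0.008e-2    # dimensionless

t_star = rho * t_E                   # source self-crossing time, days
# Propagate fractional errors in quadrature (assumes uncorrelated errors):
sig_tstar = t_star * sqrt((sig_rho / rho) ** 2 + (sig_tE / t_E) ** 2)
print(round(t_star, 3), round(sig_tstar, 3))  # 0.049 0.007
```

The error budget is dominated by the 13% fractional uncertainty in $\rho$, consistent with the discussion of the poorly covered caustic crossings.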
Adopting a more glass-half-full orientation, we should assess the prior probability that either of the two caustic crossings would have been adequately observed to measure $\rho$. Considering the 20 days between HJD$^\prime$ 7485 and 7505, the three KMTNet observatories each took at least two points on 13 nights, with total durations of (2.11, 2.76, 2.52) days for SSO, SAAO, and CTIO, i.e., a total of 7.39 days. Essentially all of these 39 intervals had approximately continuous coverage. We estimate that the probability that $\rho$ can be measured is the same as the probability that the caustic peak is covered, which may be slightly too conservative. Under this assumption, the probability that each caustic crossing would be observed is 37%, so that the probability that at least one would be observed is $1-(1-0.37)^2=60\%$. Of course, since the midpoint of the two caustic crossings was 16 April, this probability is adversely affected by the shortness of the bulge observing window relative to the microlensing “high season” (21 May – 21 July). At that time the observing window is roughly 2.5 hours longer, so (assuming comparable weather conditions) the probability for each crossing would be 52% and the probability for at least one would be 77%. Nevertheless, the mid-April values may be considered a proxy for the microlensing season as a whole. ![ Instrumental CMD for a $100^{\prime\prime}$ square around OGLE-2016-BLG-0596 using KMTNet CTIO data. The instrumental source color is measured from model-independent regression and the instrumental magnitude is measured from the fit of the $I$-band data to the model light curve. By measuring the offset of this source position from that of the red clump (red), one can determine the angular source radius $\theta_*$, using standard techniques [@ob03262], as described in the text. 
[]{data-label="fig:five"}](fig5.eps){width="\columnwidth"} Physical Parameters =================== We use KMTNet CTIO DoPHOT reductions to construct an instrumental color magnitude diagram (CMD) that is presented in Figure \[fig:five\].[^3] We find the instrumental source color from model-independent regression and the instrumental source magnitude by fitting the $I$ band light curve to the model. We then find the offset from the clump $\Delta[(V-I),I]=(-0.23,4.06)\pm (0.03,0.10)$, where the error in the color offset is dominated by the regression measurement while the error in the magnitude offset is dominated by fitting for the clump centroid. We then adopt $[(V-I),I]_{0,\rm clump}=(1.06,14.49)$ [@bensby13; @nataf13] to obtain $[(V-I),I]_{0,s}=(0.83,18.55)$. Then using standard techniques [@ob03262], we convert from $V/I$ to $V/K$ using the @bb88 $VIK$ color-color relations and then use the @kervella04 color/surface-brightness relations to derive $$\eqalign{ \theta_* = 0.690\pm 0.065\,\ \muas; \cr \theta_\e = {\theta_*\over\rho} = 1.15\pm 0.18\ \mas; \cr \mu = {\theta_\e\over t_\e} = 5.1\pm 0.8\,\ \masyr .\cr } \label{eqn:thetastar}$$ The error in $\theta_*$ is dominated by the uncertainties in transforming from color to surface brightness (8%), with a significant contribution from the error in $I_s$ (5%). The fractional errors in $\theta_\e$ and $\mu$ are substantially larger than in $\theta_*$ due to the relatively large error in $\rho$. See Section 4. The relatively large value of $\theta_\e$ almost certainly implies that the lens lies in the Galactic disk since the lens-source relative parallax is $$\pi_\rel = {\theta_\e^2\over\kappa M} = (0.16\pm 0.05 \mas) \biggl({M\over M_\odot}\biggr)^{-1}. \label{eqn:mpirel}$$ That is, only if the lens were substantially heavier than $1\,M_\odot$ could it be in the bulge ($\pi_\rel\la 0.03$). 
However, first, there are almost no such massive stars in the bulge, and second, its light would then exceed the blended light $(I_b\sim 19.7)$, even allowing for the $A_I=2.96$ extinction toward this line of sight [@nataf13]. The only exception to this line of reasoning would be if the lens were a black hole. Although the parallax model yields no improvement over the non-parallax model, the non-detection of $\pi_{\rm E}$ can still constrain the mass and distance. In Figure \[fig:six\], we present the $\Delta\chi^2$ map of the MCMC chain in the $\pi_{{\rm E},E}$–$\pi_{{\rm E},N}$ parameter space obtained from the modeling considering both the lens orbital motion and the microlens parallax effect. The 3$\sigma$ upper limit on the microlens parallax is $\pi_{\rm E} \lesssim 0.4$. This gives lower limits on the mass and distance of $M\gtrsim 0.35\ M_\odot$ and $D_{\rm L}\gtrsim 1.7$ kpc. ![ $\Delta\chi^2$ map of the MCMC chain in the $\pi_{{\rm E},E}$–$\pi_{{\rm E},N}$ parameter space. Color coding represents points in the MCMC chain within $1\sigma$ (red), $2\sigma$ (yellow), $3\sigma$ (green), $4\sigma$ (cyan), and $5\sigma$ (blue) of the best fit. The dotted circles represent the boundaries of $\pi_{\rm E}=0.1$, 0.2, 0.3, 0.4, and 0.5. []{data-label="fig:six"}](fig6.eps){width="\columnwidth"} Resolving the Nature of the Planet ================================== The most notable characteristic of OGLE-2016-BLG-0596 is its high mass ratio $q=0.0117\pm 0.0004$, implying that the planet mass is $m_p = 12.2\,M_{\rm jup} (M/M_\odot)$. Hence, if the host is one solar mass, this planet would be just below the deuterium-burning limit (usually regarded as the planet/brown-dwarf boundary). 
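The chain of numerical estimates in the last two sections can be reproduced directly from the quoted values. In this sketch, the source distance $D_S = 8.3\,$kpc adopted for the distance limit is our assumption (any plausible bulge distance gives a similar result):

```python
# Reproduce the physical-parameter chain from the quoted values.
kappa = 8.14               # mas / M_sun (Equation tedef)
theta_star = 0.690e-3      # mas (i.e., 0.690 micro-arcsec)
rho = 6.0e-4               # from the light-curve model
t_E_yr = 81.694 / 365.25   # Einstein timescale in years

theta_E = theta_star / rho           # angular Einstein radius, mas
mu = theta_E / t_E_yr                # relative proper motion, mas/yr
print(round(theta_E, 2), round(mu, 1))     # 1.15 5.1

# Lower limits from the 3-sigma bound pi_E < 0.4 (theta_E = kappa M pi_E):
pi_E_max = 0.4
M_min = theta_E / (kappa * pi_E_max)       # M_sun
pi_rel_max = pi_E_max * theta_E            # mas
pi_S = 1.0 / 8.3                           # mas; ASSUMES D_S = 8.3 kpc
D_L_min = 1.0 / (pi_rel_max + pi_S)        # kpc, since pi_rel = 1/D_L - 1/D_S
print(round(M_min, 2), round(D_L_min, 1))  # 0.35 1.7
```

Note that the parallax bound enters both limits through $\theta_\e$: a larger $\pi_{\rm E}$ would require a lower-mass, closer lens, which is exactly what the non-detection excludes.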
While the host could in principle have arbitrarily low mass (and so, by Equation (\[eqn:mpirel\]), be arbitrarily close), distances closer than $D_L\la 1\,\kpc$ are strongly disfavored by the relatively low proper motion, the parallax constraint, and the paucity of nearby lenses. At this limiting distance, and so $M =\theta_\e^2/\kappa\pi_\rel \sim 0.18\,M_\odot$, the planet would still be $m_p\sim 2\,M_{\rm jup}$, i.e., quite massive for such a low-mass host. Hence, regardless of the host mass, this is a fairly extreme system. To distinguish among these interesting possibilities will require measuring (or strongly constraining) the host mass. This can be accomplished with high resolution imaging, either using the [*Hubble Space Telescope (HST)*]{} or ground-based adaptive optics (AO) imaging on an 8m class telescope. An advantage of [*HST*]{} is that it can observe in the $I$ band for which the source flux is directly measured from the event. Hence, the source light can be most reliably separated from the blended light in $I$. In contrast to many previous cases, there are no $H$ band observations during the event, so ground-based AO observations (which must be in the infrared) cannot be directly compared to an event-derived source flux. Nevertheless, it is probably possible to transform from $V/I$ light-curve measurement to $H_s$ source flux with a precision of 0.2 mag, using a $VIH$ color-color diagram. For definiteness, we will assume that the lens can be reliably detected from [*HST*]{} or AO observations provided that the flux is at least half that of the source, i.e., $I_L<22.3$. For example, if the lens were an $M=0.5\,M_\odot$ early M dwarf (so $D_L\sim 2.2\,\kpc$), then it would have $I_L \sim 18.8 + A_I\leq 21.8$ (and brighter if, as is almost certainly the case, a substantial fraction of the dust is beyond 2.2 kpc). 
Thus, there is a good chance that AO or [*HST*]{} observations could detect the lens, and even if this failed, the observations would strongly constrain the host to be of very low mass. ![ Log-log plot of planet-star mass ratio $q$ versus separation (normalized to $\theta_\e$) $s$ for 44 previously published or submitted planets and OGLE-2016-BLG-0596Lb (green pentagon). Planets are colored by path to detection: detected and characterized by followup observations (blue), detected by survey but characterized by followup (magenta), and detected and characterized by surveys (green). Their shapes indicate the principal caustic feature giving rise to the anomaly: planetary (circles), central (squares), and resonant (triangles). Planets suffering from the close/wide degeneracy are shown by two open symbols, whereas those for which this degeneracy is resolved are shown by a single solid symbol. By this scheme, OGLE-2016-BLG-0596Lb would be a solid green triangle. []{data-label="fig:seven"}](fig7.eps){width="\columnwidth"} Discussion ========== OGLE-2016-BLG-0596Lb is a very high mass-ratio $(q=0.0117)$ planet that lies projected very close to the Einstein ring $(s=1.075)$, which consequently generated a huge resonant caustic that required 16 days for the source to traverse. The underlying event was of quite high magnification $(A_{\rm point-lens}\sim 100)$, which led to pronounced features at peak. It therefore would seem to have been extremely easy to discover. While the data set posted on the OGLE web site is adversely affected by the nearby variable, it is still the case that a free fit to these data leads to a solution qualitatively similar to the one presented here (except that it lacks a measurement of $\rho$). 
It is therefore striking that none of the automated programs or active individual investigators that query this site noticed this event (or at least they did not alert the community to what they found, as they do for a wide range of other events, many of them less interesting). This indicates the possibility that there may be many other planets “hidden in plain sight” in existing data. This is also supported by the planet discoveries MOA-2008-BLG-379Lb [@Suzuki2014], OGLE-2008-BLG-355Lb [@Koshimoto2014], and MOA-2010-BLG-353Lb [@Rattenbury2015], for which the planetary signals were not noticed during the progress of the events. These three characteristics, high magnification (which is usually associated with survey+followup rather than survey-only mode), very high mass ratio, and apparent failure of both machine and by-eye recognition of the planetary perturbation, prompt us to address two questions. First, how do the real (as opposed to theoretical) planet sensitivities differ between survey-only and survey+followup modes? Second, why was this planet discovered only through systematic analysis, and what does this imply about the need for such systematic analysis of all events? Summary of Microlens Planet Detections in the Observational $(s,q)$ Plane ------------------------------------------------------------------------- Many papers contain figures that summarize microlensing planet detections in the physical plane of planet mass versus projected separation (with the latter sometimes normalized by the snow line), e.g., Figure 1 of @mb13605. And there are many studies that show plots of [*planet sensitivity*]{} in the observational $(s,q)$ plane (e.g., @gaudi02 [@gould10]). But to our knowledge, there are no published figures (or even figures shown at conferences) showing the census of microlensing planet discoveries on this plane. ![ Cumulative microlensing planet detections by log mass ratio $\log(q)$, with top normalized and bottom unnormalized. 
Green shows the 18 planets that were detected and characterized by surveys, while magenta shows the 27 planets that required significant followup observations for detection and/or characterization. Black shows the total. The green and magenta curves are not statistically distinguishable. []{data-label="fig:eight"}](fig8.eps){width="\columnwidth"} Figure \[fig:seven\] illustrates the position of OGLE-2016-BLG-0596 (green pentagon) among the 44 previously published planets (or, to the extent we have such knowledge, those submitted for publication). Discovered bodies are defined to be “planets” if their measured or best-estimated mass is $m_p<13\,M_{\rm jup}$ and if they are known to orbit a more massive body[^4]. Planets are color-coded by discovery method: discovered by followup observations (blue), discovered (or discoverable) in survey-only observations but requiring followup for full characterization (magenta), and fully (or essentially fully) characterized by survey observations (green). The shapes of the symbols indicate the type of caustic that gave rise to the planetary perturbation: circles, squares, and triangles for planetary, central, and resonant caustics, respectively. In many cases, solutions with $(q,s)$ and $(q,1/s)$ yield almost equally good fits to the data [@griest98]. In these cases, the two solutions are shown as open symbols in order to diminish their individual visual “weight” relative to the filled symbols used when this degeneracy is broken. Hence, OGLE-2016-BLG-0596Lb would be a green filled triangle if it were not being singled out by rendering it as a larger pentagon. ![ Cumulative microlensing planet detections by absolute value of the log projected separation (normalized to $\theta_\e$) $|\log(s)|$, with top normalized and bottom unnormalized. Colors are the same as in Figure \[fig:eight\]. The gap between the green (survey) and magenta (followup) curves has an 8.5% probability of being random. 
If real, this indicates that followup observations have been relatively more sensitive to planets near the Einstein ring, while surveys are more sensitive to those further from the Einstein ring. []{data-label="fig:nine"}](fig9.eps){width="\columnwidth"} The most striking feature of this figure is that, in sharp contrast to the triangular appearance of high-magnification-event planet-sensitivity plots (e.g., @gould10) and to “double pronged” low-magnification sensitivity plots (e.g., @gaudi02), this detection plot looks basically like a cross, with a vertical band of detections near $\log s\sim 0$ and a horizontal band near $\log(q)\sim -2.5$. The part of this structure at high mass ratio $\log(q)>-2$ is easily explained: companions with high mass ratio are, a priori, most likely stars or brown dwarfs (BDs) and can only be claimed as “planets” if the host mass is known to be low. This in turn usually requires a measurement of the microlens parallax, which for ground-based observations is much more likely if there is a large caustic and so $s\sim 1$. We note that there are 4 planet detections in the region $(\log(s)>+0.15, \log(q)<-3)$, while there are no detections in the opposite quadrant $(\log(s)<-0.15, \log(q)<-3)$. All 4 planets derive from planetary caustics, and 3 of them are pure survey detections: MOA-2011-BLG-028Lb[^5], OGLE-2008-BLG-092Lb, and MOA-2013-BLG-605Lb [@mb11028; @ob08092; @mb13605]. The remaining planet, OGLE-2005-BLG-390Lb [@ob05390], dates from an era when followup groups intensively monitored the wings of events, primarily due to the paucity of better targets. Thus we may expect that surveys will gradually fill in this quadrant. The difference in the detection rates between the quadrants with $\log(s)>+0.15$ and $\log(s)<-0.15$ can be explained by the difference in the size of the planetary caustics for $s<1$ and $s>1$. In the case of $s>1$, there exists a single planetary caustic. 
In the case of $s<1$, on the other hand, there exist two sets of planetary caustics, each of which is smaller than the planetary caustic with $s>1$. As a result, the planetary caustic with $s>1$ has a larger cross section and thus higher sensitivity. Furthermore, the smaller caustic size for planets with $s<1$ means that their planetary signals tend to be heavily affected by finite-source effects, which diminish planetary signals, while signals of planets with $s>1$ can survive and show up in the wings of light curves. Indeed, all 4 events with planet detections via planetary-caustic perturbations involve large source stars, i.e., giants and subgiants, for which finite-source effects are important. ![ Cumulative microlensing planet detections by year of discovery, with top normalized and bottom unnormalized. Colors are the same as in Figure \[fig:eight\]. Followup discoveries (magenta) have dropped off dramatically since 2013. []{data-label="fig:ten"}](fig10.eps){width="\columnwidth"} Apart from this quadrant, it is not obvious that surveys are probing a different part of parameter space from the previously dominant survey+followup mode. To further investigate this, we show in Figures \[fig:eight\] and \[fig:nine\] the cumulative distributions of planets by log mass ratio $\log(q)$ and (absolute value of) log separation $|\log(s)|$. In this case we distinguish only between events that could be fully characterized by survey observations (green) and those that required significant followup (including auto-followup by surveys). These distributions generally appear quite similar. For the mass ratio distribution, the greatest difference (0.259) is at $\log(q)=-2.319$, which is very typical (Kolmogorov-Smirnov (KS) probability 40%). The greatest difference for the separation distribution (0.334 at $|\log(s)|=0.124$) has a KS probability of 8.5%. This may be indicative of a real difference. 
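The survey-versus-followup comparison above rests on the two-sample Kolmogorov-Smirnov statistic: the maximum difference between the two empirical cumulative distributions, converted to a chance probability. As an aside, a minimal pure-Python sketch of this test (the sample values below are synthetic placeholders, not the actual planet parameters):

```python
import math

def ks_2samp(a, b):
    """Two-sample KS test: max CDF difference D and the asymptotic
    probability that a difference this large arises by chance."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n and j < m:  # walk the pooled, sorted samples
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n - j / m))
    en = math.sqrt(n * m / (n + m))
    lam = (en + 0.12 + 0.11 / en) * d  # standard small-sample correction
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (k * lam) ** 2)
                  for k in range(1, 101))
    return d, max(0.0, min(1.0, p))

# Synthetic log(q) samples standing in for the survey-only and
# followup planet distributions (illustrative values only):
survey = [-4.0, -3.6, -3.2, -2.9, -2.7, -2.5, -2.3, -2.1, -1.8]
followup = [-3.8, -3.3, -3.0, -2.8, -2.6, -2.4, -2.2, -2.0, -1.7, -1.5]
d, p = ks_2samp(survey, followup)
print(f"D = {d:.3f}, chance probability = {p:.2f}")
```

A large probability (like the 40% found for the mass-ratio distribution) means the two cumulative curves are statistically indistinguishable; the 8.5% found for $|\log(s)|$ is suggestive but not decisive.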
If so, the difference would be that pure-survey is relatively more efficient at finding widely separated lenses, which was already hinted at by inspection of the $(q,s)$ scatter plot. Finally, in Figure \[fig:ten\], we show cumulative distributions by year of discovery. One might expect that with the massive ramp-up of surveys, survey-only discoveries would move strongly ahead of survey+followup. This expectation is confirmed in its sign but not its magnitude by Figure \[fig:ten\]. It shows that in (2014, 2015, 2016) there have been (2,2,1) and (0,1,0) discoveries by survey-only and survey+followup, respectively. This is certainly not a complete accounting, in part because 2016 has just begun and in part because historically there has been a considerable delay in microlensing planet publications for a variety of reasons. For example, of the 28 planets discovered prior to 2012, the number with delays (publication year minus discovery year) of $(0,1,\ldots,9)$ years was $N=(1,5,9,5,1,2,4,0,0,1)$. In the history of microlensing, there has been only one planet published during the discovery year, OGLE-2005-BLG-071Lb [@ob05071]. Hence, we will only get a full picture of this transition after a few years. Challenges to the By-Eye and By-Machine discovery of OGLE-2016-BLG-0596 ----------------------------------------------------------------------- There are three interrelated reasons why OGLE-2016-BLG-0596 may have escaped notice as a potentially planetary event until the KMTNet data for this event were examined (for reasons unrelated to any apparent anomaly). First, it is relatively faint at peak. Second, it has a variable baseline. Third, it was not announced as a microlensing event until one day after the peak. As a general rule, high-magnification events are singled out for intensive followup observations only if they are still rising. 
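As a side note, the publication-delay histogram quoted above can be reduced to a summary statistic directly; a quick sketch using only the counts stated in the text:

```python
# Publication delays (publication year minus discovery year) for the 28
# planets discovered prior to 2012, as quoted in the text: counts for
# delays of 0, 1, ..., 9 years.
counts = [1, 5, 9, 5, 1, 2, 4, 0, 0, 1]
total = sum(counts)
mean_delay = sum(years * n for years, n in enumerate(counts)) / total
print(total, round(mean_delay, 1))  # -> 28 3.0 (28 planets, ~3.0 yr mean)
```

A mean delay of roughly three years (and a median of two) is why the 2014-2016 tallies above are necessarily incomplete.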
Had such intensive observations been conducted, they would have immediately revealed the anomalous nature of the event, probably triggering additional observations. This is how many of the planets discovered by $\mu$FUN were found. While $\mu$FUN itself is now semi-dormant, its protocols are directly relevant here because what is of interest is whether there is prima facie evidence for a population of missed planets during past years, during most of which $\mu$FUN was active. Now, in fact, OGLE-2016-BLG-0596 had met the criteria for an OGLE alert 24 hours earlier, but no alert was issued out of caution due to the variable baseline. Nevertheless, even if such an alert had been issued, it would not have triggered any followup observations because (due to the anomaly) the event would have appeared to have already peaked at that time. Finally, the variability of the baseline may have influenced modelers and followup groups to discount the evident irregularities in the light curve near peak as being due to data artifacts. This could have been exacerbated by the faintness of the event, which increases both the formal error bars and the probability of centroiding errors (hence irregular photometry) due to bright blends. Both of these effects reduce the confidence of modelers that apparent anomalies in online “quick look” photometry are due to physical effects. It is nevertheless a fact that when the original OGLE data are modeled, they show a clear signal for a massive planet or low-mass BD, which would trigger a re-reduction of the data, such as the one we report here. We therefore conclude that while OGLE-2016-BLG-0596 has some near-unique features that increased the difficulty of recognizing it as a planetary event, such recognition was clearly feasible. Hence, we do indeed regard this event as prima facie evidence for more such events in archival data, particularly OGLE-IV data from 2010-2015. 
The OGLE project has received funding from the National Science Centre, Poland, grant MAESTRO 2014/14/A/ST9/00121 to AU. Work by C.H. was supported by the Creative Research Initiative Program (2009-0081561) of the National Research Foundation of Korea. The OGLE Team thanks Profs. M. Kubiak and G. Pietrzy[ń]{}ski, former members of the OGLE team, for their contribution to the collection of the OGLE photometric data over the past years. WZ and AG were supported by NSF grant AST-1516842. Work by JCY was performed under contract with the California Institute of Technology (Caltech)/Jet Propulsion Laboratory (JPL) funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute. This research has made use of the telescopes of KMTNet, operated by the Korea Astronomy and Space Science Institute (KASI). [99]{} Alard, C. & Lupton, R. H. 1998, , 503, 325 Beaulieu, J.-P., Bennett, D. P., Fouqué, P., et al. 2006, Nature, 439, 437 Bennett, D. P., Bond, I. A., Udalski, A., et al. 2008, , 684, 663 Bennett, D. P., Sumi, T., Bond, I. A., et al. 2012, , 757, 119 Bensby, T., Yee, J. C., Feltzing, S., et al. 2013, , 549A, 147 Bessell, M. S., & Brett, J. M. 1988, , 100, 1134 Bond, I. A., Udalski, A., Jaroszyński, M., et al. 2004, , 606, L155 Gaudi, B. S., Albrow, M. D., An, J., et al. 2002, , 566, 463 Gould, A. & Loeb, A. 1992, , 396, 104 Gould, A., Dong, S., Gaudi, B. S., et al. 2010, , 720, 1073 Griest, K. & Safizadeh, N. 1998, , 500, 37 Hirao, Y., Udalski, A., Sumi, T., et al. 2016, , 824, 139 Kervella, P., Th[é]{}venin, F., Di Folco, E., & S[é]{}gransan, D. 2004, , 426, 297 Kim, S.-L., Lee, C.-U., Park, B.-G., et al. 2016, JKAS, 49, 37 Koshimoto, N., Udalski, A., Sumi, T., et al. 2014, , 788, 128 Nataf, D. M., Gould, A., Fouqué, P., et al. 2013, , 769, 88 Poleski, R., Skowron, J., Udalski, A., et al. 2014a, , 755, 42 Poleski, R., Udalski, A., Dong, S., et al. 2014b, , 782, 47 Rattenbury, N. J., Bennett, D. P., Sumi, T., et al. 2015, , 454, 946 Schechter, P. 
L., Mateo, M., & Saha, A. 1993, , 105, 1342 Shin, I.-G., Ryu, Y.-H., Udalski, A., et al. 2016, JKAS, 49, 73 Shvartzvald, Y., Maoz, D., Kaspi, S., et al. 2014, , 439, 604 Skowron, J., Udalski, A., Poleski, R., et al. 2016, , 820, 4 Skowron, J., Udalski, A., Koz[ł]{}owski, S., et al. 2016, Acta Astron., 66, 1 Sumi, T., Udalski, A., Bennett, D. P., et al. 2016, , in press, arXiv:1512.00134 Suzuki, D., Udalski, A., Sumi, T., et al. 2014, , 780, 123 Udalski, A. 2003, Acta Astron., 53, 291 Udalski, A., Szymanski, M., Kaluzny, J., Kubiak, M., Mateo, M., Krzeminski, W., & Paczyński, B. 1994, Acta Astron., 44, 317 Udalski, A., Jaroszyński, M., Paczyński, B., et al. 2005, , 628, L109 Udalski, A., Szymański, M. K., & Szymański, G. 2015b, Acta Astron., 65, 1 Yee, J. C., Shvartzvald, Y., Gal-Yam, A., et al. 2012, , 755, 102 Yoo, J., DePoy, D. L., Gal-Yam, A., et al. 2004, , 603, 139 Zhu, W., Penny, M., Mao, S., Gould, A., & Gendron, R. 2014, , 788, 73 [^1]: OGLE cadences were significantly adjusted at the time of the peak and planetary anomaly of this event, due to the [*Kepler*]{} K2 Campaign 9 microlensing campaign. The five fields covering the K2 field were observed 3 times per hour, while other fields (including BLG534) were observed somewhat less frequently (at very roughly 2/3 of their usual rates). [^2]: Like OGLE, KMTNet also adjusted its schedule for the K2 campaign, but in a different way. First, CTIO observations were not adjusted. Second, KMTNet only began “K2 mode” on 2016 April 23. This was after the event peak and caustic entrance but before the exit. Therefore, in particular, the caustic exit observations from SAAO were at the lower cadence (reduced by a factor 0.75). [^3]: The DoPHOT data are not corrected for the variation, but this would have little effect on the result. The variable is extraordinarily red, $\sim 1.2$ magnitudes redder than the clump, whereas the source is $\sim 0.2$ magnitudes bluer than the clump. 
Hence, by a naive estimate, the variations would be fractionally smaller by a factor 4. The full amplitude of these variations in $I$ band is of order the source flux, whereas the color measurement is made when the source is magnified 60 to 100 times. The color measurement is differential over short timescales of a few days, whereas the period is a large fraction of a year. Combining these very small factors, we expect the color measurement to be impacted at the level $(1/4)\times (1/80)\times (3/(126/\pi)) \sim 2\times 10^{-4}$. It is general practice to ignore such small errors, which in this case are more than a hundred times smaller than the measurement error. We also note that the dependence of the color measurement on the choice of the $V$-band data set (OGLE or KMTNet) is small considering that the offset from the clump has an accuracy of 0.03 magnitude whereas the precision of the color measurement is 0.05 magnitude. Furthermore, the SAAO $V$-band data are taken for redundancy, primarily in case there are no CTIO data due to bad weather when the event is well magnified, or for very short, highly magnified events that peak over South Africa. [^4]: To facilitate comparison with future compilations, we list here the 45 planets used to construct this figure and those that follow. We compress, e.g., OGLE-2003-BLG-235Lb to OB03235 for compactness and only use “b,c” for multiple planets: OB03235, OB05071, OB05169, OB05390, MB06bin1, OB06109b, OB06109c, MB07192, MB07400, OB07349, OB07368, MB08310, MB08379, OB08092, OB08355, MB09266, MB09319, MB09387, MB10073, MB10328, MB10353, MB10477, MB11028, MB11262, MB11293, MB11322, OB110251, OB110265, OB120026b, OB120026c, OB120358, OB120406, OB120455, OB120563, OB120724, MB13220, MB13605, OB130102, OB130341, OB140124, OB141760, OB150051, OB150954, OB150966, OB160596. [^5]: We note that this event’s light curve does contain some followup data, but it is not essential for characterizing the planet.
--- author: - | $^1$, Henrik Beuther$^2$, Clive Dickinson$^{3}$, Joseph C. Mottram$^{5}$, Pamela Klaassen$^{5,6}$, Adam Ginsburg$^{7}$, Steve Longmore$^{4}$, Anthony Remijan$^{8}$, Karl Menten$^{9}$\ $^1$University of Hertfordshire; $^2$MPIA Heidelberg; $^3$University of Manchester; $^4$Liverpool John Moores University; $^5$Leiden University; $^6$UK Astronomy Technology Centre; $^7$European Southern Observatory; $^8$National Radio Astronomy Observatory; $^9$MPIfR Bonn;\ E-mail: bibliography: - 'thompson\_ska\_chapter.bib' title: 'The ionised, radical and molecular Milky Way: spectroscopic surveys with the SKA' --- Introduction ============ Over the last decade there has been a renaissance in multiwavelength surveys of the Milky Way, exploiting new facilities and instrumentation to conduct wide-area surveys that are over an order of magnitude deeper and at much higher angular resolution than their predecessors. The combination of this wealth of survey data is beginning to revolutionise our understanding of the complex cycle that relates the interstellar medium (ISM) to star formation. Two factors play a part in this process: firstly, the different wavelengths covered by each survey trace (very) different components of the ISM; and secondly, the surveys have close to matching angular resolution ($\sim$10–20[$^{\prime\prime}$]{}) over the bulk of their combined wavelength range. Spectroscopic surveys are a key piece of the puzzle, as they trace the detailed kinematics of the ISM and reveal the complex 3-dimensional structure of the Milky Way, providing the crucial third dimension to continuum surveys. The extra complexity and depth required by spectroscopic surveys means that they are in general one “generation” behind the most recent corresponding continuum surveys. For example, the current state of the art in surveys of the atomic and molecular components of the ISM are the $\sim$ 1[$^{\prime}$]{} resolution surveys of HI and CO (e.g. 
the International Galactic Plane Survey, the Galactic Ring Survey & the FCRAO Outer Galaxy Survey). By the time of the SKA these projects are likely to be superseded by higher resolution and more sensitive HI/CO surveys (with JVLA, ASKAP, MeerKAT, JCMT and possibly CCAT) that will match the resolution of the current far-infrared-millimetre wave continuum surveys. However, there are a number of areas that will *not* be addressed by forthcoming spectroscopic surveys, where the sheer potential of the SKA in mapping faint radio-wavelength lines will play an important role across all the components of the ISM. As one of the principal raisons d’être of the SKA is radio spectroscopy, the compact $\sim$1 km cores of SKA1-MID and SKA1-SUR (leading to the $\sim$ 4 km core of the full SKA) are optimised for brightness temperature sensitivity. Combined with the relatively large FOV of the 15m dishes, this makes the SKA an incredibly powerful facility for wide-area spectroscopic surveys of not just HI but all radio-wavelength lines. Within the wavelength ranges available in SKA Bands 1–5 there are many lines that trace multiple components of the ISM, including thousands of radio recombination lines (RRLs), several lines from light hydride radicals (OH and CH), the $^{3}$He$^{+}$ hyperfine line and the two anomalous absorption lines of o-H$_{2}$CO. These faint radio phenomena, which were all discovered in the 1960s, can be deployed as standard tools in the SKA era to study the ionised, radical and molecular components of the ISM in unprecedented detail. In this chapter we will describe the ISM science that can be achieved with the SKA by mapping (classical) RRLs, hydride radical lines and anomalous absorption formaldehyde lines. These studies have tremendous scope for improving our understanding of a range of processes within the Milky Way, from accretion in massive star formation to stellar feedback into the ISM and the origin of the warm diffuse ionised ISM. 
We refer readers interested in HI, diffuse RRLs and OH masers to the chapters by @mcclure-griffiths2014, @oonk2014 and @etoka2014 respectively. We will outline a series of strawman projects that could be accomplished by the SKA in “early science” mode, showing how the expanding capabilities of SKA1-MID and SKA1-SUR naturally lead to enhanced science capabilities. Finally we briefly dwell upon the potential that a fully frequency-capable SKA ($\nu\le$ 24 GHz) has for large area surveys of NH$_{3}$. Radio Recombination lines: the kinematics of the ionised ISM ============================================================ Radio recombination lines (RRLs) arise from atomic transitions between large principal quantum numbers (typically $n\ge40$), where the small difference between energy levels means that the emitted photons are of radio wavelength. The usual convention to describe the transition $n+m\,\rightarrow\,n$ is $n\alpha$, $n\beta$, $n\gamma$…for $m = 1, 2, 3, \ldots$, hence $\alpha$ transitions (which are the most probable) correspond to a change in principal quantum number of one. Despite the intrinsic faintness of their high quantum number transitions, RRLs are one of the best tracers of astrophysical plasmas due to their well-understood physics [@gordon2002], their immunity to extinction (unlike the Balmer H$\alpha$ or OIII fine structure lines) and the sheer line density of their spectra (many lines of H, He and C are found close together in frequency). From RRL measurements one can determine many physical properties of the ionised gas, e.g. temperature and electron density [@gordon2002; @brocklehurst1972], metallicity [@balser2001], the hardness of the illuminating UV spectrum [@roshi2012], and potentially magnetic field strengths from C RRL linewidths [@roshi2007]. 
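The $n\alpha$, $n\beta$, … rest frequencies follow directly from the Rydberg formula. A minimal sketch (the constant is the standard hydrogen Rydberg frequency; the result can be checked against the classic H110$\alpha$ line near 4.874 GHz):

```python
# Rest frequencies of hydrogen recombination lines from the Rydberg formula,
# nu = R_H * c * (1/n^2 - 1/(n+m)^2), with m = 1, 2, 3 for alpha, beta,
# gamma transitions. R_H_C is the hydrogen Rydberg frequency in Hz.
R_H_C = 3.28805129e15

def rrl_freq_ghz(n, m=1):
    """Rest frequency (GHz) of the H n-alpha line (m=1), n-beta (m=2), etc."""
    return R_H_C * (1.0 / n**2 - 1.0 / (n + m) ** 2) / 1e9

print(f"H110a: {rrl_freq_ghz(110):.3f} GHz")  # -> H110a: 4.874 GHz
```

Note that the frequencies fall rapidly with $n$, which is why thousands of transitions crowd into the low-frequency SKA bands.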
By careful selection of various lines it is also possible to use RRLs to trace the kinematics of ionised gas from the extended low density medium to the densest parts of young compact HII regions and planetary nebulae. The main limitation of current RRL studies is the tradeoff between brightness temperature sensitivity and angular resolution. Single dish studies like SIGGMA on Arecibo [@liu2013] and HIPASS on Parkes [@alves2012] offer mK sensitivity and trace electron densities down to $n_{e}\sim$ 10cm$^{-3}$ but with an angular resolution of several arcminutes. Interferometric surveys like the JVLA THOR survey [@bihr2013], which is mapping a 52[$^{\circ}$]{} long strip of the northern Galactic Plane, reach angular resolutions of 10[$^{\prime\prime}$]{} but are only sensitive to $\sim$ 1 K lines, which corresponds to $n_{e} \stackrel{>}{\sim} 1000$cm$^{-3}$ (and moreover, only for the brightest n$\alpha$ lines). Obviously, single-dish apertures such as FAST are better suited to tracing the truly extended diffuse medium, but the SKA has the unique potential for high resolution studies of the intermediate density gas ($n_{e} \sim 100$ cm$^{-3}$) at the interface between HII regions and the diffuse ionised interstellar medium. This would enable the morphology and kinematics of the HII region boundaries to be explored at comparable resolution to HI & CO observations, and the spectrum of the escaping radiation to be determined via simultaneous detections of H and He RRLs. The sheer sensitivity, broadband feeds and highly capable correlator of the SKA make it an ideal instrument for observing RRLs. The widths of the lines are a few 10s of kms$^{-1}$, so they can be comfortably observed in the continuum mode of the SKA correlator. Coupled with the wide bandwidths and density of RRL spectra, this means that there are literally thousands of RRLs available to be observed in SKA1-MID and SUR Bands 1, 2 & 3 — falling to hundreds of lines in SKA1-MID Band 5. 
Secondly, for detection experiments the well understood physics governing the line frequencies (essentially the Rydberg equation) implies that the lines can easily be stacked to improve the signal-to-noise. As the line separations are fixed, adaptive stacking at different input V$_{\rm lsr}$ can be used to retrieve line detections even if the line of sight velocity of the gas is not initially known. The larger instantaneous bandwidth of SKA1-MID over SKA1-SUR makes SKA1-MID competitive in mapping speed once line stacking is taken into consideration. Indeed, assuming that half the lines in the passband are free from RFI, a line-to-continuum ratio of 0.02 and a channel width of 10 kms$^{-1}$, SKA1-SUR and SKA1-MID have a stacked RRL spectroscopic survey speed for H$n\alpha$ lines that is within a factor 2 of the VLA (not JVLA!) *continuum* survey speed. It will be well within the capability of the Phase 1 SKA to perform spectroscopic RRL surveys that have the same sensitivity to ionised gas as present-day surveys, e.g. the VLA Galactic Plane Survey [@stil2006], achieve in continuum. Thirdly, the broadband feeds imply the simultaneous detection of multiple $n$ and $m$ lines from the same atomic species. This enables detailed radiative transfer models to be created that constrain the effects of stimulated emission and departures from LTE. In addition, multiple lines at different $n$ values can be used to estimate the pressure broadening effects and determine the velocity and density structure of the ionised gas. Finally, the SKA will also observe RRLs from multiple atomic species, principally Helium and Carbon, as these lines are found within 2 MHz of corresponding H RRLs. Helium lines are typically $\sim$8 percent of the brightness of Hydrogen RRLs, but are valuable tracers of both the metallicity of the ionised gas [e.g. @balser2001] and the hardness of the illuminating UV spectrum [@roshi2012]. 
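The stacking described above can be illustrated with a toy model: spectra of $N$ lines, aligned on a common velocity grid using their known rest frequencies, are averaged so that the shared profile is preserved while the noise drops as $1/\sqrt{N}$. A minimal sketch with invented numbers (25 lines, unit per-channel noise, and lines already aligned for simplicity):

```python
import math, random

# Toy RRL stacking demo: all numbers here are invented for illustration.
random.seed(1)
n_chan, n_lines, sigma = 201, 25, 1.0
velocity = [float(k - 100) for k in range(n_chan)]  # km/s grid

def line_profile(v, amp=0.5, width=25.0):
    """Gaussian line profile common to every stacked transition."""
    return amp * math.exp(-0.5 * (v / width) ** 2)

# One noisy spectrum per line, then a straight average over the N lines.
spectra = [[line_profile(v) + random.gauss(0.0, sigma) for v in velocity]
           for _ in range(n_lines)]
stacked = [sum(s[k] for s in spectra) / n_lines for k in range(n_chan)]

# Noise measured in line-free channels (|v| > 80 km/s) of the stack:
off = [stacked[k] for k, v in enumerate(velocity) if abs(v) > 80]
rms = math.sqrt(sum(x * x for x in off) / len(off))
print(f"per-line rms = {sigma}, stacked rms = {rms:.2f}, "
      f"expected ~ {sigma / math.sqrt(n_lines):.2f}")
```

The adaptive part of the real procedure simply repeats this average over a grid of trial V$_{\rm lsr}$ shifts and keeps the shift that maximises the stacked line signal.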
More than 90% of the observed $^{4}$He in the ISM of the Milky Way was produced via primordial nucleosynthesis [@wilson1994]. Observations of He RRLs at high angular resolution are a unique window into this process, allowing the excitation and total abundance to be properly modelled. Carbon RRLs are produced in the PDRs surrounding HII regions and so these lines are a valuable tracer of the physical conditions in this gas, particularly when combined with observations of the 158 $\mu$m CII line [@natta1994]. Their line brightnesses are typically $\sim$30% of H RRLs. The non-thermal linewidths of C RRLs may also allow magnetic field strengths to be measured [@roshi2007]. Anomalous formaldehyde absorption: tracing the volume density of H$_{2}$ via silhouettes on the CMB =================================================================================================== The lowest two transitions of ortho formaldehyde (at 4.8 and 14.4 GHz) have a curious property, in that collisional excitation of these transitions drives an “anti-inversion” which cools the lines with respect to the CMB temperature. The transitions can then absorb CMB photons and appear in absorption against the CMB. This phenomenon is known as anomalous absorption and was first observed by @palmer1969 and theoretically explained by @townes1969. Anomalous formaldehyde absorption is an incredibly useful tool for studying molecular clouds. As the illuminating source is isotropic and fills the Universe, anomalous absorption is distance-independent and wholly dependent on the number of absorbing molecules. Moreover, the ratio of the 4.8 and 14.4 GHz lines is highly sensitive to the H$_{2}$ *volume density* [@mangum2008; @ginsburg2011], making the combination of these transitions an effective molecular densitometer analogous to the molecular thermometer of the 218 GHz para formaldehyde K-doublet. 
This densitometer is unaffected by the sub-thermal excitation, line trapping or optical depth effects that plague CO observations, allowing the H$_{2}$ mass of the absorbing source to be determined within 0.3 dex. So with sufficient sensitivity and angular resolution (to couple the synthesised beam to the absorbing source) one can use anomalous absorption to accurately measure the density and mass of molecular gas clouds from the Milky Way [@ginsburg2011] to local galaxies [@mangum2008] and beyond to the high-redshift Universe [@zeiger2010; @darling2012]. However, these absorption lines are faint and narrow, and thus require long integration times to adequately detect. This means that they cannot currently be used in wide area surveys — unlike the bright CO rotational transitions which have been widely used to map molecular emission in the Milky Way. For example, to detect the absorption lines from Galactic clouds requires a sensitivity of $\sim$0.1 K, which takes at least a full 12-hour track with the VLA [@evans1987]. But with SKA1-MID it will become possible to detect these lines in only 1–2 hours (and minutes with the SKA) making it feasible to survey the Milky Way’s molecular clouds at much better angular resolution than existing single-dish CO surveys, and with none of the excitation or optical depth issues that affect CO surveys. In addition to anomalous absorption, a wide area SKA survey will also observe “non-anomalous” absorption against hundreds to thousands of bright continuum sources within and without the Milky Way (e.g. radio galaxies and HII regions). Many of the compact HII regions will have accurately measured trigonometric parallaxes from associated maser sources [see the chapter by @green2014]. This offers the potential to conduct 3D tomography of the molecular ISM by using a network of hundreds of known illuminating sources with well-characterised heliocentric distances. 
It must be stressed that the current SKA baseline design for SKA1-MID Band 5 does not cover the 14.4 GHz H$_{2}$CO line, but a modest extension to the upper frequency limit of Band 5 from 13.8 GHz to 14.4 GHz would enable the line ratios to be measured and *uniquely* permit molecular gas volume densities to be determined across the Milky Way and beyond. Hydride radicals: thermal OH and CH =================================== Here we discuss thermal emission from the two main hydride radical species in the SKA bands: OH at 1.7 GHz in SKA1-MID band 2 (band 3 in SKA1-SUR) and the 0.7 & 3.3 GHz CH lines in SKA1-MID bands 1 and 4 (band 3 in SKA1-SUR). There are a number of current OH surveys, the single-dish SPLASH in the Southern Hemisphere [@dawson2014] and the interferometric THOR survey in the Northern Hemisphere [@bihr2013], with a planned deeper ASKAP survey (GASKAP: @dickey2013). Thermal emission from OH is often overlooked in favour of the much brighter non-thermal OH masers [see the Chapter by @etoka2014] — nevertheless, as OH has a largely constant abundance across diffuse and translucent molecular clouds, has four closely spaced transitions, and is thermalised even at low densities ($n_{\rm crit}\le 4$ cm$^{-3}$), this molecule is a useful tracer of the temperature and density of the neutral ISM. Thermal OH may even allow the detection of CO-dark gas hinted at by gamma ray emission [e.g. @abdo2010], *Herschel* CII observations [@langer2014] and single-dish OH pencil beam studies [@allen2012]. Again, the SKA will bring the benefit of its brightness temperature sensitivity to the study of thermal OH, reaching an order of magnitude greater sensitivity and spatial resolution than GASKAP. CH was one of the first molecular radicals detected in the interstellar medium via optical absorption spectroscopy, and is a very good tracer of H$_{2}$ column density in UV-dominated regions [@sheffer2008] and the diffuse ISM [@Qin2010]. 
The 0.7 and 3.3 GHz lines available to the SKA have also been postulated to be a sensitive probe of changes in fundamental constants, although, as these lines are subject to non-LTE effects, care must be taken to model them carefully, possibly also using higher frequency data from *Herschel* or SOFIA. The SKA enables wide area and sensitive surveys of CH, and in combination with OH and H$_{2}$CO anomalous absorption allows the entire dynamic range of the molecular ISM to be traced from PDRs to diffuse and dense H$_{2}$ clouds. Potential SKA spectroscopic surveys =================================== In this section we describe potential RRL survey projects that the SKA could carry out, paying particular attention to the science outcomes that are enabled by the different components of the SKA and the possibilities for Early Science. In the following we concentrate on SKA1-MID and -SUR Band 2 and SKA1-MID Band 5, as these offer the greatest potential for studies of the diffuse and dense ionised ISM, hydrides and anomalous formaldehyde absorption, but as mentioned earlier there are many RRLs present in the other SKA bands that could be part of commensal studies. In the following calculations we have assumed the noise and imaging performance values for the Baseline Design of SKA1-MID and SKA1-SUR [@dewdney2013] given in the SKA1 Imaging Science Performance memo [@braun2014]. We have also conservatively assumed that in SKA1-MID and -SUR Band 2 we will be able to stack 25 RRLs using the 1 GHz bandwidth of SKA1-MID and 12 RRLs using the 500 MHz bandwidth of SKA1-SUR. Thus, in a 1 hour integration SKA1-MID is able to reach an rms flux of 106 $\mu$Jy per 10 kms$^{-1}$ channel, which translates into a stacked RRL sensitivity for $\alpha$ lines of 21 $\mu$Jy per 10 kms$^{-1}$ channel. Similarly SKA1-SUR is able to achieve rms fluxes of 430 $\mu$Jy and 124 $\mu$Jy per 10 kms$^{-1}$ channel in unstacked and stacked data respectively. 
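The stacked sensitivities quoted above follow from averaging $N$ statistically independent lines, which improves the rms by $\sqrt{N}$; a quick check of the arithmetic:

```python
import math

# Stacking N independent RRLs with equal weights improves the rms noise
# by sqrt(N): sigma_stacked = sigma_single / sqrt(N).
def stacked_rms(rms_single_ujy, n_lines):
    return rms_single_ujy / math.sqrt(n_lines)

print(round(stacked_rms(106, 25)))  # SKA1-MID: 25 lines, 106 uJy -> 21 uJy
print(round(stacked_rms(430, 12)))  # SKA1-SUR: 12 lines, 430 uJy -> 124 uJy
```

Both values reproduce the per-channel sensitivities quoted in the text.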
Comparing these values to the relative fields-of-view of SKA1-MID and SKA1-SUR, it can be seen that SKA1-MID is highly competitive for RRL mapping over small areas in Band 2 due to its larger instantaneous bandwidth and low SEFD, although SKA1-SUR has a faster mapping speed over areas larger than a single PAF tile. Early Science: the structure and kinematics of the most luminous HII regions and their impact on the ISM -------------------------------------------------------------------------------------------------------- The most luminous HII region complexes in the Milky Way have been identified using a combination of WMAP, Spitzer and MSX data by @murray2010. The 18 most luminous of these complexes are responsible for just over half the total Galactic ionising flux, and it is thought to be the UV photons “leaking” from the HII region complexes that are responsible for the diffuse warm ionised medium [e.g.  @liu2013]. This hypothesis is supported by wide area RRL mapping at $\sim$15[$^{\prime}$]{} resolution [@alves2012] which finds that the distribution of the diffuse medium is correlated with HII regions. However, constraints on the ionising spectrum from He RRLs are not fully consistent with this hypothesis [@roshi2012]. Thus higher resolution observations of H RRLs and more sensitive observations to detect He RRLs are required. A better theoretical understanding of the interplay between HII region boundaries and the diffuse surrounding medium would also complement further observations, particularly at the smaller physical scales available to the SKA. The SKA has the unique potential for deep, high resolution studies of the intermediate density gas ($n_{e} \sim 100$ cm$^{-3}$) at the interface between HII regions and the diffuse medium. 
This would enable the morphology and kinematics of the HII region boundaries to be explored at comparable resolution to HI & CO observations, and the spectrum of the escaping radiation to be determined via simultaneous detections of H and He RRLs. Additionally, as the RRLs in SKA1-MID and -SUR Band 2 do not become appreciably pressure broadened until electron densities reach $n_{e} \simeq 1400$ cm$^{-3}$, it will also be possible to produce velocity-resolved maps of the electron density (strictly $n_{e}^{2}$) and temperature of the ionised gas within all but the densest parts of the giant HII complexes. A detailed picture of the density distribution and kinematics of HII regions is needed to understand their evolution — in particular the relative roles of radiation pressure and stellar winds, which has serious implications for the interpretation of galaxy population synthesis models [@verdolini2013]. To detect H RRL emission from gas with $n_{e}$ of a few 100 cm$^{-3}$ requires a brightness temperature sensitivity of $\sim$ 0.1 K, integrating over a 30 kms$^{-1}$ line and assuming a 1 pc column [@alves2012]. With Early Science SEFDs of 14.2 Jy for SKA1-SUR and 3.4 Jy for SKA1-MID it is possible to achieve this sensitivity over a 30[$^{\prime\prime}$]{} beam in on the order of 50 and 4 hours of integration time respectively. By stacking the He$n\alpha$ and C$n\alpha$ lines it will be possible to achieve a detection to the same (or slightly deeper) gas densities as the (unstacked) Hydrogen lines. By further stacking the Hydrogen lines it is possible to reach densities as low as 50 cm$^{-3}$ in the same integration times. These observations will reveal the full gamut of $\alpha$, $\beta$, $\gamma$ lines at higher densities, probing deeper into the HII regions and allowing finer velocity and spatial resolution (as the RRL brightness temperatures go as $n_{e}^{2}$) to study the distribution and kinematics of the ionised gas on smaller spatial and velocity scales. 
A project to map most of the Murray & Rahman complexes is eminently feasible in SKA Early Science, taking of the order of 200 hours to map the top 10 most luminous complexes (small complexes with MID and larger ones with SUR). These observations can be done “out of the box” at the workhorse SKA1-MID and -SUR Band 2 frequencies without the need to commission spectral zoom modes, effectively demonstrating the potential of the SKA for deeper and wider line surveys.

SKA Phase 1: Galactic Plane surveys in Bands 2 and 5
----------------------------------------------------

With Phase 1 capability it becomes feasible to extend the Early Science Band 2 studies into a much deeper full Galactic Plane Survey for recombination lines. A Band 2 RRL and thermal OH survey of the Galactic Plane can be carried out commensally with the HI survey described in the chapter by @mcclure-griffiths2014, rendering simultaneous maps of the atomic, ionised and molecular radical ISM in breathtaking detail. If we assume a survey of adjoining 4[$^{\circ}$]{} wide SKA1-SUR tiles along the Galactic Plane and a nominal dwell time of 50 hours per tile [see @mcclure-griffiths2014], then a survey of the full Galactic Plane visible to the SKA would take on the order of 2000 hours. With 50 hours per tile and an SEFD of 7.1 Jy for SKA1-SUR it would be possible to reach electron densities of $\sim$70 cm$^{-3}$ for H lines without line stacking and 40 cm$^{-3}$ with line stacking. It is worth noting that the stacked RRL spectral datacube would allow H RRL emission to be detected at twice the continuum depth of the 1.4 GHz International Galactic Plane Survey (assuming a line-to-continuum ratio of 0.02). Such a survey would allow the creation of a detailed position-velocity map of the ionised ISM, separating the synchrotron contribution from that of the free-free emission.
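The survey-time bookkeeping above is simple to make explicit. A small sketch using the quoted numbers follows; the implied $\sim$160 degrees of longitude coverage is an inference from those numbers, not a figure stated in the text.

```python
tile_width_deg = 4.0   # SKA1-SUR tile width along the Plane (from the text)
dwell_h = 50.0         # nominal dwell time per tile (from the text)
total_h = 2000.0       # quoted total survey time

n_tiles = total_h / dwell_h                # tiles along the Plane
coverage_deg = n_tiles * tile_width_deg    # longitude strip covered
# 40 tiles -> roughly 160 degrees of Galactic longitude
```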
The RRL Galactic Plane survey will result in two main products: a continuum map from line-free channels (which will have approximately 1.5 $\mu$Jy rms, assuming a 300 MHz continuum bandwidth) and RRL spectral datacubes for individual and stacked lines. By combining these two maps the synchrotron contribution to the continuum can be determined [e.g. @alves2012], resulting in a position-velocity map of the free-free emission that matches the resolution of atomic, molecular and dust surveys. In addition to HII regions, many Planetary Nebulae (PNe) would be revealed. PNe have been mainly overlooked in existing RRL surveys due to the limited angular resolution and resulting beam dilution. RRL observations offer a novel means of determining the distance to PNe via the determination of their expansion rate [e.g.  @gomez1989; @gulyaev2003]. Pressure broadening of the observed lines is one of the main reasons why this approach has not been more commonly used; multiple RRLs observed simultaneously will be the key to disentangling pressure broadening effects from the lines. The main aim of a Band 5 SKA1-MID survey would be to map the molecular ISM of the Milky Way using anomalous formaldehyde absorption but, as in the Band 2 survey described above, significant additional benefits can also be obtained simultaneously. The 2$\times$2.5 GHz bandwidth in Band 5 allows the simultaneous detection of 75 H$n\alpha$, He$n\alpha$ and C$n\alpha$ RRLs. These lines are not appreciably pressure broadened until $n_{e}\sim 10^{5}$ cm$^{-3}$ and so they can be used to study the kinematics of compact HII regions and young dense PNe. As yet unexplained highly turbulent motions are observed towards a number of compact and ultracompact HII regions [@keto2008], which may be due to trapping of ionised accretion flows [@galvan-madrid2011] or ionised outflows [@klaassen2013]. The $^{3}$He$^{+}$ hyperfine line also lies within Band 5 at a rest frequency of 8.7 GHz.
A large survey of HII regions and PNe in this line would enable constraints to be placed on the primordial abundance of $^{3}$He via Big Bang nucleosynthesis [@bania2007] and perhaps provide a solution to the “$^{3}$He problem” [@guzman-ramirez2013]. Moreover, a survey to the depth required to detect anomalous absorption would also result in a very deep 5 GHz continuum survey ($\sim$ 0.4 $\mu$Jy), which would revolutionise the study of radio stars in the Milky Way [see the chapter by @umana2014]. The necessary sensitivity for an anomalous absorption survey is $\sim$0.1–0.2 K rms over a channel width of 0.1 km s$^{-1}$ (required to detect the narrow H$_{2}$CO lines). With SKA1-MID it is possible to reach this sensitivity over a 15[$^{\prime\prime}$]{} beam in on the order of 1 hour of integration time, which implies a roughly 100 deg$^{2}$ survey could be achieved in 1000 hours at 4.8 GHz. To enable the H$_{2}$ volume density to be determined requires further observations of the 14.4 GHz line, and the most efficient way to achieve this would be targeted followups of regions where 4.8 GHz absorption is observed. These observations would also map out the high frequency RRL emission around these regions. These followups would take on the order of 500 hours to complete, leading to a total survey time of $\sim$1500 hours. Without the followup of the 14.4 GHz H$_{2}$CO line, the 4.8 GHz observations can only constrain the H$_{2}$CO column density rather than the H$_{2}$ volume density, hence the extension of SKA1-MID Band 5 to higher frequency is crucial to add this important capability. This project would lead to the most comprehensive map of the molecular gas in the southern Milky Way to date, with uniform sensitivity to gas from volume densities $n(\rm{H}_{2})$ of 10$^{2.5}$–10$^{6}$ cm$^{-3}$ and accuracy of $\sim$0.3 dex.
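The counts of simultaneously observable RRLs quoted in this section follow from the Rydberg formula for hydrogen-like atoms. A sketch is given below; the 4.6–13.8 GHz window used here is the full nominal Band 5 span, an assumption on our part, whereas the instantaneous 2$\times$2.5 GHz coverage gives the smaller count quoted in the text.

```python
R_H_C = 3.28805e15  # Rydberg frequency for hydrogen, R_H * c, in Hz

def h_rrl_freq_hz(n, dn=1):
    """Rest frequency of the hydrogen (n + dn) -> n recombination line
    (dn = 1, 2, 3 for alpha, beta, gamma lines)."""
    return R_H_C * (1.0 / n ** 2 - 1.0 / (n + dn) ** 2)

# H n-alpha lines falling in an assumed 4.6-13.8 GHz Band 5 window
alphas = [n for n in range(40, 300) if 4.6e9 <= h_rrl_freq_hz(n) <= 13.8e9]
```

The same formula places H166$\alpha$, the classic Band 2 line, at $\approx$1424.7 MHz; the corresponding He and C lines sit slightly above each H line because of the reduced-mass shift of the Rydberg constant.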
The full SKA
------------

With the deployment of the full SKA it will become possible to conduct wide-area maps of RRLs from ionised gas at densities below 50 cm$^{-3}$, routinely detecting $n\alpha$, $n\beta$ and $n\gamma$ transitions from H, He and C. The deployment of frequency bands up to 25 GHz will permit lines up to H64$\alpha$ to be observed, extending the gas densities that can be traced up to 10$^{6}$ cm$^{-3}$ and allowing the study of the bulk of ionised jets and winds from low mass and high mass stars [@hoare2004]. There is particular synergy here with ALMA studies of higher frequency millimetre-wave recombination lines and SKA2 observations of 10–25 GHz lines. Millimetre wave RRLs are more subject to non-LTE effects than microwave lines and the comparison of their velocity resolved spectra can reveal inflow and outflow within optically thick ultra and hyper-compact HII regions [@peters2012]. Only the SKA will have the surface brightness sensitivity to match ALMA high resolution observations. In addition, the 24 GHz inversion transitions of ammonia (including both $^{14}$NH$_{3}$ and $^{15}$NH$_{3}$ isotopologues) will also become available. Using the antennas from the 4 km compact core would permit a deep Galactic Plane survey of ammonia to be carried out at $\sim$2[$^{\prime\prime}$]{} resolution, i.e. comparable to ALMA Bands 6/7 in compact configuration. Ammonia is a molecular thermometer *par excellence*, and there are tremendous synergies between ALMA molecular observations and the accurate kinetic temperatures that would result from wide area SKA surveys. The sensitivity of the full SKA implies that to detect NH$_{3}$ lines of a few tenths of a Kelvin in brightness temperature would take on the order of five minutes per pointing.
Although the primary beam at 24 GHz is only 0.08 deg$^{2}$, such a short integration time implies a survey speed of $\sim$ 1 deg$^{2}$ hr$^{-1}$, making it feasible to carry out a 2[$^{\prime\prime}$]{} resolution survey of the Galactic Plane in only a few hundred hours.
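The ammonia mapping-speed figure just quoted is simple arithmetic on the numbers in the text; the extrapolation to a full Plane survey assumes contiguous pointings with no overlap or overheads.

```python
beam_area_deg2 = 0.08   # primary beam solid angle at 24 GHz (from the text)
t_point_h = 5.0 / 60.0  # ~five minutes per pointing (from the text)

speed_deg2_per_h = beam_area_deg2 / t_point_h
# ~0.96 deg^2/hr, i.e. the quoted ~1 deg^2/hr: a few hundred square
# degrees of Galactic Plane in a few hundred hours
```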
---
author:
- 'Pinlei Lu\*'
- 'Tzu-Chiao Chien'
- Xi Cao
- Olivia Lanes
- Chao Zhou
- 'Saeed Khan\*, Hakan E. Türeci'
- 'Michael J. Hatridge'
bibliography:
- 'refs.bib'
title: 'Nearly quantum-limited Josephson-junction Frequency Comb synthesizer'
---

**Coherently-driven Kerr microresonators have rapidly emerged as the leading platform for frequency comb generation in the optical domain [@Haye2007; @Haye2008; @Levy2010; @herr_universal_2012; @Herr2013; @kippenberg_dissipative_2018]. These highly multimode devices generate stable broadband combs that have found varied applications, from spectroscopy [@suh_microresonator_2016; @picque_frequency_2019; @stern_direct_2020] and metrology [@papp_microresonator_2014] to ultrashort pulse generation [@saha_modelocking_2013] and cluster state formation for continuous variable quantum information [@kues_quantum_2019]. However, optical microresonators generally possess weak Kerr coefficients [@gaeta_photonic-chip-based_2019]; consequently, triggering comb generation requires millions of photons to be circulating inside the cavity [@kues_quantum_2019], thus suppressing the role of quantum fluctuations in the comb’s dynamics [@newbury_noise_2007]. In this paper, we realize a version of coherently-driven Kerr-mediated microwave frequency combs based on a recent theoretical proposal [@Saeed2018], where the quantum vacuum’s fluctuations are the primary limitation on comb coherence. Our minimal realization within the circuit QED (cQED) architecture [@Blais2004; @Wallraff2004; @Haroche2020; @Blais2020; @Clerk2020; @Girvin2009; @GirvinNote; @Blais2007] consists of just two coupled modes, of which only one possesses a Kerr nonlinearity furnished by Josephson junctions, as shown in Fig. \[fig:schematic\]. We achieve a comb phase coherence of up to 35 $\mu$s, of the same order as most superconducting qubits and approaching the theoretical device quantum limit of 55 $\mu$s.
This is vastly longer than the modes’ inherent lifetimes of tens of nanoseconds. The ability within cQED to engineer stronger nonlinearities [@pappas_frequency_2014] than optical microresonators, together with operation at cryogenic temperatures, and the excellent agreement of comb dynamics with quantum theory indicates a promising platform for the study of complex quantum nonlinear dynamics.** Although our device is based on familiar cQED components, it operates in a distinct regime within the cQED landscape: while it resembles strongly-coupled transmon-cavity systems [@Koch2007], its nonlinearity is in fact weaker and it is operated under much stronger driving. On the other hand, the device exhibits stronger couplings yet smaller detunings and weaker drives than Kerr-mediated bifurcation [@siddiqi_rf-driven_2004; @Vijay2009] and parametric amplifiers [@Eichler2014; @Kamal2012; @roy_introduction_2016]. This allows us to explore quantum dynamics in a novel unstable regime, where a single frequency drive tone incident on the system generates coherent frequency combs over a large parameter space. Combs generated by this remarkably simple device stand in interesting contrast to their multimode resonator counterparts: while their spectral bandwidth is limited, the comb spacing is not entirely restricted by the underlying normal mode resonances, and the required few pW operating power corresponds to thousands of circulating photons instead of millions [@kippenberg_dissipative_2018; @stern_battery-operated_2018]. Most importantly, our implementation realizes a truly quantum device, allowing us to observe how the quantum nature of the nonlinearity impacts the very combs it generates. In particular, amplified quantum fluctuations [@lax_quantum_1966] place a fundamental limit on frequency comb linewidths, or equivalently on their phase coherence time [@haus_noise_1993; @newbury_noise_2007].
A microscopic nonlinear quantum theory of our two-mode device, in addition to providing precise operating parameters for the comb-generating regime, enables us to quantify this quantum limit on phase coherence. By also characterizing and explaining the dependence of coherence on operating parameters like detuning and drive power, we provide a detailed quantitative study of the phase coherence of frequency combs near the quantum limit. This work points towards a highly engineerable platform both for fundamental studies of complex nonlinear dynamics in the quantum regime, as well as for generating coherent, broadband microwave light sources. The Hamiltonian of our device consists of a linear mode $\hat{a}$ with uncoupled resonant frequency $\omega_a$, linearly coupled with strength ${{\fontfamily{ptm}\selectfont \textit{g}} }$ to a nonlinear mode $\hat{b}$ with uncoupled resonant frequency $\omega_b$; see Fig. \[fig:schematic\](a). The linear mode is driven by a coherent tone with frequency $\omega_d$ and amplitude $\eta$, and the system Hamiltonian in the frame rotating with this drive takes the form: $$\begin{split} \hat{\mathcal{H}}/\hbar = &\ -\Delta_{da} {\hat{a}^\dagger\hat{a}}-\Delta_{db} {\hat{b}^\dagger\hat{b}}- \frac{\Lambda}{2} \hat{b}^{\dagger}\hat{b}^{\dagger}\hat{b}\hat{b} \\ & + {{\fontfamily{ptm}\selectfont \textit{g}} }({\hat{a}^\dagger\hat{b} + \hat{a}\hat{b}^\dagger}) + \eta (\hat{a} + \hat{a}^\dagger) \end{split}$$ where $\Delta_{da/db} = \omega_d - \omega_{a/b}$ and $\Lambda > 0$ is the strength of the Kerr nonlinearity. In our experiment (Fig. \[fig:schematic\](c)), the nonlinear mode is realized as a Superconducting QUantum Interference Device (SQUID)[@squidHandBook] array (Device A: 25 SQUIDs; Device B: 5 SQUIDs). The SQUIDs act together as a flux-tunable, nonlinear inductor, which is shunted with a planar interdigitated capacitor/antenna to form a nonlinear microwave mode. 
Weakly asymmetric SQUIDs (with critical current ratio of 1.2:1) are used to build up the array, alleviating otherwise large hysteresis effects at the cost of a reduction in tunability of the nonlinear mode frequency[@hutchings_tunable_2017]. The device is deposited on a sapphire substrate and capacitively coupled to a single linear mode of a 3-D copper cavity [@paik_transmon_2011]. This driven-dissipative system is then described by the master equation: $\dot{\hat{\rho}} = -i[\hat{\mathcal{H}},\hat{\rho}] + \kappa\mathcal{D}[\hat{a}]\hat{\rho} + \gamma\mathcal{D}[\hat{b}]\hat{\rho} + \gamma_{\varphi}\mathcal{D}[\hat{b}^{\dagger}\hat{b}]\hat{\rho}$, which includes linear damping rates $\kappa$ ($\gamma$) for modes $\hat{a}$ ($\hat{b}$), and pure dephasing ($\gamma_{\varphi}$) for the flux-tunable nonlinear mode; thermal fluctuations are neglected. By sweeping the flux through the SQUIDs to tune the nonlinear mode frequency, and making a measurement of the reflection coefficient $|S_{11}(\omega)|$, we extract (Fig. \[fig:schematic\](c)) a coupling strength of ${{\fontfamily{ptm}\selectfont \textit{g}} }/2\pi = 87.6956$ [$\text{MHz}$]{} between the modes, and linear mode damping rate $\kappa/2\pi = 10.9308$ [$\text{MHz}$]{}. Via pump-probe measurements [@SI] we also extract a Kerr nonlinearity of $\Lambda/2\pi = 5.96~$kHz, such that $\Lambda/\kappa \sim 10^{-3}$, stronger than typical values of $\sim 10^{-5}$ for optical microresonators [@gaeta_photonic-chip-based_2019; @SI]. Analysis of this system in Ref. [@Saeed2018] showed that the linear mode effectively equips the nonlinear mode with a delayed self-interaction (see Fig. \[fig:schematic\](a)), whose influence is dictated by the coupling ${{\fontfamily{ptm}\selectfont \textit{g}} }$ and the linear mode susceptibility $\chi_a = (-i\Delta_{da}+\frac{\kappa}{2})^{-1}$. 
Under suitable coupling, drive, and detuning conditions, this two-mode system can go beyond typical bifurcation dynamics associated with Kerr nonlinear devices to exhibit frequency comb formation. This is clearly seen in the classical phase diagram as a function of drive detunings $\Delta_{da}, \Delta_{db}$, calculated here for measured Device A parameters (Fig. \[fig:varyingWQ\](a)). For large $|\Delta_{da}|$ (small $|\chi_a|$) relative to ${{\fontfamily{ptm}\selectfont \textit{g}} }$, the effective coupling ${{\fontfamily{ptm}\selectfont \textit{g}} }|\chi_a|$ is weak. Then, the mediated interaction may be treated within a Markov approximation, which leads to dynamics reminiscent of the standard Kerr bistability: the system admits phases with either one (blank) or three (hatched) fixed points, of which at least one is always stable [@Saeed2018]. However, for intermediate $|\Delta_{da}|$ such that ${{\fontfamily{ptm}\selectfont \textit{g}} }|\chi_a| \gtrsim 1$ (on resonance, we require ${{\fontfamily{ptm}\selectfont \textit{g}} }> \kappa/2$, comfortably satisfied by Device A), the non-Markovian nature of the interaction manifests in a qualitative change of the nonlinear mode’s stability, marked by regions (shaded red) where no stable fixed points exist. Here, classical Lyapunov analysis [@SI] reveals the possibility of our device exhibiting stable limit cycles with period $T = \frac{2\pi}{\Delta}$ and comb-like frequency spectra with spacing $\Delta$, and even chaotic dynamics deeper into the unstable regime. To observe the response of our quantum device in this rich dynamical regime, we enter the unstable region along the green arrow in Fig. \[fig:varyingWQ\] (a), by fixing the drive frequency so that $\Delta_{da}/2\pi = -47.8$ [$\text{MHz}$]{}, and flux tuning the nonlinear mode frequency. 
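Away from the Markov limit these classical dynamics are easy to explore numerically: integrating the mean-field equations of motion for the two mode amplitudes (Eqs. (\[eq:alphaEq\])–(\[eq:betaEq\]) of the Supplementary Material) shows whether a given parameter point settles to a fixed point or a limit cycle. Below is a minimal fixed-step RK4 sketch in units of $\kappa = 1$, with purely illustrative parameters rather than the measured device values.

```python
# mean-field equations of the driven two-mode Kerr system, kappa = 1 units
g, gamma_t = 8.0, 0.1                    # coupling; gamma + gamma_phi
d_da, d_db = -4.0, 2.0                   # drive detunings from each mode
lam_k, eta = 0.01, 20.0                  # Kerr strength and drive amplitude
# NOTE: illustrative values only, not the measured Device A/B parameters

def rhs(a, b):
    da = (1j * d_da - 0.5) * a - 1j * g * b - 1j * eta
    db = ((1j * d_db - gamma_t / 2) * b
          + 1j * lam_k * abs(b) ** 2 * b - 1j * g * a)
    return da, db

def integrate(steps=50_000, dt=2e-3):
    """Fixed-step RK4; returns |beta(t)| so a limit cycle shows up as a
    persistent oscillation rather than settling to a constant value."""
    a = b = 0j
    beta_mag = []
    for _ in range(steps):
        k1 = rhs(a, b)
        k2 = rhs(a + dt / 2 * k1[0], b + dt / 2 * k1[1])
        k3 = rhs(a + dt / 2 * k2[0], b + dt / 2 * k2[1])
        k4 = rhs(a + dt * k3[0], b + dt * k3[1])
        a += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        b += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        beta_mag.append(abs(b))
    return beta_mag

mags = integrate()
```

Sweeping the detunings with such an integrator and Fourier transforming the late-time $\beta(t)$ reconstructs the fixed-point versus comb regions of the classical phase diagram.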
In search of the frequency domain signature of comb formation, we measure the frequency response in drive-$\Delta_{db}$ parameter space using a spectrum analyzer, with typical results at fixed $\Delta_{db}$ shown in Fig. \[fig:varyingWQ\](b). At low powers (1), the system exhibits a single frequency response at the drive frequency, corresponding to the stable fixed point. However, as the power is increased, a multifrequency spectrum emerges with equidistant peaks (2 and 3). The spacings $\Delta$ extracted from these power spectra are used to construct the experimental phase diagram in Fig. \[fig:varyingWQ\](c), with the theoretical result over the same parameter space provided for comparison. We find remarkable agreement between theory and experiment; only a single fitting offset is used to account for scaling factors along the drive power axis. Power spectrum measurements provide a key signature of comb formation but are insensitive to the nontrivial phase dynamics of these complex nonlinear solutions. While the central comb peak has a definite phase set by the incident coherent tone, the relative phase $\theta(t)$ of generated comb sidebands is free to diffuse [@ablowitz_noise-induced_2006; @navarrete-benlloch_general_2017]. This diffusion sets the comb linewidth and thus provides the ultimate limit to any precision measurements made using the comb in question [@coluccelli_frequency-noise_2015]. To quantify the phase coherence, we first obtain the time-domain cavity output $I(t)$, using a single side band (SSB) mixer to downconvert the dominant sideband peak to around the 100 MHz regime, followed by homodyne detection via a 500 MSample/s digitizer to demodulate the output signal. 
We then calculate the *steady-state* first-order temporal coherence function $G^{(1)}(\tau)$, defined as [@da_silva_schemes_2010]: $$\begin{aligned} G^{(1)}(\tau) = \lim_{t\to\infty}\frac{{\langle I(t)I(t+\tau) \rangle} - {\langle I(t) \rangle}^2}{{\langle I(t)^2 \rangle}-{\langle I(t) \rangle}^2} \label{eq:g1}\end{aligned}$$ This normalized coherence function decays from its maximum value of unity (at $\tau = 0$) towards $G^{(1)}(\tau) = 0$ over a time scale $T_{\rm coh}$ determined by the loss mechanisms affecting the system dynamics. We measure $G^{(1)}(\tau)$ in the parameter space explored in Fig. \[fig:varyingWQ\](c), and extract $T_{\rm coh}$ as the decay constant of the observed function envelopes; the results are plotted in Fig. \[fig:coherence\](a). Focusing in particular on the indicated cross-section at $\Delta_{db}/2\pi = 25.2~$MHz, we plot the measured $G^{(1)}(\tau)$ functions at positions $\{1,2,3\}$ in the top panel of Fig. \[fig:coherence\](c). Outside the comb regime (1), $G^{(1)}(\tau)$ decays on a timescale of $\sim 13~$ns, set by the fastest decay rate, namely the bare cavity loss $\kappa$. However, a qualitative change is observed in $G^{(1)}(\tau)$ when the system transitions into the comb regime (2), with a sharp increase in coherence time to a maximum of $36.7~\mu$s, significantly longer than the timescale set by $\kappa$. This observation, together with the decrease in $T_{\rm coh}$ with increasing drive power (3), highlights a key feature of the self-oscillating regime: the intrinsic energy loss of the system is overcome and coherence is therefore no longer determined by the bare energy loss rates. This naturally raises the question: what limits the observed phase coherence? The answer lies in the full quantum description of the strongly-driven, weakly nonlinear two-mode system. 
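Eq. (\[eq:g1\]) can be estimated directly from the digitised record. Below is a minimal numpy sketch of such an estimator, sanity-checked on a synthetic noiseless tone (illustrative data, not the experimental record):

```python
import numpy as np

def g1(record, max_lag):
    """Estimate the normalized first-order coherence of Eq. (eq:g1) from a
    real-valued sampled record, for lags of 0..max_lag-1 samples."""
    x = np.asarray(record, dtype=float)
    mean, var = x.mean(), x.var()
    n = len(x)
    return np.array([
        (np.mean(x[:n - k] * x[k:]) - mean ** 2) / var
        for k in range(max_lag)
    ])

# sanity check: a noiseless tone is perfectly coherent at multiples of its period
tone = np.cos(2 * np.pi * np.arange(20_000) / 100.0)  # period: 100 samples
coh = g1(tone, 500)
```

In the experiment, $T_{\rm coh}$ is extracted as the decay constant of the envelope of exactly this kind of estimate, applied to the downconverted sideband record.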
In this regime, we employ a phase-space approach based on the Positive-$P$ representation [@Saeed2018; @carmichael_statistical_2002], obtaining a set of stochastic differential equations (SDEs) for phase space variables $\vec{\zeta} = (\alpha,\alpha^{\dagger},\beta,\beta^{\dagger})^T$ associated with operators $(\hat{a},\hat{a}^{\dagger},\hat{b},\hat{b}^{\dagger})^T$. The SDEs take the general form: $$\begin{aligned} d\vec{\zeta}(t) = \vec{A}_{\rm c}(\vec{\zeta})~dt + \mathbf{B}_{\rm st}(\vec{\zeta},\Lambda,\gamma_{\varphi}) d\vec{W}(t) \label{eq:sdes}\end{aligned}$$ The deterministic contribution ($\propto \vec{A}_{\rm c}$) describes noise-free classical dynamics of the two-mode system, which yields perfectly coherent combs. The remaining stochastic terms $\propto d\vec{W}(t)$ (vector of independent Wiener increments) then describe deviations from classical dynamics, here including fluctuations due to the quantum nonlinearity $\Lambda$ and pure dephasing $\gamma_{\varphi}$. These fluctuations are ultimately responsible for phase diffusion that limits comb coherence. The stochastic terms take the explicit form $\mathbf{B}_{\rm st}(\vec{\zeta},\Lambda,\gamma_{\varphi})d\vec{W}(t) = \sqrt{\Gamma} \mathbf{B}_1(\vec{\zeta}) d\vec{W}_1(t) + \sqrt{\gamma_{\varphi}} \mathbf{B}_2(\vec{\zeta}) d\vec{W}_2(t)$, where $\Gamma = \sqrt{\Lambda^2+\gamma_{\varphi}^2}$. Crucially, we note that even in the absence of pure dephasing, $\gamma_{\varphi} \to 0$, the stochastic terms do not vanish: a contribution due to the intrinsic nonlinearity of the system always remains, setting a fundamental limit on comb coherence. This is verified by simulating Eqs. (\[eq:sdes\]) for $\gamma_{\varphi} = 0$ and the experimentally measured nonlinearity of $\Lambda/2\pi = 5.96~$kHz, and obtaining $T_{\rm coh}$; the results are shown by the blue curve in Fig. \[fig:coherence\](b), with the blue shaded region being a 95% confidence bound accounting for uncertainty in $\Lambda$. 
The maximum $T_{\rm coh}$ is thus limited to around 55 $\mu$s by amplified quantum fluctuations due to the device nonlinearity alone. This of course exceeds the maximum observed $T_{\rm coh}$ since $\gamma_{\varphi} \neq 0$. For $\gamma_{\varphi}/2\pi \simeq 2.0~$kHz (orange) we find good agreement with experiment (gray); simulated $G^{(1)}(\tau)$ at positions $\{1,2,3\}$ are shown (Fig. \[fig:coherence\] c, black) for comparison. The relatively small $\gamma_{\varphi}$ is not unexpected given both the narrow modulation range of the asymmetric SQUID array [@hutchings_tunable_2017] and operation at $\Phi \lesssim 0.12$, close to the flux noise sweet spot (see Fig. \[fig:schematic\] (a)). Since $\Lambda$ cannot be varied $\textit{in-situ}$ while holding other parameters fixed, we confirm its influence on $T_{\rm coh}$ by employing Device B; this 5-SQUID device is engineered to have the same total inductance as Device A, while possessing a 25-fold stronger nonlinearity [@Eichler2014] of $\Lambda/2\pi = 152.6~$kHz. While we obtain similar multifrequency behaviour (full results in SI [@SI]), coherence times for this device are much shorter, $T_{\rm coh} \lesssim 1.5~\mu$s (see Fig. \[fig:coherence\](c) for measured and simulated $G^{(1)}(\tau)$ at typical operating parameters). Although Device B is operated away from the flux-noise sweet spot [@SI], and thus experiences a larger estimated $\gamma_{\varphi}/2\pi \simeq 20.0~$kHz, we find that its much stronger nonlinearity is dominant in limiting comb coherence. To confirm this, we study the relative dependence of $T_{\rm coh}$ on $\Lambda$ and $\gamma_{\varphi}$ numerically, by simulating $T_{\rm coh}$ at fixed positions on the phase diagrams of both devices, while varying $\Lambda$. The results are plotted in Fig. \[fig:coherence\](d), in purple (green) for Device A (Device B) parameters, with the experimental result indicated by the square (diamond). 
They are well described by fits to $T_{\rm coh} = a(\gamma_{\varphi} + b\Lambda)^{-1}$ (curves); we find $b =({\rm A\!:\ }0.40,{\rm B\!:\ }0.55) \neq 1$, consistent with $\Lambda$ and $\gamma_{\varphi}$-contributions to dephasing originating from different stochastic terms in Eqs. (\[eq:sdes\]). More importantly, both devices clearly operate in the regime where $b \Lambda \gtrsim \gamma_{\varphi}$, and thus $T_{\rm coh}$ is predominantly set by the nonlinearity. However, as observed in Fig. \[fig:coherence\](a), $T_{\rm coh}$ also depends nontrivially on *operating* parameters (e.g. drive power, detuning), even if $\Lambda$, $\gamma_{\varphi}$ are held fixed. This dependence is intimately related to the nature of the dynamical comb regime, where the system traverses a periodic trajectory in phase space. The shape of this trajectory, which changes with operating parameters, controls its susceptibility to noise, as well as the noise itself when the latter is *multiplicative* (dependent on $\vec{\zeta}(t)$, as $\mathbf{B}_{\rm st}$ is). This connection can be made precise via a linearized Floquet analysis [@demir_phase_2000; @navarrete-benlloch_general_2017] of the SDEs around the *classical* limit cycle trajectory $\vec{\zeta}_{\rm c}(t)$. In this weak-fluctuations approach [@SI], the phase $\theta(t)$ of the limit cycle solution is governed by the SDE: $r_{\rm eff} \dot{\theta} = n(t)$. Here $r_{\rm eff}$ is the effective limit cycle radius, defined via $r_{\rm eff}\Delta = \sqrt{\frac{1}{T}\int_0^T dt~||\vec{v}(t)||^2}$ where $\vec{v}(t) = \dot{\vec{\zeta}}_{\rm c}(t)$ is the tangential velocity of limit cycle traversal. Secondly, $n(t)$ is the projection of stochastic terms $\mathbf{B}_{\rm st}(\vec{\zeta}_{\rm c}(t)) d\vec{W}$ onto the limit cycle trajectory. 
Noise projected onto the limit cycle therefore provides an impulse that causes $\theta(t)$ to diffuse, while $r_{\rm eff}$ provides an inertial term: the larger the radius, the more $\theta(t)$ resists diffusion. We plot the average projected noise standard deviation, $\delta n = \sqrt{\frac{1}{T}\int_0^T dt~{\langle n(t)^2 \rangle}}$ and the effective limit cycle radius $r_{\rm eff}$ along the indicated cross-section of Fig. \[fig:coherence\](a), scaled by their values at the threshold of comb formation. The limit cycle radius (blue) decreases with increasing power; this is also seen experimentally in $I$-$Q$ traces (top panel), positions 2 to 3, which can be viewed as a 2-D Poincaré section of the limit cycle trajectory. Additionally, the noise strength $\delta n$ (red, right hand axis) increases, in a clear manifestation of its multiplicative nature. Both effects tend to reduce $T_{\rm coh}$, as captured by both the linearized analysis (Fig. \[fig:coherence\](a), inset) and full SDE simulations (Fig. \[fig:coherence\](b)). While we have demonstrated the formation of stable frequency combs with this minimal two-mode Kerr system, even more complex dynamical phenomena may be observed deeper in the regime with no stable fixed points. We explore this region by fixing $\omega_b = 4.91~{\rm GHz}$ and varying $\omega_d$ instead, now entering the unstable region along the purple arrow in Fig. \[fig:varyingWQ\](a). The experimental phase diagram in Fig. \[fig:varyingWD\](a) plots spacings $\Delta$ where combs are observed, together with a dark gray region where the spectrum no longer exhibits a comb. The typical variation in spectrum is shown in Fig. \[fig:varyingWD\](b). 
For $\Delta_{db}/2\pi \gtrsim -30~$MHz, a clear comb spectrum is observed with a spacing that varies with $\omega_d$; the system polariton frequencies $\nu_a, \nu_b$ (unchanged with $\omega_d$) are marked in dashed pink, confirming that comb peaks do not always coincide with passive modes of the two-mode system. For $\Delta_{db}/2\pi \lesssim -30~$MHz, the spectrum abruptly changes, exhibiting a single broad peak and an increased noise background. Analyzing $I$-$Q$ traces in Fig. \[fig:varyingWD\](c), the dynamics in this region (2) show large deviations over time; while recurrently confined to a region of phase space, they do not follow a regular trajectory even on short timescales (inset), in stark contrast to the regular periodic dynamics of stable comb operation (1). While missing from the theoretical phase diagram (Fig. \[fig:varyingWD\](a), inset) in this particular parameter regime, such temporal instabilities are predicted for the system under more negative detunings [@SI], pointing towards a promising platform for the study of complex quantum nonlinear dynamics. In conclusion, we have realized a minimal two-mode Kerr system for generating coherent frequency combs under excitation by a single coherent tone. The phase coherence of the generated combs is fundamentally limited by the intrinsic nonlinearity strength. The excellent agreement between theory and experiment points toward a highly controllable experimental platform for both the generation of coherent microwave frequency combs and the study of such combs in the quantum regime. We believe this comb can be an important coherent multifrequency source for future quantum information experiments.

ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============

This work was supported by the Charles E. Kaufman Foundation of the Pittsburgh Foundation, by NSF Grant No. PIRE-1743717, and by the Army Research Office under Grant No. W911NF-18-1-0144. The work of S. K. and H. E. T.
was additionally supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award No. $\text{DE-SC0016011}$. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the US Government. The US Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation herein.

|                      | ${{\fontfamily{ptm}\selectfont \textit{g}} }/2\pi$ (MHz) | $\Lambda/2\pi$ (MHz) | $\kappa/2\pi$ (MHz) | $\omega_{b}/2\pi$ (GHz) |
|----------------------|-----------------------|-----------------------|---------------------|-------------------------|
| Device A (25 SQUIDs) | 87.6956               | $5.96 \times 10^{-3}$ | 10.9308             | 4.956806                |
| Device B (5 SQUIDs)  | 89.25                 | $152.6\times 10^{-3}$ | 22.84               | 4.951073                |

[**Supplementary Material for “Nearly quantum-limited Josephson-junction Frequency Comb synthesizer”** ]{}\

------------------------------------------------------------------------

Classical steady states and linear stability analysis {#sec:stability}
=====================================================

The derivation of the system Hamiltonian and master equation we consider in this paper is quite standard in circuit QED (cQED); in particular, it may be found in the SI of our previous work [@Saeed2018], and we thus do not repeat the details here. Instead, in this appendix section we begin with the master equation description, derive its classical description, and analyze the stability of the resulting system.
For convenience, we reproduce here the master equation describing the dynamics of the two-mode system: $$\begin{aligned} \dot{\hat{\rho}} = -i[\hat{\mathcal{H}},\hat{\rho}] + \kappa \mathcal{D}[\hat{a}]\hat{\rho} + \gamma \mathcal{D}[\hat{b}]\hat{\rho} + \gamma_{\varphi}\mathcal{D}[\hat{b}^{\dagger}\hat{b}]\hat{\rho} \label{eq:master}\end{aligned}$$ where the system Hamiltonian in the frame rotating with the drive takes the form: $$\begin{aligned} \hat{\mathcal{H}} = -\Delta_{da}\hat{a}^{\dagger}\hat{a} - \Delta_{db} \hat{b}^{\dagger}\hat{b} -\frac{\Lambda}{2} \hat{b}^{\dagger}\hat{b}^{\dagger}\hat{b}\hat{b} + g (\hat{a}^{\dagger}\hat{b} + \hat{a}\hat{b}^{\dagger} ) + \eta (\hat{a} + \hat{a}^{\dagger} ) \label{eq:hsys}\end{aligned}$$ as defined in the main text. The instability of the coupled-mode system can be readily accessed at the level of the classical equations of motion. These are obtained by writing down the equations of motion for operator averages $\{{\langle \hat{a} \rangle}, {\langle \hat{b} \rangle}\}$, neglecting correlations (namely performing replacements of the form ${\langle \hat{b}^{\dagger}\hat{b}\hat{b} \rangle} \to {\langle \hat{b}^{\dagger} \rangle}{\langle \hat{b} \rangle}{\langle \hat{b} \rangle}$), and finally replacing operator expectation values by complex amplitudes, $\{ {\langle \hat{a} \rangle},{\langle \hat{b} \rangle} \} \to \{\alpha, \beta\}$. 
The resulting classical system simply becomes: $$\begin{aligned} \dot{\alpha} &= \left( i\Delta_{da} - \frac{\kappa}{2}\right)\alpha - i {{\fontfamily{ptm}\selectfont \textit{g}} }\beta - i \eta \label{eq:alphaEq} \\ \dot{\beta} &= \left( i\Delta_{db} - \frac{\gamma+\gamma_{\varphi}}{2}\right)\beta + i \Lambda |\beta|^2\beta - i {{\fontfamily{ptm}\selectfont \textit{g}} }\alpha \label{eq:betaEq}\end{aligned}$$ The linearity of both mode $\hat{a}$ and the coupling $\propto {{\fontfamily{ptm}\selectfont \textit{g}} }$ enables the linear mode to be integrated out, leading to a single effective dynamical equation for the nonlinear mode amplitude [@Saeed2018]: $$\begin{aligned} \dot{\beta} = \left(i\Delta_{db}-\frac{\gamma+\gamma_{\varphi}}{2} \right)\beta + i \Lambda |\beta|^2\beta -i{{\fontfamily{ptm}\selectfont \textit{g}} }\chi_a \eta - {{\fontfamily{ptm}\selectfont \textit{g}} }^2 \! \int_0^t d\tau~F(\tau)\beta(t-\tau) \label{eq:effNL}\end{aligned}$$ where we have introduced the linear mode susceptibility $\chi_a = (-i\Delta_{da} + \frac{\kappa}{2})^{-1}$, and where the memory kernel for the self-interaction is given by: $$\begin{aligned} F(\tau) = e^{(i\Delta_{da} - \kappa/2)\tau}\end{aligned}$$ The classical steady-state of the two-mode system $(\bar{\alpha},\bar{\beta})$ may be obtained by setting $\dot{\bar{\beta}} = 0$ in Eq. (\[eq:effNL\]). This requirement simplifies the self-interaction term and is exactly equivalent to performing a Markov regime reduction of the same. 
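To make this concrete, the coupled classical equations (\[eq:alphaEq\]) and (\[eq:betaEq\]) can be integrated directly. The following is a minimal sketch (all parameter values are illustrative, not the device values) using a simple forward-Euler step; for the linear case $\Lambda = 0$ the long-time result can be checked against the exact steady state:

```python
import numpy as np

def classical_rhs(alpha, beta, D_da, D_db, kappa, gam_t, g, Lam, eta):
    """Right-hand side of the classical two-mode equations of motion.
    gam_t denotes the total nonlinear-mode damping gamma + gamma_phi."""
    da = (1j * D_da - kappa / 2) * alpha - 1j * g * beta - 1j * eta
    db = (1j * D_db - gam_t / 2) * beta + 1j * Lam * abs(beta)**2 * beta - 1j * g * alpha
    return da, db

# Illustrative parameters; Lam = 0 so the steady state is known exactly
D_da, D_db, kappa, gam_t, g, Lam, eta = 0.0, 0.0, 1.0, 0.5, 2.0, 0.0, 0.3

# Forward-Euler integration until transients have decayed
alpha, beta = 0.0 + 0j, 0.0 + 0j
dt, n_steps = 1e-3, 100_000
for _ in range(n_steps):
    da, db = classical_rhs(alpha, beta, D_da, D_db, kappa, gam_t, g, Lam, eta)
    alpha, beta = alpha + dt * da, beta + dt * db

# Exact linear steady state:
# alpha_ss [ (i D_da - kappa/2) + g^2 / (i D_db - gam_t/2) ] = i eta
alpha_ss = 1j * eta / ((1j * D_da - kappa / 2) + g**2 / (1j * D_db - gam_t / 2))
beta_ss = 1j * g * alpha_ss / (1j * D_db - gam_t / 2)
print(abs(alpha - alpha_ss), abs(beta - beta_ss))  # both negligibly small
```

Since the fixed point of the Euler map coincides with the continuous-time fixed point, the integrated amplitudes converge to the analytic steady state to machine precision for these damped parameters.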
The result is a cubic polynomial in $|\bar{\beta}|^2$ that can be solved exactly for the steady-state nonlinear mode amplitude $\bar{\beta}$: $$\begin{aligned} \left[ \left( i\widetilde{\Delta}_{db} + i\Lambda|\bar{\beta}|^2 \right) - \frac{\widetilde{\gamma}}{2} \right] \bar{\beta} = i {{\fontfamily{ptm}\selectfont \textit{g}} }\chi_a \eta \implies \left[ \left( \widetilde{\Delta}_{db} + \Lambda|\bar{\beta}|^2 \right)^2 + \frac{\widetilde{\gamma}^2}{4} \right] |\bar{\beta}|^2 = {{\fontfamily{ptm}\selectfont \textit{g}} }^2 |\chi_a|^2 \eta^2 \label{eq:ssb}\end{aligned}$$ where we have introduced the renormalized nonlinear mode detuning and damping parameters respectively: $$\begin{aligned} \widetilde{\Delta}_{db} &= \omega_d - (\omega_b + {{\fontfamily{ptm}\selectfont \textit{g}} }^2 |\chi_a|^2 \Delta_{da}) \nonumber \\ \widetilde{\gamma} &= \gamma + \gamma_{\varphi} + {{\fontfamily{ptm}\selectfont \textit{g}} }^2 |\chi_a|^2 \kappa\end{aligned}$$ The steady-state linear mode amplitude may then be determined by requiring $\dot{\bar{\alpha}} = 0$ in Eq. (\[eq:alphaEq\]), which simply relates $\bar{\alpha}$ to $\bar{\beta}$: $$\begin{aligned} \bar{\alpha} = - \chi_a \left( i {{\fontfamily{ptm}\selectfont \textit{g}} }\bar{\beta} + i \eta \right) \label{eq:ssa}\end{aligned}$$ Once the steady-state amplitudes $(\bar{\alpha},\bar{\beta})$ have been determined, we perform a stability analysis for small fluctuations around these steady-state(s). Formally, such an analysis can be performed on the linearized version of the effective nonlinear mode dynamical equation, which can be studied analytically *exactly* in the Laplace domain, and is particularly tractable for the special case where $\Delta_{da} = 0$. Full details of such an analysis are provided in Ref. [@Saeed2018]. However, the current experiment explores more general operating conditions where $\Delta_{da} \neq 0$ in general. 
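As a concrete illustration, the cubic implied by Eq. (\[eq:ssb\]) can be solved numerically for $n = |\bar{\beta}|^2$. The sketch below (illustrative parameter values, not the device values) collects the real non-negative roots and checks each recovered amplitude for self-consistency:

```python
import numpy as np

# Illustrative parameters in arbitrary frequency units
D_da, D_db, kappa, gamma_t, g, Lam, eta = 0.5, -1.0, 1.0, 0.1, 2.0, 0.05, 1.5

chi_a = 1.0 / (-1j * D_da + kappa / 2)               # linear mode susceptibility
D_tilde = D_db - g**2 * abs(chi_a)**2 * D_da         # renormalized detuning
gam_tilde = gamma_t + g**2 * abs(chi_a)**2 * kappa   # renormalized damping

# Cubic in n = |beta|^2:
# Lam^2 n^3 + 2 Lam D_tilde n^2 + (D_tilde^2 + gam_tilde^2/4) n - g^2 |chi_a|^2 eta^2 = 0
coeffs = [Lam**2, 2 * Lam * D_tilde, D_tilde**2 + gam_tilde**2 / 4,
          -g**2 * abs(chi_a)**2 * eta**2]
roots = np.roots(coeffs)
ns = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0]

for n in ns:
    # Steady-state amplitudes from Eq. (ssb) and Eq. (ssa)
    beta = 1j * g * chi_a * eta / (1j * (D_tilde + Lam * n) - gam_tilde / 2)
    alpha = -chi_a * (1j * g * beta + 1j * eta)
    # self-consistency: |beta|^2 must reproduce the root n
    assert abs(abs(beta)**2 - n) < 1e-8 * max(n, 1)
```

Depending on the drive and detuning, the cubic returns one or three physical roots, reflecting the familiar single-valued or bistable response of a driven Kerr oscillator.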
In this case, it proves most convenient to simply perform a numerical stability analysis based on the Jacobian matrix of the original two-mode system. To this end, we begin by writing the classical equations of motion for the two-mode system in four-component vector form for the dynamics of the variables $\vec{\zeta} = (\alpha,\alpha^*,\beta,\beta^*)$: $$\begin{aligned} \frac{d\vec{\zeta}}{dt} = \vec{A}_{\rm cl}(\vec{\zeta}) \label{eq:cl}\end{aligned}$$ where we have defined the nonlinear drift vector $\vec{A}_{\rm cl}(\vec{\zeta})$ as: $$\begin{aligned} \vec{A}_{\rm cl}(\vec{\zeta}) = \begin{pmatrix} (+i\Delta_{da} - \frac{\kappa}{2})\alpha -i{{\fontfamily{ptm}\selectfont \textit{g}} }\beta -i\eta \\ (-i\Delta_{da} - \frac{\kappa}{2})\alpha^* +i{{\fontfamily{ptm}\selectfont \textit{g}} }\beta^* + i\eta \\ (+i\Delta_{db} - \frac{\gamma+\gamma_{\varphi}}{2})\beta + i\Lambda|\beta|^2\beta -i{{\fontfamily{ptm}\selectfont \textit{g}} }\alpha \\ (-i\Delta_{db} - \frac{\gamma+\gamma_{\varphi}}{2})\beta^* - i\Lambda|\beta|^2\beta^* + i{{\fontfamily{ptm}\selectfont \textit{g}} }\alpha^* \end{pmatrix} \label{eq:AVecCl}\end{aligned}$$ Clearly, the above system is identical to Eqs. (\[eq:alphaEq\]), (\[eq:betaEq\]). Performing the linearized stability analysis then requires expanding the above equations around the classical steady state $(\bar{\alpha},\bar{\beta})$. For notational convenience, we define the vector of steady-state amplitudes $\vec{Z}$ and small fluctuations $\vec{z}(t)$ respectively: $$\begin{aligned} \vec{Z} = (\bar{\alpha},\bar{\alpha}^*,\bar{\beta},\bar{\beta}^*)^T,~\vec{z}(t) = (\delta\alpha(t),\delta\alpha^*(t),\delta\beta(t),\delta\beta^*(t))^T \label{eq:ZDef}\end{aligned}$$ Then, we expand the variables $\vec{\zeta}(t)$ around the steady-state $\vec{Z}$: $$\begin{aligned} \vec{\zeta}(t) = \vec{Z} + \vec{z}(t)\end{aligned}$$ and linearize Eqs.
(\[eq:AVecCl\]) in small fluctuations $\vec{z}(t)$, obtaining the set of equations: $$\begin{aligned} \frac{d\vec{z}}{dt} = \mathbf{J}[\vec{Z}] \cdot \vec{z}(t) \label{eq:linEq}\end{aligned}$$ where $\mathbf{J}[\vec{Z}]$ defines the Jacobian matrix of the two-mode system evaluated at the classical steady-state; its entries are given by $J_{ij} = \partial_j A_{\rm cl}^i$, where $A_{\rm cl}^i$ is the $i$th element of $\vec{A}_{\rm cl}$; more explicitly the Jacobian matrix takes the form: $$\begin{aligned} \mathbf{J}[\vec{Z}] = \begin{pmatrix} +i\Delta_{da} - \frac{\kappa}{2} & 0 & -i{{\fontfamily{ptm}\selectfont \textit{g}} }& 0 \\ 0 & -i\Delta_{da} - \frac{\kappa}{2} & 0 & i{{\fontfamily{ptm}\selectfont \textit{g}} }\\ -i{{\fontfamily{ptm}\selectfont \textit{g}} }& 0 & +i\Delta_{db} - \frac{1}{2}(\gamma+\gamma_{\varphi}) + i 2\Lambda |\bar{\beta}|^2 & i\Lambda (\bar{\beta}^2) \\ 0 & i{{\fontfamily{ptm}\selectfont \textit{g}} }& -i\Lambda (\bar{\beta}^*)^2 & -i\Delta_{db} - \frac{1}{2}(\gamma+\gamma_{\varphi}) - i 2\Lambda |\bar{\beta}|^2 \end{pmatrix} \label{eq:ssJac}\end{aligned}$$ The stability of Eqs. (\[eq:linEq\]) is determined by the eigenvalues of the above Jacobian matrix; these are used to determine the stability boundaries obtained in the main text, and in Fig. \[fig:lyapunov\] in Section \[sec:lyapunov\]. Numerical phase diagram and Lyapunov stability {#sec:lyapunov} ============================================== Regions in the classical phase diagram with no stable fixed points can give rise to a rich class of dynamics. Amongst various metrics to characterize such dynamics, we employ a standard technique of computing the maximal Lyapunov exponent $\lambda_{\rm M}$. This exponent describes the sensitivity of trajectories to small perturbations in the long-time limit. 
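A minimal numerical sketch of this eigenvalue test is given below (parameter values are illustrative; the special case $\bar{\beta} = 0$ with $\Delta_{da} = \Delta_{db}$ is chosen because the eigenvalue real parts are then known analytically):

```python
import numpy as np

def jacobian(beta_ss, D_da, D_db, kappa, gam_t, g, Lam):
    """4x4 Jacobian of the classical two-mode system, Eq. (ssJac),
    evaluated at a steady-state nonlinear-mode amplitude beta_ss.
    gam_t denotes the total damping gamma + gamma_phi."""
    n = abs(beta_ss)**2
    return np.array([
        [ 1j*D_da - kappa/2, 0,                  -1j*g,                              0],
        [0,                  -1j*D_da - kappa/2,  0,                                 1j*g],
        [-1j*g,              0,                   1j*D_db - gam_t/2 + 2j*Lam*n,      1j*Lam*beta_ss**2],
        [0,                  1j*g,               -1j*Lam*np.conj(beta_ss)**2,       -1j*D_db - gam_t/2 - 2j*Lam*n],
    ], dtype=complex)

def is_stable(J):
    """A fixed point is linearly stable iff every eigenvalue has Re < 0."""
    return np.linalg.eigvals(J).real.max() < 0

# Check: for beta_ss = 0 and D_da = D_db, the blocks decouple and all
# eigenvalue real parts equal -(kappa + gam_t)/4 when g > |kappa - gam_t|/4.
J = jacobian(0.0, 1.0, 1.0, 2.0, 1.0, 5.0, 0.1)
ev = np.linalg.eigvals(J)
print(ev.real)            # all approximately -0.75
assert is_stable(J)
```

For nonzero $\bar{\beta}$ one would first solve Eq. (\[eq:ssb\]) for the steady-state amplitude and pass the result to `jacobian`; scanning drive and detuning this way reproduces the stability boundaries discussed in the main text.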
Employing the notation from Section \[sec:stability\], consider a deterministic trajectory $\vec{\zeta}(t)$ evolving according to the nonlinear equations of motion describing the classical system, Eq. (\[eq:cl\]). Linearized fluctuations about this trajectory, $\vec{z}(t)$, are then propagated by the time-dependent Jacobian matrix evaluated along the deterministic trajectory: $$\begin{aligned} \frac{d\vec{z}}{dt} = \mathbf{J}[\vec{\zeta}(t)] \cdot \vec{z}(t) \label{eq:lyEq}\end{aligned}$$ which is simply the system given by Eqs. (\[eq:linEq\]) from the previous section, but with the Jacobian now evaluated along a general time-dependent trajectory $\vec{\zeta}(t)$. Such trajectories may exhibit complicated dynamics, but since they are governed by a linear system, their evolution is ultimately characterized by exponents obtained from the dynamical matrix. The maximal Lyapunov exponent $\lambda_{\rm M}$ is the largest such exponent (accounting for sign, not magnitude); it plays the role that the largest eigenvalue would play in the case of a dynamical system governed by a time-*independent* dynamical matrix. The maximal Lyapunov exponent may be computed by studying the long-time dynamics of trajectories governed by Eq. (\[eq:lyEq\]): $$\begin{aligned} \lambda_{\rm M} = \lim_{t\to \infty} \frac{1}{t} \log \frac{||\vec{z}(t)||}{||\vec{z}(0)||} \label{eq:maxL}\end{aligned}$$ However, since the evolution of trajectories is governed by a linear dynamical equation, these trajectories may grow exponentially without bound if the Lyapunov exponent is positive. In practice, this exponential growth renders the above expression intractable for numerical computations.
An alternative procedure to circumvent this issue begins by separating the time evolution from $[0,t]$ into a series of $N_p$ consecutive short-time intervals, $[\tau_0,\tau_1,\ldots,\tau_{N_p}]$, where $\tau_0 = 0, \tau_{N_p} = t$, and $\tau_p - \tau_{p-1} = \Delta\tau$ is the short-time interval spacing. We then solve for $\vec{z}^{(p)}(\tau)$, $\tau \in [\tau_{p-1},\tau_p]$ with $p = 1,\ldots, N_p$, obtaining $N_p$ individual trajectory vectors $\{\vec{z}^{(p)}(\tau)\}$. Such an evolution would be identical to the entire evolution over $[0,t]$ *if* we imposed $\vec{z}^{(p)}(\tau_{p-1}) = \vec{z}^{(p-1)}(\tau_{p-1})$, requiring continuity of the solution at the endpoints of each time interval. Consequently, the process would do nothing to alleviate the problem of unbounded growth. To guard against the latter, we additionally require the initial trajectory at the beginning of every evolution interval to be normalized to one: $$\begin{aligned} \vec{z}^{(p)}(\tau_{p-1}) = \frac{\vec{z}^{(p-1)}(\tau_{p-1})}{||\vec{z}^{(p-1)}(\tau_{p-1})||} \implies ||\vec{z}^{(p)}(\tau_{p-1})|| = 1 \label{eq:iterateNorm}\end{aligned}$$ Then, for the $p$th iterate, we can estimate the maximal Lyapunov exponent as: $$\begin{aligned} \lambda_{\rm M}^{(p)} = \frac{1}{\Delta\tau} \log \frac{||\vec{z}^{(p)}(\tau_{p})||}{||\vec{z}^{(p)}(\tau_{p-1})||} = \frac{1}{\Delta\tau} \log ||\vec{z}^{(p)}(\tau_{p})||\end{aligned}$$ which simply measures the growth of the norm of the $p$th trajectory $\vec{z}^{(p)}(\tau)$ for $\tau \in [\tau_{p-1},\tau_p]$. 
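The interval-and-renormalize procedure above (with the per-interval growth rates subsequently averaged) is sketched below for a generic Jacobian. As a check, it is applied to a time-*independent* Jacobian, where the estimate must reduce to the largest eigenvalue real part; all names and parameter values here are illustrative:

```python
import numpy as np

def max_lyapunov(jac_fn, z0, dt, dtau, n_p):
    """Estimate the maximal Lyapunov exponent by Euler-evolving the
    linearized fluctuations dz/dt = J(t) z, renormalizing ||z|| -> 1 at the
    start of each interval dtau, and averaging the per-interval growth rates.
    For the comb system, jac_fn(t) would return the Jacobian evaluated along
    the co-evolved classical trajectory zeta(t)."""
    z = np.asarray(z0, dtype=complex)
    t = 0.0
    steps = int(round(dtau / dt))
    rates = []
    for _ in range(n_p):
        z = z / np.linalg.norm(z)          # renormalization, Eq. (iterateNorm)
        for _ in range(steps):
            z = z + dt * (jac_fn(t) @ z)   # forward-Euler step of Eq. (lyEq)
            t += dt
        rates.append(np.log(np.linalg.norm(z)) / dtau)
    return float(np.mean(rates))           # average over the n_p iterates

# Time-independent check: eigenvalues -1 and -2, so lambda_M must be about -1
J = np.diag([-1.0, -2.0]).astype(complex)
lam = max_lyapunov(lambda t: J, [1.0, 1.0], dt=1e-3, dtau=0.1, n_p=500)
print(lam)   # approximately -1
```

The residual deviation from $-1$ here comes from the finite number of iterates and the Euler discretization, illustrating the convergence considerations discussed below.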
An estimate of the maximal Lyapunov exponent is then computed by averaging over the iterations: $$\begin{aligned} \lambda_{\rm M} \approx \frac{1}{N_p} \sum_{p=1}^{N_p} \lambda_{\rm M}^{(p)} \end{aligned}$$ The above procedure ensures a faithful and *bounded* evolution of a single trajectory over a time $N_p \Delta\tau$, *provided* $\Delta\tau$ is chosen judiciously relative to $|1/\lambda_{\rm M}|$; the latter sets the characteristic evolution timescale for the trajectories and is of course *a priori* unknown. If $\Delta\tau \gg |1/\lambda_{\rm M}|$, trajectories may grow excessively during the evolution if $\lambda_{\rm M} > 0$, leading to the same numerical errors present in the original formulation, Eq. (\[eq:maxL\]). If instead $\Delta\tau \ll |1/\lambda_{\rm M}|$, the normalized trajectories will remain mostly unchanged from their initial values, leading to an estimate of $\lambda_{\rm M}^{(p)} \to 0$ regardless of its actual value. Therefore, an intermediate $\Delta\tau$ value must be employed, keeping in mind that smaller $\Delta\tau$ values necessarily require more iterations $N_p$ for convergence. In practice, the convergence of the estimate may be evaluated by calculating $\lambda_{\rm M}$ as a function of increasing $N_p$ for a fixed $\Delta\tau$, until $\lambda_{\rm M}$ remains approximately unchanged with further iterations. Comparing a set of such estimates for a range of $\Delta\tau$ values then ensures that a consistent estimate is obtained. The maximal Lyapunov exponent $\lambda_{\rm M}$ obtained with this approach is plotted for Device A parameters in drive-$\Delta_{db}$ space in Fig. \[fig:lyapunov\]. The blank regions correspond to $\lambda_{\rm M} < 0$, indicating a stable fixed point; perturbations near this point decay over time, settling back towards the fixed point. This is visible in the projection of the steady-state dynamics onto the nonlinear mode phase space, plotted in Fig.
\[fig:lyapunov\] (c); in the long time limit the system has returned to the stable fixed point indicated by the orange cross. The gray regions indicate $\lambda_{\rm M} \approx 0$, signifying a stable limit cycle attractor [@haken_at_1983]. Steady-state dynamics here follow a stable phase space orbit, as shown in Fig. \[fig:lyapunov\] (b), around a classically unstable fixed point (green square). The periodic orbits yield combs in the frequency domain, as observed in the main text. Finally, the dark regions indicate $\lambda_{\rm M} > 0$. Here perturbations grow without bound over time, manifesting as dynamical chaos in numerical simulations of the classical system. The steady-state dynamics plotted in Fig. \[fig:lyapunov\] (a) show that no single fixed orbit emerges over time; the system explores a large region of phase space in an irregular manner. The region framed in blue in the phase diagram marks the detuning range explored in the experiment, Figs. 2, 3 of the main text, where the system exhibits stable limit cycle dynamics, consistent with the observations there. However, for much more negative $\Delta_{db}$ it is possible to observe chaos with the same system. This indicates the potential of the two-mode system for controlled studies of chaos in the quantum regime; hints of these dynamics are seen in Fig. 4 of the main text, as well as for Device B (see Section \[ssec:5JJ\]).

Supplementary experimental details and results {#sec:suppExpt}
==============================================

In this appendix section, we include details of additional experimental measurements, including the measured phase diagram for Device B, as well as the measurement of the Kerr nonlinearity strength.
Phase diagram for Device B (5 SQUIDs) {#ssec:5JJ}
-------------------------------------

In addition to Device A, which employs a 25 SQUID array, we explore the impact of nonlinearity on comb dynamics by fabricating Device B, which employs a 5 SQUID array and therefore possesses an approximately 25-fold stronger nonlinearity. In Fig. \[fig:5JJ\] (a), we show the flux sweep of this device, indicating the polariton resonances of the two-mode system. Fitting to the avoided crossing reveals a coupling strength of ${{\fontfamily{ptm}\selectfont \textit{g}} }/2\pi = 89.25$ MHz, similar to Device A (by design), and a bare linear mode linewidth of $\kappa/2\pi = 22.84$ MHz. Having verified that the device satisfies the strong coupling condition ${{\fontfamily{ptm}\selectfont \textit{g}} }> \frac{\kappa}{2}$ at resonant driving ($\Delta_{da} = 0$, see main text), we can explore the classically predicted unstable regime as was done for Device A. Fixing the driving frequency at $\omega_d/2\pi = 4.9085~$GHz, we change the external flux through the SQUIDs to sweep the nonlinear mode frequency, as shown schematically in the top panel of Fig. \[fig:5JJ\] (a). Device B has a much larger flux modulation range than Device A, enabling us to explore a wider range of drive-nonlinear mode detunings $\Delta_{db}$. The resulting phase diagram in drive power-$\Delta_{db}$ space is shown in Fig. \[fig:5JJ\] (b), with the theoretically predicted phase diagram shown in the right panel, both plotted with the same axes. The light gray regions indicate the single frequency regime, which gives way to a multifrequency comb regime at appropriate drive strengths for small $|\Delta_{db}|$. The experiment and theory agree quite well in terms of both the critical detuning where the combs emerge and the observed comb spacings. Finally, the orange diamond in the theory plot indicates the position on the phase diagram for which coherence function results are plotted in Fig. 3 of the main text.
Note that for more negative $\Delta_{db}$, a region (dark gray) emerges where the system exhibits temporal instabilities similar to Device A, in both the experimental and theoretical phase diagrams. Numerical simulations here indicate that the system exhibits chaotic dynamics (maximal Lyapunov exponent $\lambda_{\rm M} > 0$). We also find greater disparity between experiment and theory here; in addition to possible deviations from classical predictions due to quantum effects, the dynamics exhibit temporal instabilities that require careful processing. Such dynamical regimes therefore merit further detailed investigation. The white contour in both figures depicts the analytically predicted unstable region, as determined by the analysis of Eqs. (\[eq:linEq\]); it agrees well with both experiment and numerical simulations, in particular for small $|\Delta_{db}|$. Kerr nonlinearity measurement {#ssec:kerr} ----------------------------- To demonstrate the dependence of comb coherence on the quantum nature of the device nonlinearity, knowledge of this engineered Kerr nonlinearity strength is of crucial importance. Typically, one would do so via a standard pump-probe measurement that measures the Kerr-induced frequency shift of the nonlinear mode as the pump power incident on it increases. However, for the two-mode system such a measurement accesses the frequency shift of the renormalized *polariton* modes of the system, which of course depends on the degree of hybridization between linear and nonlinear modes. In this section we clarify how measured polariton mode frequency shifts can be used to extract the bare nonlinear mode Kerr interaction strength. 
We begin by considering the linear Hamiltonian $\hat{\mathcal{H}}_{\rm L}$ that determines the polariton modes: $$\begin{aligned} \hat{\mathcal{H}}_{\rm L} = \omega_{a}\hat{a}^{\dagger}\hat{a} + \omega_{b}\hat{b}^{\dagger}\hat{b} + {{\fontfamily{ptm}\selectfont \textit{g}} }(\hat{a}^{\dagger}\hat{b} + \hat{a}\hat{b}^{\dagger} ) \equiv \begin{pmatrix} \hat{a}^{\dagger} & \hat{b}^{\dagger} \\ \end{pmatrix} \underbrace{ \begin{pmatrix} \omega_{a} & {{\fontfamily{ptm}\selectfont \textit{g}} }\\ {{\fontfamily{ptm}\selectfont \textit{g}} }& \omega_{b} \end{pmatrix} }_{\mathbf{H}_{\rm L}} \begin{pmatrix} \hat{a} \\ \hat{b} \end{pmatrix}\end{aligned}$$ which is obtained from Eq. (\[eq:hsys\]) by neglecting the nonlinearity and drive terms, and returning to the lab frame. The above Hamiltonian may be diagonalized by introducing the matrix of eigenvectors $\mathbf{P}$ and diagonal matrix of eigenvalues $\mathbf{D}$ for the matrix $\mathbf{H}_{\rm L}$, such that $\mathbf{H}_{\rm L} = \mathbf{P}\mathbf{D}\mathbf{P}^{-1}$. The Hamiltonian then becomes: $$\begin{aligned} \hat{\mathcal{H}}_{\rm L} = \nu_a \hat{c}_a^{\dagger}\hat{c}^{\ }_a + \nu_b \hat{c}_b^{\dagger}\hat{c}_b^{\ },~ \begin{pmatrix} \hat{c}^{\ }_a \\ \hat{c}^{\ }_b \end{pmatrix} = \mathbf{P}^{-1} \begin{pmatrix} \hat{a} \\ \hat{b} \end{pmatrix},~ \mathbf{D} = \begin{pmatrix} \nu_a & 0 \\ 0 & \nu_b \end{pmatrix} \label{eq:polDef}\end{aligned}$$ which serves to define the polariton modes $\hat{c}_a$, $\hat{c}_b$, and corresponding frequencies $\nu_a$, $\nu_b$. We can now rewrite the Kerr nonlinear term of the full Hamiltonian, Eq. (\[eq:hsys\]), in the polariton basis. Writing the nonlinear term $\hat{\mathcal{H}}_{\Lambda}$ as: $$\begin{aligned} \hat{\mathcal{H}}_{\Lambda} = -\frac{\Lambda}{2}\hat{b}^{\dagger}\hat{b}^{\dagger}\hat{b}\hat{b}\end{aligned}$$ and noting from Eq. 
(\[eq:polDef\]) that: $$\begin{aligned} \hat{b} = \mathbf{P}_{21} \hat{c}^{\ }_a + \mathbf{P}_{22} \hat{c}^{\ }_b = \sum_{n} \mathbf{P}_{2n}\hat{c}^{\ }_n\end{aligned}$$ the nonlinear Hamiltonian in terms of polariton modes takes the form: $$\begin{aligned} \hat{\mathcal{H}}_{\Lambda} = -\frac{\Lambda}{2}\sum_{nmrs} \mathbf{P}^*_{2n}\mathbf{P}^*_{2m}\mathbf{P}^{\ }_{2r}\mathbf{P}^{\ }_{2s} \hat{c}_n^{\dagger}\hat{c}_m^{\dagger}\hat{c}^{\ }_r\hat{c}^{\ }_s \equiv -\frac{\Lambda}{2}\sum_{nmrs} \mathcal{A}^{\ }_{nmrs} \hat{c}_n^{\dagger}\hat{c}_m^{\dagger}\hat{c}^{\ }_r\hat{c}^{\ }_s \label{eq:hlambda}\end{aligned}$$ Therefore, the coupling transforms the localized nonlinearity of mode $\hat{b}$ into self- and cross-Kerr interactions between the polariton modes of the system. The Kerr-induced frequency shift observed for either polariton mode will be a combination of these terms, making it complicated to determine in general. However, we can obtain a simplified expression by assuming operation near a stable fixed point and assuming a strong polariton mode occupation, both conditions that are expected to be valid for the typical pump-probe measurement scheme. The experimental scheme proceeds similarly to the case for a single nonlinear mode: a strong pump tone is applied to the system at a positive detuning of five linewidths away from *polariton* mode $\hat{c}^{\ }_b$, predominantly pumping this mode, although also residually (weakly) pumping mode $\hat{c}_a$ (see schematic in Fig. \[fig:kerr\] (a)). The resulting steady-state polariton amplitudes, and therefore occupations, can be conveniently determined by first obtaining the nonlinear and linear mode amplitudes $\bar{\beta}$, $\bar{\alpha}$ by solving Eqs. (\[eq:ssb\]) and (\[eq:ssa\]) respectively. Then, the steady-state polariton amplitudes, $\bar{c}^{\ }_a$, $\bar{c}^{\ }_b$, are easily determined via the transformation matrix introduced in Eq. 
(\[eq:polDef\]): $$\begin{aligned} \begin{pmatrix} \bar{c}^{\ }_a \\ \bar{c}^{\ }_b \end{pmatrix} = \mathbf{P}^{-1} \begin{pmatrix} \bar{\alpha} \\ \bar{\beta} \end{pmatrix} \label{eq:polSS}\end{aligned}$$ Finally, the application of a weak probe determines Kerr-mediated frequency shifts, as dictated by the nonlinear Hamiltonian, Eq. (\[eq:hlambda\]). We are only interested in shifts to the polariton mode $\hat{c}^{\ }_b$; the corresponding terms of the nonlinear Hamiltonian are given by: $$\begin{aligned} \hat{\mathcal{H}}_{\Lambda} \approx -\frac{\Lambda}{2}\left[\mathcal{A}_{2222}\hat{c}^{\dagger}_b\hat{c}^{\ }_b + 4\mathcal{A}^{\ }_{2121}\hat{c}^{\dagger}_a\hat{c}^{\ }_a + 2\mathcal{A}^{\ }_{2221}\hat{c}^{\dagger}_b\hat{c}^{\ }_a + 2\mathcal{A}^{\ }_{2122}\hat{c}^{\dagger}_a\hat{c}^{\ }_b\right]\hat{c}_b^{\dagger}\hat{c}^{\ }_b + (\hat{c}^{\dagger}_a\hat{c}^{\ }_a~{\rm -only~and~non~Kerr~shift~terms})\end{aligned}$$ We now perform a semiclassical approximation, linearizing the above Hamiltonian around the fixed point defined by Eqs. 
(\[eq:polSS\]), under which the effective Kerr-mediated shift $\Delta\nu_b$ of the polariton frequency $\nu_b$ is given by: $$\begin{aligned} \Delta\nu_b = -\Lambda\left[\mathcal{A}_{2222}|\bar{c}_b|^2 + 2\mathcal{A}^{\ }_{2121}|\bar{c}_a|^2 + 2\mathcal{A}^{\ }_{2221}\bar{c}_b^*\bar{c}_a + \mathcal{A}^{\ }_{2122}\bar{c}_a^{*}\bar{c}^{\ }_b\right]\end{aligned}$$ Finally, the effective measured Kerr constant $\Lambda_b$ is obtained by determining the frequency shift per photon occupying the polariton mode, $\bar{n}_b = |\bar{c}_b|^2$: $$\begin{aligned} \Lambda_b = -\frac{\Delta\nu_b}{\bar{n}_b} = \Lambda\left[\mathcal{A}_{2222} + 2\mathcal{A}^{\ }_{2121}\frac{|\bar{c}_a|^2}{|\bar{c}_b|^2} + 2\mathcal{A}^{\ }_{2221}\frac{\bar{c}_a}{\bar{c}_b} + \mathcal{A}^{\ }_{2122}\frac{\bar{c}_a^{*}}{\bar{c}_b^*}\right] \label{eq:lambdab}\end{aligned}$$ Clearly, $\Delta\nu_b$ and the measured Kerr constant $\Lambda_b$ depend on $\mathcal{A}_{nmrs}$ and consequently on the detuning between the bare linear and nonlinear modes, $\Delta_{ab} = \omega_a-\omega_b$, as well as the strength of their coupling ${{\fontfamily{ptm}\selectfont \textit{g}} }$. As a result, both will vary as the nonlinear mode frequency $\omega_b$ is swept, even though the bare nonlinear mode Kerr constant $\Lambda$ remains unchanged. In addition to this dependence on $\omega_b$, Eq. (\[eq:lambdab\]) also accounts for the small but nonzero occupation of polariton mode $\hat{c}_a$ due to this mode being weakly driven, and the corresponding cross-Kerr shifts this mediates. Experimentally, a single pump-probe measurement with pump frequency $\omega_P$ at a fixed nonlinear mode frequency populates the polariton mode $\hat{c}_b$ as the pump power $P$ is increased. 
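To illustrate the hybridization-induced dilution of the nonlinearity, the sketch below evaluates the leading term of Eq. (\[eq:lambdab\]) in the simplifying limit of negligible $\hat{c}_a$ occupation, where $\Lambda_b \approx \Lambda\,\mathcal{A}_{2222} = \Lambda |\mathbf{P}_{22}|^4$. On resonance this gives $\Lambda/4$, while far off resonance the bare $\Lambda$ is recovered (parameter values are illustrative):

```python
import numpy as np

def polariton_kerr(omega_a, omega_b, g, Lam):
    """Leading term of Eq. (lambdab) in the limit of negligible c_a
    occupation: Lambda_b ~ Lambda * A_2222 = Lambda * |P_22|^4."""
    H = np.array([[omega_a, g], [g, omega_b]])
    _, P = np.linalg.eigh(H)   # columns of P are the polariton eigenvectors
    # identify the polariton with the largest nonlinear-mode (b) weight
    col = np.argmax(np.abs(P[1, :]))
    return Lam * abs(P[1, col])**4

# Far detuned (g << |omega_a - omega_b|): nearly the bare nonlinearity
print(polariton_kerr(5.0, 1.0, 0.01, 1.0))   # approximately 1.0
# On resonance: full hybridization, |P_22|^2 = 1/2, so Lambda_b -> Lambda / 4
print(polariton_kerr(1.0, 1.0, 0.1, 1.0))    # 0.25
```

The full expression additionally includes the cross-Kerr and coherence terms of Eq. (\[eq:lambdab\]), which matter once the residual $\hat{c}_a$ occupation is non-negligible.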
We first calibrate the polariton mode occupation with the applied pump power via $\bar{n}_{b}=|\bar{c}_b|^2=\frac{\kappa_b}{\Delta_P^2+(\kappa_b/2)^2}\frac{P}{\hbar{\omega_P}}$, where $\kappa_b$ is the linewidth of polariton mode $\hat{c}_b$, and $\Delta_P = 5\kappa_b$ is the detuning between the pump frequency and the bare polariton mode frequency [@aspelmeyer_cavity_2014]. The observed frequency shift $\Delta\nu_b$ as a function of $\bar{n}_b$ is shown in Fig. \[fig:kerr\] (a) for various detunings between the bare linear and nonlinear modes $\Delta_{ab}$. By fitting the observed frequency shift to $\bar{n}_b$, we obtain the measured polariton mode Kerr constant $\Lambda_b$. Each such measurement yields $\Lambda_b$ at the given $\Delta_{ab}$. By sweeping the nonlinear mode frequency, we obtain $\Lambda_b$ as a function of $\Delta_{ab}$, with the results plotted in red in Fig. \[fig:kerr\] (b). Note that as the detuning $\Delta_{ab}$ decreases, the measured Kerr nonlinearity strength also decreases, since increased hybridization dilutes the nonlinearity of the originally nonlinear mode. By fitting the experimental results to Eq. (\[eq:lambdab\]) with the bare nonlinearity $\Lambda$ as the only fitting parameter, we obtain the solid blue curve in Fig. \[fig:kerr\] (b), with the fit value $\Lambda/2\pi = 5.96$ kHz. The shaded blue region indicates the $2\sigma$ confidence interval of the fit, which finally yields the bare nonlinearity of $\Lambda/2\pi = 5.96\pm 0.2$ kHz for Device A. Typical Kerr nonlinearity strength of optical microresonators {#subsec:optKerr} ------------------------------------------------------------- In this subsection we calculate the typical Kerr nonlinearity strength, or equivalently the Kerr-mediated frequency shift per photon, for nonlinear optical microresonators. 
For an optical microresonator with center frequency $\omega_{\rm op}$, refractive index $n$, nonlinear refractive index $n_2$, and mode volume $V_0$, the Kerr shift per photon $\Lambda_{\rm op}$ is given by [@kippenberg_dissipative_2018]: $$\begin{aligned} \Lambda_{\rm op} = \frac{\hbar\omega_{\rm op}^2 c n_2}{n^2 V_0}\end{aligned}$$ where $c$ is the speed of light in vacuum. Using parameter values for silicon nitride optical microresonators [@gaeta_photonic-chip-based_2019] (a popular and successful material choice), namely $\omega_{\rm op}/(2\pi) = 100~{\rm THz}$ (wavelength $\lambda \simeq 1.55~\mu{\rm m}$), $n = 2$, $n_2 = 2.5 \times 10^{-19}~{\rm m^2~W^{-1}}$, and $V_0 = (\lambda/n)^3$, we obtain: $$\begin{aligned} \Lambda_{\rm op}/(2\pi) \simeq 100~{\rm Hz}\end{aligned}$$ which is about two orders of magnitude lower than the realized $\Lambda$ for Device A. Optical microresonators are engineered to have high quality factors; we consider a large value of $Q \simeq 10^7$. For $\omega_{\rm op}/(2\pi) = 100~{\rm THz}$, this implies microresonator loss rates of $\kappa_{\rm op}/(2\pi) \simeq 10~{\rm MHz}$. As a result, the ratio of $\Lambda_{\rm op}$ to the loss rate is $\Lambda_{\rm op}/\kappa_{\rm op} \simeq 10^{-5}$, again about two orders of magnitude smaller than the smallest value realized by devices in our experiment.

Quantum regime: Positive-$P$ representation and stochastic differential equations {#sec:sdes}
=================================================================================

In the weakly nonlinear regime relevant to the experiment, $\Lambda/\kappa \sim O(10^{-2})-O(10^{-3})$, strong driving leads to large mode occupations $\sim O(10^2)-O(10^3)$, rendering standard master equation and even stochastic wavefunction approaches intractable. Such operating regimes are particularly suited to analysis using a phase-space approach to the dynamics of the density operator $\hat{\rho}$.
In this appendix section, we describe the approach used in this work, that of the Positive-$P$ representation of the density operator, and the resulting stochastic differential equations (SDEs) it yields. We also describe how the SDEs may be solved numerically to obtain quantities of interest, namely temporal coherence functions.

Fokker-Planck equation and mapping to SDEs
------------------------------------------

We employ a representation of the density operator in a non-diagonal coherent state basis over both modes $\hat{a}$ and $\hat{b}$: $$\begin{aligned} \hat{\rho}(t) = \int d^2\zeta~P(\vec{\zeta},t)~\hat{\Xi}_{\alpha} \otimes \hat{\Xi}_{\beta} \equiv \int d^2\zeta~P(\vec{\zeta},t) \cdot \frac{ {| \alpha \rangle}{\langle \alpha^{\dagger *} |} }{e^{\alpha\alpha^{\dagger}} } \otimes \frac{ {| \beta \rangle}{\langle \beta^{\dagger *} |} }{e^{\beta\beta^{\dagger}} } \label{rhoP}\end{aligned}$$ where $\vec{\zeta} = (\alpha,\alpha^{\dagger},\beta,\beta^{\dagger})$ are complex variables describing a classical phase space, $\vec{\zeta} \in \mathbb{C}^4$. For convenience of notation, we use $\zeta_i$ to refer to the $i$th element of the vector $\vec{\zeta}$, for $i = 1,\ldots 4$, and define $d^2\zeta \equiv \prod_i d^2\zeta_i$ as the integration measure over the entire phase space. Eq. (\[rhoP\]) is simply an expansion of $\hat{\rho}(t)$ in terms of non-diagonal projection operators $\hat{\Xi}_{\alpha}\otimes\hat{\Xi}_{\beta}$, with weights given by the time-dependent function $P(\vec{\zeta},t)$. For the above definition of $\hat{\Xi}_{\alpha}\otimes\hat{\Xi}_{\beta}$, $P(\vec{\zeta},t)$ is a positive-definite function that satisfies a Fokker-Planck equation, and therefore may be meaningfully thought of as a classical distribution function; in particular, $P(\vec{\zeta},t)$ is referred to as the Positive-$P$ distribution [@drummond_generalised_1980; @carmichael_statistical_2002].
The above expansion casts the study of the dynamics of $\hat{\rho}(t)$ and operator averages ${\langle \hat{o} \rangle} = {\rm tr}\{\hat{o}\hat{\rho}(t)\}$ into an equivalent study of the dynamics of the distribution function $P(\vec{\zeta},t)$ and of probabilistic variables sampled from this distribution function. Phase space approaches therefore first require obtaining the dynamical equation for the distribution function $P(\vec{\zeta},t)$, which as mentioned earlier is a Fokker-Planck equation. It may be obtained from the master equation by substituting Eq. (\[rhoP\]) into Eq. (\[eq:master\]). This standard analysis requires knowledge of the action of the mode creation and annihilation operators on the expansion basis $\hat{\Xi}_{\alpha}\otimes\hat{\Xi}_{\beta}$: $$\begin{aligned} \hat{a}\hat{\Xi}_{\alpha} &= \alpha\hat{\Xi}_{\alpha},~\hat{\Xi}_{\alpha}\hat{a} = \left( \partial_{\alpha^{\dagger}} + \alpha \right)\hat{\Xi}_{\alpha} \nonumber \\ \hat{\Xi}_{\alpha}\hat{a}^{\dagger} &= \alpha^{\dagger}\hat{\Xi}_{\alpha},~\hat{a}^{\dagger}\hat{\Xi}_{\alpha} = \left( \partial_{\alpha} + \alpha^{\dagger} \right)\hat{\Xi}_{\alpha}\end{aligned}$$ with the expressions for the nonlinear mode operators obtained via the substitution $\{\hat{a},\hat{a}^{\dagger}\} \to \{\hat{b}, \hat{b}^{\dagger}\}$, $\{\alpha,\alpha^{\dagger}\} \to \{\beta, \beta^{\dagger}\}$ respectively. Using the above results and carrying out an integration-by-parts, one obtains a Fokker-Planck equation for the distribution function: $$\begin{aligned} \partial_t P(\vec{\zeta},t) = \left( -\partial_i A_{\rm cl}^i + \frac{1}{2}\partial_i\partial_j D_{\rm st}^{ij} \right) P(\vec{\zeta},t) \end{aligned}$$ where $\partial_i \equiv \frac{\partial}{\partial \zeta_i}$ and repeated indices are summed over. 
Here $A_{\rm cl}^i$ is the $i$th element of the drift vector $\vec{A}_{\rm cl}$ that defines deterministic nonlinear dynamics: $$\begin{aligned} \vec{A}_{\rm cl} = \begin{pmatrix} \left( +i\Delta_{da} - \frac{\kappa}{2} \right)\alpha -i {{\fontfamily{ptm}\selectfont \textit{g}} }\beta -i\eta \\ \left( -i\Delta_{da} - \frac{\kappa}{2} \right)\alpha^{\dagger} +i {{\fontfamily{ptm}\selectfont \textit{g}} }\beta^{\dagger} + i\eta \\ \left( +i\Delta_{db} - \frac{\gamma+\gamma_{\varphi}}{2} \right)\beta + i \Lambda (\beta^{\dagger}\beta)\beta - i {{\fontfamily{ptm}\selectfont \textit{g}} }\alpha \\ \left( -i\Delta_{db} - \frac{\gamma+\gamma_{\varphi}}{2} \right)\beta^{\dagger} - i \Lambda (\beta^{\dagger}\beta)\beta + i {{\fontfamily{ptm}\selectfont \textit{g}} }\alpha^{\dagger} \end{pmatrix}\end{aligned}$$ Note that if we make the substitution $\{\alpha^{\dagger},\beta^{\dagger}\to\alpha^*,\beta^*\}$, the above is identical to the drift vector describing classical dynamics, Eq. (\[eq:AVecCl\]). On the other hand, $D_{\rm st}^{ij}$ is the $(i,j)$th element of the diffusion matrix $\mathbf{D}_{\rm st}$ that lends ‘width’ to the distribution function. Here it takes the simple form: $$\begin{aligned} \mathbf{D}_{\rm st} = \begin{pmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{D}_{\beta} \end{pmatrix},~ \mathbf{D}_{\beta} = \begin{pmatrix} (i\Lambda - \gamma_{\varphi})\beta^2 & \gamma_{\varphi}\beta^{\dagger}\beta \\ \gamma_{\varphi}\beta^{\dagger}\beta & (-i\Lambda - \gamma_{\varphi})(\beta^{\dagger})^2 \end{pmatrix}\end{aligned}$$ where $\mathbf{0}$ is the 2-by-2 matrix of zeros. Note that the diffusion includes contributions arising from the nonlinearity $\Lambda$ as well as from the dephasing term $\gamma_{\varphi}$.
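As a consistency check on the drift vector, one can verify numerically that on the subspace where the daggered variables equal the complex conjugates, the Positive-$P$ drift reduces to the classical drift of Eq. (\[eq:AVecCl\]); a minimal sketch with illustrative parameter values:

```python
import numpy as np

def drift(zeta, D_da, D_db, kappa, gam, gph, g, Lam, eta):
    """Positive-P drift vector for zeta = (alpha, alpha_dag, beta, beta_dag);
    the daggered variables are independent complex numbers, not conjugates."""
    a, ad, b, bd = zeta
    gt = gam + gph
    return np.array([
        ( 1j*D_da - kappa/2)*a  - 1j*g*b  - 1j*eta,
        (-1j*D_da - kappa/2)*ad + 1j*g*bd + 1j*eta,
        ( 1j*D_db - gt/2)*b  + 1j*Lam*(bd*b)*b  - 1j*g*a,
        (-1j*D_db - gt/2)*bd - 1j*Lam*(bd*b)*bd + 1j*g*ad,
    ])

# On the "classical subspace" (alpha_dag = alpha*, beta_dag = beta*) the
# daggered rows must be the complex conjugates of the undaggered rows:
rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
z = np.array([a, np.conj(a), b, np.conj(b)])
A = drift(z, 0.3, -0.7, 1.0, 0.2, 0.05, 1.5, 0.04, 0.8)
assert np.allclose(A[1], np.conj(A[0])) and np.allclose(A[3], np.conj(A[2]))
```

This conjugation property is what guarantees that, in the absence of noise, trajectories initialized on the classical subspace remain on it and reproduce the classical dynamics.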
In general, the multi-dimensional Fokker-Planck equation cannot be analytically solved for the distribution function $P(\vec{\zeta},t)$; exceptions include situations where the Fokker-Planck equation is linear or where certain potential conditions are satisfied. The current system falls under neither category. However, the utility of the Fokker-Planck equation extends beyond the equation itself; one can also obtain a set of equivalent stochastic differential equations (SDEs) describing the dynamics of phase space variables $\vec{\zeta}(t)$ sampled from the Positive-$P$ distribution satisfying the governing Fokker-Planck equation. The set of SDEs takes the form [@carmichael_statistical_2002]: $$\begin{aligned} d\vec{\zeta} = \vec{A}_{\rm cl}(\vec{\zeta})dt + \mathbf{B}_{\rm st}(\vec{\zeta},\Lambda,\gamma_{\varphi})d\vec{W}(t) \label{eq:sdes}\end{aligned}$$ where $\mathbf{B}_{\rm st}$ is the matrix square root of the diffusion matrix, defined via $\mathbf{D}_{\rm st} = \mathbf{B}_{\rm st} \mathbf{B}_{\rm st}^T$. For a 4-by-4 diffusion matrix $\mathbf{D}$, the noise matrix $\mathbf{B}$ is not unique; it is in general a 4-by-$k$ non-square matrix, with $d\vec{W}(t)$ then being a $k$-by-1 vector of independent Wiener increments. While this freedom of choice in the noise matrix can be used to improve SDE convergence properties [@drummond_quantum_2003], we find that here a square matrix ($k=4$) suffices. We write it in the form: $$\begin{aligned} \mathbf{B}_{\rm st} = \sqrt{\Gamma}~\mathbf{B}_1 + \sqrt{\gamma_{\varphi}}~\mathbf{B}_2 = \sqrt{\Gamma} \begin{pmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{b}_1 & \mathbf{0} \\ \end{pmatrix} + \sqrt{\gamma_{\varphi}} \begin{pmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{b}_2 \\ \end{pmatrix}\end{aligned}$$ where $\mathbf{0}$ is the 2-by-2 matrix of zeros as before, and $\mathbf{B}_1$, $\mathbf{B}_2$ are the noise matrices introduced in the main text. 
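A single integration step of the SDEs can be sketched with the simplest (Euler–Maruyama) scheme; `drift` and `noise` are assumed user-supplied callables returning $\vec{A}_{\rm cl}(\vec{\zeta})$ and a square $\mathbf{B}_{\rm st}(\vec{\zeta})$ respectively (the names are ours, and higher-order stochastic integrators may be preferable in practice):

```python
import numpy as np

# One Euler–Maruyama step for dζ = A_cl(ζ)dt + B_st(ζ)dW, assuming a
# square (k = len(ζ)) noise matrix; `drift` and `noise` are hypothetical
# user-supplied callables, not functions from the paper's code.
def em_step(zeta, drift, noise, dt, rng):
    # independent real Wiener increments, one per noise-matrix column
    dW = rng.normal(0.0, np.sqrt(dt), size=zeta.shape[0])
    return zeta + drift(zeta)*dt + noise(zeta) @ dW
```

In the noise-free limit this reduces to a forward-Euler step of the classical drift.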
Here we also provide their explicit forms in terms of the 2-by-2 component matrices $\mathbf{b}_1$ and $\mathbf{b}_2$: $$\begin{aligned} \mathbf{b}_1 = \begin{pmatrix} e^{i\theta/2}\beta & 0 \\ 0 & e^{-i\theta/2}\beta^{\dagger} \end{pmatrix},~ \mathbf{b}_2 = \sqrt{\frac{\beta^{\dagger}\beta}{2}} \begin{pmatrix} e^{i\pi/4} & e^{-i\pi/4} \\ e^{-i\pi/4} & e^{i\pi/4} \end{pmatrix}\end{aligned}$$ Finally, we have defined the parameters $\Gamma$ and $\theta$ via: $$\begin{aligned} \Gamma e^{i\theta} \equiv i\Lambda -\gamma_{\varphi} \implies \Gamma = \sqrt{\Lambda^2 + \gamma_{\varphi}^2},~\theta = \arctan\left( -\frac{\Lambda}{\gamma_{\varphi}} \right).\end{aligned}$$ The validity of the noise matrix $\mathbf{B}_{\rm st}$ as the square root of the diffusion matrix $\mathbf{D}_{\rm st}$ may be easily verified by direct multiplication. Practical computation of steady-state operator moments and correlation functions using SDEs ------------------------------------------------------------------------------------------- Simulations of the SDEs in Eq. (\[eq:sdes\]) yield individual stochastic trajectories of the stochastic variables $\vec{\zeta}(t)$, which may then be used to compile *normal-ordered* moments and correlation functions. In what follows, we use expressions for moments and correlation functions for the linear mode as examples, since these are directly accessible via experiment. However the expressions hold equally for nonlinear mode operators by appropriate substitutions. Suppose Eqs. (\[eq:sdes\]) are solved to obtain $N_s$ stochastic trajectories, yielding a set of stochastic trajectories $\{\vec{\zeta}_i(t)\}$ for $i = 1,\ldots, N_s$. 
Then, first-order moments for the linear mode may be determined via stochastic averaging (indicated by notation ${\langle \cdot \rangle}_s$) as follows: $$\begin{aligned} {\langle \hat{a}(t) \rangle} &= \langle \alpha(t) \rangle_s = \lim_{N_s \to \infty} \frac{1}{N_s}\sum_{i=1}^{N_s} \alpha_i(t) \end{aligned}$$ Normal-ordered two-time correlation functions follow similarly: $$\begin{aligned} {\langle \hat{a}^{\dagger}(t+\tau)\hat{a}(t) \rangle} &= \langle \alpha^{\dagger}(t+\tau)\alpha(t) \rangle_s = \lim_{N_s \to \infty} \frac{1}{N_s}\sum_{i=1}^{N_s} \alpha_i^{\dagger}(t+\tau)\alpha_i(t) \\ {\langle \hat{a}(t+\tau)\hat{a}(t) \rangle} &= \langle \alpha(t+\tau)\alpha(t) \rangle_s = \lim_{N_s \to \infty} \frac{1}{N_s}\sum_{i=1}^{N_s} \alpha_i(t+\tau)\alpha_i(t)\end{aligned}$$ Note that the other normal-ordered, time anti-ordered correlation functions ${\langle \hat{a}^{\dagger}(t)\hat{a}(t+\tau) \rangle}$ and ${\langle \hat{a}^{\dagger}(t)\hat{a}^{\dagger}(t+\tau) \rangle}$ may be obtained from the two expressions above respectively by conjugation. While the above expressions allow access to moments and correlation functions at arbitrary times, when analyzing the long time coherence of the emergent frequency combs we will ultimately be interested in steady state quantities. The requirement of a steady state allows an alternative calculation of the above quantities. To directly acquire steady state quantities, we simulate Eqs. (\[eq:sdes\]) for times $t \in [0, t_{\rm ss}+T]$ with simulation time step $\Delta t$, and retain solutions in the time window $t \in [t_{\rm ss},t_{\rm ss}+T]$. The time $t_{\rm ss}$ is chosen long enough that the solutions $\{\vec{\zeta}_i(t)\}$ within the stored window are extracted when initial transients have decayed away; this initial $t_{\rm ss}$ value is verified self-consistently, as discussed at the end of this section. 
For simplicity, we now index the solutions in this time window by times $t_j \in [t_{\rm ss}, t_{\rm ss} + T]$, such that $t_j = t_{\rm ss} + j \Delta t$ for $j = 0,\ldots,M_1$, where $M_1 = T/\Delta t$. Since in the steady state first-order moments become stationary in time, we can also average moments over time; the result matches ensemble averaging provided a true steady state has been reached. In practice, to take advantage of parallelization available with modern computing clusters, we compute moments by averaging over *both* trajectories and time: $$\begin{aligned} {\langle \hat{a} \rangle} \approx \frac{1}{N_s}\sum_{i=1}^{N_s} \left[ \frac{1}{M_1+1}\sum_{j=0}^{M_1} \alpha_i(t_j) \right]\end{aligned}$$ where the term in square brackets implements time-averaging. Similarly, two-time correlation functions reduce to single-time quantities in the steady state. Suppose we wish to compute correlation functions such as ${\langle \hat{a}^{\dagger}(\tau)\hat{a}(0) \rangle}$ for $\tau \in [0, T_A]$ where $T_A \leq T$. The time average in this case is performed over a subset of the total window of length $T$, namely over $t_j$ with $j = 0,\ldots,M_2$, where $M_2 = (T-T_A)/\Delta t \leq M_1$. Then, steady state correlation functions may be obtained by ensemble and time averaging via: $$\begin{aligned} {\langle \hat{a}^{\dagger}(\tau)\hat{a}(0) \rangle} \approx \frac{1}{N_s}\sum_{i=1}^{N_s} \left[ \frac{1}{M_2+1 }\sum_{j=0}^{M_2} \alpha_i^{\dagger}(t_j+\tau)\alpha_i(t_j) \right]\end{aligned}$$ Note that if $T_A = T$, the required correlation function spans the entire retained window of length $T$; only a single correlation function is obtained $(M_2 = 0)$ and thus time averaging has no effect. In practice, we retain solutions for a time window $T$ that is larger than the required length of the correlation function $T_A$, so that time-averaging can be performed.
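The ensemble-plus-time average above can be sketched as follows; the array layout (trajectories by time samples) and the function name are our own conventions:

```python
import numpy as np

def steady_corr(alpha_d, alpha, dt, T_A):
    """Ensemble- and time-averaged <a†(τ)a(0)> for τ = 0, dt, ..., T_A.
    alpha_d, alpha: complex arrays of shape (N_s, M1+1) holding the
    sampled trajectories α†_i(t_j), α_i(t_j) over the retained window."""
    L = int(round(T_A/dt)) + 1          # number of τ samples
    M2 = alpha.shape[1] - L             # last admissible start index
    corr = np.zeros(L, dtype=complex)
    for j in range(M2 + 1):             # time average over start times t_j
        corr += (alpha_d[:, j:j+L]*alpha[:, j, None]).mean(axis=0)
    return corr/(M2 + 1)
```

For a deterministic trajectory $\alpha(t) = e^{i\omega t}$ this returns $e^{-i\omega\tau}$, as expected from the defining expression.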
Finally, to verify that all averaged results computed above are truly steady state quantities, we increase the value of $t_{\rm ss}$ beyond its chosen initial value and recompute the results, checking to see whether the averaged quantities are unchanged. If so, they are independent of $t_{\rm ss}$ and we can be confident of having computed steady state quantities. Otherwise, the procedure is repeated for increasing $t_{\rm ss}$ values until this condition is met. Calculating the filtered output temporal coherence function {#sec:calcCorr} ----------------------------------------------------------- For calculations of comb coherence, we introduce in the main text the first-order temporal coherence function $G^{(1)}(\tau)$; here we rewrite it in a slightly different but ultimately equivalent form: $$\begin{aligned} G^{(1)}(\tau) = \frac{{\langle \Delta\hat{I}(0)\Delta\hat{I}(\tau) \rangle}}{{\langle \Delta\hat{I}(0)\Delta\hat{I}(0) \rangle}} = \frac{{\langle \hat{I}(0)\hat{I}(\tau) \rangle} - {\langle \hat{I}(0) \rangle}^2}{ {\langle \hat{I}(0)\hat{I}(0) \rangle} - {\langle \hat{I}(0) \rangle}^2 } \label{eq:G1}\end{aligned}$$ where $\hat{I}(t)$ is the *measured* cavity output quadrature, and we have introduced *reduced* steady-state correlation functions for arbitrary operators $\hat{o}_1, \hat{o}_2$ as: $$\begin{aligned} {\langle \Delta\hat{o}_1(0)\Delta\hat{o}_2(\tau) \rangle} = {\langle (\hat{o}_1(0)-{\langle \hat{o}_1 \rangle})(\hat{o}_2(\tau)-{\langle \hat{o}_2 \rangle}) \rangle} = {\langle \hat{o}_1(0)\hat{o}_2(\tau) \rangle} - {\langle \hat{o}_1 \rangle}{\langle \hat{o}_2 \rangle}\end{aligned}$$ Note, however, that simulating the SDEs of Eqs. (\[eq:sdes\]) only yields *intra*cavity quantities, while we require measured cavity *output* quantities to compute $G^{(1)}(\tau)$. In this section, we show how the two can be related using quantum input-output theory [@gardiner_input_1985].
We begin by analyzing the steady state output quadrature correlation function: $$\begin{aligned} {\langle \hat{i}(0)\hat{i}(\tau) \rangle} = {\langle \hat{i}(\tau)\hat{i}(0) \rangle}^*\end{aligned}$$ where $\hat{i}(t)$ is the cavity mode output quadrature *prior to any post-processing* (in particular downconversion and demodulation) carried out in the experiment. It can be written in terms of the output field non-Hermitian operators $\hat{a}_{\rm out}(t)$: $$\begin{aligned} \hat{i}(t) = \frac{1}{\sqrt{2}}\left( \hat{a}_{\rm out}(t) + \hat{a}_{\rm out}^{\dagger}(t) \right)\end{aligned}$$ In terms of the non-Hermitian output operators, the output quadrature correlation function takes the form: $$\begin{aligned} {\langle \hat{i}(0)\hat{i}(\tau) \rangle} = \frac{1}{2}\left( {\langle \hat{a}_{\rm out}(0)\hat{a}_{\rm out}(\tau) \rangle} + {\langle \hat{a}^{\dagger}_{\rm out}(0)\hat{a}^{\dagger}_{\rm out}(\tau) \rangle} + {\langle \hat{a}^{\dagger}_{\rm out}(0)\hat{a}_{\rm out}(\tau) \rangle} + {\langle \hat{a}_{\rm out}(0)\hat{a}_{\rm out}^{\dagger}(\tau) \rangle} \right)\end{aligned}$$ It now proves useful to normal-order and time anti-order the individual correlation functions. 
This requires use of the commutation relationships between the non-Hermitian output operators [@gardiner_input_1985]: $$\begin{aligned} [\hat{a}_{\rm out}(t),\hat{a}_{\rm out}(t')] = 0 = [\hat{a}^{\dagger}_{\rm out}(t),\hat{a}^{\dagger}_{\rm out}(t')],~[\hat{a}_{\rm out}(t),\hat{a}^{\dagger}_{\rm out}(t')] = \delta(t-t')\end{aligned}$$ Then, the output quadrature correlation function becomes: $$\begin{aligned} {\langle \hat{i}(0)\hat{i}(\tau) \rangle} = \frac{1}{2}\left( {\langle \hat{a}_{\rm out}(\tau)\hat{a}_{\rm out}(0) \rangle} + {\langle \hat{a}^{\dagger}_{\rm out}(0)\hat{a}^{\dagger}_{\rm out}(\tau) \rangle} + {\langle \hat{a}^{\dagger}_{\rm out}(0)\hat{a}_{\rm out}(\tau) \rangle} + {\langle \hat{a}_{\rm out}^{\dagger}(\tau)\hat{a}_{\rm out}(0) \rangle} + \delta(\tau) \right)\end{aligned}$$ Note that the second and fourth terms in the expressions are simply conjugates of the first and third terms respectively. To calculate the reduced correlation function, we require the steady state quantity: $$\begin{aligned} {\langle \hat{i}(0) \rangle}{\langle \hat{i}(\tau) \rangle} = \frac{1}{2}\left( {\langle \hat{a}_{\rm out}(0) \rangle}{\langle \hat{a}_{\rm out}(\tau) \rangle} + {\langle \hat{a}^{\dagger}_{\rm out}(0) \rangle}{\langle \hat{a}^{\dagger}_{\rm out}(\tau) \rangle} + {\langle \hat{a}^{\dagger}_{\rm out}(0) \rangle}{\langle \hat{a}_{\rm out}(\tau) \rangle} + {\langle \hat{a}_{\rm out}^{\dagger}(\tau) \rangle}{\langle \hat{a}_{\rm out}(0) \rangle} \right)\end{aligned}$$ Using the above, we can finally write the reduced output quadrature correlation function as: $$\begin{aligned} {\langle \Delta\hat{i}(0)\Delta\hat{i}(\tau) \rangle} = \frac{1}{2}\left( {\langle \Delta\hat{a}_{\rm out}(\tau)\Delta\hat{a}_{\rm out}(0) \rangle} + {\langle \Delta\hat{a}^{\dagger}_{\rm out}(0)\Delta\hat{a}^{\dagger}_{\rm out}(\tau) \rangle} + {\langle \Delta\hat{a}^{\dagger}_{\rm out}(0)\Delta\hat{a}_{\rm out}(\tau) \rangle} + {\langle \Delta\hat{a}_{\rm
out}^{\dagger}(\tau)\Delta\hat{a}_{\rm out}(0) \rangle} + \delta(\tau) \right)\end{aligned}$$ Now we can relate the output field operators to intracavity operators via input-output theory: $$\begin{aligned} \hat{a}_{\rm out}(t) = \hat{a}_{\rm in}(t) + \sqrt{\kappa}\hat{a}(t)\end{aligned}$$ The two independent reduced output field correlation functions above can be related to the intracavity field reduced correlation functions (assuming zero temperature): $$\begin{aligned} {\langle \Delta\hat{a}_{\rm out}(\tau)\Delta\hat{a}_{\rm out}(0) \rangle} &= \kappa {\langle \Delta\hat{a}(\tau)\Delta\hat{a}(0) \rangle} \\ {\langle \Delta\hat{a}^{\dagger}_{\rm out}(\tau)\Delta\hat{a}_{\rm out}(0) \rangle} &= \kappa {\langle \Delta\hat{a}^{\dagger}(\tau)\Delta\hat{a}(0) \rangle}\end{aligned}$$ Finally, the reduced *output quadrature* correlation function can be related to *normal-ordered intracavity* correlation functions as: $$\begin{aligned} {\langle \Delta\hat{i}(0)\Delta\hat{i}(\tau) \rangle} = \frac{1}{2}\delta(\tau) + \frac{\kappa}{2}\Big( {\langle \Delta\hat{a}(\tau)\Delta\hat{a}(0) \rangle} + {\langle \Delta\hat{a}^{\dagger}(\tau)\Delta\hat{a}(0) \rangle} + c.c. \Big) \label{eq:cavCorr}\end{aligned}$$ Recall that we are ultimately interested in the correlation function for the *measured* cavity output quadrature $\hat{I}(t)$, as defined in the main text, which is related to $\hat{i}(t)$ by a downconversion and demodulation step. 
Fortunately, it is possible to relate measured correlation functions post-filtering directly to output correlation functions prior to filtering [@da_silva_schemes_2010]: $$\begin{aligned} {\langle \Delta\hat{I}(0)\Delta\hat{I}(\tau) \rangle} = \mathcal{F}(\tau) \ast {\langle \Delta\hat{i}(0)\Delta\hat{i}(\tau) \rangle} \label{eq:filtCorr}\end{aligned}$$ where $\mathcal{F}(\tau)$ is the composite filter function describing both downconversion and demodulation of the cavity output in the process of measurement, and $\ast$ indicates the convolution operation. Therefore, to calculate the measured $G^{(1)}(\tau)$ numerically, we first simulate Eqs. (\[eq:sdes\]) and calculate the reduced intracavity correlation functions on the right hand side of Eq. (\[eq:cavCorr\]), as described in the previous section. This enables us to obtain the output correlation function ${\langle \Delta\hat{i}(0)\Delta\hat{i}(\tau) \rangle}$. The resulting function is then passed through (i.e. convolved with) the composite filter $\mathcal{F}(\tau)$ to obtain the filtered correlation function, Eq. (\[eq:filtCorr\]). Finally, employing Eq. (\[eq:G1\]) yields the required temporal coherence function numerically. Accounting for the filtering process is important to obtain agreement between the calculated and measured coherence functions, in particular the oscillation frequency, which would otherwise differ from the experiment by $\sim 100~$MHz, the downconversion offset implemented as part of the post-processing. This is particularly evident in comparisons of the measured and numerically calculated $G^{(1)}(\tau)$ shown in Fig. 3 (c) of the main text. We also note that the filtering process replaces the somewhat unphysical $\delta$-function contribution in Eq. (\[eq:cavCorr\]), arising from the abstract construct of white noise in the cavity output field, with a finite quantity, as is expected for any real detection scheme which possesses a finite bandwidth.
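The final convolution-and-normalization step can be sketched as below. The Gaussian kernel and all numerical values here are illustrative assumptions, not the experiment's actual downconversion/demodulation filter:

```python
import numpy as np

# Sketch of the post-processing step: convolve an unfiltered output
# correlation with a filter kernel F(τ), then normalize as in Eq. (G1).
# Both the correlation and the kernel are illustrative stand-ins.
dtau = 0.01
tau = np.arange(0.0, 20.0, dtau)
corr_out = np.cos(5.0*tau)*np.exp(-0.1*tau)        # stand-in for <Δi(0)Δi(τ)>

sig = 0.2                                           # assumed filter width
k = np.arange(-5*sig, 5*sig + dtau/2, dtau)
F = np.exp(-k**2/(2*sig**2))
F /= F.sum()                                        # unit-area discrete kernel

corr_filt = np.convolve(corr_out, F, mode='same')   # ~ F(τ) ∗ <Δi(0)Δi(τ)>
G1 = corr_filt/corr_filt[0]                         # normalized coherence
```

The `mode='same'` option keeps the filtered correlation on the same $\tau$ grid as the input.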
Estimating pure dephasing rate $\gamma_{\varphi}$ via comb coherence {#subsec:nlDephasing} -------------------------------------------------------------------- Simulating Eqs. (\[eq:sdes\]) and calculating the output coherence function as described in the previous section allows us to extract the coherence time $T_{\rm coh}$, as discussed in the main text. The only parameter required to simulate the SDEs that we are unable to directly measure is the pure dephasing rate $\gamma_{\varphi}$; the weak nonlinearity of the nonlinear mode prevents a standard Ramsey measurement of the pure dephasing rate, and indirect methods based on cavity measurement are limited by the large disparity between the dephasing rate and the cavity linewidth $\kappa$. These difficulties are discussed in Section \[sec:deph\]. However, the coherence of frequency combs is affected by both the known nonlinearity and the unknown pure dephasing rate; as a result, by simulating Eqs. (\[eq:sdes\]) for various values of $\gamma_{\varphi}$ and comparing with experimental observations, we can estimate $\gamma_{\varphi}$. In Fig. \[fig:gammaPhiFit\], we show the numerically obtained value of $T_{\rm coh}$ across the same cross-section of the phase diagram included in the main text, Fig. 2(b), for $\gamma_{\varphi}/(2\pi) \in \{0.0,1.0,2.0,3.0\}~\text{kHz}$. Also shown is the experimental result. From these results we conclude that the pure dephasing rate may be estimated to lie within $\gamma_{\varphi}/(2\pi) \in [1.0,3.0]~\text{kHz}$. Furthermore, the best fit appears to be found for $\gamma_{\varphi}/(2\pi) \simeq 2.0~\text{kHz}$. Linearized Floquet Analysis of SDEs {#sec:floquet} =================================== The influence of quantum noise on system dynamics as described by the stochastic terms in Eqs. (\[eq:sdes\]) is well understood when considering dynamics near a classically stable fixed point.
Here one linearizes the system around the stable fixed point and studies weak fluctuations due to stochastic terms. However, in the frequency comb regime the system exhibits no classically stable fixed points, instead settling into a stable attractor describing a limit cycle. The study of linearized fluctuations around such stable attractors has gained much interest recently and can be performed by linearizing the dynamics around the periodic classical solution [@demir_phase_2000; @navarrete-benlloch_general_2017]. Linearization and phase dynamics -------------------------------- To begin, we rewrite the system of SDEs, Eqs. (\[eq:sdes\]), below: $$\begin{aligned} \frac{d \vec{\zeta}}{dt} = \vec{A}_{\rm cl} + \mathbf{B}_{\rm st}\cdot\frac{d\vec{W}(t)}{dt} \end{aligned}$$ where we have suppressed the dependence of $\vec{A}_{\rm cl}, \mathbf{B}_{\rm st}$ on $\vec{\zeta}$ and system parameters for notational convenience. In the frequency comb regime, the classical (noise-free) system admits the periodic solution $\vec{\zeta}_{\rm cl}(t)$: $$\begin{aligned} \frac{d\vec{\zeta}_{\rm cl}}{dt} = \vec{A}_{\rm cl} \label{eq:Z}\end{aligned}$$ For frequency combs with spacing $\Delta$, $\vec{\zeta}_{\rm cl}(t)$ is periodic with period $T = \frac{2\pi}{\Delta}$. We can then consider fluctuations $\vec{z}(t)$ around this classical periodic solution: $$\begin{aligned} \vec{\zeta}(t+\theta) = \vec{\zeta}_{\rm cl}(t+\theta) + \vec{z}(t+\theta) \label{eq:zExp}\end{aligned}$$ where we have introduced the additional phase parameter $\theta(t)$ which is not fixed by the classical dynamical equations of motion, and is therefore susceptible to perturbations due to noise (or other external stimuli) [@demir_phase_2000; @navarrete-benlloch_general_2017]. We are now interested in the linearized dynamics of the fluctuations $\vec{z}(t+\theta)$. Substituting the expansion, Eq. (\[eq:zExp\]), into the system of SDEs, Eq. 
(\[eq:sdes\]), and retaining only terms linear in $\vec{z}(t)$, we find: $$\begin{aligned} \frac{d\vec{z}}{dt} + \frac{d\vec{\zeta}_{\rm cl}}{dt}\dot{\theta} = \mathbf{J}[\vec{\zeta}_{\rm cl}(t)]\cdot\vec{z} + \mathbf{B}_{\rm st}[\vec{\zeta}_{\rm cl}(t)]\cdot\frac{d\vec{W}}{dt} \label{eq:linSDEs}\end{aligned}$$ where $\mathbf{J}[\vec{\zeta}_{\rm cl}(t)]$ is the Jacobian matrix evaluated along the *periodic* classical solution, and is therefore a periodic matrix itself. Similarly $\mathbf{B}_{\rm st}[\vec{\zeta}_{\rm cl}(t)]$ is the noise matrix also evaluated along the periodic classical solution. Finally, $\frac{d\vec{\zeta}_{\rm cl}}{dt} \equiv \vec{v}$ is the velocity vector and is tangential to the limit cycle trajectory. This term clearly vanishes if $\vec{\zeta}_{\rm cl}(t)$ is time *independent*, as in the case of a stable fixed point where $\vec{\zeta}_{\rm cl}(t) \to \vec{Z}$ defined in Eq. (\[eq:ZDef\]); then the above equation simply describes the linearized dynamics of fluctuations around the fixed point, governed by a static Jacobian and driven by noise terms. Here, however, the velocity term does not vanish and in addition to the dynamics of $\vec{z}(t)$, we are also interested in the evolution of the free phase $\theta(t)$ under the influence of stochastic terms. To solve for the dynamics of a system governed by a time-periodic dynamical matrix, it proves useful to express the linearized fluctuations $\vec{z}(t)$ in terms of the Floquet eigenvectors of the periodic system, Eq. (\[eq:Z\]). Details of the Floquet eigensystem analysis are provided in Section \[ssec:floquet\]; here for clarity we restrict our discussion to understanding how the main results can be used to analyze limit cycle phase diffusion. For convenience we define the periodic dynamical matrix $\mathbf{J}[\vec{\zeta}_{\rm cl}(t)] \equiv \mathbf{J}(t)$ and the periodic noise matrix $\mathbf{B}_{\rm st}[\vec{\zeta}_{\rm cl}(t)] \equiv \mathbf{B}_{\rm st}(t)$. 
The Floquet eigenvectors $\{\vec{p}_i(t),\vec{q}_i(t)\}$ for $i = 0,\ldots,N-1$, where $N$ is the dimension of the system of ODEs ($N=4$ for the present system), are periodic with the period of the stable classical limit cycle, $T$. They satisfy the linear system of equations: $$\begin{aligned} \dot{\vec{p}}_i(t) &= \left[\mathbf{J}(t)-\mu_i \right]\vec{p}_i(t) \nonumber \\ \dot{\vec{q}}^{\dagger}_i(t) &= \vec{q}_i^{\dagger}(t)\left[\mu_i-\mathbf{J}(t) \right]\end{aligned}$$ The $\{\mu_i\}$ are Floquet exponents determined by the eigenvalues of the fundamental matrix of the Floquet system. For systems with a periodic stable attractor, at least one of the Floquet exponents, which we label $\mu_0$ here, vanishes [@haken_at_1983]. The corresponding Floquet eigenvector $\vec{p}_0(t)$ can be shown to be proportional to the tangential velocity vector $\vec{v}$ (see Section \[ssec:floquet\]). Finally, the Floquet eigenvectors satisfy the following orthogonality relation: $$\begin{aligned} \vec{q}_j^{\dagger}(t)\vec{p}_i(t) = \delta_{ij}~\forall~t \in [0,T]\end{aligned}$$ To proceed, we expand the weak fluctuations around the stable limit cycle in terms of the Floquet eigenvectors: $$\begin{aligned} \vec{z}(t) = \sum_{n=1}^{N-1} c_n(t) \vec{p}_n(t)\end{aligned}$$ Note that the above expansion does not include the Floquet eigenvector $\vec{p}_0(t)$ corresponding to $\mu_0 = 0$, which as mentioned before is proportional to the tangent vector to the classical limit cycle [@navarrete-benlloch_general_2017]. Substituting the above expansion into the linearized set of SDEs, Eqs.
(\[eq:linSDEs\]), we find: $$\begin{aligned} \sum_{n=1}^{N-1} \left[\dot{c}_n(t)\vec{p}_n(t) + c_n(t)\dot{\vec{p}}_n(t)\right] + \vec{v}\dot{\theta} = \mathbf{J}(t)\cdot \sum_{n=1}^{N-1} c_n(t) \vec{p}_n(t) + \mathbf{B}_{\rm st}(t)\cdot \frac{d\vec{W}}{dt}\end{aligned}$$ which simplifies to: $$\begin{aligned} \sum_{n=1}^{N-1} \left[\dot{c}_n(t)\vec{p}_n(t) + c_n(t)\dot{\vec{p}}_n(t)\right] + \vec{v}\dot{\theta} &= \sum_{n=1}^{N-1} c_n(t) \left[ \dot{\vec{p}}_n(t)+ \mu_n\vec{p}_n(t) \right] + \mathbf{B}_{\rm st}(t)\cdot \frac{d\vec{W}}{dt} \nonumber \\ \implies \sum_{n=1}^{N-1} \dot{c}_n(t)\vec{p}_n(t) + \vec{v}\dot{\theta} &= \sum_{n=1}^{N-1} \mu_n c_n(t)\vec{p}_n(t) + \mathbf{B}_{\rm st}(t)\cdot \frac{d\vec{W}}{dt} \end{aligned}$$ The terms corresponding to time derivatives of the right Floquet eigenvectors simply cancel. The remaining terms can be used to obtain equations of motion for the expansion coefficients. However, we are primarily interested in the diffusion of the phase variable $\theta(t)$. We can use the fact that $\vec{v}(t) \propto \vec{p}_0(t)$ to isolate the equation of motion for the phase variable: multiplying by the Floquet left eigenvector $\vec{q}_0^{\dagger}(t)$ and using the orthogonality of the Floquet eigenvectors, the above system simplifies to: $$\begin{aligned} \left(\vec{q}_0^{\dagger}(t)\vec{v}(t) \right) \dot{\theta}(t) = \vec{q}_0^{\dagger}(t) \left( \mathbf{B}_{\rm st}(t)\cdot \frac{d\vec{W}}{dt} \right)\end{aligned}$$ For notational simplicity, we can normalize $\vec{q}_0(t)$ (and therefore $\vec{p}_0(t)$) such that $\vec{q}_0^{\dagger}(t)\vec{v} = v_T$ where $v_T$ is the average velocity over the limit cycle period $T$. 
Then, defining the time-dependent projection of the noise vector in parentheses onto $\vec{q}_0(t)$: $$\begin{aligned} n(t) = \vec{q}_0^{\dagger}(t) \left( \mathbf{B}_{\rm st}(t)\cdot \frac{d\vec{W}}{dt} \right)\end{aligned}$$ we obtain the dynamical equation for $\theta(t)$: $$\begin{aligned} v_T \dot{\theta}(t) = n(t)\end{aligned}$$ which is the equation introduced in the main text. However, note that as introduced, the phase variable is a perturbation to the time $t$ and therefore has dimensions of time. It can be made dimensionless by multiplying by the relevant frequency scale for the frequency comb, namely the comb spacing $\Delta$. We then have the equation of motion: $$\begin{aligned} r_T \left[\Delta\dot{\theta}(t)\right] = n(t)\end{aligned}$$ where we introduce the effective limit cycle radius $r_T$ via $v_T = r_T \Delta$. The simplified notation does require some caution; the noise term $n(t)$ is stochastic, and solutions to the above equation must ultimately be determined by calculating moments of the phase variable. In particular, we can obtain the variance: $$\begin{aligned} \Delta^2{\langle \theta^2(t) \rangle} = \frac{1}{r_T^2}\int_0^t \int_0^t d\tau~d\tau'~{\langle n(\tau) n(\tau') \rangle}\end{aligned}$$ where we have assumed $\theta(0) = 0$. The double integral above simplifies once the noise correlation functions for $\frac{d\vec{W}}{dt}$ are substituted. Finally, we can use the phase variance over a period $T$ to define the coherence time $T_{\rm coh}$ as: $$\begin{aligned} \Delta^2{\langle \theta^2(T) \rangle} \equiv 2\left(\frac{T}{T_{\rm coh}}\right)\end{aligned}$$ The inset of the phase diagram in Fig. 2 of the main text plots $T_{\rm coh}$ as the limit cycle coherence time.
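The linear growth of the phase variance can be illustrated by a direct Euler–Maruyama simulation of $v_T\dot{\theta} = n(t)$ for delta-correlated noise $n(t)\,dt = \sigma\,dW$; the values of $\sigma$ and $v_T$ below are hypothetical placeholders:

```python
import numpy as np

# Euler–Maruyama simulation of v_T dθ = σ dW: phase diffusion driven by
# white noise. Parameter values are illustrative, not the experiment's.
rng = np.random.default_rng(0)
sigma, v_T, dt, n_steps, n_traj = 0.3, 2.0, 1e-2, 100, 20000

dW = rng.normal(0.0, np.sqrt(dt), size=(n_traj, n_steps))
theta = np.cumsum((sigma/v_T)*dW, axis=1)    # θ(0) = 0 for every trajectory

t = n_steps*dt
var_num = theta[:, -1].var()                 # ensemble variance at time t
var_theory = (sigma/v_T)**2 * t              # linear-in-time phase diffusion
```

The sampled variance grows linearly in time, matching the double-integral expression when the white-noise correlation $\langle n(\tau)n(\tau')\rangle = \sigma^2\delta(\tau-\tau')$ is substituted.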
Derivation of Floquet eigensystem {#ssec:floquet} --------------------------------- In this subsection we provide a detailed derivation of the Floquet eigensystem, consisting of the Floquet exponents and left/right eigenvectors, which are employed in the analysis of limit cycle diffusion. We consider the system of $N$ linear first order ODEs: $$\begin{aligned} \dot{\vec{z}} = \mathbf{J}(t)\vec{z} \label{eq:floquetSys}\end{aligned}$$ A key constraint of the problem is that the dynamical matrix $\mathbf{J}(t)$ is periodic: $\mathbf{J}(t) = \mathbf{J}(t+T)$, as is the case for Eqs. (\[eq:linSDEs\]) in the previous subsection. Being a system of $N$ ODEs, it admits $N$ linearly independent solutions which we label $\{\vec{z}_1(t),\vec{z}_2(t),\ldots,\vec{z}_N(t)\}$. We can construct a matrix $\mathbf{R}(t)$ with linearly independent columns $\{\vec{z}_i(t)\}$; the resulting matrix also satisfies: $$\begin{aligned} \dot{\mathbf{R}}(t) = \mathbf{J}(t)\mathbf{R}(t) \label{eq:floquetR}\end{aligned}$$ Because the system is linear, multiplying $\mathbf{R}(t)$ on the right by any constant matrix $\mathbf{K}$ yields another solution. In particular, if we define the matrix $\mathbf{V}(t)$ as: $$\begin{aligned} \mathbf{V}(t) = \mathbf{R}(t)\mathbf{K}\end{aligned}$$ then: $$\begin{aligned} \dot{\mathbf{V}}(t) = \dot{\mathbf{R}}(t)\mathbf{K} = \mathbf{J}(t)\mathbf{R}(t)\mathbf{K} = \mathbf{J}(t)\mathbf{V}(t) \end{aligned}$$ so that $\mathbf{V}(t)$ is also a solution of the Floquet system. Since $\mathbf{J}(t+T) = \mathbf{J}(t)$, we find: $$\begin{aligned} \dot{\mathbf{R}}(t+T) = \mathbf{J}(t)\mathbf{R}(t+T)\end{aligned}$$ so that the matrix $\mathbf{R}(t+T)$ also solves the linear system.
Combining the above two results, we can relate $\mathbf{R}(t+T)$ to $\mathbf{R}(t)$: $$\begin{aligned} \mathbf{R}\left( t+T \right) = \mathbf{R}(t)\mathbf{K}\end{aligned}$$ Since $\mathbf{K}$ is a constant matrix, it can be obtained from the above relation by setting $t=0$: $$\begin{aligned} \mathbf{K} = \mathbf{R}^{-1}(0)\mathbf{R}(T)\end{aligned}$$ Choosing initial conditions such that $\mathbf{R}(0) = \mathbf{I}$, we simply obtain $\mathbf{K} = \mathbf{R}(T)$, which is a constant matrix referred to as the *fundamental matrix* of the Floquet system. It is obtained by solving Eq. (\[eq:floquetR\]) for $\mathbf{R}(t)$ as a function of time $t \in [0, T]$ over a single period $T$ of the classical solution, with the aforementioned initial condition. Note that the fundamental matrix is in general non-Hermitian; as such we need to consider its complex eigenvalues $\rho_i$ and right/left eigenvectors $\vec{b}_i$, $\vec{c}_i$ respectively: $$\begin{aligned} \mathbf{K} \vec{b}_i &= \rho_i \vec{b}_i \nonumber \\ \vec{c}_i^{\dagger} \mathbf{K} &= \rho_i \vec{c}_i^{\dagger}\end{aligned}$$ which satisfy the orthogonality relation: $$\begin{aligned} \vec{c}_i^{\dagger}\vec{b}_j = \delta_{ij}\end{aligned}$$ We now define the set of vectors $\{\vec{y}_i(t)\}$: $$\begin{aligned} \vec{y}_i(t) = \mathbf{R}(t)\vec{b}_i\end{aligned}$$ Substituting these into Eq. (\[eq:floquetSys\]), we find: $$\begin{aligned} \dot{\vec{y}}_i(t) = \dot{\mathbf{R}}(t)\vec{b}_i = \mathbf{J}(t)\mathbf{R}(t)\vec{b}_i = \mathbf{J}\vec{y}_i(t)\end{aligned}$$ which means $\{\vec{y}_i(t)\}$ are solutions to the Floquet system, Eq. (\[eq:floquetSys\]), as well.
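The construction of the fundamental matrix can be checked numerically on a standard toy limit cycle. The sketch below uses a Stuart–Landau oscillator, $\dot z = (1+i\omega)z - |z|^2 z$ (an illustrative stand-in, not the comb system): it has the stable cycle $z_{\rm cl}(t) = e^{i\omega t}$ and its Floquet exponents are known to be $\{-2, 0\}$, the zero exponent reflecting the free limit-cycle phase discussed earlier.

```python
import numpy as np

def fundamental_matrix(J, T, n=8000):
    """RK4 integration of R' = J(t) R with R(0) = I over one period T,
    returning the fundamental matrix K = R(T)."""
    R, h = np.eye(J(0.0).shape[0]), T/n
    for k in range(n):
        t = k*h
        k1 = J(t) @ R
        k2 = J(t + h/2) @ (R + h/2*k1)
        k3 = J(t + h/2) @ (R + h/2*k2)
        k4 = J(t + h) @ (R + h*k3)
        R = R + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return R

# Jacobian of the Stuart–Landau flow evaluated on the cycle (cos ωt, sin ωt)
om = 1.0
def J_cl(t):
    x, y = np.cos(om*t), np.sin(om*t)
    return np.array([[1 - 3*x*x - y*y, -2*x*y - om],
                     [-2*x*y + om,     1 - x*x - 3*y*y]])

T = 2*np.pi/om
K = fundamental_matrix(J_cl, T)
# Floquet multipliers ρ_i = e^{μ_i T} give the exponents μ_i
mu = np.sort(np.log(np.abs(np.linalg.eigvals(K)))/T)
```

For this example `mu` comes out close to $\{-2, 0\}$, with the vanishing exponent corresponding to the tangential (phase) direction.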
This decomposition of the solutions in terms of the eigenvectors of the fundamental matrix also implies: $$\begin{aligned} \vec{y}_i(t+T) &= \mathbf{R}(t+T)\vec{b}_i = \mathbf{R}(t)\mathbf{K}\vec{b}_i = \rho_i \mathbf{R}(t)\vec{b}_i \nonumber \\ &= \rho_i \vec{y}_i(t) \end{aligned}$$ Therefore solutions to the Floquet system are in general *not* periodic, unless $\rho_i = 1$. The set of $\{\rho_i\}$ are referred to as *Floquet multipliers*. However, solutions separated by a period are simply related by a constant. In particular, this enables writing them in the form: $$\begin{aligned} \vec{y}_i(t) = e^{\mu_i t}\vec{p}_i(t)\end{aligned}$$ where we introduce a set of periodic vectors $\{\vec{p}_i(t)\}$, such that: $$\begin{aligned} \vec{p}_i(t+T) = \vec{p}_i(t)\end{aligned}$$ Then, $$\begin{aligned} \vec{y}_i(t+T) = e^{\mu_i T} e^{\mu_i t} \vec{p}_i(t+T) = e^{\mu_i T} e^{\mu_i t} \vec{p}_i(t) = e^{\mu_i T} \vec{y}_i (t) \equiv \rho_i \vec{y}_i(t)\end{aligned}$$ This enables a parameterization of the Floquet multipliers in terms of *Floquet exponents* $\{\mu_i\}$: $$\begin{aligned} \rho_i = e^{\mu_i T}\end{aligned}$$ We define the periodic vectors $\{\vec{p}_i(t)\}$ as the Floquet right eigenvectors.
They are completely determined by the eigenvectors $\{\vec{b}_i\}$ and eigenvalues $\{\rho_i\}$ of the fundamental matrix via: $$\begin{aligned} \vec{y}_i(t) = \mathbf{R}(t)\vec{b}_i = e^{\mu_i t} \vec{p}_i(t) \implies \vec{p}_i(t) = e^{-\mu_i t}\mathbf{R}(t)\vec{b}_i \label{eq:pEq}\end{aligned}$$ To find the equation of motion for the Floquet right eigenvectors $\vec{p}_i(t)$, we can simply take the time derivative of the above, which then yields the equation of motion: $$\begin{aligned} \dot{\vec{p}}_i(t) &= \left[ -\mu_i + \mathbf{J}(t) \right] e^{-\mu_i t}\mathbf{R}(t) \vec{b}_i \nonumber \\ \implies \dot{\vec{p}}_i(t) &= \left[ \mathbf{J}(t) - \mu_i \right] \vec{p}_i(t)\end{aligned}$$ Similar to the definition of $\vec{y}_i(t)$, we can define solutions $\{\vec{w}_i(t)\}$ in terms of the left eigenvectors of the fundamental matrix $\mathbf{K}$: $$\begin{aligned} \vec{w}_i^{\dagger}(t) = \vec{c}_i^{\dagger}\mathbf{R}^{-1}(t)\end{aligned}$$ Clearly, we have: $$\begin{aligned} \vec{w}_i^{\dagger}(t+T) = \vec{c}_i^{\dagger} \mathbf{R}^{-1}(t+T) = \vec{c}_i^{\dagger} \mathbf{K}^{-1}\mathbf{R}^{-1}(t) = \rho_i^{-1}\vec{c}_i^{\dagger}\mathbf{R}^{-1}(t) = \rho_i^{-1}\vec{w}_i^{\dagger}(t)\end{aligned}$$ where we have used the relationship: $$\begin{aligned} \vec{c}_i^{\dagger}\mathbf{K} = \rho_i \vec{c}_i^{\dagger} \implies \vec{c}_i^{\dagger} = \rho_i \vec{c}_i^{\dagger}\mathbf{K}^{-1} \implies \vec{c}_i^{\dagger}\mathbf{K}^{-1} = \rho_i^{-1}\vec{c}_i^{\dagger}\end{aligned}$$ Then, since $\rho_i = e^{\mu_i T}$, we can write $\vec{w}_i^{\dagger}(t)$ in terms of a periodic vector $\vec{q}_i(t) = \vec{q}_i(t+T)$: $$\begin{aligned} \vec{w}_i^{\dagger}(t) = e^{-\mu_i t}\vec{q}_i^{\dagger}(t)\end{aligned}$$ We analogously define the set of periodic vectors $\{\vec{q}_i(t)\}$ as the left Floquet eigenvectors, which are again completely determined by the eigenvalues and eigenvectors of the fundamental matrix as: $$\begin{aligned} \vec{q}_i^{\dagger}(t) = e^{\mu_i t}\vec{c}_i^{\dagger}\mathbf{R}^{-1}(t)
\label{eq:qEq}\end{aligned}$$ We can also determine an equation of motion for the Floquet left eigenvectors by taking the time derivative of the above relation. This requires the time derivative of the inverse of $\mathbf{R}(t)$: $$\begin{aligned} \frac{d}{dt}(\mathbf{R}\mathbf{R}^{-1}) &= \frac{d}{dt}\mathbf{I} = 0 = \dot{\mathbf{R}}\mathbf{R}^{-1} + \mathbf{R}\dot{\mathbf{R}}^{-1} \nonumber \\ \implies \dot{\mathbf{R}}^{-1} &= - \mathbf{R}^{-1}\dot{\mathbf{R}}\mathbf{R}^{-1} \nonumber \\ \implies \dot{\mathbf{R}}^{-1} &= -\mathbf{R}^{-1}\mathbf{J}\end{aligned}$$ which then yields the equation of motion: $$\begin{aligned} \dot{\vec{q}}_i^{\dagger}(t) &= e^{\mu_i t}\vec{c}_i^{\dagger}\mathbf{R}^{-1}(t)\left[ \mu_i - \mathbf{J} \right] \nonumber \\ \implies \dot{\vec{q}}_i^{\dagger}(t) &= \vec{q}_i^{\dagger}(t)\left[ \mu_i - \mathbf{J}(t) \right]\end{aligned}$$ We further note that the right and left Floquet eigenvectors satisfy the orthogonality relationship: $$\begin{aligned} \vec{q}_j^{\dagger}(t)\vec{p}_i(t) = e^{(\mu_j-\mu_i)t}\vec{c}_j^{\dagger}\vec{b}_i = \delta_{ij}\end{aligned}$$ at all times $t \in [0, T]$, as follows easily from the definitions of the Floquet eigenvectors, Eqs. (\[eq:pEq\]), (\[eq:qEq\]). Finally, we show here that provided the Floquet system admits a periodic solution, at least one of the Floquet exponents vanishes [@haken_at_1983]. The corresponding Floquet eigenvector is then proportional to the velocity vector of the limit cycle solution. To do so, we begin with Eq. (\[eq:Z\]): $$\begin{aligned} \vec{v} = \vec{A}_{\rm cl}[\vec{Z}(t)]\end{aligned}$$ where we have used the notation introduced earlier, $\frac{d\vec{Z}}{dt} \equiv \vec{v}$.
Differentiating the above with respect to time, we obtain: $$\begin{aligned} \dot{\vec{v}} = \mathbf{J}[\vec{Z}(t)]\cdot \vec{v} \implies \dot{\vec{v}} = \left[\mathbf{J}(t) - 0\right]\vec{v}\end{aligned}$$ where we have used the chain rule, since $\vec{A}_{\rm cl}[\vec{Z}(t)]$ depends on time only via its dependence on $\vec{Z}(t)$. Written in the second form, it becomes clear that $\vec{v}$ satisfies the equation of motion for the Floquet eigenvector $\vec{p}_i(t)$ with $\mu_i = 0$. We label this Floquet exponent with index 0, $\mu_0 = 0$. Clearly, the corresponding eigenvector $\vec{p}_0$ is then proportional to the tangential velocity $\vec{v}$, differing only by a constant that is set by the normalization requirement for the left and right eigenvectors. Dephasing in the weak-driving regime {#sec:deph} ==================================== In this work we include the effects of flux noise on the tunable nonlinear mode via the pure dephasing term $\propto \gamma_{\varphi}$ in the system master equation. In the main text, the impact of pure dephasing on frequency comb coherence was assessed. In this appendix section we consider the influence of pure dephasing in the regime of *weak* driving, far from the instability regions where frequency combs emerge. The qualitative features of this regime can be seen by neglecting the nonlinearity, which then enables an exact analysis of the dynamics. However, accessing the dynamics in this weak-driving regime experimentally is not straightforward, as we discuss in the following sections. Exact dynamics --------------- In this linear regime, we find that the full quantum two-mode model can be reduced to a closed set of linear equations for the first and second order moments of the two modes.
In particular, the linear system becomes: $$\begin{aligned} \frac{d}{dt}\vec{v} = \mathbf{M} \vec{v} + \vec{d} \label{eq:dephTD}\end{aligned}$$ where $\vec{v}$ is the vector of first and second order moments, and $\vec{d}$ describes the drive on the linear mode: $$\begin{aligned} \vec{v} = \begin{pmatrix} {\langle \hat{a} \rangle} \\ {\langle \hat{a}^{\dagger} \rangle} \\ {\langle \hat{b} \rangle} \\ {\langle \hat{b}^{\dagger} \rangle} \\ {\langle \hat{a}^{\dagger}\hat{a} \rangle} \\ {\langle \hat{b}^{\dagger}\hat{b} \rangle} \\ {\langle \hat{a}^{\dagger}\hat{b} \rangle} \\ {\langle \hat{b}^{\dagger}\hat{a} \rangle} \end{pmatrix},~ \vec{d} = \begin{pmatrix} -i \eta \\ +i\eta \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}\end{aligned}$$ The dynamical matrix $\mathbf{M}$ takes the block form: $$\begin{aligned} \mathbf{M} = \begin{bmatrix} \mathbf{M}_1 & \mathbf{0} \\ \mathbf{N} & \mathbf{M}_2 \end{bmatrix}\end{aligned}$$ where: $$\begin{aligned} \mathbf{M}_1 &= \begin{pmatrix} i\Delta_{da} - \frac{\kappa}{2} & 0 & -ig & 0 \\ 0 & -i\Delta_{da} - \frac{\kappa}{2} & 0 & ig \\ -ig & 0 & i\Delta_{db} - \frac{1}{2}\left(\gamma+\gamma_{\varphi}\right) & 0 \\ 0 & ig & 0 & -i\Delta_{db} - \frac{1}{2}\left(\gamma+\gamma_{\varphi}\right) \\ \end{pmatrix} \\ \mathbf{M}_2 &= \begin{pmatrix} -\kappa & 0 & ig & -ig \\ 0 & -\gamma & -ig & ig \\ ig & -ig & i\Delta_{da}-i\Delta_{db} - \frac{1}{2}\left(\kappa+\gamma+\gamma_{\varphi}\right) & 0 \\ -ig & ig & 0 & i\Delta_{db}-i\Delta_{da} - \frac{1}{2}\left(\kappa+\gamma+\gamma_{\varphi}\right) \\ \end{pmatrix} \\ \mathbf{N} &= \begin{pmatrix} i\eta & -i\eta & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -i\eta \\ 0 & 0 & i\eta & 0 \end{pmatrix}\end{aligned}$$ For convenience, we define the $\hat{I}_j$ and $\hat{Q}_j$ quadratures for mode $j$ as: $$\begin{aligned} \hat{I}_j = \frac{1}{\sqrt{2}}\left( \hat{d}_j + \hat{d}_j^{\dagger} \right),~\hat{Q}_j = \frac{-i}{\sqrt{2}}\left( \hat{d}_j - \hat{d}_j^{\dagger} \right) \end{aligned}$$ where $\hat{d}_j \in \{\hat{a},\hat{b}\}$ for $j=a,b$ respectively. The above definitions also imply that: $$\begin{aligned} {\langle \hat{A}_j^2 \rangle} \equiv {\langle \hat{I}_j^2 + \hat{Q}_j^2 \rangle} = 2{\langle \hat{d}_j^{\dagger}\hat{d}_j \rangle} + 1 \label{eq:A}\end{aligned}$$ Removing the coupling $(g=0)$ renders the undriven $(\eta = 0)$ dynamical matrix diagonal, and the system decay rates can simply be read off. The linear mode decay rate for ${\langle \hat{a} \rangle}$ is $\frac{\kappa}{2}$ and for ${\langle \hat{a}^{\dagger}\hat{a} \rangle}$ is $\kappa$, indicating that the cavity mode experiences no pure dephasing. In contrast, the nonlinear mode amplitude ${\langle \hat{b} \rangle}$ decays at the rate $(\gamma+\gamma_{\varphi})/2$, while the nonlinear mode occupation decays at the rate $\gamma$. Therefore, in this linear dynamical regime it should in principle be possible to determine the dephasing rate $\gamma_{\varphi}$ by observing the decay of the nonlinear mode quadrature $\hat{I}_b$ in comparison to the decay of ${\langle \hat{I}_b^2+\hat{Q}_b^2 \rangle}$. Dephasing measurement in the two-level approximation ---------------------------------------------------- When the anharmonicity $\Lambda$ of the nonlinear mode is large compared to its damping rate $\gamma$, the nonlinear mode may be accurately modeled as a two-level system.
In this regime, a standard approach to measuring the two relevant decay rates has been readily employed in cQED: by tuning the resulting two-level system frequency far from the cavity mode, a dispersive coupling between the two-level system and the cavity mode is realized, which enables mapping the two-level system state to one of two cavity pointer states. Then, a measurement of the two-level system state is made via a homodyne measurement of the cavity output field. The effects of dephasing in this regime can then be recast into a form familiar in cavity QED: it leads to an additional depolarization of ${\langle \hat{\sigma}_x \rangle}$ (analogous to ${\langle \hat{I}_b \rangle}$), without affecting the relaxation rate of ${\langle \hat{\sigma}_z \rangle}$ (analogous to ${\langle \hat{I}_b^2 + \hat{Q}_b^2 \rangle})$. A standard Ramsey experiment yields the depolarization rate $(\gamma+\gamma_{\varphi})/2 = (T_2^*)^{-1}$ for ${\langle \hat{\sigma}_x \rangle}$, and by obtaining the relaxation rate $\gamma = T_1^{-1}$ for ${\langle \hat{\sigma}_z \rangle}$, one can extract the pure dephasing rate $\gamma_{\varphi}/2 \equiv (2T_{\varphi})^{-1} = (T_2^*)^{-1} - (2T_1)^{-1}$. By mapping the two-level system state to cavity pointer states, measurements in the dispersive regime enable access to two-level system dynamics that occur on longer timescales set by $1/\gamma$, even though measurements are being made of the cavity which evolves on a much shorter time scale $1/\kappa$, since $\kappa \gg \gamma$. The latter condition is in fact necessary to ensure a measurement time that is shorter than the relaxation time of the nonlinear mode, which reduces errors due to unwanted relaxation between the end of any evolution of interest of the two-level system and the conclusion of the measurement of its state. 
Furthermore, since this approach measures the time-*integrated* homodyne current to obtain the two-level system dynamics, its temporal resolution is not limited by the DAC (digital-to-analog converter) that determines the temporal resolution of the obtained homodyne voltage; instead, the temporal resolution is set by the degree of control over microwave pulse generation for the manipulation of the two-level system and the cavity mode. However, for the devices under study here, we are precisely interested in the weakly-nonlinear regime, where $\Lambda/\gamma \ll 1$. While necessary for the observation of coherent frequency combs, this renders addressing just two states of the nonlinear mode infeasible, and thus rules out making measurements of the nonlinear mode via a dispersive coupling to the cavity mode. In this case, one must resort to a direct temporal measurement of the relaxation of moments, which we discuss in the next section. Cavity ringdown method and theoretical simulations -------------------------------------------------- Even when a two-level description of the nonlinear mode is not feasible, Eqs. (\[eq:dephTD\]) indicate that under weak driving it should still be possible to observe the effect of pure dephasing on the nonlinear mode moments. To do so, one would ideally like to probe the nonlinear mode dynamics directly, without having to observe the linear mode. This requires effectively decoupling the nonlinear mode from the linear mode (by being detuned far away) while still retaining a coupling to the outside world. However, the 3-D transmon design isolates the nonlinear mode from a direct coupling to the environment, successfully allowing for a much higher-$Q$ nonlinear mode than lumped-element or coplanar waveguide architectures.
While this design usefully reduces both the relaxation rate $\gamma$ and pure dephasing rate $\gamma_{\varphi}$ [@paik_transmon_2011], it also means that we only have direct access to the linear mode quadratures, $\hat{I}_a$, $\hat{Q}_a$. As such, one is restricted to determining the dephasing rate by monitoring moments of cavity quadratures. The approach one would employ is a ringdown setup [@maillet_classical_2016]: a coherent drive is placed on the system to initialize it to a nontrivial state in phase space, following which the drive is turned off and the resulting ringdown dynamics of the measured first and second order cavity moments are recorded as the two-mode system returns to the undriven steady-state. Comparing the rates of relaxation for first and second order moments then enables a calculation of the pure dephasing rate $\gamma_{\varphi}$. To explore the feasibility of such an approach, we perform numerical simulations of Eqs. (\[eq:dephTD\]) under this ringdown setup. We assume a much larger pure dephasing rate $\gamma_{\varphi}/2\pi = 50.0~{\rm kHz}$ than estimated for either of our devices, for reasons that will become clear shortly. The typical initialization and ringdown evolution is shown in Fig. \[fig:deph\] (a). The drive $\eta$ is turned on at $t = -0.8~\mu$s, and then turned off at $t=0$, following which the cavity undergoes relaxation to return to the undriven steady-state. The ringdown dynamics are shown in Fig. \[fig:deph\] (b) for two different detunings between the linear and nonlinear modes, $\Delta_{ab}$. We fit exponentials with decay constants $\lambda_1$, $\lambda_2$ to the moments ${\langle \hat{I}_a(t) \rangle}$, ${\langle \hat{A}_a^2(t) \rangle}$ (see Eq. (\[eq:A\])) respectively, and define the *dephasing rate experienced by the linear mode* as $\gamma_{\varphi}^a = \lambda_1 - \lambda_2/2$.
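The extraction step just described — fitting the two decay constants and forming $\gamma_{\varphi}^a = \lambda_1 - \lambda_2/2$ — can be sketched on synthetic data (our illustration with assumed decay constants, not the simulations of Fig. \[fig:deph\]):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical decay constants (1/us) for the ringdowns of <I_a(t)> and
# <A_a^2(t)>; the target value is gamma_phi^a = lam1 - lam2/2 = 0.16.
lam1_true, lam2_true = 3.30, 6.28
t = np.linspace(0.0, 2.0, 400)                       # time axis in us
rng = np.random.default_rng(0)
Ia = np.exp(-lam1_true * t) + 1e-4 * rng.standard_normal(t.size)
Aa2 = np.exp(-lam2_true * t) + 1e-4 * rng.standard_normal(t.size)

decay = lambda t, A, lam: A * np.exp(-lam * t)       # single-exponential model
(_, lam1), _ = curve_fit(decay, t, Ia, p0=[1.0, 3.0])
(_, lam2), _ = curve_fit(decay, t, Aa2, p0=[1.0, 6.0])

gamma_phi_a = lam1 - lam2 / 2
print(gamma_phi_a)   # close to lam1_true - lam2_true/2 = 0.16
```

In practice the two fits would be applied to the measured moments rather than synthetic traces, and the main technical difficulty, discussed next, is that the difference $\lambda_1 - \lambda_2/2$ is small compared to either rate.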
When the detuning is large compared to the coupling (i), the linear and nonlinear modes are effectively decoupled, so that the linear mode should experience no pure dephasing and $\gamma_{\varphi}^a$ is vanishingly small. With decreasing detuning (ii), the linear and nonlinear modes hybridize, and the linear mode inherits some dephasing, so that $\gamma_{\varphi}^a$ increases. The dephasing experienced by the linear mode $\gamma_{\varphi}^a$ as a function of $\Delta_{ab}$ is plotted in Fig. \[fig:deph\] (c), scaled by $\lambda_2/2$. While in principle such an approach may be used to extract the pure dephasing rate $\gamma_{\varphi}$, Fig. \[fig:deph\] (c) brings to light a number of technical difficulties. Firstly, the variation due to pure dephasing is superimposed on the very fast cavity decay rate; the *relative* difference in decay rates $\sim \gamma_{\varphi}/\kappa$ is therefore very small and difficult to extract experimentally, even though we have assumed a dephasing rate here much larger than those obtained in the main text. In contrast, spectroscopy of the two-level system compares $\gamma_{\varphi}$ directly to $\gamma$. Secondly, since this is a direct temporal measurement, its accuracy is limited by the DAC resolution. Small changes in the very short cavity relaxation time are therefore more uncertain. Both these issues mean that obtaining the pure dephasing rate from direct cavity ringdown measurements under moderate to strong hybridization is likely to be inaccurate. As a result, we instead employ the strategy of obtaining $\gamma_{\varphi}$ in the *nonlinear* regime, in particular within the frequency comb regime. Here, the effect of the bare mode decay rates $\gamma$ and $\kappa$ is overcome since the system starts to undergo self-oscillation, as discussed in the main text. Then, the comb coherence is limited entirely by the nonlinearity strength and the pure dephasing rate.
By measuring the nonlinearity strength via a pump-probe measurement of the hybridized system, as discussed in Section \[ssec:kerr\], we are able to use SDE simulations of comb coherence to obtain an estimate of $\gamma_{\varphi}$. ------------------------------------------------------------------------
--- abstract: | Given a real Banach space $\mathcal{X}$ and a probability space $(\Omega, \Sigma, \mu),$ we characterize the countable additivity of the Henstock-Dunford integral of Henstock integrable functions taking values in $\mathcal{X}$ as those weakly measurable functions $ g: \Omega \to \mathcal{X} $ for which $\{y^*g~: y^* \in B_\mathcal{X}^* \} $ is relatively weakly compact in some separable Orlicz space $ \mathcal{L}^{\overline{\phi}}(\mu) .$ We also obtain a relative weak compactness condition in an Orlicz space for the Henstock-Gel’fand integral.\ Henstock-Pettis integral; Denjoy-Dunford integral; Orlicz Space.\ 26A39, 26A42, 28B05, 46E30, 46G10. --- Hemanta Kalita$^{1}$ and Bipan Hazarika$^{2\ast}$ $^{1}$Department of Mathematics, Patkai Christian College (Autonomous), Dimapur, Patkai 797103, Nagaland, India\ $^{2}$Department of Mathematics, Gauhati University, Guwahati 781014, Assam, India\ Email: hemanta30kalita@gmail.com; bh\_rgu@yahoo.co.in. [^1] [^2] Introduction and Preliminaries ============================== During 1957-1958, R. Henstock and J. Kurzweil independently introduced a Riemann-type integral called the Henstock-Kurzweil integral (or Henstock integral). The Dunford, Pettis, and Gel’fand integrals are generalizations of the Lebesgue integral to Banach-valued functions. Let $\mathcal{I}_0$ be a compact interval in $ \mathbb{R}^m$ (or $\mathbb{R}^1$) and $ \mathcal{E} \subset \mathbb{R}^m$ (or $\mathbb{R}$) a measurable subset of $\mathcal{I}_0.$ $\mu(\mathcal{E})$ stands for the Lebesgue measure of $\mathcal{E}.$ The Lebesgue integral of a function $ g$ over a set $ \mathcal{E} $ will be denoted by $ \mathcal{L}\int\limits_\mathcal{E} g.$ $ \mathcal{X}$ is a real Banach space with norm $||.|| $ and $\mathcal{X}^* $ is its dual. $ B_\mathcal{X}^*= \{ y^* \in \mathcal{X}^* : ||y^*|| \leq 1 \} $ is the closed unit ball in $ \mathcal{X}^*.$ The Henstock integral (see [@Ye]) is a kind of nonabsolute integral and contains the Lebesgue integral.
It has been proved that this integral is equivalent to the special Denjoy integral. In [@R.A] Gordon gave two Denjoy-type extensions of the Dunford and Pettis integrals, the Denjoy-Dunford and Denjoy-Pettis integrals, and discussed their properties. In [@Ye] the authors discussed the relationship between the Henstock-Dunford and Henstock-Pettis integrals. The de la Vallee-Poussin theorem (VPT) is used in [@Ricardo] to localize uniformly integrable subsets of scalar integrable functions with respect to a vector measure $m$ in a suitable Orlicz space. One can also see [@Alex] for the de la Vallee-Poussin theorem and Orlicz spaces. In [@D] the author characterized the countable additivity of the Dunford integral of vector functions, and also characterized those strongly measurable vector functions that are Pettis integrable through the compactness of a certain set of scalar functions in a certain Orlicz space.\ In this paper, with the help of the VPT ([@M], Theorem 2, p. 3) and the Dunford-Pettis theorem, we establish the countable additivity of the Henstock-Dunford integral of Henstock integrable functions taking values in $\mathcal{X},$ characterized as those weakly measurable functions $ g:[a,b] \to \mathcal{X} $ for which $ \{ y^* g : y^* \in B_{\mathcal{X}}^{*} \} $ is relatively weakly compact in some separable Orlicz space $ \mathcal{L}^{\overline{\phi}}(.). $ We also derive a necessary condition for the Henstock-Gel’fand integral in terms of relative weak compactness in $ \mathcal{L}^{\overline{\phi}}(.). $ Lastly, we show that $ \{ y^* g : y^* \in B_{\mathcal{X}}^{*} \} $ is relatively weakly compact in some separable Orlicz space $ \mathcal{L}^{\overline{\phi}}(.) $ for weakly Henstock integrable functions.\ The intervals $ I $ and $J$ are non-overlapping if int$(I) \cap $int$(J) = \emptyset ,$ where int$(I)$ and int$(J)$ are the interiors of $I$ and $J,$ respectively.\ We now recall some definitions and notions used in [@Kao].\ Let $ P$ be a partition of the interval $[a,b],$ $ P=\{a=y_0 < y_1 < y_2 <\cdots <y_n=b\}.$ A tagged partition $ (P,(v_k)_{k=1}^{n}) $ is a partition together with selected points $v_k$ in each subinterval $[y_{k-1}, y_k].$ The Riemann sum over the tagged partition is $$\mathcal{R}(g,P)= \sum_{k=1}^{n}g(v_k)(y_k - y_{k-1}) .$$ Let $ \delta > 0 .$ A partition $P$ is $\delta$-fine if every subinterval $ [y_{k-1}, y_k] $ satisfies $ y_k - y_{k-1} < \delta .$\ A function $ \delta: [a,b] \to \mathbb{R} $ is called a gauge on $[a,b]$ if $\delta(y) > 0 $ for all $ y \in [a,b] .$\ As an example of a $\delta(y)$-fine tagged partition, consider the interval $[0,1]$ and the constant gauge $\delta_1(y)=\frac{1}{8};$ we construct a $\delta_1(y)$-fine tagged partition of $[0,1].$\ Since $ \delta_1(v_k)=\frac{1}{8} $ for any choice of tag, any tagged partition $(P, (v_k)_{k=1}^{n})$ in which $ y_k - y_{k-1} < \frac{1}{8} $ is $\delta_1(y)$-fine.\ Consider the partition $ 0 < \frac{1}{9} < \frac{2}{9} < \cdots < 1 ,$ choosing each tag to be any point of its subinterval. Since $m\left(\left[\frac{k-1}{9}, \frac{k}{9}\right]\right) = \frac{1}{9} < \frac{1}{8} $ for $ k=1,\dots,9,$ this is an example of a $ \delta_1(y) $-fine tagged partition.
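The construction above is easy to check by machine. The following sketch (ours, not from [@Kao]) builds the nine-subinterval tagged partition for the constant gauge $\delta_1(y)=\frac{1}{8}$, verifies that it is $\delta_1$-fine, and evaluates the Riemann sum $\mathcal{R}(g,P)$ for $g(y)=y$ with midpoint tags:

```python
from fractions import Fraction

def riemann_sum(g, points, tags):
    # R(g, P) = sum_k g(v_k) * (y_k - y_{k-1})
    return sum(g(v) * (y1 - y0)
               for y0, y1, v in zip(points, points[1:], tags))

# The partition 0 < 1/9 < 2/9 < ... < 1 from the example above:
# every subinterval has length 1/9 < 1/8, so the partition is
# delta_1-fine for the constant gauge delta_1(y) = 1/8.
points = [Fraction(k, 9) for k in range(10)]
tags = [Fraction(2*k + 1, 18) for k in range(9)]   # midpoints as tags
delta1 = lambda y: Fraction(1, 8)

assert all(y1 - y0 < delta1(v)
           for y0, y1, v in zip(points, points[1:], tags))

# For g(y) = y, midpoint tags give the exact integral over [0,1].
print(riemann_sum(lambda y: y, points, tags))   # prints 1/2
```

Exact rational arithmetic (`Fraction`) is used so the $\delta$-fineness check and the sum involve no rounding; for a genuinely non-constant gauge one would check $[y_{k-1},y_k]\subset(v_k-\delta(v_k),\,v_k+\delta(v_k))$ tag by tag instead.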
[@Kao] A function $ g:[a,b] \to \mathbb{R} $ is Henstock integrable if there exists $ A \in \mathbb{R} $ such that for every $ \epsilon > 0 $ there exists a gauge $\delta:[a,b] \to \mathbb{R} $ such that for each tagged partition $(P, (v_k)_{k=1}^{n})$ that is $\delta(y)$-fine, $$|\mathcal{R}(g,P)-A| < \epsilon.$$ Equivalently, a function $ g:[a,b] \to \mathbb{R} $ is Henstock integrable if there exists a function $ G:[a,b] \to \mathbb{R} $ such that for every $ \epsilon > 0 $ there is a function $ \delta(t) > 0 $ such that for any $ \delta$-fine partition $ D=\{([u,z], t)\} $ of $[a,b], $ we have $$\left| \sum[g(t)(z-u)-G(u,z)]\right| < \epsilon,$$ where the sum $ \sum $ is understood to be over $ D= \{ ([u,z], t) \}$ and $G(u,z)= G(z)-G(u).$ We write $ \mathcal{H}\int\limits_{\mathcal{I}_0} g=G(\mathcal{I}_0).$ 1. [@JD] A function $ g:[a,b] \to \mathcal{X} $ is said to be Dunford integrable on $ [a,b] $ if for each $ y^* \in \mathcal{X}^*, $ the function $ y^*g$ is Lebesgue integrable. In this case, as a consequence of the closed graph theorem, for every measurable subset $ A $ of $[a,b],$ there exists a vector $ y_{A}^{**}$ in $ \mathcal{X}^{**}$ such that $$< y^* , y_{A}^{**} > = \int\limits_A y^* g \mbox{~for~all~}y^* \in \mathcal{X}^* .$$ The vector $ y_{A}^{**}$ is called the Dunford integral of $ g $ on $ A$ and is denoted by $ \mathcal{D}\int\limits_A g.$ 2. [@JD] A function $ g:[a,b] \to \mathcal{X} $ is said to be Pettis integrable on $ [a,b] $ if it is Dunford integrable on $[a,b]$ and $ y_{A}^{**} \in \mathcal{X} $ for every measurable subset $ A $ of $[a,b].$\ The Henstock integral extends to Banach-valued functions in exactly the same way that the Dunford and Pettis integrals extend the Lebesgue integral. We define the Henstock-Gel’fand integral in the style of R.A. Gordon [@Fong], following his Denjoy-Dunford and Denjoy-Pettis integrals. We also refer the reader to [@N.Dunford; @Geo; @Morrison; @S] for related work. 1.
[@Ye] A function $ g:[a,b] \to \mathcal{X} $ is said to be Henstock-Dunford integrable on $[a,b]$ if for each $ y^* $ in $ \mathcal{X}^* ,$ the function $ y^* g $ is Henstock integrable on $[a,b]$ and if for every interval $ \mathcal{I} $ in $[a,b], $ there exists a vector $ y_\mathcal{I}^{**} $ in $ \mathcal{X}^{**} $ such that $$y_\mathcal{I}^{**}(y^*) = \int\limits_{\mathcal{I}} y^* g \mbox{~for~all~} y^* \in \mathcal{X}^* .$$ We write $ y_{\mathcal{I}_0}^{**} = \mathcal{HD} \int\limits_{\mathcal{I}_0} g = G(\mathcal{I}_0).$ 2. [@Ye] A function $ g:[a,b] \to \mathcal{X} $ is said to be Henstock-Pettis integrable on $[a,b]$ if $g $ is Henstock-Dunford integrable on $[a,b]$ and if $ y_{\mathcal{I}}^{**} \in \mathcal{X} $ for every interval $\mathcal{I} $ in $[a,b].$ We write $$y_{\mathcal{I}_0}^{**} = \mathcal{HP} \int\limits_{\mathcal{I}_0} g = G(\mathcal{I}_0).$$ 1. [@JD] A function $ g:[a,b] \to \mathcal{X}^* $ is said to be Gel’fand integrable on $[a,b]$ if for each $ y \in \mathcal{X},~ yg$ is Lebesgue integrable. In this case, as a consequence of the closed graph theorem, for every measurable subset $ A $ of $[a,b]$ there exists $ y_{A}^{*} $ in $ \mathcal{X}^* $ such that $$< y_{A}^{*} , y > = \int\limits_A yg \mbox{~for~all~} y \in \mathcal{X} .$$ The vector $ y_{A}^{*} $ is called the Gel’fand integral of $ g$ on $ A $ and is denoted by $ \mathcal{G}\int\limits_A g .$ 2.
[@B.; @Bongiorno] A function $ g:[a,b] \to \mathcal{X}^* $ is said to be Henstock-Gel’fand integrable on $[a,b]$ if for each $ y \in \mathcal{X},~ yg$ is Henstock integrable on $[a,b] $ and for every interval $ \mathcal{I} $ in $[a,b] $ there exists a vector $y_{\mathcal{I}}^{*} \in \mathcal{X} ^* $ such that $ y_{\mathcal{I}}^{*}(y) = \int\limits_{\mathcal{I}} yg.$ [@Mohammed] A function $ g:[a,b] \to \mathcal{X} $ is said to be weakly Henstock integrable $(w\mathcal{H})$ on $[a,b]$ with weak integral $\overline{w}$ if there is a sequence of gauges $(\delta_n)$ on $[a,b]$ such that $$\lim\limits_{n \to \infty }< y^* , \sigma(g, p_n)> = < y^* ,\overline{w}> \mbox{~for~all~} y^* \in \mathcal{X}^* $$ for every sequence $(p_n) $ of Henstock partitions of $[a,b]$ adapted to $(\delta_n),$ where $ \overline{w} = (w\mathcal{H})\int\limits_{a}^{b}g.$ A weakly measurable function $ g:[a,b] \to \mathcal{X} $ is said to be determined by a weakly compactly generated $(WCG)$ subspace of $ \mathcal{X} $ if there is a weakly compactly generated subspace $ D $ of $\mathcal{X} $ whose linear span is dense in $ \mathcal{X}.$ [@M] Let $ m : \mathbb{R}^+ \to \mathbb{R} ^+ $ be a non-decreasing, right continuous, non-negative function satisfying $$m(0)=0 \mbox{~and~} \lim\limits_{t \to \infty } m(t) = \infty.$$ A function $ M:\mathbb{R} \to \mathbb{R} $ is called an $N$-function if there is a function $m$ as above such that $$M(u) = \int_{0}^{|u|} m(t) dt.$$ Evidently, $M$ is an $N$-function if and only if it is continuous, convex, even, and satisfies $$\lim\limits_{u \to \infty } \frac{M(u)}{u}= \infty \mbox{~and~} \lim\limits_{u \to 0} \frac{M(u)}{u}= 0 .$$ For example, $ \overline{\phi}_p(x)= x^p,~ p > 1.$\ Let us fix a positive finite measure $ \mu $ and let $\overline{\phi} $ be an $N$-function. The Orlicz space $ \mathcal{L}^{\overline{\phi}}(\mu) $ consists of those ($ \mu$-a.e.
equivalence classes of) functions $ g \in \mathcal{L}^0(\mu)$ for which $ ||g||_{\mathcal{L}^{\overline{\phi}}(\mu)} < \infty, $ where $$||g||_{\mathcal{L}^{\overline{\phi}}(\mu)} = \inf \left\{ k > 0 : \int\limits_{\Omega}\overline{\phi}\left(\frac{|g|}{k}\right)d\mu \leq 1 \right\}$$ is the Luxemburg norm associated to $ \overline{\phi}.$ [@M] 1. An $N$-function $\overline{\phi} $ is said to satisfy the $\Delta^{'} $ condition if there is a $ k > 0 $ so that $$\overline{\phi}(xy) \leq k\overline{\phi}(x)\overline{\phi}(y) \mbox{~for~large~values~of~}x \mbox{~and~} y.$$ 2. An $N$-function $\overline{\phi} $ is said to satisfy the $\Delta_2 $ condition if there is a $ k > 0 $ so that $$\overline{\phi}(2x) \leq k \overline{\phi}(x) \mbox{~for~large~values~of~} x.$$ We recall the following results: \[thm12\] [@Musial] A subset $ A $ of $ \mathcal{L}^1(\mu) $ is uniformly integrable if and only if there is an $N$-function $ \overline{\phi} $ satisfying the $ \Delta^{'} $ condition such that $ A $ is relatively weakly compact in $ \mathcal{L}^{\overline{\phi}}(\mu).$ \[thm15\] [@Ye] A function $ g:[a,b] \to \mathcal{X} $ is Henstock-Dunford integrable on $[a,b]$ if and only if $ y^* g $ is Henstock integrable on $[a,b]$ for all $ y^* \in \mathcal{X}^*.$ \[lemma1\] [@Ye] A function $ g:[a,b] \to \mathbb{R} $ is Henstock integrable on $[a,b]$ if and only if $g$ is Denjoy integrable on $[a,b].$ \[lemm3\] [@Mohammed] Suppose $\mathcal{X}$ contains no copy of $c_0$ and let $ g:[a,b] \to \mathcal{X} $ be a $w\mathcal{H}$-integrable function on $[a,b];$ then it is Pettis integrable. 1. A function $F$ is ACG on $\mathcal{E}$ if $F$ is continuous on $\mathcal{E}$ and if $\mathcal{E}$ can be expressed as a countable union of sets on each of which $F$ is AC. 2. A function $ g:[a,b] \to \mathcal{X} $ is Denjoy integrable on $[a,b]$ if there exists an ACG function $ F:[a,b] \to \mathcal{X} $ such that $ F_{ap}^{'} = g $ a.e.
on $[a,b],$ where $ F_{ap}^{'}$ denotes the approximate derivative of $F.$ In this case $$\int\limits_{a}^{b} g = F(b) -F(a).$$ We say $g$ is Denjoy integrable on a subset $A$ of $[a,b]$ if $g\chi_A $ is Denjoy integrable on $[a,b],$ and write $$\int_A g =\int\limits_{a}^{b} g \chi_A .$$ 3. A function $ g:[a,b] \to \mathcal{X}^* $ is said to be Denjoy-Gel’fand integrable on $[a,b]$ if for each $ y \in \mathcal{X},$ $yg $ is Denjoy integrable on $[a,b]$ and for every interval $\mathcal{I} $ in $[a,b]$ there exists a vector $ y_{\mathcal{I}}^{*} \in \mathcal{X}^* $ such that $ y_{\mathcal{I}}^{*}(y) = \int\limits_{\mathcal{I}} yg $ for all $ y \in \mathcal{X}.$ The vector $ y_{\mathcal{I}}^{*} $ is called the Denjoy-Gel’fand integral of $g$ on $\mathcal{I},$ and on $[a,b]$ we denote it by $ \mathcal{DG}\int\limits_{a}^{b} g.$ (Theorem 15.9 [@R.A]) A real Denjoy integrable function on $[a,b]$ is not necessarily integrable on every measurable subset of $[a,b];$ it is integrable on every measurable subset precisely when it is absolutely integrable, in which case the integral is equivalent to the Lebesgue integral. Main Results ============ \[prop21\] Assume $ g:[a,b] \to \mathcal{X}^* $ is Denjoy-Gel’fand on $[a,t]$ for all $ t \in [a,b) $ and, for each $ y \in \mathcal{X}, $ the limit $\lim\limits_{t \to b}\int\limits_{a}^{t} yg $ exists. Then $g$ is Denjoy-Gel’fand on $[a,b]$ and $$< y , \mathcal{DG}\int\limits_{a}^{b} g > = \lim\limits_{t \to b } <y , \mathcal{DG}\int\limits_{a}^{t} g >$$ for each $ y \in \mathcal{X}.$ By Theorem 15.12 of [@R.A], $yg$ is Denjoy integrable on $[a,b]$ for all $ y \in \mathcal{X}. $ Let $ c \in [a,b) $ and let $(t_n) $ be any sequence in $[a,b) $ converging to $b.$ Define $$\begin{aligned} L_c(y) &= \lim\limits_{n} \int\limits_{c}^{t_n} yg \\ &= \lim\limits_{n} < y, \mathcal{DG}\int\limits_{c}^{t_n}g>. \end{aligned}$$ The Uniform Boundedness Principle gives that $ L_c $ is continuous on $\mathcal{X}.$ Then $g$ is Denjoy-Gel’fand on $[a,b].$\ Taking $ c=a , $ we get $$\begin{aligned} L_a(y) &= \lim\limits_{n} \int\limits_{a}^{t_n} yg \\ &= \lim\limits_{n} < y, \mathcal{DG}\int\limits_{a}^{t_n}g >.
\end{aligned}$$ \[prop22\] Let $ g:[a,b] \to \mathcal{X}^* $ be such that $yg $ is Denjoy integrable on $[a,b]$ for each $ y \in \mathcal{X};$ then each perfect set in $[a,b]$ contains a portion on which $g$ is Gel’fand integrable. Let $\mathcal{E}$ be a perfect set in $[a,b]$ and let $\{\mathcal{I}_n\}$ be the sequence of all open intervals in $(a,b)$ that intersect $\mathcal{E}$ and have rational endpoints.\ For each $n,$ let $ \mathcal{E}_n = \mathcal{E} \cap \mathcal{I}_n .$\ For each pair of positive integers $ m $ and $n,$ let $ A_{m}^{n} = \left\{ y \in \mathcal{X} : \int\limits_{\mathcal{E}_n}|yg| \leq m \right\}.$ Then $ \mathcal{X} =\bigcup\limits_n \bigcup\limits_m A_{m}^{n}.$\ We claim that each of the sets $ A_{m}^{n} $ is closed.\ Let $y$ be a limit point of $ A_{m}^{n}$ and let $\{y_k\} $ be a sequence in $ A_{m}^{n} $ that converges to $y.$\ Then the sequence $\{ |y_k g|\} $ converges pointwise on $[a,b]$ to the function $|yg| $ and, by Fatou’s Lemma, we have $$\int\limits_{\mathcal{E}_n} |yg| \leq \liminf\limits_{k \to \infty } \int\limits_{\mathcal{E}_n}|y_k g | \leq m.$$ This gives $ y \in A_{m}^{n} $ and so $ A_{m}^{n} $ is closed.\ By the Baire Category Theorem there exist $ M, N, x_0 $ and $\rho > 0 $ such that\ $\{ y : ||y-x_0|| \leq \rho \} \subset A_{M}^{N} .$\ For each $ y \in \mathcal{X} $ with $||y|| \neq 0 ,$ we find $$\int\limits_{\mathcal{E}_N }|yg| \leq \frac{||y||}{\rho}\left\{ \int\limits_{\mathcal{E}_N}\left|\frac{\rho}{||y||}yg + x_0 g \right| + \int\limits_{\mathcal{E}_N}|x_0 g| \right\} \leq \frac{2M}{\rho}||y||.$$ Therefore, for each $ y \in \mathcal{X} ,$ the function $yg $ is Lebesgue integrable on $ \mathcal{E} \cap \mathcal{I}_N.$\ Therefore $g$ is Gel’fand integrable on the portion $ \mathcal{E} \cap \mathcal{I}_N $ of $\mathcal{E}.$ \[prop23\] Let $ g:[a,b] \to \mathcal{X}^* $ be such that $ yg $ is Denjoy integrable on $[a,b]$ for all $ y \in \mathcal{X}.$ Let $P$ be a closed subset of $[a,b]$ and assume that $g$ is
Denjoy-Gel’fand integrable on each open interval $J$ disjoint from $P;$ then there exists a portion $P_0 $ such that if $(\mathcal{I}_n)$ is an enumeration of the intervals neighboring $P_0,$ then the series $ \sum\limits_{n} \int\limits_{\mathcal{I}_n} yg $ is absolutely convergent for every $ y \in \mathcal{X}.$ Let $(J_m) $ be an enumeration of all open intervals in $[a,b]$ with rational endpoints such that $ J_{m} \cap P \neq \emptyset.$\ Let $(K_n)$ be an enumeration of all open intervals neighboring $P$ in $(a,b).$\ For each $ m \in \mathbb{N} ,$ the sequence $(J_{m} \cap K_n)_n $ is an enumeration of all open intervals neighboring the portion $ J_{m} \cap P. $\ Therefore, to prove the result it is enough to show that there exists $ m_0 \in \mathbb{N} $ such that $$\sum\limits_{n} \left|\int\limits_{J_{m_0} \cap K_n }yg\right| < \infty$$ for all $ y \in \mathcal{X}. $\ Assume this is not true. For each $ n \in \mathbb{N} ,$ the function $g$ is Denjoy-Gel’fand on $ K_n.$\ Since $ K_{n} \cap P = \emptyset ,$ the map $ \mathcal{X} \to \mathbb{R},~ y \mapsto \int\limits_{J_{m} \cap K_{n}}yg $ is a continuous linear functional for each $ m \in \mathbb{N} .$ We conclude that for each $ m, j \in \mathbb{N} $ the operator $$T_{j}^{m} : \mathcal{X} \to \ell_1,\quad y \mapsto\left(\int\limits_{J_{m}\cap K_1} yg, ...,\int\limits_{J_{m} \cap K_j} yg, 0,0,..\right)$$ is bounded.\ By our assumption, for each $ m \in \mathbb{N} $ there exists $ y_m \in \mathcal{X} $ such that $$\lim\limits_{j} ||T_{j}^{m}(y_m)||_1 = \sum_n \left|\int\limits_{J_{m} \cap K_n} y_m g \right| = \infty.$$ Then the theorem of condensation of singularities ([@R.A], p. 81) implies that there exists $ x_0 \in \mathcal{X} $ such that $$\begin{aligned} \label{eq21} \sum_{n}\left|\int\limits_{J_{m} \cap K_n} x_0 g \right| &= \lim\limits_{j} ||T_{j}^{m}(x_0)||_1 \\ & = \infty \end{aligned}$$ for all $ m \in \mathbb{N} .$ Finally, each portion of $P$ contains a portion of the form $ P \cap J_m$ for some
$ m \in \mathbb{N} ,$ and for each $ m \in \mathbb{N} $ the sets $ J_m \cap K_n$ are the intervals neighboring $P$ in $J_m.$\
Hence equation (\[eq21\]) and Theorem 15.12 of [@R.A] show that $x_0 g$ is not Denjoy integrable on $[a,b],$ which is a contradiction. \[prop24\] The function $ g:[a,b] \to \mathcal{X}^* $ is Denjoy-Gel’fand integrable on $[a,b]$ if and only if $yg$ is Denjoy integrable on $[a,b]$ for all $ y \in \mathcal{X}.$ Let $ g:[a,b] \to \mathcal{X}^* $ be such that $yg $ is Denjoy integrable for all $ y \in \mathcal{X}. $ Let $ S $ be the set of all points $ t \in [a,b] $ such that $ g $ is not Denjoy-Gel’fand integrable on any neighborhood of $t .$\
Claim: if $ J $ is an open subinterval of $[a,b] ,$ then $ g $ is Denjoy-Gel’fand integrable on $ J $ if and only if $ J \cap S = \phi .$\
The necessary condition is obvious.\
Sufficient condition: let $ J=(c,d) $ be an open interval in $[a,b] $ which does not meet $S.$\
By compactness, $g$ is Denjoy-Gel’fand integrable on any closed subinterval $[c_1, d_1] $ of $(c,d) ,$ and Proposition \[prop21\] then yields the claim.\
Now suppose $ S \neq \phi.$ By Proposition \[prop22\], every closed set in $[a,b]$ has a portion on which $ g $ is Gel’fand integrable.\
Let $ S_0 = S \cap (c_{0} ,d_{0})$ be a portion on which $ g$ is Gel’fand integrable.\
The closed set $ \overline{S_0}$ satisfies the assumption of Proposition \[prop23\] on $[c_{0}, d_{0} ].$\
So there exists a portion $$\begin{aligned} S_1 &= \overline{S_0} \cap (c_1, d_1) \\ & = S \cap (c_1, d_1) \end{aligned}$$ on which $ g$ is Gel’fand integrable, such that if $( I_n)$ is an enumeration of the intervals neighboring $ S_1 $ in $ (c_1, d_1),$ then the series $ \sum\limits_{n}\int\limits_{I_n}yg $ is absolutely convergent for every $ y \in \mathcal{X}. $\
It is enough to show that $ g $ must be Denjoy-Gel’fand integrable on $ (c_1,d_1).
$\
Since $ (c_1, d_1) $ meets $ S,$ this will contradict the definition of $S$.\
Let $ J $ be an interval in $[c_1, d_1] $ and let $ y \in \mathcal{X} .$ Since $ g $ is Gel’fand integrable on $ S_1,$ the function $ yg $ is Lebesgue, and hence Denjoy, integrable on $ S_1 .$\
On the other hand, the non-empty terms of the sequence $ ( I_{n} \cap J)_n $ form an enumeration of the intervals neighboring $ S \cap J $ in $ J, $ and, with the exception of at most two intervals, every non-empty $ I_n \cap J$ equals $ I_n.$\
So, neglecting at most two terms, the series $\sum\limits_{n}\int\limits_{I_{n} \cap J}yg $ is a subseries of $\sum\limits_{n}\int\limits_{I_n}yg $ and is therefore absolutely convergent.\
Applying Corollary 1 of [@Gamez] to $ yg$ and the set $\overline{S_1}\cap J = S \cap J $ on $ J ,$ we deduce $$\begin{aligned} \label{eq22} \int_J yg= \int\limits_{J} yg \chi_{S \cap J } + \sum_{n}\int\limits_{I_{n} \cap J } yg \end{aligned}$$ for each $ y \in\mathcal{ X}.$\
For each $ m \in \mathbb{N} , $ we define $ y_{m}^{*} $ by $$y_{m}^{*}(y) = \int\limits_{J} yg \chi_{ S \cap J } + \sum\limits_{n=1}^{m} \int\limits_{I_n \cap J } yg .$$\
Since $ g$ is Gel’fand integrable on $ S \cap J $ and Denjoy-Gel’fand integrable on each $ I_{n} \cap J ,$ the linear functionals $ y_{m}^{*} $ are continuous on $\mathcal{X}.$\
Now equation (\[eq22\]) implies $ \int\limits_{J} yg = \lim\limits_{m} y_{m}^{*}(y) $ for each $ y \in \mathcal{X}.$\
Therefore, by the uniform boundedness principle, the functional $y_{J}^{*} $ defined by $y_{J}^{*}(y) = \int\limits_{J} yg $ is continuous on $ \mathcal{X} .$\
Since this holds for all intervals $ J $ in $ [c_1, d_1],$ we conclude that $ g $ is Denjoy-Gel’fand integrable on $ [c_1, d_1].$ \[thm21\] A function $ g:[a,b] \to \mathcal{X}^* $ is Henstock-Gel’fand integrable on $ [a,b] $ if and only if $ yg $ is Henstock integrable on $ [a,b] $ for all $ y \in \mathcal{X}. $ If $ g $ is Henstock-Gel’fand integrable on $ [a,b],$ then by definition $yg $ is Henstock
integrable on $[a,b].$\
Conversely, let $ yg $ be Henstock integrable on $[a,b].$ Then by Lemma \[lemma1\], $ yg$ is Denjoy integrable on $[a,b] $ and $\mathcal{D}\int\limits_{a}^{b} yg = \mathcal{H}\int\limits_{a}^{b} yg.$\
Proposition \[prop24\] then implies that $ g$ is Denjoy-Gel’fand integrable on $[a,b] $ and that for every interval $ I $ in $[a,b]$ there exists a vector $ y_{I}^{*} $ with $ y_{I}^{*}(y) = \mathcal{D}\int\limits_{I} yg $ for all $ y \in \mathcal{X}.$\
This gives us $ y_{I}^{*}(y) = \mathcal{H}\int\limits_{I} yg $ for all $ y \in \mathcal{X} ,$ so $ g $ is Henstock-Gel’fand integrable on $[a,b].$ Let $\mathcal{X}$ contain no copy of $ c_0 $ and let $ g:[a,b] \to \mathcal{X} $ be $w\mathcal{H}$-integrable on $[a,b] .$ Then $\{ y^* g : y^* \in \overline{B_{\mathcal{X}}^{*}} \} $ is uniformly integrable. Let $ g:[a,b] \to \mathcal{X} $ be $w\mathcal{H}$-integrable on $[a,b] .$ Then $ g $ is Pettis integrable (by Corollary 3.7 of [@Mohammed]).\
For each equi-continuous $ K \subset \mathcal{X}^* ,$ the set $\{ y^* g: y^* \in K \}$ is relatively weakly compact in $ \mathcal{L}_1(\mu).$\
Hence $\{ y^* g: y^* \in \overline{B_{\mathcal{X}}^{*}} \} $ is relatively weakly compact in $ \mathcal{L}_1(\mu).$\
Then [@Ricardo] gives that $\{ y^* g : y^* \in \overline{B_{\mathcal{X}}^{*}} \} $ is uniformly integrable in $ \mathcal{L}_1(\mu) .$ Henstock-Dunford Integral and Orlicz Space ========================================== Let $(\Omega, \Sigma, \mu ) $ be a finite measure space and $ g: I \subseteq \Omega \to \mathcal{X} $ be a Henstock-Dunford integrable function. Then the following are equivalent: 1. The Henstock-Dunford integral of $ g$ is countably additive; that is, the set function\
$ \mathcal{HD}\int gd\mu: \Sigma \to \mathcal{X}^{**} $ defined by $$\left(\mathcal{HD}\int gd \mu\right)(\mathcal{E})= \mathcal{HD}\int\limits_{\mathcal{E}} g d\mu$$ is countably additive. 2.
There is an $N$-function $\bar{\phi} $ with the $ \Delta^{'} $ property such that $\{ y^* g : y^* \in B_{\mathcal{X}}^{*} \} $ is relatively weakly compact in the Orlicz space $ \mathcal{L}^{\bar{\phi}}(\mu).$ Let $ g:[a,b] \to \mathcal{X} $ be a Henstock-Dunford integrable function. Put $ \gamma(\mathcal{E}) = \mathcal{HD}\int\limits_{\mathcal{E}}g,$ where $\mathcal{E} \in \Sigma.$\
Therefore, by Theorem \[thm15\], we have $ \gamma(\mathcal{E})(y^*) = \mathcal{H}\int\limits_{\mathcal{E}}y^* g $ for each $ y^* \in \mathcal{X}^*.$\
Let $ E_1, E_2 $ be non-overlapping subsets of $ \mathcal{E}.$ Then, evaluated at each $ y^* \in \mathcal{X}^* ,$ $$\begin{aligned} \gamma( E_1 \cup E_2 ) &= \mathcal{HD} \int\limits_{ E_1 \cup E_2 } g\\ & = \mathcal{H} \int\limits_{E_1 \cup E_2 } y^* g \\ & = \mathcal{H} \int\limits_{E_1} y^* g +\mathcal{H} \int\limits_{E_2} y^* g \\ &= \gamma(E_1) + \gamma(E_2) \end{aligned}$$ and $$\begin{aligned} \gamma\left(\bigcup\limits_{n=1}^{\infty}E_n\right) & = \mathcal{HD}\int\limits_{\cup_{n=1}^{\infty }E_n}g\\ & = \mathcal{H}\int\limits_{\cup_{n=1}^{\infty }E_n}y^* g \\ &= \sum\limits_{n=1}^{\infty} \gamma(E_n) \end{aligned}$$ in the norm topology of $ \mathcal{X}^{**},$ for every sequence $(E_n)$ of non-overlapping members of a field $ F $ of subsets of $[a,b] $ such that $ \bigcup\limits_{n=1}^{\infty} E_n \in F .$\
Now, according to [@A.G], $\gamma $ is countably additive if and only if the operator $ T :\mathcal{X}^* \to \mathcal{L}^1(\mu) $ defined by $ T(y^*) = y^* g $ is weakly compact. So, $\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is uniformly integrable in $ \mathcal{L}^1(\mu) .$\
Now, by Theorem \[thm12\], this is equivalent to the existence of an $N$-function $\bar{\phi} $ with the $ \Delta^{'} $ property such that $\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is relatively weakly compact in $ \mathcal{L}^{\bar{\phi}}(\mu) .$ This completes the proof. Let $ g: [a,b] \to \mathcal{X} $ be a Henstock-Dunford integrable function; then the following are equivalent: 1. $ g$ is Henstock-Pettis integrable 2.
$g $ is weakly compactly generated determined and there is an $N$-function $\bar{\phi} $ with the $ \Delta ^{'}$ property such that $\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is relatively weakly compact in $ \mathcal{L}^{\bar{\phi}}(\mu) .$ Let $ g:I \to \mathcal{X} $ be strongly measurable, where $\mathcal{X}$ contains no copy of $c_0.$ Then $g$ is Henstock-Pettis integrable if and only if there is an $N$-function $\bar{\phi} $ with the $ \Delta^{'} $ property such that $\{y^* g : y^* \in B_{\mathcal{X}}^{*} \} $ is relatively weakly compact in the Orlicz space $ \mathcal{L}^{\bar{\phi}}(\mu) .$ If $ g:[a,b] \to \mathcal{X} $ is strongly measurable, then its range is essentially separable and weakly compactly generated determined (Theorem 2 of [@JD], p. 42).\
If $ g$ is Henstock-Pettis integrable, then each perfect set in $[a,b]$ contains a portion on which $g$ is Pettis integrable (Theorem 2.6 of [@Ye]). Therefore, on this portion, $g$ is Dunford integrable with a countably additive vector measure.\
Hence $\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is uniformly integrable on that portion and, consequently, there is an $N$-function $\bar{\phi} $ with the $ \Delta ^{'}$ property such that $\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is relatively weakly compact in $ \mathcal{L}^{\bar{\phi}}(\mu) .$\
Conversely, suppose $ g $ is strongly measurable and there is an $N$-function $\bar{\phi} $ with the $ \Delta ^{'}$ property such that $\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is relatively weakly compact in $ \mathcal{L}^{\bar{\phi}}(\mu) .$\
Since $ g $ is strongly measurable, its range is weakly compactly generated determined. As $ \mathcal{L}^{\bar{\phi}}(\mu) \subset \mathcal{L}^1(\mu) $ (see [@Ricardo]).
So $\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is a bounded subset of $ \mathcal{L}^{\bar{\phi}}(\mu).$\
Since $ D_0 =\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is weakly compactly generated and $ g$ is strongly measurable, for each $ y^* \in \mathcal{X}^* $ there exists a sequence $(\gamma_n)_{n=1}^{\infty}$ of $ D_0$-valued simple functions such that $ y^*g = \lim y^* \gamma_n $ $\mu$-a.e.\
So $ g $ is Pettis integrable, since $ \mathcal{X} $ contains no copy of $ c_0 .$\
Thus $ g$ is Henstock-Pettis integrable, since $ \mathcal{X} $ contains no copy of $ c_0 .$ Let $g :[a,b] \to \mathcal{X} $ be Henstock-Dunford integrable. If there is a $ p > 1 $ such that $\{ y^* g: y^* \in B_{\mathcal{X}}^{*} \} $ is bounded in $ \mathcal{L}^p(\mu), $ then the Henstock-Dunford integral of $ g$ is countably additive. Let $\mathcal{X}$ contain no isomorphic copy of $ c_0.$ If $ g:[a,b] \to \mathcal{X}^* $ is strongly measurable and Henstock-Gel’fand integrable, then there is an $N$-function $\bar{\phi} $ with the $ \Delta ^{'}$ property such that $\{ yg: y \in B_{\mathcal{X}} \} $ is relatively weakly compact in $ \mathcal{L}^{\bar{\phi}}(\mu) .$ Define $T: \mathcal{X} \to \mathcal{L}_1$ by $ T(y)=yg .$ Then $T$ is a bounded linear operator.\
Claim: $T$ is weakly compact.\
If $v(\mathcal{E})= \mathcal{HG}\int\limits_{\mathcal{E}} g,$ then $v(\mathcal{E})(y) = \mathcal{H}\int\limits_{\mathcal{E}} yg $ (by Theorem \[thm21\]).\
Thus $v $ is a countably additive vector measure.\
By the Pettis theorem (p. 10 of [@D]) we have $\lim\limits_{\mu(\mathcal{E}) \to 0} v(\mathcal{E})(y)=0 $ for all $ y \in \mathcal{X} .$\
That is, for $ \epsilon > 0 $ there exists a $\delta >0 $ such that $$\mu(\mathcal{E})< \delta \Rightarrow ||v(\mathcal{E})|| < \epsilon.$$ So, $\mu(\mathcal{E})< \delta $ implies $ \sup\limits_{y \in B_{\mathcal{X}} }\int\limits_{\mathcal{E}}|yg|d \mu < \epsilon .$\
By the Bartle-Dunford-Schwartz theorem (Corollary 6, p. 14 of [@D]), $ v(\Sigma) $ is relatively weakly compact.\
Therefore $ \left\{ \int\limits_{\mathcal{E}} yg :
\mathcal{E} \in \Sigma , y \in B_{\mathcal{X}} \right\} $ is bounded.\
So $ \sup\limits_{y \in B_{\mathcal{X}} }\int\limits_\Omega |yg|d \mu < \infty .$\
Thus $\{yg : y \in B_{\mathcal{X}} \} $ is uniformly integrable.\
By the de la Vallée Poussin theorem, $\{yg : y \in B_{\mathcal{X}} \} $ is relatively weakly compact in some $ \mathcal{L}^{\bar{\phi}}(\mu) $ with $\bar{\phi}$ satisfying the $ \Delta^{'}$ condition. For a real Banach space $\mathcal{X}$ containing no copy of $ c_0,$ if $ g:[a,b] \to \mathcal{X} $ is a $w\mathcal{H}$-integrable function on $[a,b],$ then $\{ y^* g: y^* \in \overline{B_{\mathcal{X}}^{*}} \} $ is relatively weakly compact in a certain separable Orlicz space $ \mathcal{L}^{\bar{\phi}}(\mu). $ For a real Banach space $\mathcal{X}$ containing no copy of $ c_0,$ if $ g:[a,b] \to \mathcal{X} $ is a $w\mathcal{H}$-integrable function on $[a,b],$ then $\{ y^* g: y^* \in \overline{ B_{\mathcal{X}}^{*}} \} $ is uniformly integrable in $ \mathcal{L}_1(\mu).$\
Theorem \[thm12\] gives that $\{ y^* g : y^* \in \overline{B_{\mathcal{X}}^{*}} \} $ is relatively weakly compact in a certain separable Orlicz space $ \mathcal{L}^{\bar{\phi}}(\mu).$\
In particular, the same holds for $\{ y^* g : y^* \in B_{\mathcal{X}}^{*} \}, $ since $ B_{\mathcal{X}}^{*} \subseteq \overline{B_{\mathcal{X}}^{*}}. $ From Lemma \[lemm3\], the countable additivity of the $w\mathcal{H}$-integral can be deduced easily. J. Alexopoulos, De la Vallée Poussin’s theorem and weakly compact sets in Orlicz spaces, Quaest. Math. 17(2) (1994) 231–248. D. Barcenas, C.E. Finol, *On vector measures, uniformly integrable and Orlicz spaces*, in Vector Measures, Integration and Related Topics, Operator Theory: Advances and Applications, Vol. 201, Birkhäuser Verlag, Basel, 2010, pp. 51–57. B. Bongiorno, L. Di Piazza, K. Musial, Differentiation of an additive interval measure with values in a conjugate Banach space, Functiones et Approximatio 50(1) (2014) 169–180. R. del Campo, A. Fernandez, F. Mayoral, F.
Naranjo, The de la Vallée-Poussin theorem and Orlicz spaces associated to a vector measure, J. Math. Anal. Appl. 470 (2019) 279–291. J. Diestel, J.J. Uhl, Jr., *Vector Measures*, Math. Surveys 15, Amer. Math. Soc., Providence, 1977. N. Dunford, J.T. Schwartz, *Linear Operators, Part I*, Wiley-Interscience, New York, 1988. C.K. Fong, A continuous version of the Orlicz-Pettis theorem via vector-valued Henstock-Kurzweil integrals, Canad. Math. Bull. 24(2) (1981) 169–176. J.L. Gamez, J. Mendoza, On Denjoy-Dunford and Denjoy-Pettis integrals, Studia Mathematica 130(2) (1998) 115–133. Y. Geoju, On the Henstock-Kurzweil-Dunford and Kurzweil-Henstock-Pettis integrals, Rocky Mountain J. Math. 39(4) (2009) 1233–1244. Y. Guoju, A.N. Tianqing, On Henstock-Dunford and Henstock-Pettis integrals, Inter. J. Math. Math. Sci. 25(7) (2001) 467–478. R.A. Gordon, *The Integrals of Lebesgue, Denjoy, Perron and Henstock*, Grad. Stud. Math. 4, Amer. Math. Soc., Providence, 1994. S. Kao, J. Gonzales, The Henstock-Kurzweil Integral, Lecture Notes, April 28, 2015, 10 pages. T.J. Morrison, A note on the Denjoy integrability of abstractly valued functions, Proc. Amer. Math. Soc. 61(2) (1976) 385–386. K. Musial, *Pettis Integral*, in Handbook of Measure Theory (edited by E. Pap), North Holland, Amsterdam, 2002, pp. 531–568. M.M. Rao, Z. Ren, *The Theory of Orlicz Spaces*, Marcel Dekker, Inc., New York, 1991. M. Saadoune, R. Sayyad, From weak Henstock to weak McShane integrability, Real Analysis Exchange 38(2) (2012–2013) 447–468. S. Schwabik, Y. Guoju, *Topics in Banach Space Integration*, Series in Real Analysis, Vol. 10, World Scientific, Singapore, 2005. A.G. Stefansson, Pettis integrability, Trans. Amer. Math. Soc. 330(1) (1992) 401–418. [^1]: $^{\ast}$ The corresponding author.
---
abstract: 'Non-equilibrium active matter made up of self-driven particles with short-range repulsive interactions is a useful minimal system to study active matter as the system exhibits collective motion and nonequilibrium order-disorder transitions. We studied high-aspect-ratio self-propelled rods over a wide range of packing fraction and driving to determine the nonequilibrium state diagram and dynamic properties. Flocking and nematic-laning states occupy much of the parameter space. In the flocking state the average internal pressure is high and structural and mechanical relaxation times are long, suggesting that rods in flocks are in a translating glassy state despite overall flock motion. In contrast, the nematic-laning state shows fluid-like behavior. The flocking state occupies regions of the state diagram at both low and high packing fraction separated by nematic-laning at low driving and a history-dependent region at higher driving; the nematic-laning state transitions to the flocking state for both compression and expansion. We propose that the laning-flocking transitions are a type of glass transition which, in contrast to other glass-forming systems, can show fluidization as density increases. The fluid internal dynamics and ballistic transport of the nematic-laning state may promote collective dynamics of rod-shaped microorganisms.'
author:
- 'Hui-Shun Kuan*$^{a}$*, Robert Blackwell*$^{b}$*, Loren E. Hough*$^{b}$*, Matthew A. Glaser*$^{b}$*, and M. D. Betterton*$^{b}$*'
bibliography:
- 'collective.bib'
- 'zoterolibrary.bib'
title: 'Hysteresis, reentrance, and glassy dynamics in systems of self-propelled rods'
---

Active matter made up of self-driven particles exhibits novel physical properties including collective motion, nonequilibrium order-disorder transitions, and anomalous fluctuations and mechanical response [@ramaswamy10; @*marchetti13].
Understanding active matter may aid the development of new technologies including autonomously motile and self-healing synthetic materials. Examples of active matter include animal flocks [@cavagna10], crawling and swimming cells [@rappel99; @*cisneros11; @zhang10a; @thutupalli14], vibrated granular materials [@narayan07; @deseigne10], self-propelled colloidal particles [@bricard13; @palacci13], and the cellular cytoskeleton and cytoskeletal extracts [@nedelec97; @*butt10]. Among active matter, self-propelled rods (SPR) provide a useful minimal model system. Self-propulsion and excluded volume interactions via a short-range repulsive potential are the only ingredients; rod alignment occurs through collisions. Experiments that may be approximated as SPR include vibrated granular rods [@kudrolli08], motion of cytoskeletal filaments on a motor-bound surface [@butt10; @schaller10], and surface or film swarming of rod-like bacteria [@sokolov07; @zhang10a; @wensink12a; @thutupalli14]. Because of their simplicity, SPR are attractive for simulation study [@kraikivski06; @peruani06; @yang10; @peruani11a; @wensink12; @wensink12a; @mccandlish12; @abkenar13] and have also been the focus of analytic theory [@baskaran08; @*baskaran08a; @wensink12]. SPR display a rich variety of dynamic states, including collective motion [@vicsek95; @*gregoire04; @*chate08; @*bertin06; @*bertin09; @*ginelli10; @*peruani12; @*aldana07; @*ihle13; @toner95; @mishra10; @gopinath12; @peshkov12; @weber13; @deseigne10] and formation of dynamic clusters [@peruani06; @schaller10; @yang10; @peruani11b; @*peruani13; @mccandlish12; @abkenar13; @weber13]. For SPR, rod shape, density, and driving are important in determining the dynamic behavior [@peruani06; @baskaran08; @*baskaran08a; @yang10; @peruani11a; @mccandlish12; @wensink12; @wensink12a; @abkenar13]. For low driving, equilibrium-like isotropic and nematic liquid crystal phases are recovered [@baskaran08; @*baskaran08a; @abkenar13].
For higher driving, dynamic states featuring flocks, stripes, and swirls appear [@peruani06; @baskaran08; @*baskaran08a; @mccandlish12; @wensink12; @wensink12a; @abkenar13]. Baskaran and Marchetti derived a hydrodynamic model from the kinetics of SPR with two-rod collisions and determined a state diagram from linear stability analysis of homogeneous states, finding that activity lowers the isotropic-nematic transition density [@baskaran08; @*baskaran08a]. Previous simulation work has observed flocking and laning states similar to those we study here [@wensink12; @mccandlish12; @abkenar13], but did not examine dynamic state transitions, hysteresis, or structural and mechanical properties. In this work, by studying the state diagram over a broader range of parameters with extensive expansion and compression simulations and mechanical and structural characterization, we demonstrate strong hysteresis, the emergence of glassy dynamics in the flocking state, and reentrant fluidization. We studied self-propelled 2D spherocylinders with Brownian dynamics, as in previous work [@mccandlish12], using the computational scheme of Tao et al. [@tao05] developed for equilibrium simulations of concentrated solutions of high-aspect-ratio particles. Rods have length $L$ and diameter $\sigma$.
The center-of-mass and orientational equations of motion for rod $i$ with center-of-mass position ${\bf r}_i$ and orientation ${\bf u}_i$ are $$\begin{aligned} \label{eq:brownian} {\bf r}_i(t + \delta t) &=& {\bf r}_i(t) + {\bf \Gamma}_i^{-1}(t) \cdot {\bf F}_i(t) \delta t + \delta {\bf r}_i(t),\\ {\bf u}_i(t + \delta t) &=& {\bf u}_i(t) + {1 \over {\gamma_r}} {\bf T}_i(t) \times {\bf u}_i(t) \delta t + \delta {\bf u}_i(t),\end{aligned}$$ where the random displacements $\delta {\bf r}_i(t)$ and $\delta {\bf u}_i(t)$ are Gaussian-distributed, ${\bf \Gamma}_i^{-1}(t)$ is the inverse friction tensor, $\gamma_r$ is the rotational drag coefficient, and ${\bf F}_i(t)$ and ${\bf T}_i(t)$ are the deterministic force and torque on particle $i$ [@supplement]. Excluded-volume interactions between particles are modeled by the WCA potential as a function of the minimum distance $s_{ij}$ between the two finite line segments of length $L$ that define the axes of particles $i$ and $j$ [@supplement; @weeks71]. The self-propulsion force is directed along the particle axis with ${\bf F}_i^{\rm drive}= F_D {\bf u}_i$. In the absence of nonequilibrium driving, this model has been well-characterized both in 2D [@bates00] and 3D [@bolhuis97; @*mcgrother96]. We nondimensionalize using the length $\sigma$, energy $k_B T$, and time $\tau = D/\sigma^2$, where $D$ is the diffusion coefficient of a sphere of diameter $\sigma$. The three dimensionless parameters are the rod aspect ratio $R = L/\sigma$, fixed at 40, the packing fraction $\phi = A_{\rm rods}/A_{\rm system}$, and the translational Peclet number ${\rm Pe} = F_D L/(k_B T)$. We varied $\phi$ between $0.01$ and $1.04$ (where $\phi>1$ is possible due to the slight softness of the repulsive potential), and Pe between 0 and 320. We simulated $N=4000$ rods in a square, periodic box.
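The update rule above amounts to an Euler-Maruyama step. A minimal Python sketch of one timestep (illustrative, not the simulation code used here; for readability it assumes isotropic translational drag `gamma_t` in place of the full friction tensor ${\bf \Gamma}_i$, and all names are our own):

```python
import numpy as np

def brownian_step(r, u, F, T, dt, gamma_t, gamma_r, kT, rng):
    """One Euler-Maruyama step for 2D self-propelled rods.

    r, u : (N, 2) arrays of rod centers and unit orientations
    F, T : deterministic forces (N, 2) and scalar torques (N,)
    Simplification: isotropic translational drag gamma_t instead of
    the anisotropic friction tensor of the full model.
    """
    # deterministic drift plus Gaussian random displacement
    r = r + F / gamma_t * dt \
        + rng.normal(0.0, np.sqrt(2 * kT * dt / gamma_t), r.shape)
    # in 2D the torque is a scalar; rotate each u by dtheta
    dtheta = T / gamma_r * dt \
        + rng.normal(0.0, np.sqrt(2 * kT * dt / gamma_r), u.shape[0])
    cos, sin = np.cos(dtheta), np.sin(dtheta)
    u = np.stack([cos * u[:, 0] - sin * u[:, 1],
                  sin * u[:, 0] + cos * u[:, 1]], axis=1)
    return r, u

def drive_force(u, F_D):
    """Self-propulsion force F_i^drive = F_D u_i along the rod axis."""
    return F_D * u
```

Applying the rotation to the orientation (rather than adding a raw angular kick) keeps each ${\bf u}_i$ a unit vector without renormalization.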
Most simulations were initialized in an equilibrium isotropic, nematic, or crystalline initial condition, then nonequilibrium activity was turned on and the system was allowed to run for $10^7 \tau$. The simulation measurement run was $10^7 \tau$, and the time step $\Delta t = 0.25 \tau$. ![image](hysteresis){width="\textwidth"} At zero or low driving, we find equilibrium isotropic, nematic, and crystalline states (fig. \[overview\]a-d). While we did not map the equilibrium phase transitions in detail, our observations are consistent with previous work [@bates00]. As the Peclet number increases, lower packing fractions roughly corresponding to the equilibrium isotropic phase typically show flocking behavior characterized by collective motion of clusters of various sizes coexisting with a low-density vapor (fig. \[overview\]e), as observed previously [@peruani06; @yang10; @peruani11b; @*peruani13; @mccandlish12; @weber13]. While the flocking state remains globally isotropic (consistent with previous predictions [@baskaran08]), the formation of dense aligned clusters is characterized by short-range density correlations that lead to peaks in the pair distribution function and the emergence of polar and nematic orientational correlations that persist over a cluster-size length scale (fig. S1, and other data not shown). Rod mean-squared displacements are ballistic at short times, turning over to diffusive at long times due to flock reorientation. The long-time angular mean-squared displacement is diffusive. The flocking state shows large density heterogeneity suggestive of two-phase coexistence between dense orientationally ordered clusters and low-density isotropic rods. 
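The ballistic-to-diffusive crossover described above is read off the mean-squared displacement as a function of lag time. A short sketch of the standard estimator (illustrative, not the analysis code used in this work); it assumes unwrapped trajectories:

```python
import numpy as np

def mean_squared_displacement(traj):
    """MSD vs. lag time from an unwrapped trajectory.

    traj : (T, N, d) array of rod center positions over T frames.
    Averages over all rods and all time origins; returns a length T-1 array.
    """
    T = traj.shape[0]
    msd = np.empty(T - 1)
    for lag in range(1, T):
        disp = traj[lag:] - traj[:-lag]          # displacements at this lag
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=-1))
    return msd
```

On log-log axes, a slope of 2 at short lags (ballistic) turning over to 1 at long lags (diffusive) reproduces the behavior described for the flocking state.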
In previous work on self-propelled spheres or disks, two-phase coexistence of a dense cluster and a dilute vapor was observed that appears qualitatively similar to what we observe here [@palacci13; @theurkauff12; @*buttinoni13; @henkes11; @*fily12; @*redner13; @*speck14; @*wensink14; @*yang14; @*takatori15; @fily14]. However, flocks are dynamic and are constantly merging, breaking up, and exchanging particles with the dilute region [@peruani06; @peruani11b; @*peruani13]. We identified flocks based on measurements of the contact number $c_i = \sum_{i \neq j} e^{-s_{ij}^2}$ and local polar order parameter $p_i = \sum_{i \neq j}{\bf u}_i \cdot {\bf u}_j e^{-s_{ij}^2}/c_i$ of rod $i$. Two-dimensional histograms show peaks in the density for large $p_i$ over a range of $c_i$ (fig. S2); individual flocks were defined as collections of neighboring flock particles[@supplement] (fig. S2). We identified flocks and isolated them in a box empty of other rods; this led the isolated flock to break up, demonstrating that flocks are not stable as isolated clusters. Flock size distributions are stable in time and power law in form with an exponential cutoff, as observed previously[@peruani06; @zhang10a; @peruani11; @chen12; @peruani13] (fig. S3). As the Peclet number increases, higher packing fractions driven from an equilibrium nematic or crystal typically show nematic-laning behavior characterized by the formation of polar lanes of upward- and downward-moving particles (fig. \[overview\]f,g). The density is approximately uniform and the orientational order is globally nematic in most cases with polar correlations on the scale of the system size in the alignment direction and on the scale of a typical lane width perpendicular to the alignment direction (fig. S1 and data not shown). 
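The per-rod quantities $c_i$ and $p_i$ used above for flock identification are direct sums over Gaussian-weighted neighbors. A sketch (illustrative; it assumes the matrix of segment-segment minimum distances $s_{ij}$ is computed by a separate geometric routine):

```python
import numpy as np

def flock_metrics(s, u):
    """Contact number c_i and local polar order p_i for each rod.

    s : (N, N) symmetric matrix of minimum distances s_ij between rod axes
    u : (N, 2) unit orientation vectors
    """
    w = np.exp(-s**2)          # Gaussian neighbor weight exp(-s_ij^2)
    np.fill_diagonal(w, 0.0)   # exclude the i == j term
    c = w.sum(axis=1)          # c_i = sum_j exp(-s_ij^2)
    dots = u @ u.T             # u_i . u_j for all pairs
    p = (w * dots).sum(axis=1) / np.where(c > 0, c, 1.0)
    return c, p
```

Rods with large $p_i$ over a range of $c_i$ are tagged as flock members; flocks are then connected clusters of neighboring flock members.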
Rod mean-squared displacements are ballistic in the alignment direction and diffusive perpendicular to it, while the angular mean-squared displacement is bounded due to the maximum angular deviation of rods. The emergence of lanes in SPR and related models has been observed in previous simulation studies [@mccandlish12; @wensink12; @wensink12a; @abkenar13; @nagai15], and laning has been studied previously for spherical particles both in experiments [@leunissen05; @*sutterlin09; @*vissers11] and theory/simulation [@chakrabarti03; @dzubiella02; @*netz03; @*delhommelle05; @*glanz12]. Laning occurs because of the differences in collisions experienced by rods as a function of their polar environment: a rod surrounded by opposite-polarity rods will experience more collisions, and therefore more momentum transfer, than when surrounded by rods of similar polarity. A rod surrounded by others of similar polarity will therefore experience reduced lateral movement and be less likely to leave the polar lane [@chakrabarti03; @mccandlish12]. To characterize the transitions between nematic-laning and flocking states, we performed expansion and compression runs in which the packing fraction was changed by $\Delta \phi = 0.02$, the simulation was run for $10^7 \tau$ to reach a dynamic steady state, and then measurements were performed over an additional $10^7 \tau$. The appearance of the nematic-laning state depends on initial conditions; lanes with equal numbers of up- and down-moving rods result from initialization with an equilibrium nematic state at high rod packing fraction, which prevents rod reorientation. Upon expansion, the system undergoes an abrupt transition to the flocking state (fig. \[hysteresis\]a), while compression simulations subsequently started in the flocking state typically remain in the flocking state (fig. \[hysteresis\]b).
If we apply a nematic aligning field to a compressed flocking state, the induced rod reorientation can break up the flock and allow a transition back to the nematic-laning state (fig. \[hysteresis\]c). This strong hysteresis is another signature of an abrupt dynamic transition between the laning and flocking states. While previous work has examined the nonequilibrium state diagram of SPR [@peruani06; @baskaran08; @*baskaran08a; @mccandlish12; @wensink12; @wensink12a; @abkenar13], to our knowledge this is the first study to demonstrate strong hysteresis in this system. McCandlish et al. found the laning state to be unstable to break up [@mccandlish12]. While the strong hysteresis we observe makes it difficult to guarantee that any nonequilibrium state is stable for infinite time, our expansion and compression simulations effectively extended our simulation times up to $2 \times 10^8 \tau$ in the nematic-laning state, and upon reaching the transition boundary we typically see break up of the lanes into flocks within the $10^7 \tau$ equilibration run. Therefore, in our system the laning phase appears to be stable, consistent with other work [@wensink12; @wensink12a; @abkenar13]. The instability observed by McCandlish et al. may be related to the reentrance we observe if the simulations were performed near the upper limit of stability of the nematic-laning state. During expansion runs, the isotropic internal pressure $P_o$, measured by the virial [@supplement], abruptly changes by a factor of 2–10 at the transitions between nematic-laning and flocking states (fig. \[hysteresis\]d). (The nature of the pressure in active systems has been the subject of recent work [@solon14; @*solon14a; @*takatori14]; here we consider the internal pressure determined by the virial only.) At the highest packing fractions the internal pressure approaches a plateau value near 10 for all systems, suggesting that a pure dense flocking state has been reached.
The internal pressure of the flocking state lies along an envelope that decreases with decreasing packing fraction as the rod flocking/isotropic fraction varies. Nematic-laning systems undergo transitions to flocking upon both expansion and compression (fig. \[pressure\]a,b, open circles labeled by arrows indicate starting simulations of expansion/compression runs). Flocking systems typically remain flocking upon compression, but for low packing fractions a transition back to the nematic-laning state upon compression can occur (fig. \[pressure\]b, open circles labeled by downward-pointing arrows indicate starting simulations of compression runs). The dense clusters and high pressure in the flocking state suggest that the clusters may have slow internal dynamics. To characterize structural relaxation we measured the normalized structure-factor autocorrelation function $C(t)/C(0)$, where $C(t)=\langle \delta S(k,t) \delta S(k,0) \rangle$, $k$ is the magnitude of the wavevector and $\delta S(k,t) = S(k,t) - \left\langle S(k,t) \right\rangle$ is the fluctuation in the angle-averaged structure factor $S(k,t) = {{1} \over {2 \pi N}} \int_0^{2 \pi} d\phi \rho({\bf k}, t) \rho(-{\bf k}, t) $ [@supplement]. Because the angle-averaged structure factor is rotationally invariant, its autocorrelation probes internal structural relaxation of flocks and lanes but is insensitive to flock reorientation. We determined the location of the peak nearest to wave number $k=2 \pi/\sigma$, corresponding to side-by-side filaments separated by approximately one diameter. In the nematic-laning state, the structure-factor autocorrelation decays exponentially (fig. \[pressure\]c, red curve). However, in the flocking state, the structure-factor autocorrelation has a power-law tail, indicating slow structural relaxation (fig. \[pressure\]c, blue curve).
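The angle-averaged structure factor and its autocorrelation can be sketched as follows (an illustrative implementation of the formulas above, not the analysis code of this work; it uses continuum Fourier modes and ignores the periodic-box wavevector discretization):

```python
import numpy as np

def angle_averaged_sk(pos, kmag, n_angles=64):
    """Angle-averaged structure factor S(k) at one wavenumber magnitude.

    pos : (N, 2) rod positions; kmag : |k|.
    Averages rho(k) rho(-k) = |rho(k)|^2 over directions of k, divided by N.
    """
    N = pos.shape[0]
    phis = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    kvecs = kmag * np.stack([np.cos(phis), np.sin(phis)], axis=1)
    rho = np.exp(1j * pos @ kvecs.T).sum(axis=0)   # rho(k) for each direction
    return np.mean(np.abs(rho)**2) / N

def autocorrelation(x):
    """Normalized autocorrelation C(t)/C(0) of the fluctuations of a series."""
    dx = x - x.mean()
    c = np.correlate(dx, dx, mode='full')[len(dx) - 1:]
    return c / c[0]
```

Evaluating `angle_averaged_sk` at the peak near $k = 2\pi/\sigma$ for each frame and feeding the resulting time series to `autocorrelation` reproduces the $C(t)/C(0)$ measurement described above.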
Expansion to lower packing fractions has little effect on the power-law exponent, indicating that slow relaxation of dense clusters controls the decay of the structure-factor autocorrelation. Compression leads to a density-dependent exponent (fig. S4). Mechanical relaxation was measured by the autocorrelation function of the off-diagonal internal stress tensor $\langle \Pi_{xy}(t) \Pi_{xy}(0) \rangle$ [@supplement]. In the nematic-laning state, the stress autocorrelation drops to zero around $t = 1$ (fig. \[pressure\]d, blue, red, and purple curves). In the flocking state, the stress autocorrelation function relaxes to a small but long-lived plateau (fig. \[pressure\]d, yellow-green and grey curves). Consistent with this, the effective shear viscosity measured via the Green-Kubo relation [@supplement] shows a factor of $10^3$ increase upon transition from the nematic-laning to the flocking state for Pe=80 (fig. S4). The large increases in pressure and shear viscosity and slowed structural and mechanical relaxation that occurs upon transition from nematic-laning to flocking suggest that this is a type of glass transition in which flocks, although collectively moving, have an internally glassy, solid-like structure. Related observations were made in an experimental system with self-propelled colloids, for which nonequilibrium driving promoted formation of small, mobile crystalline clusters [@palacci13]. Related phase separation between a low-density gas and high-density liquid, glassy clusters or crystals has been observed both in experiments [@palacci13; @theurkauff12; @*buttinoni13] and theory and simulations [@henkes11; @*fily12; @*redner13; @*speck14; @*wensink14; @*yang14; @*takatori15; @fily14]. 
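The Green-Kubo estimate integrates the stress autocorrelation over time, $\eta \propto \int_0^\infty \langle \Pi_{xy}(t) \Pi_{xy}(0) \rangle\, dt$. A minimal sketch with the textbook 2D prefactor $A/k_B T$ (the exact normalization used in the supplement may differ):

```python
import numpy as np

def green_kubo_viscosity(stress_xy, dt, area, kT):
    """Effective shear viscosity from a time series of Pi_xy.

    stress_xy : (T,) samples of the off-diagonal internal stress
    Integrates the one-sided stress autocorrelation (trapezoid rule),
    truncated at half the series length to limit statistical noise.
    """
    s = stress_xy - stress_xy.mean()
    T = len(s)
    acf = np.array([np.mean(s[: T - lag] * s[lag:]) for lag in range(T // 2)])
    return area / kT * np.trapz(acf, dx=dt)
```

The long-lived plateau in the flocking-state stress autocorrelation is what drives the factor of $10^3$ viscosity increase quoted above: the integral keeps accumulating instead of converging quickly as in the nematic-laning state.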
In contrast to both recent active jamming work and classic granular jamming [@liu98; @*reichhardt14], in our self-propelled rod system the increased importance of aligning interactions means that the transition to the translating glassy flocking state can occur both as density is raised and as it is *lowered*. This reentrant fluidization appears to be a novel feature of this transition in systems of self-propelled rods. Self-propelled rods couple shape anisotropy to directional polarity, in contrast to self-propelled spheres. This enables a rich state diagram for SPR with important implications for transport (fig. \[compare\]). Orientational ordering allows SPR to form a nematic-laning state at high packing fraction characterized by fluid internal dynamics and ballistic transport along the lanes. Much of the same region of parameter space for self-propelled spheres consists of phase-separated liquid-liquid coexistence (fig. \[compare\]) [@henkes11; @*fily12; @*redner13; @*speck14; @*wensink14; @*yang14; @*takatori15; @fily14], for which particle dynamics are diffusive [@fily14] and the formation of dense clusters limits particle motion. Perhaps the physics of laning is important for collective motion of rod-shaped microorganisms such as *Myxococcus xanthus*, which during fruiting-body formation assemble into dense streams qualitatively similar to the lanes we observe [@thutupalli14]. Ballistic transport through coupling of orientational order and self-propulsion may give an advantage to rod-shaped rather than spherical bacteria. We thank Lisa Manning, John Toner, Leo Radzihovsky, and Joel Eaves for useful discussions. This work was supported by NSF grants MRSEC-DMR-0820579, EF-ATB-1137822, and DMR-0847685 and NIH grant T32 GM-065103. This work utilized the Janus supercomputer, which is supported by the National Science Foundation (CNS-0821794) and the University of Colorado Boulder. 
The Janus supercomputer is a joint effort of the University of Colorado Boulder, the University of Colorado Denver and the National Center for Atmospheric Research. Janus is operated by the University of Colorado Boulder.
--- abstract: 'The purpose of this paper is to study weak solutions of a nonlinear Neumann problem considered on a ball. Assuming that the potential is invariant, we consider an orbit of critical points, i.e. we do not assume that critical points are isolated. We apply techniques of the equivariant analysis to examine bifurcations from the orbits of trivial solutions. We formulate sufficient conditions for local and global bifurcations, in terms of the right-hand side of the system and eigenvalues of the Laplace operator. Moreover, we characterise orbits at which the global symmetry-breaking phenomenon occurs.' address: - | Faculty of Mathematics and Computer Science\ Nicolaus Copernicus University\ PL-87-100 Toruń\ ul. Chopina $12 \slash 18$\ Poland, ORCID 0000–0002–2417–9960 - | Faculty of Mathematics and Computer Science, University of Warmia and Mazury\ ul. Sloneczna 54, PL-10-710 Olsztyn, Poland, ORCID 0000–0003–0121–1136 - | School of Mathematics, West Pomeranian University of Technology\ PL-70-310 Szczecin, al. Piastów $48\slash 49$, Poland, ORCID 0000–0002–6117–2573 author: - Anna Gołȩbiewska - Joanna Kluczenko - Piotr Stefaniak title: Bifurcations from the orbit of solutions of the Neumann problem --- Introduction ============ In this paper, we study bifurcations of weak solutions of elliptic systems of the form: $$\label{eq:neumannwstep1} \left\{ \begin{array}{rclcl} - \triangle u & =& \lambda \nabla F(u ) & \text{ in } & B^N \\ \frac{\partial u}{\partial \nu} & = & 0 & \text{ on } & S^{N-1}, \end{array}\right.$$ where $B^N$ is the open unit ball in ${\mathbb{R}}^N$, $S^{N-1}=\partial B^N$ and the function $F\colon {\mathbb{R}}^m \to {\mathbb{R}}$ satisfies additional assumptions, see Section 2. In particular, we are interested in the equivariant case. Namely, we assume that on the space ${\mathbb{R}}^m$ there is defined an action of the compact Lie group $\Gamma$ and $\nabla F$ is the $\Gamma$-equivariant mapping. 
Moreover, it is known that $B^N$ is $SO(N)$-invariant, where $SO(N)$ stands for the special orthogonal group in dimension $N$. Consider the set $\nabla F^{-1}(0).$ For $u_0 \in \nabla F^{-1}(0)$ the constant function $\tilde{u}_0 \equiv u_0$ is a solution of the problem for all $\lambda \in {\mathbb{R}}$. Therefore, we obtain the family of trivial solutions $\{\tilde{u}_0\} \times {\mathbb{R}}$. Investigating the change of the Conley index at different levels $\lambda \in \mathbb{R}$, one can obtain a sequence of nontrivial weak solutions bifurcating from the point $(\tilde{u}_0, \lambda_0)$, for some values $\lambda_0 \in \mathbb{R}.$ Investigating the change of the topological degree, one can prove the existence of a continuum, containing $(0, \lambda_0),$ of nontrivial weak solutions of the system (i.e. the global bifurcation of weak solutions). For a system of elliptic differential equations with Dirichlet boundary conditions such methods have been used in many papers, among others by the first and the second author in [@GawRyb], [@GolRyb1], [@Klu]. A similar method has also been used in [@GolKlu] for the system with the Neumann boundary conditions, with bifurcation from infinity instead of from a critical point. The phenomenon of symmetry breaking for elliptic systems with the Neumann boundary conditions has been considered by the third author in [@Ste]. The results described above are obtained under the assumption that $u_0$ is an isolated critical point of the potential $F$. Assuming that $\nabla F$ is a $\Gamma$-equivariant mapping, we obtain that for $u_0 \in \nabla F^{-1}(0)$ also $\gamma u_0 \in \nabla F^{-1}(0)$ for all $\gamma \in \Gamma$. It is therefore clear that the assumption that the critical point $u_0$ is isolated does not have to be satisfied in this case. The method that can be used in this situation is to investigate the index of the isolated orbit. 
Under some additional assumptions, this method has been recently proposed by Perez-Chavela, Rybicki and Strzelecki in [@PRS]. There it has been proved that the computation of the Conley index of the orbit can in some cases be reduced to the computation of the index of a point from the space normal to the orbit. To study weak solutions of the system we apply variational methods, i.e. we associate with the system a functional $\Phi$ defined on a suitable Hilbert space ${\mathbb{H}}$. Its critical points are in one-to-one correspondence with weak solutions of the system. The tools we use are the finite and infinite dimensional equivariant Conley index (see [@[Bartsch]], [@Geba] for the definition in the finite dimensional case and [@Izydorek] for the infinite dimensional case) and the degree for invariant strongly indefinite functionals, defined in [@GolRyb1]. Consider the group ${\mathcal{G}}=\Gamma \times SO(N).$ Since ${\mathbb{R}}^m$ is a $\Gamma$-representation and $B^N$ is an $SO(N)$-invariant set, the space ${\mathbb{H}}$ is a ${\mathcal{G}}$-representation. It turns out that for $u_0 \in (\nabla F)^{-1}(0)$, $(g\tilde{u}_0, \lambda)$ is a critical point of $\Phi$ for all $g\in {\mathcal{G}}, \lambda \in {\mathbb{R}}.$ Therefore we can consider the set of trivial solutions ${\mathcal{T}}={\mathcal{G}}(\tilde{u}_0) \times {\mathbb{R}}$. We are going to investigate bifurcations of nontrivial solutions from the family ${\mathcal{T}}$. Our aim is to formulate necessary and sufficient conditions, in terms of the right-hand side of the system and of the eigenvalues of the Laplace operator, for a bifurcation from the orbit ${\mathcal{G}}(\tilde{u}_0) \times\{\lambda_0\}$. We also consider the global symmetry-breaking phenomenon at the orbit ${\mathcal{G}}(\tilde{u}_0) \times \{\lambda_0\}$. More precisely, knowing that the trivial solutions are radial, we study when the bifurcating solutions are non-radial. 
The analogous problem has been studied by the third author in [@RybShiSte] and [@RybSte] on the sphere and on the geodesic ball, with the use of the lemma due to Dancer (see [@Dancer1979]), characterising isotropy groups of bifurcating solutions. In our situation, if the group $\Gamma$ is not a discrete one, we cannot use this result. Therefore we generalise it. After this introduction the paper is organised in the following way: In Section \[sec:preliminaries\] we introduce the problem and recall some definitions. With an elliptic system on a ball we associate a functional. Next we study the properties of the linear system. We end this section with the definitions of local and global bifurcations from an orbit and of the admissible pair. In Section 3 we formulate and prove the main results of this article, namely Theorems \[th:BIF\] and \[th:GLOB\] concerning the local and global bifurcations of solutions, and Theorem \[thm:SymmBreak\], concerning the symmetry breaking problem. First we consider the phenomenon of bifurcation from the critical orbit. We start with some auxiliary results. In Lemma \[fact:warunekkonieczny\] we describe the set of parameters at which the bifurcation of solutions can occur. In Theorem \[th:zmiana\_indeksu\] we investigate the change of the Conley index at the levels obtained in Lemma \[fact:warunekkonieczny\]. This result is applied to prove Theorems \[th:BIF\] and \[th:GLOB\]. The local bifurcation of solutions, under weaker assumptions, is considered also in Theorem \[thm:0\]. Next we study the symmetry breaking problem. In Theorem \[thm:SymmBreak\] we prove the bifurcation of orbits of non-radial solutions emanating from orbits of radial ones. To obtain this result, we generalise the result of Dancer in Lemma \[lem:IsGr\]. In Section \[sec:illustration\] we illustrate our results with a few examples. 
Using the properties of the eigenspaces of the Laplace operator (with the Neumann boundary conditions) on the ball, we verify the assumptions of our main results. Section \[sec:app\] is the appendix. In the main part of our paper we assume that the reader is familiar with some classical definitions and facts, concerning for example the equivariant Conley index or the properties of eigenspaces of the Laplace operator on a ball. However, it is not easy to find a full description of these properties. Therefore, for the completeness of the paper we collect in this section the information which we use to prove our main results. In this section we also present an equivariant version of the implicit function theorem in infinite dimensional spaces, due to Dancer. Notation -------- Suppose that $G$ is a compact Lie group. We denote by ${\overline{\operatorname{sub}}}(G)$ the set of closed subgroups of $G$. For $u$ from a given $G$-space $X$ we denote by $G(u)$ the orbit through $u$, and $G_u$ stands for the isotropy group of $u$. Further, by $U(G)$ we denote the Euler ring of $G$ and we use the symbol $\chi_G(\cdot)$ to denote the $G$-equivariant Euler characteristic of a pointed finite $G$-CW-complex. Moreover, the symbols $CI_{G}(S, f)$ and $\mathcal{CI}_{{\mathcal{G}}}(S, f)$ stand for the Conley indices of an isolated invariant set $S$ of the flow generated by $f$, considered respectively in the finite and infinite dimensional cases. A more precise description can be found in the Appendix. Finally, for a Hilbert space ${\mathbb{H}}$ and $u_0 \in {\mathbb{H}}$ we denote by $B_{\delta}(u_0,{\mathbb{H}})$ (respectively $D_{\delta}(u_0,{\mathbb{H}})$) the open (respectively closed) ball in ${\mathbb{H}}$ centred at $u_0$ and with radius $\delta$. 
In particular, we use the symbol $B^N$ for the open ball if $\delta=1$, $u_0=0$ and ${\mathbb{H}}={\mathbb{R}}^N$ and we write $S^{N-1}$ for $\partial B^N.$ Preliminaries {#sec:preliminaries} ============= Throughout this paper $\Gamma$ stands for a compact Lie group and ${\mathbb{R}}^m$ is an orthogonal representation of the group $\Gamma$. Consider $F\colon {\mathbb{R}}^m \to {\mathbb{R}}$ satisfying: 1. $F \in C^2({\mathbb{R}}^m , {\mathbb{R}})$ is such that for every $u \in {\mathbb{R}}^m$ we have $|\nabla^2 F(u) | \leq a + b |u|^{q}$ where $a,b \in {\mathbb{R}}$ and $1<q < \frac{4}{N-2}$ for $N \geq 3$ and $ 1< q< \infty $ for $N=2,$ 2. $F$ is $\Gamma$-invariant, i.e. $F(\gamma u) =F(u)$ for every $\gamma \in \Gamma$, $u\in{\mathbb{R}}^m$. Our aim is to study bifurcations of weak solutions of the nonlinear Neumann problem, parameterised by $\lambda \in {\mathbb{R}}$, $$\label{eq:neumann} \left\{ \begin{array}{rclcl} - \triangle u & =& \lambda \nabla F(u ) & \text{ in } & B^N \\ \frac{\partial u}{\partial \nu} & = & 0 & \text{ on } & S^{N-1}. \end{array}\right.$$ Denote by $H^1(B^N)$ the first Sobolev space on $B^N$ and consider a separable Hilbert space ${\mathbb{H}}= \bigoplus_{i=1}^{m} H^1(B^N)$ with the scalar product $$\label{iloczyn} {\displaystyle}\langle v, w \rangle_{{\mathbb{H}}} = {\displaystyle}\sum_{i=1}^m \langle v_i, w_i \rangle_{H^1( B^N)} = \sum_{i=1}^m \int\limits_{B^N} (\nabla v_i(x), \nabla w_i(x)) + v_i(x)\cdot w_i(x) dx.$$ Denote by ${\mathcal{G}}$ the group $\Gamma \times SO(N)$, where $SO(N)$ is the special orthogonal group in dimension $N$. 
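For intuition, the scalar product above can be approximated numerically. The sketch below is an illustration only, for $m=1$ and with a one-dimensional interval standing in for $B^N$: it discretizes $\langle v, w \rangle = \int \nabla v \cdot \nabla w + v\, w \, dx$ with central finite differences and the trapezoidal rule.

```python
import numpy as np

def h1_inner(v, w, x):
    """Discrete H^1 inner product on a 1D grid x:
    integral of v'(x)w'(x) + v(x)w(x), gradients via central differences."""
    dv = np.gradient(v, x)
    dw = np.gradient(w, x)
    integrand = dv * dw + v * w
    # trapezoidal rule
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)))

# On [0, 1]: <1, 1> = 1 exactly, and <x, x> = 1 + 1/3 = 4/3 up to grid error.
```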
Note that the space ${\mathbb{H}}$ with the scalar product given by is an orthogonal ${{\mathcal{G}}}$-representation with the ${{\mathcal{G}}}$-action given by $$\label{eq:action} (\gamma, \alpha) (u)(x)= \gamma u({\alpha}^{-1}x)\ \text{ for }\ (\gamma, \alpha) \in {{\mathcal{G}}}, u \in {\mathbb{H}}, x\in B^N.$$ It is well known that weak solutions of the problem are in one-to-one correspondence with critical points (with respect to $u$) of the functional $\Phi \colon {\mathbb{H}}\times {\mathbb{R}}\rightarrow {\mathbb{R}}$ defined by $$\label{eq:Phi} \Phi (u,\lambda) = \frac{1}{2} \int\limits_{B^N} |\nabla u(x)|^2 dx- \lambda\int\limits_{B^N} F(u(x))dx.$$ Computing the gradient of $\Phi$ with respect to $u$ we obtain: $$\label{eq:gradientPhi} \langle \nabla_{u} \Phi (u,\lambda), v \rangle_{{\mathbb{H}}} = \int\limits_{B^N} (\nabla u(x), \nabla v(x) ) - (\lambda \nabla F (u(x)), v(x) ) dx, \ u,v \in {\mathbb{H}}.$$ Moreover, $$\begin{split} \left<\nabla^2_u \Phi(u, \lambda)w, v\right>_{{\mathbb{H}}}=\int\limits_{B^N} ( \nabla w(x),\nabla v(x)) -(\lambda \nabla^2 F(u(x))w(x), v(x)) dx, \ u, w, v \in {\mathbb{H}}. \end{split}$$ Assumption (B2) implies that $\nabla_u \Phi \colon{\mathbb{H}}\times {\mathbb{R}}\rightarrow {\mathbb{H}}$ is ${{\mathcal{G}}}$-equivariant. Moreover, from imbedding theorems and the assumption (B1) it follows that the operator $\nabla_u \Phi$ is a completely continuous perturbation of the identity. Linear equation {#linear} --------------- In this subsection we consider the equation in the linear case, i.e. the system: $$\label{eq:lin_neumann} \left\{ \begin{array}{rclcl} - \triangle u & = &\lambda A u & \text{ in } & B^N \\ \frac{\partial u}{\partial \nu} & = & 0 & \text{ on } & S^{N-1}, \end{array} \right.$$ where $A$ is a real, symmetric $(m\times m)$-matrix. 
With this system we can associate the functional $\Phi_A \colon {\mathbb{H}}\times {\mathbb{R}}\rightarrow {\mathbb{R}}$ given by $$\label{eq:phi} \Phi_A (u,\lambda) = \frac{1}{2} \int\limits_{B^N} |\nabla u(x)|^2 dx - \frac{\lambda}{2} \int\limits_{B^N} (Au(x),u(x))dx.$$ Note that for every $v\in{\mathbb{H}}$ we have $$\langle \nabla_u \Phi_A(u,\lambda),v\rangle_{{\mathbb{H}}}= \langle u, v \rangle_{{\mathbb{H}}} - \langle L_{\lambda A} u, v \rangle_{{\mathbb{H}}},$$ where $$\label{LA} \langle L_{\lambda A} u, v \rangle_{{\mathbb{H}}} = \int\limits_{B^N} (u(x), v(x)) + (\lambda A u(x), v(x)) dx.$$ The existence and boundedness of the operator $L_{\lambda A}\colon {\mathbb{H}}\rightarrow {\mathbb{H}}$ follow from the Riesz theorem. By definition $L_{\lambda A}$ is self-adjoint. Let us denote by $\sigma(-\Delta; B^N) = \{ 0= \beta_1 < \beta_2 < \ldots < \beta_k < \ldots\}$ the set of distinct eigenvalues of the Laplace operator (with the Neumann boundary conditions) on the ball. Write ${\mathbb{V}}_{-\Delta}(\beta_k)$ for the eigenspace of $-\Delta$ corresponding to $\beta_k \in \sigma(-\Delta; B^N)$. In the Appendix we give a more precise description of these eigenspaces. By the spectral theorem it follows that ${\displaystyle}H^1(B^N) = cl ( \bigoplus_{k=1}^{\infty} {\mathbb{V}}_{-\Delta} (\beta_k)).$ Let us denote by ${\mathbb{H}}_k$ the space ${\displaystyle}\bigoplus_{i=1}^m {\mathbb{V}}_{-\Delta}(\beta_k).$ In particular, $u=\sum\limits_{k=1}^{\infty}u_k$ for every $u\in {\mathbb{H}}$, where $u_k\in{\mathbb{H}}_k$. Let $\alpha_1,\ldots,\alpha_m$ denote the eigenvalues of $A$ (not necessarily distinct) with corresponding eigenvectors $f_1,\ldots,f_m$, which form an orthonormal basis of ${\mathbb{R}}^m$. Let $\pi_j\colon{\mathbb{H}}\to H^1(B^N)$ be a projection such that $\pi_j(u)(x)=(u(x),f_j)$, $j=1,\ldots, m$. 
Clearly, if $u_k \in {\mathbb{H}}_k,$ then $\pi_j(u_k) \in {\mathbb{V}}_{-\Delta}(\beta_k)$ for $j=1, \ldots, m.$ In the lemma below we characterise the operator $L_{\lambda A}$ given by the formula above. \[operatorLlambdaA\] For every $u\in{\mathbb{H}}$ $$L_{\lambda A} u=\sum\limits_{k=1}^{\infty}\sum\limits_{j=1}^m \frac{1+\lambda\alpha_j}{1+\beta_k} \pi_j(u_k) \cdot f_j.$$ The proof of this lemma is standard, see for example the proof of Lemma 3.2 in [@GolKlu]. Let us denote by $\sigma(L)$ the spectrum of a linear operator $L\colon {\mathbb{H}}\to {\mathbb{H}}$. The following corollary is immediate from the above lemma: \[spektrumLA\] Let $L_{\lambda A}$ be the operator defined above. Then: $$\sigma(L_{\lambda A})=\left\{ \frac{1+\lambda\alpha_j}{1+\beta_k}\colon \alpha_j \in\sigma(A), \beta_k\in \sigma(-\Delta; B^N) \right\}.$$ Moreover, $$\sigma(Id-L_{\lambda A})=\left\{ \frac{\beta_k-\lambda\alpha_j}{1+\beta_k}\colon \alpha_j \in\sigma(A), \beta_k\in \sigma(-\Delta; B^N) \right\}.$$ Fix eigenvalues $\alpha_{j_0}\in \sigma(A)$ and $\beta_{k_0}\in\sigma(-\Delta; B^N)$. Let ${\mathbb{V}}_A(\alpha_{j_0})$ be the eigenspace associated with the eigenvalue $\alpha_{j_0}$ and $\mu_{A}(\alpha_{j_0}) = \dim {\mathbb{V}}_A(\alpha_{j_0})$. Let $\Pi_{j_0}\colon {\mathbb{R}}^m\to{\mathbb{R}}^m$ be an orthogonal projection such that $\Pi_{j_0}({\mathbb{R}}^m)={\mathbb{V}}_A(\alpha_{j_0})$ and define $\tilde{\Pi}_{j_0}\colon {\mathbb{H}}\to {\mathbb{H}}$ by $(\tilde{\Pi}_{j_0}(u))(x)=\Pi_{j_0}(u(x))$. 
Denote $${\mathbb{V}}_{-\Delta}(\beta_{k_0})^{\mu_{A}(\alpha_{j_0})}=\tilde{\Pi}_{j_0}\left(\bigoplus\limits_{j=1}^m{\mathbb{V}}_{-\Delta}(\beta_{k_0}) \right).$$ It follows that $${\mathbb{V}}_{-\Delta}(\beta_{k_0})^{\mu_{A}(\alpha_{j_0})}= \mathrm{span}\left\{ h\cdot f\colon h\in {\mathbb{V}}_{-\Delta}(\beta_{k_0}), f\in {\mathbb{V}}_A(\alpha_{j_0}) \right\}\subset{\mathbb{H}}.$$ From Lemma \[operatorLlambdaA\] we obtain: \[cor:kernel\] If $\sigma(\lambda A)\cap \sigma(-\Delta; B^N)=\{\alpha_{j_1},\ldots, \alpha_{j_s}\}$, then $$\ker( Id - L_{\lambda A})={\mathbb{V}}_{-\Delta}(\alpha_{j_1})^{\mu_{\lambda A}(\alpha_{j_1})}\oplus\ldots\oplus {\mathbb{V}}_{-\Delta}(\alpha_{j_s})^{\mu_{\lambda A}(\alpha_{j_s})}.$$ Notion of the bifurcation from the critical orbit. {#subsec:bifurcation} -------------------------------------------------- Fix $u_0 \in (\nabla F)^{-1}(0)$. Since $F$ is $\Gamma$-invariant, and therefore $\nabla F$ is $\Gamma$-equivariant, $\gamma u_0 \in (\nabla F)^{-1}(0)$ for all $\gamma \in \Gamma$, i.e. $\Gamma(u_0) \subset (\nabla F)^{-1}(0).$ We call such a set a critical orbit of $F$. Note that $T_{u_0} \Gamma (u_0) \subset \ker \nabla^2 F (u_0)$ and therefore $\dim \ker \nabla^2 F (u_0) \geq \dim T_{u_0} \Gamma (u_0) = \dim \Gamma (u_0) .$ We assume that equality holds in this inequality: $$\label{eq:orbita} \dim \ker \nabla^2 F (u_0) = \dim \Gamma (u_0).$$ We call such an orbit a non-degenerate one. By the equivariant Morse lemma, see [@[Wass]], we conclude from this assumption that $\Gamma (u_0)$ is isolated in $(\nabla F)^{-1}(0).$ Since $u_0 \in (\nabla F)^{-1}(0)$, a constant function $\tilde{u}_0\equiv u_0$ is a solution of the problem for all $\lambda \in \mathbb{R}$. Therefore, $(\tilde{u}_0, \lambda)$, and consequently $(\gamma\tilde{u}_0, \lambda)$ for every $\gamma\in \Gamma$, is a critical point of the functional $\Phi$ defined above. 
Since ${\mathcal{G}}(\tilde{u}_0)=\Gamma(\tilde{u}_0)$, we obtain a critical orbit of $\Phi$ and therefore a ${{\mathcal{G}}}$-orbit of weak solutions of the problem for all $\lambda \in \mathbb{R}$. Hence we can consider a family of solutions ${\mathcal{T}}= {{\mathcal{G}}}(\tilde{u}_0) \times {\mathbb{R}}\subset {\mathbb{H}}\times {\mathbb{R}}$. We call the elements of ${\mathcal{T}}$ the trivial solutions of the problem. Put ${\mathcal{N}}=\{(v, \lambda) \in ({\mathbb{H}}\times {\mathbb{R}}) \setminus {\mathcal{T}}\colon \nabla_v\Phi(v, \lambda)=0\}.$ A local bifurcation from the orbit ${{\mathcal{G}}}(\tilde{u}_0) \times \{\lambda_0\} \subset {\mathcal{T}}$ of solutions occurs if the point $(\tilde{u}_0, \lambda_0)$ is an accumulation point of the set ${\mathcal{N}}$. Note that if $(\tilde{u}_0, \lambda_0)$ is an accumulation point of ${\mathcal{N}}$, then for all $g \in {\mathcal{G}}$, $(g\tilde{u}_0, \lambda_0)$ is also an accumulation point. Therefore ${\mathcal{G}}(\tilde{u}_0)\subset cl({\mathcal{N}})$. A global bifurcation from the orbit ${{\mathcal{G}}}(\tilde{u}_0) \times \{\lambda_0\} \subset {\mathcal{T}}$ of solutions occurs if there is a connected component ${\mathcal{C}}(\lambda_0)$ of $cl ({\mathcal{N}})$ such that either ${\mathcal{C}}(\lambda_0)\cap ({\mathcal{T}}\setminus ({{\mathcal{G}}}(\tilde{u}_0)\times \{\lambda_0\}))\neq \emptyset$ or ${\mathcal{C}}(\lambda_0)$ is unbounded. The set of all $\lambda_0 \in {\mathbb{R}}$ such that a local (respectively global) bifurcation from the orbit ${{\mathcal{G}}}(\tilde{u}_0) \times \{\lambda_0\} $ occurs is denoted by $BIF$ (respectively $GLOB$). Note that directly from the above definitions it follows that $GLOB \subset BIF.$ Admissible pair {#subsec:adm} --------------- The notion of the admissible pair has been introduced in [@PRS]. Fix a compact Lie group $G$ and let $H\in{\overline{\operatorname{sub}}}(G)$. 
Denote by $(H)_G$ the conjugacy class of $H.$ A pair $(G,H)$ is called admissible if for any $K_1,K_2\in{\overline{\operatorname{sub}}}(H)$ the following condition is satisfied: if $(K_1)_H\neq (K_2)_H$, then $(K_1)_G\neq (K_2)_G$. \[admissible\] The pair $(\Gamma\times SO(N), \{e\}\times SO(N))$ is admissible. Let us denote by $H$ the group $\{e\} \times SO(N)$ and recall that ${\mathcal{G}}=\Gamma \times SO(N).$ Moreover, let $\tilde{K}_1, \tilde{K}_2 \in {\overline{\operatorname{sub}}}(H).$ By the definition of $H$ there are $K_1, K_2 \in {\overline{\operatorname{sub}}}(SO(N))$ such that $\tilde{K}_1 = \{e\} \times K_1$ and $\tilde{K}_2 = \{e\} \times K_2$. Suppose that $(\tilde{K}_1)_{{\mathcal{G}}}= (\tilde{K}_2)_{{\mathcal{G}}}$, i.e. $(\{e\} \times K_1)_{{\mathcal{G}}} =(\{e\} \times K_2)_{{\mathcal{G}}}.$ Therefore there exists ${(\gamma, \alpha )\in {{\mathcal{G}}}}$ such that $\{e\} \times K_1 = (\gamma, \alpha )(\{e\} \times K_2)(\gamma, \alpha )^{-1}$ and hence $$\begin{aligned} \{e\} \times K_1=\{\gamma e \gamma^{-1}\} \times \alpha K_2 \alpha ^{-1} = \{e\} \times \alpha K_2 \alpha ^{-1}= (e,\alpha ) (\{e\} \times K_2)(e,\alpha )^{-1}.\end{aligned}$$ Thus $(\tilde{K}_1)_H = (\tilde{K}_2)_H$ and the proof is complete. Main Results {#sec:main} ============ Consider the nonlinear system with a potential $F$ satisfying (B1), (B2). Fix $u_0 \in (\nabla F)^{-1}(0)$ such that the orbit $\Gamma(u_0)$ is non-degenerate. We impose two additional assumptions: 1. $F(u) = \frac{1}{2} ( A (u-u_0), u-u_0 ) + g(u-u_0),$ where $A $ is a real symmetric $(m\times m)$-matrix and $\nabla g (u) = o(|u|)$ for $|u| \rightarrow 0$, 2. 
$\Gamma_{u_0}=\{e\}.$ From the assumption (B3) we conclude that the gradient of the functional associated with the equation has the following form: $$\nabla_u \Phi (u, \lambda)= u - \tilde{u}_0 - L_{\lambda A}(u-\tilde{u}_0)+\lambda \nabla \eta(u-\tilde{u}_0),$$ where $L_{\lambda A}\colon{\mathbb{H}}\rightarrow {\mathbb{H}}$ is a ${\mathcal{G}}$-equivariant operator given by . Moreover, $\nabla \eta\colon{\mathbb{H}}\rightarrow {\mathbb{H}}$ given by $\langle \nabla \eta(u), v\rangle_{{\mathbb{H}}}= \int_{B^N}(\nabla g(u(x)),v(x)) dx$ is a ${\mathcal{G}}$-equivariant operator such that $\nabla \eta(u)=o(|u|_{{\mathbb{H}}})$ for $|u|_{{\mathbb{H}}}\rightarrow 0.$ From the assumption (B4) it follows that ${\mathcal{G}}_{\tilde{u}_0}=\{e\} \times SO(N).$ Bifurcation from the critical orbit. {#bifurcations} ------------------------------------ Following the standard notation we denote the linear part of $\nabla_u \Phi(\cdot, \lambda)$ at $\tilde{u}_0$ by $\nabla^2_u \Phi(\tilde{u}_0, \lambda),$ thus $\nabla^2_u \Phi(\tilde{u}_0, \lambda)u=u - L_{\lambda A} u$. Let us denote by $\Lambda$ the set $\bigcup_{\alpha_j \in \sigma(A) \setminus\{0\}} \bigcup_{\beta_k \in \sigma(-\Delta;B^N)} \{\frac{\beta_k}{\alpha_j}\}.$ \[fact:warunekkonieczny\] If $\lambda_0 \in BIF,$ then $\lambda_0 \in \Lambda.$ We first observe that for all $\lambda \in {\mathbb{R}}$, since ${\mathcal{G}}(\tilde{u}_0)$ is a critical orbit of $\Phi(\cdot,\lambda)$, we have $\dim \ker \nabla^2_u\Phi(\tilde{u}_0,\lambda)\geq \dim ({\mathcal{G}}(\tilde{u}_0)\times\{\lambda\})$. Moreover if $\lambda_0 \in BIF$, this inequality is a strict one. 
Indeed, if $\dim \ker \nabla^2_u\Phi(\tilde{u}_0,\lambda_0)= \dim ({\mathcal{G}}(\tilde{u}_0)\times\{\lambda_0\})$, then by the equivariant implicit function theorem (see Theorem \[G-ImplicitInfinite\]) there exists $\varepsilon >0$ such that the only solutions of the equation $\nabla_u \Phi (u, \lambda) = 0$ are elements of ${{\mathcal{G}}}(\tilde{u}_0) \times \{\lambda\}$ for $\lambda \in (\lambda_0 - \varepsilon, \lambda_0 + \varepsilon).$ From this we obtain $\lambda_0 \not \in BIF.$ Therefore, if $\lambda_0 \in BIF,$ $$\label{eq:warunek_dostateczny} \dim \ker \nabla^2_u \Phi (\tilde{u}_0, \lambda_0) > \dim ({{\mathcal{G}}}(\tilde{u}_0) \times \{\lambda_0\}).$$ Since ${\mathcal{G}}(\tilde{u}_0)=\Gamma(\tilde{u}_0),$ we conclude from the above that ${\displaystyle}\dim \ker \nabla^2_u \Phi (\tilde{u}_0, \lambda_0) > \dim ({{\mathcal{G}}}(\tilde{u}_0) \times \{\lambda_0\})= \dim \ker \nabla^2 F (u_0),$ i.e. $\dim \ker \left(Id - L_{\lambda_0 A}\right) > \dim \ker A.$ Using Corollary \[spektrumLA\] we obtain that this condition is satisfied if and only if $\{(\alpha_j, \beta_k) \in \sigma(A) \times \sigma(-\Delta; B^N)\colon \beta_k = \lambda_0 \alpha_j\} \neq \{(0,0)\}.$ Therefore there are $(\alpha_j, \beta_k) \in (\sigma(A)\setminus\{0\}) \times \sigma(-\Delta; B^N)$ such that $\beta_k=\lambda_0\cdot \alpha_j$, i.e. $\lambda_0 \in \Lambda.$ Fix $\lambda_0 \in \Lambda $ and choose $\varepsilon>0$ such that $\Lambda\cap[\lambda_0-\varepsilon,\lambda_0+\varepsilon]=\{\lambda_0\}$. From the definition of $\Lambda$ such a choice is always possible. 
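Once the eigenvalues of $A$ and of the Neumann Laplacian are known, the set $\Lambda=\{\beta_k/\alpha_j\}$ of candidate bifurcation levels and an isolating $\varepsilon$ are straightforward to compute. A small sketch (the eigenvalue lists are placeholders, not values computed from an actual matrix $A$ or from $-\Delta$ on the ball):

```python
def candidate_levels(alphas, betas):
    """Lambda = { beta_k / alpha_j : alpha_j in sigma(A) \ {0},
    beta_k in sigma(-Delta; B^N) } -- the only possible bifurcation levels."""
    return sorted({b / a for a in alphas if a != 0 for b in betas})

def isolating_eps(lam0, levels):
    """Choose eps > 0 so that [lam0 - eps, lam0 + eps] meets the
    (discrete) set of candidate levels only at lam0 itself."""
    gaps = [abs(l - lam0) for l in levels if l != lam0]
    return min(gaps) / 2 if gaps else 1.0

# Placeholder spectra: sigma(A) and the first Neumann eigenvalues.
alphas = [-1.0, 0.0, 2.0]
betas = [0.0, 3.39, 9.33]
Lam = candidate_levels(alphas, betas)
```

For finitely many eigenvalues under consideration, `isolating_eps` makes the choice of $\varepsilon$ in the text explicit: half the distance from $\lambda_0$ to the nearest other element of $\Lambda$.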
Since $\lambda_0 \pm \varepsilon \notin \Lambda,$ Lemma \[fact:warunekkonieczny\] implies that $\lambda_0\pm \varepsilon \notin BIF$ and therefore ${\mathcal{G}}(\tilde{u}_0) \subset {\mathbb{H}}$ is an isolated critical orbit of the ${\mathcal{G}}$-invariant functionals $\Phi(\cdot, \lambda_0 \pm \varepsilon) \colon {\mathbb{H}}\to {\mathbb{R}}.$ From this and the properties of flows induced by gradient operators, we conclude that ${\mathcal{G}}(\tilde{u}_0)$ is also an isolated invariant set (in the sense of the equivariant Conley index theory, see [@Izydorek]) for the flows induced by the operators $-\nabla_u \Phi(\cdot, \lambda_0 \pm \varepsilon) $. Therefore, the indices $CI_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi(\cdot, \lambda_0-\varepsilon))$, $CI_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi(\cdot, \lambda_0+\varepsilon))$ are well-defined. In the following we study when they are not equal. Assume that $\sigma(\lambda_0 A)\cap \sigma(-\Delta; B^N)=\{\alpha_{j_1},\ldots, \alpha_{j_s}\}.$ We consider the conditions: - \[char1\] $\lambda_0\neq 0$ and there is $i\in \{1,\ldots,s\}$ satisfying $\dim{\mathbb{V}}_{-\Delta}(\alpha_{j_i})>1$, - \[char2\] $\lambda_0\neq 0$, $\dim{\mathbb{V}}_{-\Delta}(\alpha_{j_i})=1$ for every $i\in \{1,\ldots,s\}$ and $\dim \ker (Id-L_{\lambda_0A}) - \dim \ker A$ is an odd number, - \[char3\] $\lambda_0= 0$ and $\sum_{\alpha \in \sigma_+(A)} \mu_A(\alpha)-\sum_{\alpha \in \sigma_-(A)} \mu_A(\alpha)$ is odd. Note that we can reformulate conditions (C1)–(C3) in the following way: 1. $\lambda_0\neq 0$ and there is $i\in \{1,\ldots,s\}$ such that ${\mathbb{V}}_{-\Delta}(\alpha_{j_i})$ is a nontrivial $SO(N)$-representation, 2. $\lambda_0\neq 0$, $\dim{\mathbb{V}}_{-\Delta}(\alpha_{j_i})=1$ for every $i\in \{1,\ldots,s\}$ and $\sum^s_{i=1} \mu_{\lambda_0 A}(\alpha_{j_i})-\mu_A(0)$ is odd, 3. $\lambda_0=0$ and $m-\dim \ker A$ is odd. Indeed, 1. 
$\dim{\mathbb{V}}_{-\Delta}(\alpha_{j_i})>1$ if and only if ${\mathbb{V}}_{-\Delta}(\alpha_{j_i})$ is a nontrivial $SO(N)$-representation, see Remark \[rem:nontriviality\]; 2. since $\dim{\mathbb{V}}_{-\Delta}(\alpha_{j_i})=1$, from Corollary \[cor:kernel\] we obtain $\dim \ker (Id-L_{\lambda_0A})=\sum^s_{i=1} \mu_{\lambda_0 A}(\alpha_{j_i})$; 3. since $\sum_{\alpha \in \sigma_+(A)} \mu_A(\alpha)+\sum_{\alpha \in \sigma_-(A)} \mu_A(\alpha)+\mu_A(0)=m$, if $m-\dim \ker A$ is odd, then so is $\sum_{\alpha \in \sigma_+(A)} \mu_A(\alpha)-\sum_{\alpha \in \sigma_-(A)} \mu_A(\alpha)$. \[th:zmiana\_indeksu\] Assume that $\lambda_0 \in \Lambda $ and one of the conditions (C1)–(C3) is satisfied. Then $$\mathcal{CI}_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi(\cdot, \lambda_0-\varepsilon)) \neq \mathcal{CI}_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi(\cdot, \lambda_0+\varepsilon)).$$ Denote by $\tilde{{\mathbb{H}}}\subset{\mathbb{H}}$ the linear subspace normal to ${{\mathcal{G}}}(\tilde{u}_0)$ at $\tilde{u}_0$, i.e. $\tilde{{\mathbb{H}}}=T_{\tilde{u}_0}^{\perp} {{\mathcal{G}}}(\tilde{u}_0)\subset {\mathbb{H}}$. We start the proof with showing that we can reduce comparing the Conley indices ${\mathcal{C}}{\mathcal{I}}_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi(\cdot, \lambda_0\pm\varepsilon))$ to comparing Euler characteristics of some indices on the space $\tilde{{\mathbb{H}}}$. For $n\geq 1$ put ${\mathbb{H}}^n=\bigoplus_{k=1}^{n}{\mathbb{H}}_k$ and $\Phi^n=\Phi_{|{\mathbb{H}}^n\times{\mathbb{R}}}\colon {\mathbb{H}}^n\times{\mathbb{R}}\rightarrow {\mathbb{R}}.$ Note that ${\mathcal{G}}(\tilde{u}_0)=\Gamma(\tilde{u}_0) \subset T_{\tilde{u}_0} {\Gamma}(\tilde{u}_0) \oplus T_{\tilde{u}_0}^{\perp}{\Gamma}(\tilde{u}_0) \approx {\mathbb{R}}^m \approx {\mathbb{H}}_1.$ Therefore ${\mathcal{G}}(\tilde{u}_0)$ is a critical orbit of $\Phi^n(\cdot, \lambda_0 \pm \varepsilon)$ for $n \geq 1$. 
Note that, from the choice of $\varepsilon$ and the definition of $\Phi^n$, it is a non-degenerate one. Since $\nabla_u \Phi(\cdot,\lambda)$ is a completely continuous perturbation of the identity for all $\lambda\in{\mathbb{R}}$, from the definition of the infinite dimensional equivariant Conley index, see [@Izydorek], the assertion of the theorem is equivalent to $$CI_{{\mathcal{G}}} ({\mathcal{G}}(\tilde{u}_0), -\nabla_u\Phi^n(\cdot, \lambda_0-\varepsilon)) \neq CI_{{\mathcal{G}}} ({\mathcal{G}}(\tilde{u}_0), -\nabla_u\Phi^n(\cdot, \lambda_0+\varepsilon))$$ for $n$ sufficiently large. It is known that the ${\mathcal{G}}$-action on ${\mathbb{H}}$ given by defines a ${\mathcal{G}}_{\tilde{u}_0}$-action on $\tilde{{\mathbb{H}}}$. Recall that ${\mathcal{G}}_{\tilde{u}_0}=\{e\} \times SO(N).$ Hence $\tilde{{\mathbb{H}}}$ is an orthogonal $SO(N)$-representation. For $n\geq 1$ put $\tilde{{\mathbb{H}}}^n={\mathbb{H}}^n \cap \tilde{{\mathbb{H}}}=T_{\tilde{u}_0}^{\perp} \Gamma (\tilde{u}_0)\oplus \bigoplus_{k=2}^{n}{\mathbb{H}}_k $ and define $\Psi^n_{\pm}=\Phi^n(\cdot, \lambda_0 \pm \varepsilon)_{|\tilde{{\mathbb{H}}}^n}\colon \tilde{{\mathbb{H}}}^n \rightarrow {\mathbb{R}}.$ From this definition the functionals $\Psi^n_{\pm}$ are $SO(N)$-invariant. Since ${\mathcal{G}}(\tilde{u}_0)$ is a non-degenerate critical orbit of $\Phi^n(\cdot, \lambda_0\pm \varepsilon)$, $\tilde{u}_0\in \tilde{{\mathbb{H}}}$ is a non-degenerate critical point of $\Psi^n_{\pm}$. Hence $\{\tilde{u}_0\}$ is an isolated invariant set (in the sense of the Conley index theory) of the flows generated by $-\nabla \Psi^n _{\pm}$. Note that since ${\mathcal{G}}_{\tilde{u}_0}=\{e\} \times SO(N)$, by Lemma \[admissible\] the pair $({{\mathcal{G}}}, {{\mathcal{G}}}_{\tilde{u}_0})$ is admissible. 
Therefore, using Fact \[cor:3.2\] we obtain that the assertion reduces to $$\chi_{{\mathcal{G}}_{\tilde{u}_0}}(CI_{{{\mathcal{G}}_{\tilde{u}_0}}}(\{\tilde{u}_0\},-\nabla\Psi^n_-)) \neq \chi_{{\mathcal{G}}_{\tilde{u}_0}}( CI_{{{\mathcal{G}}_{\tilde{u}_0}}}(\{\tilde{u}_0\},-\nabla\Psi^n_+))$$ for $n\in \mathbb{N}$ sufficiently large. It is easy to see that this inequality is equivalent to $$\chi_{SO(N)}(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^n_-)) \neq \chi_{SO(N)}( CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^n_+)).$$ We proceed to show that there exists $n_0 \in \mathbb{N}$ such that for $n \geq n_0$ $$\label{eq:krok2} CI_{SO(N)}(\{\tilde{u}_0\},-\nabla \Psi^n_{\pm})=CI_{SO(N)}(\{\tilde{u}_0\},-\nabla \Psi^{n_0}_{\pm}).$$ Let ${\nu} \in \mathbb{N}.$ For $\delta>0$ sufficiently small and $\lambda \in [\lambda_0 -\varepsilon, \lambda_0+\varepsilon]$ we define an $SO(N)$-equivariant gradient homotopy $H_{\lambda}^{\nu}\colon (D_{\delta}(\tilde{u}_0,\tilde{{\mathbb{H}}}^{\nu})\times [0,1], \partial D_{\delta}(\tilde{u}_0,\tilde{{\mathbb{H}}}^{\nu}) \times [0,1]) \rightarrow (\tilde{{\mathbb{H}}}^{\nu},\tilde{{\mathbb{H}}}^{\nu}\setminus \{0\})$ by $$H_{\lambda}^{\nu}(u,t)= u -\tilde{u}_0 - L_{\lambda A} (u -\tilde{u}_0)+t \lambda_0 P_{\nu} \circ \nabla \eta (u - \tilde{u}_0),$$ where $P_{\nu}\colon \tilde{{\mathbb{H}}} \rightarrow \tilde{{\mathbb{H}}}^{\nu}$ is the orthogonal $SO(N)$-equivariant projection onto $\tilde{{\mathbb{H}}}^{\nu}.$ Note that from Lemma \[operatorLlambdaA\] we have $P_{\nu} \circ L_{\lambda A} = L_{\lambda A} \circ P_{\nu}$ and hence this homotopy is well-defined.
Let us denote by $\xi_{\lambda}^{\nu}\colon\tilde{{\mathbb{H}}}^{\nu}\to{\mathbb{R}}$ the $SO(N)$-invariant potential of $H^{\nu}_{\lambda}(\cdot,0).$ It is clear that $\nabla \xi_{\lambda}^{\nu}\colon \tilde{{\mathbb{H}}}^{\nu} \rightarrow \tilde{{\mathbb{H}}}^{\nu}$ is a self-adjoint $SO(N)$-equivariant linear map and is given by the formula $\nabla \xi_{\lambda}^{\nu}=(Id-L_{\lambda A})_{|\tilde{{\mathbb{H}}}^{\nu}}.$ From the homotopy invariance of the Conley index, see Theorem \[thm:homotopy\], we obtain $$\label{eq:krok2cont} CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{\nu}_{\pm})= CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\xi^{{\nu}}_{\lambda_0 \pm \varepsilon}).$$ Recall that $(\beta_k)$ denotes the sequence of the eigenvalues of the Neumann Laplacian and note that $\beta_k\to +\infty$. Therefore, there exists $n_0\in{\mathbb{N}}$ such that the inequalities $\frac{\beta_n-(\lambda_0\pm\varepsilon)\alpha_j}{1+\beta_n}>0$ hold for every $n\geq n_0$ and $\alpha_j \in\sigma(A)$. Hence, by Corollary \[spektrumLA\], we have $m^-(\nabla\xi_{\lambda_0 \pm \varepsilon}^n)=m^-(\nabla\xi_{\lambda_0 \pm \varepsilon}^{n_0})$ for every $n\geq n_0$, where $m^-(\cdot)$ is the Morse index. Since $(\nabla\xi^{n}_{\lambda_0 \pm \varepsilon})_{|\tilde{{\mathbb{H}}}^{n_0}}=\nabla\xi^{n_0}_{\lambda_0 \pm \varepsilon}$, the eigenspaces corresponding to the negative eigenvalues of $\nabla\xi^{n}_{\lambda_0 \pm \varepsilon}$ and $\nabla\xi^{n_0}_{\lambda_0 \pm \varepsilon}$ are the same $SO(N)$-representations. Thus, from Theorem \[CIjakosfera\], $$CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\xi^{n}_{\lambda_0 \pm \varepsilon})= CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\xi^{n_0}_{\lambda_0 \pm \varepsilon}),$$ which implies \eqref{eq:krok2}.
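The spectral bookkeeping in this step can be illustrated numerically. The sketch below uses hypothetical stand-ins for $\sigma(A)$ (with multiplicities $\mu_A$) and for the Neumann eigenvalues $\beta_k$ (with the dimensions of ${\mathbb{V}}_{-\Delta}(\beta_k)$); it is not taken from the paper, and the intersection with $\tilde{{\mathbb{H}}}$ is ignored. It checks that the eigenvalues $(\beta_k-\lambda\alpha_j)/(1+\beta_k)$ of $Id-L_{\lambda A}$ are eventually positive, and that for $\lambda_0>0$ the negative eigenspace grows across a resonance exactly by the kernel at $\lambda_0$.

```python
# Illustrative sketch only: hypothetical stand-ins for sigma(A) with
# multiplicities mu_A, and for the Neumann eigenvalues beta_k with the
# dimensions of the eigenspaces V_{-Delta}(beta_k).
sigma_A = {1.0: 2, 3.0: 1}             # alpha_j -> mu_A(alpha_j)
sigma_lap = {0.0: 1, 2.0: 3, 6.0: 5}   # beta_k  -> dim V_{-Delta}(beta_k)

def eigenvalue(beta, lam, alpha):
    """Eigenvalue of Id - L_{lambda A} attached to the pair (beta, alpha)."""
    return (beta - lam * alpha) / (1 + beta)

def dim_negative(lam):
    """Total dimension of the negative eigenspace of Id - L_{lambda A}."""
    return sum(d * m for a, m in sigma_A.items()
               for b, d in sigma_lap.items() if eigenvalue(b, lam, a) < 0)

def dim_kernel(lam):
    """Total dimension of the kernel of Id - L_{lambda A}."""
    return sum(d * m for a, m in sigma_A.items()
               for b, d in sigma_lap.items() if b == lam * a)

lam0, eps = 2.0, 0.1
# Since beta_k -> +infinity, every eigenvalue with beta past max(lam*alpha)
# is positive, so the Morse index stabilizes at some finite level n_0.
threshold = max(lam0 * a for a in sigma_A)
assert all(eigenvalue(b, lam0, a) > 0
           for a in sigma_A for b in (threshold + 1, threshold + 10))
# Crossing a resonance at lambda_0 > 0 enlarges the negative eigenspace by
# exactly the kernel at lambda_0:
assert dim_negative(lam0 + eps) == dim_negative(lam0 - eps) + dim_kernel(lam0)
```

Here the eigenvalue is negative precisely when $\beta_k<\lambda\alpha_j$ and zero precisely when $\beta_k=\lambda\alpha_j$, which is the sign pattern used throughout the proof.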
What is left is to show that $$\chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\}, -\nabla \Psi^{n_0}_+)\right) \neq \chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\}, -\nabla \Psi^{n_0}_-)\right).$$ Denote by ${\mathcal{W}}(\lambda)$ the direct sum of the eigenspaces of $Id-L_{\lambda A}$ (i.e. of $\nabla \xi^{n_0}_{\lambda}$) corresponding to the negative eigenvalues and by ${\mathcal{V}}(\lambda)$ the eigenspace corresponding to the zero eigenvalue. Note that from Corollary \[spektrumLA\], $${\mathcal{W}}(\lambda)= \Bigg(\bigoplus_{\alpha_j\in\sigma( A)} \ \bigoplus_{\substack{\beta_k \in \sigma(-\Delta;B^N)\\\beta_k<\lambda \alpha_j}} {\mathbb{V}}_{-\Delta}(\beta_k)^{\mu_{ A}(\alpha_{j})}\Bigg)\cap \tilde{{\mathbb{H}}},$$ $${\mathcal{V}}(\lambda)=\Bigg(\bigoplus_{\alpha_j\in\sigma(A)} \ \bigoplus_{\substack{\beta_k \in \sigma(-\Delta;B^N)\\ \beta_k=\lambda \alpha_j}} {\mathbb{V}}_{-\Delta}(\beta_k)^{\mu_{ A}(\alpha_{j})}\Bigg)\cap \tilde{{\mathbb{H}}}.$$ From Theorem \[CIjakosfera\], $CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\xi^{n_0}_{\lambda_0 \pm \varepsilon})$ are $SO(N)$-homotopy types of $S^{{\mathcal{W}}(\lambda_0\pm\varepsilon)}.$ Hence, from \eqref{eq:krok2cont}, $$\chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{\pm})\right)= \chi_{SO(N)}\left(S^{{\mathcal{W}}(\lambda_0\pm\varepsilon)}\right).$$ 1. Suppose that $\lambda_0>0$ and $\varepsilon$ is such that $\lambda_0 - \varepsilon>0$.
Recall that $\beta_k \geq 0$ for all $\beta_k \in \sigma(-\Delta;B^N).$ Then ${\mathcal{W}}(\lambda_0+\varepsilon)={\mathcal{W}}(\lambda_0-\varepsilon)\oplus {\mathcal{V}}(\lambda_0).$ If the assumption (C1) is satisfied, then, by Theorem \[thm:nontrivialityofEC\] and Remark \[rem:nontriviality\], we obtain $\chi_{SO(N)}(S^{{\mathcal{V}}(\lambda_0)})\neq {\mathbb{I}}\in U(SO(N)).$ Similarly, if (C2) is fulfilled, then ${\mathcal{V}}(\lambda_0)$ is a trivial $SO(N)$-representation and, from Corollary \[spektrumLA\] and the definition of $\tilde{{\mathbb{H}}}$, $\dim {\mathcal{V}}(\lambda_0)=\dim \ker (Id-L_{\lambda_0A}) - \dim \ker A$ is odd. Therefore: $$\begin{aligned} \chi_{SO(N)}(S^{{\mathcal{V}}(\lambda_0)}) =(-1)^{\dim{\mathcal{V}}(\lambda_0)}\chi_{SO(N)}\left(SO(N)/SO(N)^+\right)=-{\mathbb{I}}.\end{aligned}$$ In both cases we have $$\begin{aligned} \chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{+})\right)= \chi_{SO(N)}(S^{{\mathcal{W}}(\lambda_0-\varepsilon)\oplus {\mathcal{V}}(\lambda_0)})=\\ = \chi_{SO(N)}(S^{{\mathcal{W}}(\lambda_0-\varepsilon)})\star \chi_{SO(N)}(S^{{\mathcal{V}}(\lambda_0)}) \neq \chi_{SO(N)}(S^{{\mathcal{W}}(\lambda_0-\varepsilon)})=\\ = \chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{-})\right).\end{aligned}$$ In the second equality we use the fact that $S^{{\mathcal{W}}(\lambda_0-\varepsilon)\oplus {\mathcal{V}}(\lambda_0)}$ is $SO(N)$-homeomorphic to $S^{{\mathcal{W}}(\lambda_0-\varepsilon)}\wedge S^{{\mathcal{V}}(\lambda_0)}$ and the formula for multiplication in $U(SO(N))$, see \eqref{eq:actionsUG}. Then we use the invertibility of $\chi_{SO(N)}(S^{{\mathcal{W}}(\lambda_0-\varepsilon)})$ in $U(SO(N))$, see [@GolRyb1]. 2. Suppose that $\lambda_0<0$.
Then ${\mathcal{W}}(\lambda_0-\varepsilon)={\mathcal{W}}(\lambda_0+\varepsilon)\oplus {\mathcal{V}}(\lambda_0) $ and hence $$\begin{aligned} \chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{+})\right)\neq \chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{-})\right),\end{aligned}$$ as before. 3. Finally, suppose that $\lambda_0=0$. Then $${\mathcal{W}}(\pm\varepsilon)= \bigoplus\limits_{\alpha_j\in\sigma_{\pm}(A)}{\mathbb{V}}_{-\Delta}(0)^{\mu_{A}(\alpha_j)},$$ so ${\mathcal{W}}(\pm\varepsilon)$ are trivial $SO(N)$-representations, and therefore $$\begin{aligned} &&\chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{\pm})\right)=\chi_{SO(N)}(S^{{\mathcal{W}}(\pm\varepsilon)})=\\ &=&(-1)^{\dim{\mathcal{W}}(\pm\varepsilon)}\cdot \chi_{SO(N)}\left(SO(N)/SO(N)^+\right)=(-1)^{\dim{\mathcal{W}}(\pm\varepsilon)} \cdot{\mathbb{I}}.\end{aligned}$$ Hence, because the assumption (C3) implies that $\dim{\mathcal{W}}(\varepsilon)-\dim{\mathcal{W}}(-\varepsilon)$ is odd, we have $$\chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{+})\right)\neq \chi_{SO(N)}\left(CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{-})\right),$$ which completes the proof. Now we are in a position to prove one of the main results of our paper, namely the bifurcation theorems. \[th:BIF\] Consider the system with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4). Assume that $\lambda_0 \in \Lambda $ and one of the conditions (C1)–(C3) is satisfied. Then a local bifurcation of solutions of occurs from the orbit ${{\mathcal{G}}}(\tilde{u}_0) \times \{\lambda_0\}$.
From Theorem \[th:zmiana\_indeksu\] it follows that if one of the conditions (C1)–(C3) is satisfied then ${\mathcal{C}}{\mathcal{I}}_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi(\cdot, \lambda_{0}-\varepsilon)) \neq {\mathcal{C}}{\mathcal{I}}_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi(\cdot, \lambda_{0}+\varepsilon))$ for sufficiently small $\varepsilon >0.$ Following, for example, the idea of the proof of Theorem 2.1 of [@SmoWass] and using the continuation property of the Conley index, one can prove that the change of the Conley index implies a local bifurcation of critical orbits. It is known that, in general, the change of the Conley index along the family of trivial solutions does not imply a global bifurcation. However, using the relation between the Conley index and the degree for strongly indefinite functionals, under some assumptions one can prove the existence of connected sets of bifurcating solutions. It turns out that (C1)–(C3) are assumptions of this kind. \[th:GLOB\] Consider the system with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4). Assume that $\lambda_0 \in \Lambda $ and one of the conditions (C1)–(C3) is satisfied. Then a global bifurcation of solutions of occurs from the orbit ${{\mathcal{G}}}(\tilde{u}_0) \times \{\lambda_0\}$. Let ${\mathcal{U}}\subset {\mathbb{H}}$ be an open, bounded and ${{\mathcal{G}}}$-invariant subset such that $\nabla_u \Phi (\cdot, \lambda_0\pm\varepsilon)^{-1}(0)\cap {\mathcal{U}}={{\mathcal{G}}}(\tilde{u}_0).$ Denote by $\nabla_{{\mathcal{G}}}\textrm{-}\mathrm{deg}(\cdot, \cdot)$ the degree for equivariant gradient maps of the form of a completely continuous perturbation of the identity, defined in [@Ryb2005].
From the definition of this degree, for $n_0$ sufficiently large, $$\begin{aligned} \nabla_{{{\mathcal{G}}}}\textrm{-}\mathrm{deg}(\nabla_u\Phi(\cdot, \lambda_0\pm\varepsilon),{\mathcal{U}}) &=& \nabla_{{{\mathcal{G}}}}\textrm{-}\mathrm{deg}(\nabla_u\Phi^{n_0}(\cdot, \lambda_0\pm\varepsilon),{\mathcal{U}}\cap {\mathbb{H}}^{n_0})=\\&=&\chi_{{{\mathcal{G}}}}\left(CI_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi^{n_0}(\cdot, \lambda_0\pm\varepsilon))\right),\end{aligned}$$ where $\Phi^{n_0}$ is defined as in the proof of Theorem \[th:zmiana\_indeksu\]. The latter equality is the relation between the Conley index and the degree proved by Gęba in [@Geba], see also Corollary 1 in [@GolRyb]. From Theorem \[th:zmiana\_indeksu\] and Fact \[cor:3.2\] we have $$\chi_{{\mathcal{G}}} (CI_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi^{n_0}(\cdot, \lambda_0-\varepsilon))) \neq \chi_{{\mathcal{G}}}(CI_{{{\mathcal{G}}}}({{\mathcal{G}}}(\tilde{u}_0),-\nabla_u\Phi^{n_0}(\cdot, \lambda_0+\varepsilon))).$$ Therefore $$\nabla_{{{\mathcal{G}}}}\textrm{-}\mathrm{deg}(\nabla_u\Phi(\cdot, \lambda_0-\varepsilon),{\mathcal{U}})\neq \nabla_{{{\mathcal{G}}}}\textrm{-}\mathrm{deg}(\nabla_u\Phi(\cdot, \lambda_0+\varepsilon),{\mathcal{U}}).$$ From the equivariant version of the Rabinowitz alternative, see for example Theorem 3.3 of [@GolRyb1], the change of the degree for ${{\mathcal{G}}}$-equivariant gradient maps implies a global bifurcation, so we obtain the assertion. In Theorem \[th:GLOB\] we have proved that if the assumption (C3) is satisfied, then $0\in GLOB$. 
On the other hand, repeating the reasoning from the proof of this theorem it is easy to show that if the number $\sum_{\alpha_j \in \sigma_+(A)} \mu_A(\alpha_j)-\sum_{\alpha_j \in \sigma_-(A)} \mu_A(\alpha_j)$ is even, then the Euler characteristics $\chi_{{\mathcal{G}}}\left(CI_{{\mathcal{G}}}({\mathcal{G}}(\tilde{u}_0),-\nabla_u\Phi^{n_0}(\cdot,\varepsilon))\right)$ and $\chi_{{\mathcal{G}}}\left(CI_{{\mathcal{G}}}({\mathcal{G}}(\tilde{u}_0),-\nabla_u\Phi^{n_0}(\cdot,-\varepsilon))\right)$ are equal. Therefore, we do not know whether $0 \in GLOB$. However, under an assumption weaker than (C3) we can prove a result concerning the local bifurcation. \[thm:0\] Consider the system with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4). Assume that $\lambda_0 =0$ and $\sum_{\alpha_j \in \sigma_+(A)} \mu_A(\alpha_j)\neq\sum_{\alpha_j \in \sigma_-(A)} \mu_A(\alpha_j)$. Then a local bifurcation of solutions of occurs from the orbit ${{\mathcal{G}}}(\tilde{u}_0) \times \{0\}$. Using the notation of the proof of Theorem \[th:zmiana\_indeksu\], we observe that ${\mathcal{W}}(\pm\varepsilon)$ are trivial $SO(N)$-representations. Therefore $CI_{SO(N)}(\{\tilde{u}_0\},-\nabla\Psi^{n_0}_{\pm})$ are $SO(N)$-homotopy types of $S^{\dim {\mathcal{W}}(\pm\varepsilon)}$.
Using information from [@PRS] (namely Theorem 3.1 and the equality (2.11)) and from [@Kawakubo] (Lemma 1.88) we obtain that $CI_{{\mathcal{G}}}({\mathcal{G}}(\tilde{u}_0),-\nabla_u\Phi^{n_0}(\cdot,\pm \varepsilon))$ are ${\mathcal{G}}$-homotopy types of $$\left({\mathcal{G}}/{\mathcal{G}}_{\tilde{u}_0}\times S^{\dim {\mathcal{W}}(\pm\varepsilon)}\right)/\left({\mathcal{G}}/{\mathcal{G}}_{\tilde{u}_0}\times \{*\}\right).$$ From Proposition 1.53 of [@Kawakubo], we obtain that the above is ${\mathcal{G}}$-homotopy equivalent to $$X_{\pm}=\left({\mathcal{G}}(\tilde{u}_0)\times S^{\dim {\mathcal{W}}(\pm\varepsilon)}\right)/\left({\mathcal{G}}(\tilde{u}_0)\times \{*\}\right).$$ But $X_+$ and $X_-$ have different ${\mathcal{G}}$-homotopy types. Indeed, if $X_+$ and $X_-$ were of the same ${\mathcal{G}}$-homotopy type, then the orbit spaces $X_{+}/{\mathcal{G}}$ and $X_{-}/{\mathcal{G}}$ would be of the same homotopy type. This is impossible, since the spaces $X_{\pm}/{\mathcal{G}}$ are of the homotopy types of $S^{\dim {\mathcal{W}}(\pm\varepsilon)}$, see [@TomDieck], and the assumption of the theorem implies $\dim {\mathcal{W}}(\varepsilon)\neq \dim {\mathcal{W}}(-\varepsilon)$. Analysis similar to that in the proof of Theorem \[th:BIF\] shows the assertion. Symmetry breaking {#subsec:symmbreak} ----------------- In this section we consider the symmetry-breaking problem, i.e. the change of the isotropy groups of solutions of along connected sets. More precisely, we characterise bifurcation orbits of the equation at which the global symmetry-breaking phenomenon occurs. Here and in what follows we use the notation of Section \[bifurcations\]. Recall that ${\mathcal{T}}$ denotes the set of trivial solutions.
\[def:symmbreak\] We say that a global symmetry-breaking phenomenon occurs at the orbit ${\mathcal{G}}(\tilde{u}_0)\times\{\lambda_{0}\}$ if $\lambda_0\in GLOB$ and there exists $U\subset {\mathbb{H}}\times{\mathbb{R}}$ such that ${\mathcal{G}}(\tilde{u}_0)\times\{\lambda_{0}\}\subset U$ and ${\mathcal{G}}_{(u,\lambda)}\neq {\mathcal{G}}_{(\tilde{u}_0, \lambda_0)}$ for all $(u,\lambda)\in (U\cap (\nabla_u\Phi)^{-1}(0))\setminus {\mathcal{T}}$. Note that since the group ${\mathcal{G}}$ acts trivially on the set of parameters $\lambda$, the condition ${\mathcal{G}}_{(u,\lambda)}\neq {\mathcal{G}}_{(\tilde{u}_0, \lambda_0)}$ is equivalent to ${\mathcal{G}}_u \neq {\mathcal{G}}_{\tilde{u}_0}$. In particular we are interested in studying $SO(N)$-symmetries of solutions. We say that the function $u$ satisfying $SO(N)_u=SO(N)$ is radially symmetric. Our aim in this section is to prove the following characterisation of global symmetry-breaking phenomenon of solutions of : \[thm:SymmBreak\] Consider the system with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4). Fix $\lambda_0\in\Lambda$ and suppose that $\sigma(\lambda_0 A)\cap \sigma(-\Delta; B^N)\setminus\{0\}=\{\alpha_{j_1},\ldots, \alpha_{j_s}\}$ and ${\mathbb{V}}_{-\Delta}(\alpha_{j_i})^{SO(N)}=\{0\}$ for every $i=1,\ldots,s$. Then the global symmetry-breaking phenomenon occurs at the orbit ${\mathcal{G}}(\tilde{u}_0)\times\{\lambda_{0}\}$. Note that the assumption ${\mathbb{V}}_{-\Delta}(\alpha_{j_i})^{SO(N)}=\{0\}$ means that there is no radially symmetric eigenfunction associated with $\alpha_{j_i}$. To prove this theorem we first verify the following lemma: \[lem:IsGr\] Fix $\lambda_0 \in \Lambda$. 
Then there exists $U\subset {\mathbb{H}}\times{\mathbb{R}}$ such that ${\mathcal{G}}(\tilde{u}_0)\times\{\lambda_{0}\}\subset U$ and for all $(u,\lambda)\in (U\cap (\nabla_u\Phi)^{-1}(0))\setminus {\mathcal{T}}$ there exists $\overline{u}\in\ker \nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0)\setminus \{0\}$ such that ${\mathcal{G}}_u\subset {\mathcal{G}}_{\overline{u}}$. Consider $ {\mathbb{U}}_1={\mathrm{ im \;}}\nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0)\oplus {\mathbb{H}}_1$ and ${\mathbb{U}}_2=\ker \nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0). $ Note that ${\mathbb{H}}={\mathbb{U}}_1\oplus{\mathbb{U}}_2$ and the spaces ${\mathbb{U}}_1$ and ${\mathbb{U}}_2$ are ${{\mathcal{G}}}$-representations. For $u\in{\mathbb{H}}$ we put $u=(u_1,u_2)\in{\mathbb{U}}_1\oplus{\mathbb{U}}_2$. In particular, since $\tilde{u}_0\in {\mathbb{H}}_1$, we identify this element with $(\tilde{u}_0,0)\in {\mathbb{U}}_1\oplus{\mathbb{U}}_2$. The equation $$\label{SB} \nabla_u\Phi(u,\lambda)=0$$ is equivalent to the system $$\label{SB1} \pi_1(\nabla_u\Phi(u_1,u_2,\lambda))=0,$$ $$\label{SB2} \pi_2(\nabla_u\Phi(u_1,u_2,\lambda))=0,$$ where $\pi_1\colon{\mathbb{H}}\to{\mathbb{U}}_1$ and $\pi_2\colon{\mathbb{H}}\to{\mathbb{U}}_2$ are ${{\mathcal{G}}}$-equivariant projections. Moreover, since ${\mathcal{G}}(\tilde{u}_0)\subset {\mathbb{H}}_1\subset {\mathbb{U}}_1$, $$\dim\ker\nabla^2_u\Phi_{|{\mathbb{U}}_1}(\tilde{u}_0,\lambda_{0})=\dim {\mathcal{G}}(\tilde{u}_0),$$ i.e. ${\mathcal{G}}(\tilde{u}_0)$ is a non-degenerate critical orbit of $\Phi(\cdot,\lambda_0)_{|{\mathbb{U}}_1}$.
Therefore, by the equivariant implicit function theorem (see Theorem \[G-ImplicitInfinite\]) applied to the functional $\Phi\colon{\mathbb{U}}_1\oplus({\mathbb{U}}_2 \times {\mathbb{R}}) \rightarrow {\mathbb{R}}$, the point $(0, \lambda_0)$ and the equation , there exist open sets ${\mathcal{O}}_{0}\subset {\mathbb{U}}_2$, ${\mathcal{O}}_{\lambda_0}\subset {\mathbb{R}}$ such that $0\in {\mathcal{O}}_{0},\lambda_0 \in {\mathcal{O}}_{\lambda_0}$ and a ${{\mathcal{G}}}$-equivariant map $\tau\colon {\mathcal{G}}(\tilde{u}_0)\times {\mathcal{O}}_{0}\times {\mathcal{O}}_{\lambda_0}\to {\mathbb{U}}_1 $ such that (i) $\tau(u_1,0,\lambda_0)=u_1$ for $u_1\in{\mathcal{G}}(\tilde{u}_0)$, (ii) $\pi_1(\nabla_u\Phi(\tau(u_1,u_2,\lambda),u_2,\lambda))=0$ if $u_1 \in {\mathcal{G}}(\tilde{u}_0), u_2\in {\mathcal{O}}_{0}$ and $\lambda \in {\mathcal{O}}_{\lambda_0}$ and these are the only solutions of $\pi_1(\nabla_u\Phi(u_1,u_2,\lambda))=0$ near the orbit if $u_2\in {\mathcal{O}}_{0}$ and $\lambda \in {\mathcal{O}}_{\lambda_0}.$ Hence all the solutions of the equation , and consequently the solutions of and , can have (in the neighbourhood of the orbit) only the following isotropy groups: $${{\mathcal{G}}}_{(\tau(u_1,u_2,\lambda),u_2,\lambda)}={{\mathcal{G}}}_{\tau(u_1,u_2,\lambda)}\cap {{\mathcal{G}}}_{u_2}\cap{{\mathcal{G}}}_{\lambda}= {{\mathcal{G}}}_{\tau(u_1,u_2,\lambda)}\cap {{\mathcal{G}}}_{u_2}\subset {{\mathcal{G}}}_{u_2}.$$ To finish the proof observe that in the case $u_2=0$ we have $(\tau(u_1,0,\lambda),0,\lambda)\in {\mathbb{U}}_1\times\{0\}\times{\mathbb{R}}$ for $u_1\in {\mathcal{G}}(\tilde{u}_0)$, $\lambda\in{\mathcal{O}}_{\lambda_0}$. Considering only the solutions of and observing that such solutions in ${\mathbb{U}}_1\times\{0\}\times{\mathbb{R}}$ are the trivial ones, we obtain $(\tau(u_1,0,\lambda),0,\lambda)\in{\mathcal{T}}$, which completes the proof. Lemma \[lem:IsGr\] generalises the lemma due to Dancer from [@Dancer1979]. 
Dancer’s result states that if the kernel of the second derivative of the functional at a bifurcation point does not contain nonzero radially symmetric elements, then in a neighbourhood of this point no nontrivial solution is radial. This lemma cannot be applied to prove Theorem \[thm:SymmBreak\] in the case $\dim {\mathcal{G}}(\tilde{u}_0)>0$, since $\ker \nabla_u^2\Phi(\tilde{u}_0,\lambda_0)$ contains constant (and therefore radially symmetric) functions from the space tangent to the orbit. Note that Theorem \[th:GLOB\] implies that $\lambda_0\in GLOB$. Moreover, from Corollary \[cor:kernel\] we have $$\ker \nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0)= \ker( Id - L_{\lambda_0 A})\cap {\mathbb{H}}_1^{\perp}={\mathbb{V}}_{-\Delta}(\alpha_{j_1})^{\mu_{\lambda_0A}(\alpha_{j_1})}\oplus\ldots\oplus {\mathbb{V}}_{-\Delta}(\alpha_{j_s})^{\mu_{\lambda_0A}(\alpha_{j_s})}.$$ Since $\alpha_{j_1},\ldots, \alpha_{j_s}\neq 0$ are such that ${\mathbb{V}}_{-\Delta}(\alpha_{j_i})^{SO(N)}=\{0\}$ for every $i=1,\ldots,s$, we conclude that $$\label{eq:czymker} \ker \nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0)^{SO(N)}=\{0\}.$$ Lemma \[lem:IsGr\] yields that there exists $U\subset {\mathbb{H}}\times{\mathbb{R}}$ such that if $\nabla_u \Phi(u,\lambda)=0$ and $(u,\lambda)\in U\setminus {\mathcal{T}}$ then there exists $\overline{u}\in\ker \nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0)\setminus \{0\}$ such that ${\mathcal{G}}_u\subset {\mathcal{G}}_{\overline{u}}$. Since ${\mathcal{G}}_{\tilde{u}_0}=\{e\}\times SO(N)$, to prove that ${\mathcal{G}}_u\neq {\mathcal{G}}_{\tilde{u}_0}$ it suffices to note that the isotropy group of $\overline{u}$ is not of the form $H\times SO(N)$, where $H\in{\overline{\operatorname{sub}}}(\Gamma)$. Indeed, if ${{\mathcal{G}}}_{\overline{u}}= H\times SO(N)$, then $\overline{u}(\alpha^{-1} x)= \overline{u}(x)$ for every $\alpha\in SO(N)$, $x\in B^N$, i.e.
$SO(N)_{\overline{u}}=SO(N)$ and therefore from \eqref{eq:czymker} we obtain $\overline{u}=0$, which contradicts $\overline{u}\in\ker \nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0)\setminus \{0\}$. Note that if the assumptions of Theorem \[thm:SymmBreak\] are satisfied, i.e. $\ker\nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0)^{SO(N)}=\{0\}$, then there is a neighbourhood $U$ of the bifurcation orbit such that all nontrivial solutions from $U$ are non-radial. In other words, in Theorem \[thm:SymmBreak\] we obtain a connected family of orbits of non-radial solutions bifurcating from the set of radial ones. \[rem:radial\] Let $\lambda_0 \in BIF.$ From the proof of Lemma \[lem:IsGr\] we deduce that there is a neighbourhood of the orbit ${\mathcal{G}}(\tilde{u}_0)\times\{\lambda_0\}$ such that all nontrivial solutions of $\nabla_u\Phi(u,\lambda)=0$ can have only the isotropy groups of the form ${{\mathcal{G}}}_{\tau(u_1,u_2,\lambda)}\cap {{\mathcal{G}}}_{u_2}$. Note that $u_1\in {\mathcal{G}}(\tilde{u}_0)$ and hence ${\mathcal{G}}_{u_1}=\{e\}\times SO(N)$. Consider the additional assumption: $$\ker \nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0)^{SO(N)}=\ker \nabla_u^2\Phi_{|{\mathbb{H}}_1^{\perp}}(\tilde{u}_0,\lambda_0).$$ Then ${{\mathcal{G}}}_{u_2}= \Gamma_{u_2}\times SO(N)$. Therefore by the proof of Lemma \[lem:IsGr\], and since a ${{\mathcal{G}}}$-equivariant function $\tau$ increases isotropy groups (i.e. ${{\mathcal{G}}}_{(u_1,u_2,\lambda)} \subset {{\mathcal{G}}}_{\tau(u_1,u_2,\lambda)}$), we have $${\mathcal{G}}_{u_1}\cap{\mathcal{G}}_{u_2}= (\{e\}\times SO(N))\cap (\Gamma_{u_2}\times SO(N))=\{e\}\times SO(N)\subset {{\mathcal{G}}}_{\tau(u_1,u_2,\lambda)}\cap {{\mathcal{G}}}_{u_2},$$ i.e. solutions of $\nabla_u\Phi(u,\lambda)=0$ in the neighbourhood of the orbit $ {\mathcal{G}}(\tilde{u}_0) \times \{\lambda_0\}$ have isotropy groups of the form $H\times SO(N)$, where $H\in{\overline{\operatorname{sub}}}(\Gamma)$.
Hence all solutions from the neighbourhood of the orbit are radial. Fix $\lambda_0\in\Lambda$ and suppose that $\sigma(\lambda_0 A)\cap \sigma(-\Delta; B^N)\setminus\{0\}=\{\alpha_{j_1},\ldots, \alpha_{j_s}\}$ are such that $\alpha_{j_1},\ldots, \alpha_{j_s}\notin {\mathcal{A}}_0$, where ${\mathcal{A}}_0$ is defined in Section \[sec:eigenspaces\]. Then from Remark \[rem:bezA0\] it follows that ${\mathbb{V}}_{-\Delta}(\alpha_{j_i})^{SO(N)}=\{0\}$ and therefore the assumptions of Theorem \[thm:SymmBreak\] are satisfied. Hence the global symmetry-breaking phenomenon occurs at the orbit ${\mathcal{G}}(\tilde{u}_0)\times\{\lambda_{0}\}$. Illustration {#sec:illustration} ============ In this section we discuss a few examples in order to illustrate the abstract results proved in the previous section. Using the properties of the eigenspaces of the Laplace operator (with the Neumann boundary conditions) on the ball, we verify assumptions (C1)–(C3). More precisely we apply the information collected in Subsection \[sec:eigenspaces\]. **[Example 1.]{} Consider the system for $N=2$ with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4). Assume that $\lambda_0 \in {\mathbb{R}}\setminus\{0 \}$ and $\sigma(\lambda_0 A) \cap \sigma(-\Delta; B^2)\setminus\{0\}= \{\alpha\},$ where $\sqrt{\alpha}$ is not a root of $J_0'(x)=0$ for $J_0$ being the Bessel function of order $0$. Following the notation of Section \[sec:eigenspaces\] it means that $\alpha \not \in {\mathcal{A}}_0$.** In this situation, from Theorem \[thm:nieprzywiedlnosc\] and Fact \[fact:opis\] the assumption (C1) of Section 3 is satisfied. By Theorem \[th:GLOB\] we obtain that a global bifurcation occurs from the orbit ${\mathcal{G}}(\tilde{u}_0) \times \{\lambda_0\}$. 
Moreover, from Remark \[rem:bezA0\] it follows that ${\mathbb{V}}_{-\Delta}(\alpha)^{SO(2)}=\{0\}.$ Then by Theorem \[thm:SymmBreak\] the global symmetry breaking occurs at the orbit ${\mathcal{G}}(\tilde{u}_0)\times \{\lambda_0\}.$ **[Example 2.]{} Consider the system for $N=2$ with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4). Assume that $\lambda_0 \in {\mathbb{R}}\setminus\{0 \},$ $\sigma(\lambda_0 A) \cap \sigma(-\Delta; B^2)\setminus\{0\}= \{\alpha_1, \ldots,\alpha_s \}$ and there exists $i\in \{1, \ldots,s\}$ such that $\sqrt{\alpha_i}$ is not a root of $J_0'(x)=0$.** As in Example 1, a global bifurcation occurs from the orbit ${\mathcal{G}}(\tilde{u}_0) \times \{\lambda_0\}.$ If moreover $\alpha_i \not \in {\mathcal{A}}_0$ for all $i\in \{1, \ldots,s\}$, then the global symmetry breaking occurs at the orbit ${\mathcal{G}}(\tilde{u}_0)\times \{\lambda_0\}.$ **[Example 3.]{} Consider the system for $N=3$ with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4). Assume that $\lambda_0 \in {\mathbb{R}}\setminus\{0 \}$, $\sigma(\lambda_0 A) \cap \sigma(-\Delta; B^3)\setminus\{0\}= \{\alpha_1, \ldots,\alpha_s \}$ and there exists $i\in \{1, \ldots,s\}$ such that $\sqrt{\alpha_i}$ is not a solution of the equation: $$J_{\frac12}'(x)-\frac{1}{2x}J_{\frac12}(x)=0,$$ where $J_{\frac12}$ is the Bessel function of order $\frac12$. Therefore $\alpha_i \not \in {\mathcal{A}}_0.$** In this situation, since ${\mathcal{H}}_l^3 \subset {\mathbb{V}}_{-\Delta}(\alpha_i)$ for some $l > 0$ (by Fact \[fact:opis\]), the assumption (C1) is satisfied and from Theorem \[th:GLOB\] we obtain that a global bifurcation occurs from the orbit ${\mathcal{G}}(\tilde{u}_0) \times \{\lambda_0\}$. Moreover, if $\alpha_i \not \in {\mathcal{A}}_0$ for all $i\in \{1, \ldots,s\}$, then from Remark \[rem:bezA0\] we conclude that ${\mathbb{V}}_{-\Delta}(\alpha_i)^{SO(3)}=\{0\}$ for all $i\in \{1, \ldots,s\}$. 
Therefore, it follows from Theorem \[thm:SymmBreak\] that the global symmetry breaking occurs at the orbit ${\mathcal{G}}(\tilde{u}_0)\times \{\lambda_0\}.$ **[Example 4.]{} Consider the system with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4). Assume that $\lambda_0 \in {\mathbb{R}}\setminus\{0 \}$ and that $\sigma(\lambda_0 A) \cap \sigma(-\Delta; B^N)\setminus\{0\}= \{\alpha_1, \ldots,\alpha_s \},$ where $\sqrt{\alpha_i}$ is a solution of the equation $$J'_{\frac{N-2}{2}}(x)-\frac{N-2}{2x} J_{\frac{N-2}{2}}(x)=0$$ for every $i \in \{1, \ldots, s\}$.** If there exists $i \in \{1, \ldots,s\}$ such that $\dim{\mathbb{V}}_{-\Delta}(\alpha_i)>1$ then the assumption (C1) is satisfied and by Theorem \[th:GLOB\] we obtain that a global bifurcation occurs from the orbit ${\mathcal{G}}(\tilde{u}_0) \times \{\lambda_0\}$. If $\dim{\mathbb{V}}_{-\Delta}(\alpha_{i})=1$ for all $i\in \{1, \ldots,s\}$, then we assume additionally that $\sum_{i=1}^s \mu_{\lambda_0 A}(\alpha_i)-\mu_A(0)$ is an odd number. In this situation the assumption (C2) is satisfied and by Theorem \[th:GLOB\] we obtain that a global bifurcation occurs from the orbit ${\mathcal{G}}(\tilde{u}_0) \times \{\lambda_0\}$. Note that, if $\dim{\mathbb{V}}_{-\Delta}(\alpha_{i})=1$ for all $i\in \{1, \ldots,s\}$, then $\ker( Id - L_{\lambda_0 A})^{SO(N)}=\ker( Id - L_{\lambda_0 A})$ (see Remark \[rem:nontriviality\](2)). Therefore, from Remark \[rem:radial\], we conclude that all nontrivial solutions in a neighbourhood of ${\mathcal{G}}(\tilde{u}_0) \times \{\lambda_0\}$ (bifurcating from this orbit) are radial, i.e. there is no symmetry breaking at the orbit. **[Example 5.]{} Consider the system with the potential $F$ and $u_0 \in \nabla F^{-1}(0)$ satisfying assumptions (B1)–(B4).
Assume that $\lambda_0=0$.** If $m-\dim \ker A$ is odd, then the assumption (C3) is satisfied and we obtain a global bifurcation from the orbit ${\mathcal{G}}(\tilde{u}_0) \times \{0\}.$ If $m-\dim \ker A > 0$, then Theorem \[thm:0\] implies a local bifurcation from the orbit ${\mathcal{G}}(\tilde{u}_0) \times \{0\}.$ As in Example 4, it is easy to see that all nontrivial solutions in a neighbourhood of the orbit are radial. Appendix {#sec:app} ======== In this section, to make the paper self-contained, we collect some classical definitions and facts which we use to prove our main results. The equivariant implicit function theorem ----------------------------------------- Below we reformulate an equivariant version of the implicit function theorem in infinite dimensional spaces, due to Dancer (see [@[Dancer]], paragraph 3). \[G-ImplicitInfinite\] Let $G$ be a compact Lie group and suppose that (i) ${\mathbb{H}}_1$, ${\mathbb{H}}_2$ are Hilbert spaces, which are orthogonal $G$-representations, (ii) $\Phi\colon {\mathbb{H}}_1\oplus{\mathbb{H}}_2\to {\mathbb{R}}$ is a $G$-invariant functional of class $C^2$, (iii) there is $v_0\in {\mathbb{H}}_2$ such that $\nabla^2_u\Phi(u,v_0)$ is Fredholm for every $u\in{\mathbb{H}}_1$, there is $u_0\in{\mathbb{H}}_1$ such that $\nabla_u\Phi(u_0,v_0)=0$ and $G(u_0)$ is a non-degenerate critical orbit of $\Phi(\cdot, v_0)$. Then there exist $\delta>0$ and a continuous $G$-equivariant map $\tau\colon G(u_0)\times B_{\delta}(v_0,{\mathbb{H}}_2)\to {\mathbb{H}}_1$ such that 1. $\tau(u,v_0)=u$ on $G(u_0)$, 2. $\nabla_u\Phi(\tau(u,v),v)=0$ if $u\in G(u_0)$ and $v\in B_{\delta}(v_0,{\mathbb{H}}_2)$ and these are the only solutions of $\nabla_u\Phi(u,v)=0$ near $G(u_0)$ if $v\in B_{\delta}(v_0,{\mathbb{H}}_2)$, 3. for each $v\in B_{\delta}(v_0,{\mathbb{H}}_2)$, the map $u\mapsto \tau(u,v)$ is one-to-one.
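The parity condition (C3) invoked in Example 5 above can be checked mechanically. A minimal sketch, assuming a diagonal matrix $A$ so that its eigenvalues and their multiplicities can be read off the diagonal (all numbers are hypothetical):

```python
# Sketch for Example 5 (all numbers hypothetical): for a diagonal matrix A
# the eigenvalues are the diagonal entries, so the quantities entering the
# conditions can be read off directly.

def c3_holds(diag):
    """Condition (C3): m - dim ker A is odd (A = diag(diag), m = len(diag))."""
    m = len(diag)
    dim_ker = sum(1 for a in diag if a == 0)
    return (m - dim_ker) % 2 == 1

def signed_multiplicity_gap(diag):
    """Sum of mu_A over sigma_+(A) minus the sum of mu_A over sigma_-(A)."""
    plus = sum(1 for a in diag if a > 0)
    minus = sum(1 for a in diag if a < 0)
    return plus - minus

A_diag = [2.0, -1.0, 3.0, 0.0]          # m = 4, dim ker A = 1
assert c3_holds(A_diag)                  # 4 - 1 = 3 is odd: global bifurcation
assert signed_multiplicity_gap(A_diag) % 2 == 1  # hence this gap is odd too
```

This mirrors the remark opening Section 3: since the multiplicity sums over $\sigma_+(A)$, $\sigma_-(A)$ and $\{0\}$ add up to $m$, oddness of $m-\dim\ker A$ forces oddness of the signed gap.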
Equivariant Conley index {#subsec:CI} ------------------------ In this subsection we collect properties of the equivariant Conley index. For a fuller treatment we refer to [@[Bartsch]], [@Geba] in the finite dimensional case and to [@Izydorek] for the infinite dimensional case. Let $G$ be a compact Lie group and suppose that $\Omega$ is a $G$-invariant subset of a finite dimensional $G$-representation ${\mathbb{V}}$. The $G$-equivariant Conley index of an isolated invariant set of a (local) flow is defined as a $G$-homotopy type of a pointed $G$-space, see [@[Bartsch]], [@Geba]. If $f\colon\Omega\to{\mathbb{V}}$ is a $G$-equivariant map of class $C^1$, then it generates a local $G$-flow $\eta$ such that $\eta(x_0,\cdot)$ is the local solution of the problem $y'(t)=f(y(t))$, $y(0)=x_0$. We denote by $CI_G(S, f)$ the Conley index of an isolated invariant set $S$ of the flow generated by $f$. Put $S^{{\mathbb{V}}}=D_1(0,{\mathbb{V}})/\partial D_1(0,{\mathbb{V}})$ and denote by $[S^{{\mathbb{V}}}]_G$ a $G$-homotopy type of a pointed $G$-space $S^{{\mathbb{V}}}$. From the definition of the Conley index and the Hartman–Grobman theorem we obtain the following (see also [@SmoWass]): \[CIjakosfera\] Let $f\colon {\mathbb{V}}\to{\mathbb{R}}$ be a $G$-invariant map of class $C^2$ and suppose that $v_0 \in{\mathbb{V}}$ is such that $G(v_0)=\{v_0\}$, $\nabla f(v_0)=0$ and $\det\nabla^2 f(v_0)\neq 0$. Then $CI_G(\{v_0\}, -\nabla f)=[S^{{\mathbb{V}}^-}]_G$, where ${\mathbb{V}}^-$ is the direct sum of eigenspaces of $\nabla^2 f(v_0)$ corresponding to the negative eigenvalues. The following theorem is a direct consequence of the Continuation Property of the Conley index, see [@[Bartsch]]: \[thm:homotopy\](Homotopy invariance) Let $v_0 \in {\mathbb{V}}$ be such that $G(v_0)=\{v_0\}$ and suppose that $f\in C^2({\mathbb{V}}\times [0,1], {\mathbb{R}})$ is $G$-invariant.
If $\nabla_v f(v_0, t)=0$ and $\det\nabla_v^2 f(v_0,t)\neq 0$ for every $t \in [0,1]$, then $$CI_{G}(\{v_0\},\nabla_v f(\cdot,0))= CI_G(\{v_0\},\nabla_v f(\cdot,1)).$$ The Conley index of a flow generated by a gradient map is a $G$-homotopy type of a pointed finite $G$-CW-complex, see Proposition 5.6 of [@Geba] for the proof. With a $G$-homotopy type of a pointed finite $G$-CW-complex $X$ one can associate a $G$-equivariant Euler characteristic $\chi_G(X)$, which is an element of the Euler ring $U(G)$ with the unit ${\mathbb{I}}=\chi_G(G/G^+)$. The operations in $U(G)$ are defined by $$\label{eq:actionsUG} \left.\begin{array}{rcl} \chi_G(X)+ \chi_G(Y)&=&\chi_G(X\vee Y),\\ \chi_G(X)\star \chi_G(Y)&=&\chi_G(X\wedge Y), \end{array} \right.$$ where $X\vee Y$ is the wedge sum and $X\wedge Y$ is the smash product of pointed finite $G$-CW-complexes $X,Y$. A full description of this theory can be found, for example, in [@TomDieck1], [@TomDieck]. The following theorem is an immediate consequence of Lemma 3.4 of [@[GaRyb]]: \[thm:nontrivialityofEC\] If the group $G$ is connected and ${\mathbb{V}}$ is a nontrivial $G$-representation, then $\chi_G(S^{{\mathbb{V}}})\neq {\mathbb{I}}\in U(G)$. Consider the potential $\varphi \colon {\mathbb{R}}^n \times {\mathbb{R}}\rightarrow {\mathbb{R}}$ and assume that for $\lambda_-, \lambda_+ \in {\mathbb{R}}$ the critical orbit $G(\tilde{u}_0)$ of $\varphi(\cdot, \lambda_{\pm})$ is non-degenerate. In Section \[sec:main\] we compare equivariant Conley indices $CI_{G} (G(\tilde{u}_0), -\nabla_u\varphi(\cdot, \lambda_{\pm})).$ Using the result from [@PRS] one can reduce this problem to comparing the Euler characteristics of the Conley indices of potentials restricted to the space orthogonal to the orbit.
More precisely, reasoning as in the proof of Corollary 3.2 of [@PRS], from Theorem 3.1 of [@PRS] we obtain the following fact: \[cor:3.2\] Let $\Omega \subset {\mathbb{R}}^n$ be an open and $G$-invariant subset and $\varphi \in C^2(\Omega\times {\mathbb{R}},\mathbb{R})$ be $G$-invariant. Moreover, let $\lambda_-, \lambda_+ \in {\mathbb{R}}$ and $G(\tilde{u}_0) \subset (\nabla_u \varphi(\cdot, \lambda_{\pm}))^{-1}(0)$ be a non-degenerate critical orbit. Put $\phi=\varphi_{|T^{\perp}_{\tilde{u}_0}G(\tilde{u}_0)}$. If the pair $(G, G_{\tilde{u}_0})$ is admissible and $$\chi_{G_{\tilde{u}_0}}(CI_{G_{\tilde{u}_0}}(\{ \tilde{u}_0\}, -\nabla_u\phi(\cdot, \lambda_-))) \neq \chi_{G_{\tilde{u}_0}}(CI_{G_{\tilde{u}_0}}(\{ \tilde{u}_0\}, -\nabla_u\phi(\cdot, \lambda_+)))$$ then $$CI_{G}(G(\tilde{u}_0), -\nabla_u\varphi(\cdot, \lambda_-)) \neq CI_{G}(G(\tilde{u}_0),- \nabla_u\varphi(\cdot, \lambda_+)).$$ Moreover, $$\chi_{G}(CI_{G}(G(\tilde{u}_0), -\nabla_u\varphi(\cdot, \lambda_-))) \neq \chi_{G}(CI_{G}(G(\tilde{u}_0),- \nabla_u\varphi(\cdot, \lambda_+))).$$ Suppose now that ${\mathcal{U}}$ is a $G$-invariant subset of an infinite dimensional Hilbert space, which is an orthogonal $G$-representation ${\mathbb{H}}$. The $G$-equivariant Conley index of an isolated invariant set of a (local) $G$-$\mathcal{LS}$-flow is defined as a $G$-homotopy type of a $G$-equivariant spectrum, see [@Izydorek]. As before, if $F\colon{\mathcal{U}}\to{\mathbb{H}}$ is a $G$-equivariant map of class $C^1$ and it is a completely continuous perturbation of the identity, then it generates a local $G$-$\mathcal{LS}$-flow. We denote by $\mathcal{CI}_G(S, F)$ the Conley index of an isolated invariant set $S$ of the flow generated by $F$. Eigenspaces of the Laplace operator {#sec:eigenspaces} ----------------------------------- In this subsection we introduce basic properties of the eigenspaces of the Laplace operator (with the Neumann boundary conditions) on the ball.
More precisely, we study the problem: $$\label{eq:appeig} \left\{ \begin{array}{rclcl} - \triangle u & = & \beta u & \text{ in } & B^N \\ \frac{\partial u}{\partial \nu} & = & 0 &\text{ on } & S^{N-1}. \end{array} \right.$$ These properties are known, but it is difficult to find a reference in the literature, except for the case $N=2, 3$, see for example [@[MICH]], [@Mucha]. To make our article self-contained, we sketch here the general case. Let ${\mathcal{H}}^N_l$ denote the linear space of harmonic, homogeneous polynomials of $N$ independent variables, of degree $l$, restricted to the sphere $S^{N-1}.$ \[thm:nieprzywiedlnosc\] The spaces ${\mathcal{H}}^N_l$ are irreducible representations of the group $SO(N)$. Furthermore, if $l \geq 1$ then ${\mathcal{H}}^N_l$ is a nontrivial representation of $SO(N)$ and for $l=0$ it is a trivial one. Moreover, $$\dim{\mathcal{H}}^N_l=\left\{\begin{array}{ccc} 1& \mathrm{ if } & N=2, l=0 \\ 2 & \mathrm{if} & N=2, l \geq 1 \\ (2l+N-2)\frac{(N-3+l)!}{l!(N-2)!} & \mathrm{if} & N \geq 3, l \geq 0.\end{array} \right.$$ For the proof of the irreducibility of the spaces ${\mathcal{H}}^N_l$ we refer the reader to [@Gurarie] (Theorem 5.1). The proof of the latter part of the theorem can be found in [@Shimakura] (Theorem 4.1). To find the eigenspaces of (\[eq:appeig\]) we write the Laplacian in polar coordinates $r\geq 0$, $\varphi=(\varphi_1,\ldots, \varphi_{N-1})$, $0\leq \varphi_i< \pi$ for $i=1,\ldots, N-2$, $0\leq \varphi_{N-1}< 2\pi$: $$\Delta u = r^{1-N}\frac{\partial}{\partial r}\left(r^{N-1}\frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\Delta_{S^{N-1}} u,$$ where $\Delta_{S^{N-1}}$ is the Laplace–Beltrami operator on $S^{N-1}$.
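The dimension formula in the theorem above is easy to check numerically; the sketch below (the function name is our own, not from the paper) reproduces the familiar special cases $\dim{\mathcal{H}}^3_l=2l+1$ and $\dim{\mathcal{H}}^4_l=(l+1)^2$.

```python
from math import factorial

def dim_harmonic(N, l):
    """dim H^N_l: harmonic homogeneous polynomials of degree l on S^{N-1},
    following the case distinction in the theorem above."""
    if N == 2:
        return 1 if l == 0 else 2
    return (2 * l + N - 2) * factorial(N - 3 + l) // (factorial(l) * factorial(N - 2))
```

For $N=3$ the quotient of factorials cancels and the formula collapses to $2l+1$; for $N=4$ it gives $(2l+2)(l+1)/2=(l+1)^2$.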
Applying a standard separation of variables $u(\varphi,r)=v(\varphi)\cdot f(r)$ to (\[eq:appeig\]), we obtain the system $$\begin{aligned} \label{eq:evsphere} -\Delta_{S^{N-1}}v(\varphi)&=&\mu v(\varphi)\ \text{ on }\ S^{N-1},\\ \label{eq:radialpart} r^2 f''(r)+(N-1)rf'(r)+\left(\beta r^2-\mu\right) f(r)&=&0\ \text{ on }\ (0,1), \\ \label{eq:bound1} |f(0)|&<&\infty,\\ \label{eq:bound2} f'(1)&=&0.\end{aligned}$$ The equation (\[eq:evsphere\]) has solutions only if $\mu$ is an eigenvalue of $-\Delta_{S^{N-1}}$, i.e. $\mu=\mu_l: =l(l+N-2)$, $l=0,1,\ldots$, with associated eigenspaces equal to ${\mathcal{H}}^N_l$, see [@Shimakura]. Substituting $\mu=\mu_l$, $\rho=\sqrt{\beta} r$ and $f(r)=g(\rho)/\rho^{\frac{N-2}{2}}$ into (\[eq:radialpart\]), we get the Bessel equation of order $l+\frac{N-2}{2}$: $$\rho^2 g''(\rho)+\rho g'(\rho)+\left( \rho^2-\left(l+\frac{N-2}{2}\right)^2\right) g(\rho)=0\ \text{ on }\ (0,\sqrt{\beta}).$$ Using (\[eq:bound1\]) we obtain that the solution of this equation is $g(\rho)=C_lJ_{l+\frac{N-2}{2}}(\rho)$, where $C_l\in{\mathbb{R}}$ and $J_{l+\frac{N-2}{2}}$ is the Bessel function of the first kind of order $l+\frac{N-2}{2}$. Since we are interested only in solutions satisfying (\[eq:bound2\]), taking into consideration that $f'(r)=\sqrt{\beta}(\sqrt{\beta}r)^{1-\frac{N}{2}}\left(g'(\sqrt{\beta}r)-\frac{N-2}{2\sqrt{\beta} r} g(\sqrt{\beta} r)\right)$, we obtain that $\sqrt{\beta}$ satisfies the equation: $$\label{eq:beta} J'_{l+\frac{N-2}{2}}(x)-\frac{N-2}{2x} J_{l+\frac{N-2}{2}}(x)=0.$$ For $m\in {\mathbb{N}}$ we denote by $x_{lm}$ the $m$th solution of (\[eq:beta\]) in $(0, \infty)$. Put $x_{00}=0$ and ${\mathcal{A}}_l=\{\beta_{lm} = x^2_{lm}\}_{m=1}^{\infty}$ for $l>0$ and ${\mathcal{A}}_0=\{\beta_{0m}=x_{0m}^2\}_{m=0}^{\infty}$. \[fact:opis\] From the above considerations: 1. $\sigma(-\Delta;B^N)$ is the union of the sets ${\mathcal{A}}_l$, 2. if $\beta\in {\mathcal{A}}_l$, then ${\mathcal{H}}^N_l\subset{\mathbb{V}}_{-\Delta}(\beta)$, i.e. ${\mathcal{H}}^N_l$ is $SO(N)$-equivalent to a subspace of ${\mathbb{V}}_{-\Delta}(\beta)$.
For $\beta\in\sigma(-\Delta;B^N)$ we have ${\mathbb{V}}_{-\Delta}(\beta)\approx_{SO(N)}\bigoplus\limits_{l\in\{l\geq0\colon \beta\in{\mathcal{A}}_l\}} {\mathcal{H}}^N_l$ (by $\approx_{SO(N)}$ we understand the equivalence relation of $SO(N)$-representations). \[rem:nontriviality\] Since from Theorem \[thm:nieprzywiedlnosc\] we have $\dim{\mathcal{H}}^N_0=1$ and $\dim{\mathcal{H}}^N_l>1$ for $l\geq 1$, it follows that for $\beta\in \sigma(-\Delta;B^N)$: 1. if $\dim{\mathbb{V}}_{-\Delta}(\beta)>1$, then there exists $l>0$ such that ${\mathcal{H}}^N_l\subset{\mathbb{V}}_{-\Delta}(\beta)$ and thus ${\mathbb{V}}_{-\Delta}(\beta)$ is a nontrivial $SO(N)$-representation, 2. if $\dim{\mathbb{V}}_{-\Delta}(\beta)=1$, then ${\mathbb{V}}_{-\Delta}(\beta)\approx_{SO(N)}{\mathcal{H}}^N_0$ and therefore it is a trivial representation of $SO(N)$. \[rem:bezA0\] From Theorem \[thm:nieprzywiedlnosc\] we obtain that if $\beta\in \sigma(-\Delta;B^N)$ and $\beta\notin{\mathcal{A}}_0$, then ${\mathbb{V}}_{-\Delta}(\beta)^{SO(N)}=\{0\}$. To illustrate the above description of the eigenspaces, we will look more closely at the cases $N=2,3$. Suppose that $N=2$. Then, for $l\in{\mathbb{N}}\cup\{0\}$, the equation (\[eq:beta\]) is of the form $J_l'(x)=0$ and therefore $x_{lm}$ is the $m$th solution of $J_l'(x)=0$ in $(0, \infty)$ and $x_{00}=0$. \[lem:eigenspace\] Under the above notation, $\sigma(-\Delta; B^2 )=\bigcup_{l=0}^{\infty}{\mathcal{A}}_l=\{\beta_{lm} = x^2_{lm}\}_{l=1,m=1}^{\infty} \cup \{\beta_{0m}=x_{0m}^2\}_{m=0}^{\infty}$ with corresponding eigenvectors given by 1. $v_{lm}^1(r,\varphi)=J_l(x_{lm}r)\cos l \varphi$ and $v_{lm}^2(r,\varphi)=J_l(x_{lm}r)\sin l \varphi$ for $\beta_{lm}$ in the case $l>0,$ 2. $v_{0m}(r,\varphi)=J_0(x_{0m}r)$ for $\beta_{0m}$ in the case $l=0.$ Note that from the above fact it follows that ${\mathcal{H}}^2_l\approx_{SO(2)}\operatorname{span}\{v_{lm}^1, v_{lm}^2\}$ for $l> 0$ and ${\mathcal{H}}^2_0\approx_{SO(2)}\operatorname{span}\{ v_{0m}\}$.
\[cor:sone\] Let $\beta \in \sigma(-\Delta; B^2)$. Then: 1. If $\beta \in{\mathcal{A}}_l$ for $l>0$, i.e. $\beta=\beta_{lm}$ for given $l,m>0$, then ${\mathbb{V}}_{-\Delta}(\beta)$ is a nontrivial ${SO(2)}$-representation. Moreover, if $\dim{\mathbb{V}}_{-\Delta}(\beta)$ is even, then ${\mathbb{V}}_{-\Delta}(\beta)^{SO(2)}=\{0\}$ and if $\dim{\mathbb{V}}_{-\Delta}(\beta)$ is odd, then ${\mathbb{V}}_{-\Delta}(\beta)^{SO(2)}\approx_{SO(2)}{\mathcal{H}}^2_0$. 2. If $\beta \in{\mathcal{A}}_0$, i.e. $\beta=\beta_{0m}$ for a given $m \in {\mathbb{N}}$, then $\dim {\mathbb{V}}_{-\Delta}(\beta)$ is an odd number. Moreover, if $\dim{\mathbb{V}}_{-\Delta}(\beta)=1$, then ${\mathbb{V}}_{-\Delta}(\beta)\approx_{SO(2)}{\mathcal{H}}^2_0$ is a trivial $SO(2)$-representation. Suppose now that $N=3$. Then, for $l\in{\mathbb{N}}\cup\{0\}$, the equation (\[eq:beta\]) is of the form $J_{l+\frac12}'(x)-\frac{1}{2x}J_{l+\frac12}(x)=0$ and therefore $x_{lm}$ is the $m$th solution of this equation in $(0, \infty)$ and $x_{00}=0$. \[lem:eigenspaceN=3\] Under the above notation, $\sigma(-\Delta; B^3 )=\bigcup_{l=0}^{\infty}{\mathcal{A}}_l=\{\beta_{lm} = x^2_{lm}\}_{l=1,m=1}^{\infty} \cup \{\beta_{0m}=x_{0m}^2\}_{m=0}^{\infty}$ with corresponding eigenvectors: 1. for $\beta_{lm}$ in the case $l>0$: $$\begin{aligned} v_{kml}^1(r,\varphi_1,\varphi_2)&=&\frac{1}{\sqrt{r}}J_{l+\frac12}( x_{lm}r) P_{lk}(\cos \varphi_1)\sin k\varphi_2, \\ v_{kml}^2(r,\varphi_1,\varphi_2)&=&\frac{1}{\sqrt{r}}J_{l+\frac12}(x_{lm}r) P_{lk}(\cos \varphi_1)\cos k\varphi_2, \\ v_{0ml}(r,\varphi_1,\varphi_2)&=&\frac{1}{\sqrt{r}}J_{l+\frac12}(x_{lm}r)P_l(\cos \varphi_1),\end{aligned}$$ where $k=1,\dots, l$, $P_{lk}$ are the associated Legendre functions and $P_l$ the Legendre polynomials, 2. for $\beta_{0m}$: $v_{0m0}(r,\varphi_1,\varphi_2)=\frac{1}{\sqrt{r}}J_{\frac12}(x_{0m}r)$. From the above fact it follows that ${\mathcal{H}}^3_l\approx_{SO(3)}\operatorname{span}\{v_{0ml},v_{1ml}^1,v_{1ml}^2,\ldots v_{lml}^1, v_{lml}^2\}$ for $l>0$ and ${\mathcal{H}}^3_0\approx_{SO(3)}\operatorname{span}\{v_{0m0}\}$.
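The radial eigenvalue condition (\[eq:beta\]) is transcendental, but its roots $x_{lm}$ are easy to locate numerically; below is a minimal sketch using SciPy (the function name and the bracketing grid are our own choices, not from the paper). For $N=2$ the condition reduces to $J_l'(x)=0$, and for $N=3$, $l=0$ it is equivalent to $\tan x = x$.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv, jvp

def neumann_radial_roots(N, l, m_max, x_max=60.0, samples=6000):
    """First m_max positive roots x_{lm} of
       J'_nu(x) - (N-2)/(2x) J_nu(x) = 0, with nu = l + (N-2)/2;
       beta_{lm} = x_{lm}^2 are Neumann eigenvalues of -Laplace on B^N."""
    nu = l + (N - 2) / 2.0
    f = lambda x: jvp(nu, x) - (N - 2) / (2.0 * x) * jv(nu, x)
    xs = np.linspace(1e-6, x_max, samples)
    vals = f(xs)
    roots = []
    # bracket sign changes on the grid, then refine each with brentq
    for a, b, fa, fb in zip(xs[:-1], xs[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:
            roots.append(brentq(f, a, b))
            if len(roots) == m_max:
                break
    return roots
```

For instance, `neumann_radial_roots(2, 1, 3)` returns the first three zeros of $J_1'$, and `neumann_radial_roots(3, 0, 1)` the first positive solution of $\tan x = x$.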
The description of ${\mathcal{H}}^N_l$ in the general case can be found in [@Vilenkin] (Chapter IX). [99]{} V. M. Babič, M. B. Kapilevič, S. G. Mihlin, G. I. Natanson, P. M. Riz, L. N. Slobodeckii, M. M. Smirnov, *The linear equations of mathematical physics*, (Russian), Nauka, Moscow, 1964. T. Bartsch, *Topological methods for variational problems with symmetries*, Lecture Notes in Mathematics 1560, Springer-Verlag, Berlin, 1993. E. N. Dancer, *On nonradially symmetric bifurcation*, J. London Math. Soc. 20(2) (1979), 287–292. E. N. Dancer, *The $G$-invariant implicit function theorem in infinite dimensions II*, Proceedings of the Royal Society of Edinburgh 102A(3-4) (1986), 211–220. G. L. Garza, S. Rybicki, *Equivariant bifurcation index*, Nonlinear Anal. 73(9) (2010), 2779–2791. J. Gawrycka, S. Rybicki, *Solutions of systems of elliptic differential equations on circular domains*, Nonlinear Anal. 59(8) (2004), 1347–1367. K. Gȩba, *Degree for gradient equivariant maps and equivariant Conley index*, Topological nonlinear analysis II, Birkhäuser (1997), 247–272. A. Go[ł]{}ȩbiewska, J. Kluczenko, *Connected sets of solutions for a nonlinear Neumann problem*, accepted for publication in Differential and Integral Equations (2016). A. Go[ł]{}ȩbiewska, S. Rybicki, *Global bifurcations of critical orbits of $G$-invariant strongly indefinite functionals*, Nonlinear Anal. 74(5) (2011), 1823–1834. A. Go[ł]{}ȩbiewska, S. Rybicki, *Equivariant Conley index versus the degree for equivariant gradient maps*, Disc. Contin. Dyn. Syst. Ser. S 6(4) (2013), 985–997. D. Gurarie, *Symmetries and Laplacians. Introduction to harmonic analysis, group representations and applications*, North-Holland Mathematics Studies 174, North-Holland, Amsterdam, 1992. M. Izydorek, *Equivariant Conley index in Hilbert spaces and applications to strongly indefinite problems*, Nonlinear Anal. TMA 51(1) (2002), 33–66. K. Kawakubo, *The theory of transformation groups*, Oxford University Press, 1991. J. 
Kluczenko, *Bifurcation and symmetry breaking of solutions of systems of elliptic differential equations*, Nonlinear Anal. 75(11) (2012), 4278–4295. K. Muchewicz, S. Rybicki, *Existence and continuation of solutions for a nonlinear Neumann problem*, Nonlinear Anal. 69(10) (2008), 3423–3449. E. Pérez-Chavela, S. Rybicki, D. Strzelecki, *Symmetric Liapunov center theorem*, Calc. Var. 56(2) (2017), doi:10.1007/s00526-017-1120-1. S. Rybicki, *Degree for equivariant gradient maps*, Milan J. Math. 73 (2005), 103–144. S. Rybicki, N. Shioji, P. Stefaniak, *Rabinowitz alternative for non-cooperative elliptic systems on geodesic balls*, submitted for publication, arXiv:1703.08417. S. Rybicki, P. Stefaniak, *Unbounded sets of solutions of non-cooperative elliptic systems on spheres*, J. Differential Equations 259(7) (2015), 2833–2849. N. Shimakura, *Partial differential operators of elliptic type*, Translations of Mathematical Monographs 99, American Mathematical Society, Providence, Rhode Island, 1992. J. Smoller, A. Wasserman, *Bifurcation and symmetry-breaking*, Invent. Math. 100(1) (1990), 63–95. P. Stefaniak, *Symmetry breaking of solutions of non-cooperative elliptic systems*, J. Math. Anal. Appl. 408(2) (2013), 681–693. T. tom Dieck, *Transformation groups and representation theory*, Lecture Notes in Mathematics 766, Springer, Berlin, 1979. T. tom Dieck, *Transformation groups*, Walter de Gruyter & Co., Berlin, 1987. N. Ja. Vilenkin, *Special functions and the theory of group representations*, Translations of Mathematical Monographs 22, American Mathematical Society, 1988. A. G. Wasserman, *Equivariant differential topology*, Topology 8 (1969), 127–150.
--- abstract: 'We study the effect of disorder on the dynamics of a transverse domain wall in ferromagnetic nanostrips, driven either by magnetic fields or spin-polarized currents, by performing a large ensemble of GPU-accelerated micromagnetic simulations. Disorder is modeled by including small, randomly distributed non-magnetic voids in the system. Studying the domain wall velocity as a function of the applied field and current density reveals fundamental differences in the domain wall dynamics induced by these two modes of driving: For the field-driven case, we identify two different domain wall pinning mechanisms, operating below and above the Walker breakdown, respectively, whereas for the current-driven case pinning is absent above the Walker breakdown. Increasing the disorder strength induces a larger Walker breakdown field and current, and leads to decreased and increased domain wall velocities at the breakdown field and current, respectively. Furthermore, for adiabatic spin transfer torque, the intrinsic pinning mechanism is found to be suppressed by disorder. We explain these findings within the one-dimensional model in terms of an effective damping parameter $\alpha^*$ increasing with the disorder strength.' author: - 'Ben Van de Wiele$^{1}$, Lasse Laurson$^2$, and Gianfranco Durin$^{3,4}$' title: The effect of disorder on transverse domain wall dynamics in magnetic nanostrips --- Domain wall (DW) dynamics in nanoscale ferromagnetic wires and strips driven by magnetic fields or spin-polarized currents is a subject of major technological importance for the operation of potential future nanoscale magnetic memory [@PAR-08; @HAY-08] and logic [@ALL-05] devices. In these devices information is typically stored as magnetic domains along a nanostrip/wire and is processed by DW motion. 
For the reliable operation of such devices it is of fundamental importance to understand and control the effect on the DW dynamics of imperfections or disorder, which are necessarily present in any realistic sample, e.g. in the form of thickness fluctuations and grain structure of the sample, or various impurities and defects in the material. At the same time, such systems constitute a low-dimensional limit of the general problem of driven elastic manifolds in a random potential [@LEC-09]. While the crucial importance of disorder for the dynamics of higher-dimensional DWs is well established, resulting in phenomena such as the Barkhausen effect [@DUR-06], the majority of studies of DW motion in systems with nanostrip/wire geometry neglect disorder effects. This applies to both theoretical studies and interpretations of experimental results. Some exceptions include studies demonstrating enhanced DW propagation due to roughness of the edges of the strip [@NAK-03; @MAR-12]. Recently, also the effect of spatially varying saturation magnetization $M_s$ on the dynamics of vortex walls was studied, resulting in an effective damping increasing with the disorder strength [@MIN-10]. Similar spatially distributed disorder has also been studied in a simplified, line-based model of a transverse DW [@LAU-10; @LAU-11]. Experimental studies of DW dynamics in wires have revealed its stochastic nature in the case of short current pulses [@MEI-07], which has been attributed to the presence of disorder in the samples, in combination with thermal effects. For longer current pulses, the resulting average DW velocities have been shown to be quite low [@KLA-05], likely due to pinning effects induced by structural disorder. Dynamical pinning effects have also been observed in experiments on field-driven vortex wall dynamics [@TAN-08; @JIA-10]. However, despite these advances, many details of the disorder effects on DW dynamics in nanostructures remain to be clarified.
In this Letter, we consider by micromagnetic simulations the effect of disorder on the field and current-driven dynamics of a transverse DW in a narrow and thin Permalloy strip. Disorder is modelled by including randomly positioned small non-magnetic regions (voids) in the system. Our results show that the field and current-driven DW dynamics exhibit remarkable differences which are only revealed in the presence of disorder. In particular, we identify two fundamentally different DW pinning mechanisms acting in a field-driven system, operating below and above the Walker breakdown field, respectively, with the latter mechanism being absent in the current-driven case. Also the Walker breakdown itself is affected by the presence of disorder, such that it is shifted to larger field and current values with increasing disorder strength. At the same time the DW velocities at the breakdown field and current get smaller and larger, respectively. Furthermore, for adiabatic spin transfer torque, the intrinsic pinning mechanism is found to be suppressed by disorder. These findings emphasize the importance of understanding the interplay between disorder, the DW structure and the properties of the external driving force, and are shown to be related to an effective damping parameter $\alpha^*$ increasing with the disorder strength. We perform a large ensemble of micromagnetic simulations with the GPU-based micromagnetic simulator MuMax [@VAN-11], making it possible to obtain large statistics for averaging over the disorder realizations. 
To study the time evolution of the magnetization ${\bf M}({\bf r},t)$ with an amplitude $M_s$, we solve the Landau-Lifshitz (LL) equation with the spin-transfer torque terms [@ZHA-04], $$\begin{aligned} \label{eq:1} \frac{\partial {\bf M}}{\partial t} & = & -\frac{\gamma}{1+\alpha^2} {\bf{M}}\times {\bf H}_{eff} \\ \nonumber & & -\frac{\alpha \gamma}{M_s(1+\alpha^2)} {\bf M}\times({\bf M}\times{\bf H}_{eff}) \\ \nonumber & & -\frac{b_j}{M_s^2(1+\alpha^2)}{\bf M}\times ({\bf M}\times ({\bf j}\cdot \nabla){\bf{M}}) \\ \nonumber & & -\frac{b_j}{M_s(1+\alpha^2)}(\xi-\alpha){\bf M} \times ({\bf j}\cdot \nabla){\bf M},\end{aligned}$$ where ${\bf H}_{eff}$ is the effective magnetic field (with contributions from the external, exchange and demagnetization fields), $\gamma$ is the gyromagnetic ratio, $\alpha$ is the Gilbert damping constant, $\xi$ is the degree of non-adiabaticity, ${\bf j}$ is the current density, and $b_j=P\mu_B/(eM_s(1+\xi^2))$, with $P$ the polarization, $\mu_B$ the Bohr magneton and $e$ the electron charge. ![(Color online) The average velocity $v_m$ of the moving DWs (main figure) and $v_{exp} = (1-P_{pin})v_m$ (inset) as a function of $H_{ext}$ and $\sigma$. Error bars correspond to the standard deviation of $v_m$. The pinning probabilities $P_{pin}$ during the 20 ns simulation (bottom panel) exhibit large values for large $H_{ext}$ due to the core pinning mechanism.[]{data-label="fig:field"}](velocity_field2.eps){width="8cm"} We consider Permalloy strips of width $w=100$ nm and thickness $10$ nm, such that the stable DW structure is a head-to-head V-shaped symmetric transverse wall, separating in-plane domains pointing along the strip axis [@NAK-05]. The material parameters used are those of Permalloy, i.e. $M_s=860 \times 10^3$ A/m and $\alpha=0.02$, and no anisotropy fields are included in Eq. (\[eq:1\]). To clearly see the effect of quenched disorder on the DW dynamics, we set the temperature $T=0$.
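The prefactor $b_j$ defined above converts a current density into a velocity scale $u=b_j j$, and can be evaluated directly for the parameters used in this study ($P=0.5$, $M_s=860\times10^3$ A/m); a minimal sketch (the function name and constants are our own, not from the paper):

```python
# Spin-transfer-torque prefactor b_j = P*mu_B / (e*M_s*(1 + xi^2)),
# which maps a current density j [A/m^2] to a spin-drift velocity u = b_j*j [m/s].
MU_B = 9.274e-24   # Bohr magneton [J/T]
E = 1.602e-19      # elementary charge [C]

def b_j(P=0.5, Ms=860e3, xi=0.0):
    return P * MU_B / (E * Ms * (1.0 + xi**2))
```

With these numbers, a current density of $10^{12}$ A/m$^2$ corresponds to a drift velocity of a few tens of m/s, which sets the scale of the current-driven DW velocities discussed below.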
The system is discretized by considering $N$ cells of size $3.125 \times 3.125 \times 10$ nm$^3$. Upon application of an external magnetic field ${\bf H}_{ext} = H_{ext}\hat{\bf x}$ along the strip axis in the absence of disorder, the DW is displaced along the strip. If the field is below the Walker breakdown field $H_W$, the DW essentially keeps its equilibrium structure during the propagation, with a small out-of-plane component close to the tip of the V-shape, and a velocity roughly linearly proportional to the applied field. Above $H_W$, an antivortex is nucleated at the tip of the V-shape. It then propagates across the strip width, reversing the polarity of the DW magnetization. This process is repeated such that the DW polarity oscillates back and forth, dramatically decreasing the average DW velocity [@THI-06]. With disorder included in the form of randomly positioned non-magnetic voids of linear size 3.125 nm with varying densities $\sigma$ within a strip of length $L$ = 3.2 $\mu$m, the DW can get pinned even for non-zero applied fields [@remark1]. This makes measurement and even definition of the DW velocity a non-trivial task. Thus, in what follows we consider both the “conditional velocities” $v_m$ of the moving DWs, conditioned on the fact that the DWs will not get pinned during the time interval $\Delta t = 20$ ns we consider in the simulations (i.e. the DW will either reach the end of the strip or it is still moving after $\Delta t = 20$ ns)[@v_m], and the probability $P_{pin}$ for the DW to get pinned during $\Delta t$. These are computed by averaging over 50 disorder realizations for each $H_{ext}$ and $\sigma$. Notice that here we consider a $T = 0$ system, such that a pinned DW cannot depin. An alternative measure of the DW velocity (which is likely to be closer to typical experimental measurements where $T>0$) is given by $v_{exp} = (1-P_{pin})v_m$. 
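The three velocity measures defined above can be computed from per-realization outcomes; a minimal sketch with hypothetical ensemble data (the function name and inputs are our own, not from the paper):

```python
import numpy as np

def dw_velocity_stats(velocities, pinned):
    """velocities: per-realization DW velocity [m/s]; pinned: boolean flags.
    Returns (v_m, P_pin, v_exp) as defined in the text: v_m averages only
    the realizations whose wall kept moving, and v_exp = (1 - P_pin) * v_m."""
    velocities = np.asarray(velocities, dtype=float)
    pinned = np.asarray(pinned, dtype=bool)
    moving = ~pinned
    v_m = velocities[moving].mean() if moving.any() else 0.0
    p_pin = pinned.mean()
    return v_m, p_pin, (1.0 - p_pin) * v_m
```

For example, an ensemble in which half of the walls pin gives $P_{pin}=0.5$ and hence $v_{exp}$ equal to half of the conditional velocity $v_m$.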
In general, $P_{pin}$ will increase with the observation (time and length) scale, thus making also $v_{exp}$ a scale-dependent quantity. ![(Color online) Examples of the spatial distribution of the contributions of the applied field $H_{ext} =$ 5 mT (top) and current density $j_{ext} = 20 \times 10^{12}$ A/m$^2$ with $\xi=0$ (middle) to $\partial {\bf M}/\partial t$ in Eq. (\[eq:1\]), corresponding to the magnetization configuration shown in the bottom panel, exhibiting an antivortex in the middle of the strip. $\partial {\bf M}/\partial t$ is given in units of $M_s/$s. The randomly positioned voids with $\sigma = 3125 \mu$m$^{-2}$ are shown as grey dots.[]{data-label="fig:maps"}](./im_pyTorque_xi0_white.eps){width="8cm"} Fig. \[fig:field\] shows the resulting average velocities $v_{m}$ of the moving DWs as a function of $H_{ext}$ and $\sigma$. The presence of voids induces a finite depinning field $H_{dep}(\sigma)$ increasing with $\sigma$. For $H_{ext} > H_{dep}(\sigma)$, $v_m$ first increases until a maximum velocity is reached at $H_{ext} = H_W(\sigma)$, and then starts to decrease again. The position $H_W(\sigma)$ of this maximum, corresponding to the Walker breakdown, is shifted towards larger field values as $\sigma$ is increased, and the corresponding maximum velocity $v_{m}(H_W(\sigma))$ decreases with $\sigma$. The error bars in Fig. \[fig:field\] correspond to the standard deviation of $v_m$, and indicate that the dynamics of moving DWs has a stochastic nature due to the random disorder. Notice in particular that the pinning probability $P_{pin}$ exhibits a non-monotonic dependence on $H_{ext}$, with strong pinning for both small and large $H_{ext}$, while for intermediate applied fields (corresponding to large values of $v_m$) pinning is less likely. The maximum value of $v_{exp}$ (inset of Fig. \[fig:field\]) exhibits a strong dependence on $\sigma$, and depends also on the observation scale via $P_{pin}$ (not shown). 
For large $H_{ext}$, $P_{pin}$ is close to 1 for $\Delta t = 20$ ns, and consequently $v_{exp}$ is essentially zero. Similar pinning effects for large applied fields have been observed experimentally for vortex walls [@TAN-08; @JIA-10]. To gain insight into the mechanisms behind this behavior, we consider snapshots of the DW configurations and the various contributions to $\partial {\bf M}/\partial t$ in Eq. (\[eq:1\]). For small $H_{ext}$, we find that the overall DW structure is preserved, with the disorder inducing only minor distortions. If the DW gets pinned, this happens by a collective action of several voids. This mechanism is known as [*collective pinning*]{}, and it is responsible for the non-zero depinning field $H_{dep}<H_W(\sigma)$. Remarkably, we identify a fundamentally different pinning mechanism for large fields, $H_{ext} > H_W(\sigma)$: In this regime, an antivortex is able to propagate to the interior of the strip, resulting in pinned DW configurations (occurring with probability $P_{pin}$) with the antivortex core positioned exactly on top of a void or a local void structure. We refer to this mechanism as [*core pinning*]{}, and attribute it to the fact that the energy of the system can be significantly lower when the antivortex core or part of it, involving large magnetization gradients and out-of-plane magnetization, is placed in a non-magnetic region (or more generally, in a region with low $M_s$). In the field-driven case the DW is prone to pinning by this mechanism because the Zeeman torque is relatively small in magnitude and does not directly displace the DW (top panel of Fig. \[fig:maps\]); instead, the small out-of-plane magnetization due to the Zeeman torque induces demagnetizing fields, which act to move the DW.
Such an indirect driving mechanism is sensitive to the perturbations due to disorder, leading to several effects, including $\sigma$-dependent $H_{dep}$ and $H_W$, and in particular the core pinning mechanism for high $H_{ext}$. We proceed to contrast these results with the current-driven case, by applying a current density ${\bf j}=-j_{ext}\hat{\bf x}$ with $P=0.5$ along strips of length $L$=6.4 $\mu$m. We first consider perfect adiabaticity ($\xi = 0$, top panel of Fig. \[fig:current\]). Due to intrinsic pinning [@LI-04], there is a non-zero depinning current $j_{dep,int}$ in the absence of disorder, above which DW motion involves repeated polarity transformations mediated by antivortex propagation across the strip width. Adding disorder with the same procedure as above reveals two intriguing observations: First, it appears that the DW is able to move even for currents slightly below $j_{dep,int}$. This surprising finding can be explained by noticing that the intrinsic pinning mechanism is due to the ability of the DW to deform in such a way that the torques due to interactions within the DW (i.e. the effective field) exactly counterbalance the adiabatic spin-transfer torque [@LI-04]. However, the presence of disorder induces additional DW deformations and imposes constraints on the ability of the DW to counteract the current-induced torques, leading to non-zero values for both $v_m$ and $1-P_{pin}$ for $j_{ext}$ somewhat below $j_{dep,int}$. Notice that while $v_{exp}$ (inset of the top panel in Fig. \[fig:current\]) exhibits a non-linear dependence on the current density, reminiscent of typical creep motion, for small $j_{ext}$, we are considering here a $T = 0$ system in which a pinned DW cannot depin due to the absence of thermal fluctuations [@remark2]. ![(Color online) The average velocity $v_m$ of the moving DWs as a function of $j_{ext}$ and $\sigma$, for $\xi = 0$ (top) and $\xi = 0.04$ (bottom). Error bars correspond to the standard deviation of $v_m$.
The pinning probabilities $P_{pin}$ during the 20 ns simulation highlight the absence of core pinning for large current densities. The insets show $v_{exp} = (1-P_{pin})v_m$ for $\xi=0$ (top panel), and the effective $\alpha^*(\sigma)$ for various $\xi$ (bottom panel), respectively.[]{data-label="fig:current"}](./velocity_xi=0.eps "fig:"){width="8cm"}\ ![(Color online) The average velocity $v_m$ of the moving DWs as a function of $j_{ext}$ and $\sigma$, for $\xi = 0$ (top) and $\xi = 0.04$ (bottom). Error bars correspond to the standard deviation of $v_m$. The pinning probabilities $P_{pin}$ during the 20 ns simulation highlight the absence of core pinning for large current densities. The insets show $v_{exp} = (1-P_{pin})v_m$ for $\xi=0$ (top panel), and the effective $\alpha^*(\sigma)$ for various $\xi$ (bottom panel), respectively.[]{data-label="fig:current"}](./velocity_xi=0.04_3.eps "fig:"){width="8cm"} The second observation is that for larger $j_{ext}$, core pinning is absent. Even though for $j_{ext}>j_{W}(\sigma)$ the antivortex core is constantly moving back and forth across the strip width, it never gets pinned by the voids, strongly contrasting with the field-driven case. To explain this observation, we consider the spatial distribution of the current-induced contribution to $\partial {\bf M}/\partial t$ (middle panel of Fig. \[fig:maps\]), and find that the current acts directly (in contrast to the indirect mechanism in the field-driven case) and strongly on the antivortex core where the magnetization gradients are large, facilitating its propagation along the strip across the energy barriers due to the voids. This is also directly visible in the LL equation (Eq. (\[eq:1\])), where the current acts on the gradient of ${\bf M}$ rather than on ${\bf M}$ itself. Finally, we consider the role of the non-adiabatic spin-transfer torque (bottom panel of Fig. \[fig:current\], where the $\xi=0.04$ case is shown) on the DW dynamics.
For $\xi>0$ and $\sigma=0$, there is no intrinsic pinning, and the DW propagates preserving its internal structure with a finite velocity linearly proportional to the current density $j_{ext}$ up to a Walker breakdown current $j_W$. For $j_{ext} > j_W$, an antivortex is again nucleated and propagates across the strip width reversing the polarity of the DW magnetization, and decreasing the average DW velocity. For larger $j_{ext}$, the velocity again increases with $j_{ext}$. Adding disorder induces a finite depinning threshold $j_{dep} (\sigma)$, and pushes the local maximum of $v_m$, i.e. the Walker breakdown, to higher $j_{ext}$. At the same time, $v_m$ at $j_{W}(\sigma)$ increases with $\sigma$. Thus, the voids are able to inhibit the antivortex from entering the strip, enhancing the DW propagation and structural stability for intermediate current densities, $j_W(\sigma=0) < j_{ext} < j_W(\sigma>0)$. This effect arises as the antivortex core is pushed across the strip width by the effective field terms in Eq. (\[eq:1\]) (notice that the effect of the current is symmetric such that no antivortex displacement along the $y$ direction arises directly due to the current, see the middle panel of Fig. \[fig:maps\]), a mechanism sensitive to the disturbances due to disorder. Again, there is no core pinning for $j_{ext} > j_W(\sigma)$, for the same reason as in the adiabatic ($\xi = 0$) case. For $j_{dep}(\sigma) < j_{ext} < j_W(\sigma)$, $v_m$ depends linearly on $j_{ext}$, and by extrapolating linear fits to the data to $j_{ext}=0$ all the lines cross at $v_m = 0$ (not shown). Thus, we estimate effective values of the damping parameter from the slopes of these linear fits [@MIN-10], as within one-dimensional models [@MOU-07] $v_m \propto (\beta/\alpha) j_{ext}$ for $j_{ext} < j_W$, with $\beta = \xi/(1+\xi^2)$. Our simulations (inset of the lower panel of Fig.
\[fig:current\]) with different $\xi$ indicate that the data can be interpreted in terms of an effective $\alpha^*$ increasing with $\sigma$ [@MIN-10]. Also an effective $M_s^* = (1-\sigma Lw/N)M_s$ emerges naturally. Thus we can explain our results with the one-dimensional model in terms of $\sigma$-dependent effective parameters: For instance, $j_W(\sigma) = 4\pi\gamma (M_s^2\Delta |N_y-N_x|)^*\alpha^*/(g\mu_BP|\beta-\alpha^*|)$, with $\Delta$ the DW width and $N_x$ and $N_y$ the demagnetizing factors, and $j_{dep,int}(\sigma) \equiv j_W(\sigma,\xi=0)$ [@MOU-07]. Using the expression for $j_W$ and the values of $\alpha^*$ to estimate $C^*\equiv(\Delta M_s^2|N_y-N_x|)^*$, the scaling of $j_{dep,int}$ with $\sigma$ can be reproduced remarkably well, see Table \[table\]. A similar analysis in the field-driven case, with $H_W = 2\pi\alpha^*(M_s|N_y-N_x|)^*$ and $v_m(H_W)=(\gamma \Delta^*/\alpha^*)H_W$ [@MOU-07], reproduces the observed scaling of both $H_W$ and $v_m(H_W)$ with $\sigma$ (Table \[table\]). Notice that in our case $v_m(H_W)$ depends on $\sigma$ through the $\sigma$-dependent effective parameters, while for systems with only edge roughness $v_m(H_W)$ is independent of the amount of edge roughness [@NAK-03]. 
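As a consistency check on the effective-parameter description, note that in the adiabatic limit ($\beta = 0$) the one-dimensional-model expression reduces to $j_{dep,int} \propto C^* = (\Delta M_s^2|N_y-N_x|)^*$, since the $\alpha^*$ factors cancel. This proportionality can be verified directly against the values quoted in Table \[table\] with a few lines of Python (a sketch, not the simulation code; the overall proportionality constant is left implicit):

```python
# Consistency check (not the authors' simulation code): with beta = 0 the
# 1D-model expression j_W = 4*pi*gamma*C* * alpha*/(g*muB*P*|beta - alpha*|)
# reduces to j_dep,int = 4*pi*gamma*C*/(g*muB*P), i.e. j_dep,int ∝ C*.
# The values below are read off Table [table] of the text.
C_star = [2.92e-10, 2.52e-10, 2.45e-10, 2.36e-10, 2.28e-10]  # C* [A^2/m]
j_pred = [14e12, 12.1e12, 11.7e12, 11.3e12, 10.9e12]         # j_dep,int^pred [A/m^2]

ratios = [j / c for j, c in zip(j_pred, C_star)]
for r in ratios:
    print(f"j_pred / C* = {r:.3e}")  # nearly constant across all sigma
```

The ratio is constant to better than one percent, confirming that the quoted $\sigma$ dependence of $j_{dep,int}$ is carried entirely by $C^*(\sigma)$.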
  $\sigma$ \[$\mu$m$^{-2}$\]   $\alpha^*$   $C^*$ \[A$^2$/m\]        $j_{dep,int}^{pred}$ \[A/m$^2$\]   $j_{dep,int}^{sim}$ \[A/m$^2$\]   $H_W^{pred}$ \[mT\]   $H_W^{sim}$ \[mT\]   $v_m^{pred}(H_W)$ \[m/s\]   $v_m^{sim}(H_W)$ \[m/s\]
  ---------------------------- ------------ ------------------------ ---------------------------------- --------------------------------- --------------------- -------------------- --------------------------- --------------------------
  0                            0.0200       2.92 $\times 10^{-10}$   14 $\times 10^{12}$                15 $\times 10^{12}$               2.75                  2.75                 457                         457
  1562.5                       0.0221       2.52 $\times 10^{-10}$   12.1 $\times 10^{12}$              13 $\times 10^{12}$               3.05                  3.0                  398                         437
  3125                         0.0238       2.45 $\times 10^{-10}$   11.7 $\times 10^{12}$              12 $\times 10^{12}$               3.25                  3.25                 389                         419
  4687.5                       0.0258       2.36 $\times 10^{-10}$   11.3 $\times 10^{12}$              11.5 $\times 10^{12}$             3.52                  3.25                 377                         403
  6250                         0.0283       2.28 $\times 10^{-10}$   10.9 $\times 10^{12}$              11 $\times 10^{12}$               3.78                  3.5                  368                         387

To summarize, we have presented a detailed analysis of the effect of disorder on the field- and current-driven transverse DW dynamics in a narrow and thin Permalloy nanostrip. We have identified two fundamentally different pinning mechanisms, acting in different regimes of the DW propagation. The observation that there is no core pinning in the current-driven case whereas it dominates the field-driven dynamics for large fields highlights the different nature of the field and current drive in a way that can be observed only in the presence of disorder. In general, we have seen that the pinning mechanisms operating will depend on the details of the DW structure, and thus we expect that the core pinning mechanism is absent for systems with high perpendicular magnetocrystalline anisotropy as there is no (anti)vortex core that could get pinned, but it could play a role in the dynamics of vortex walls occurring in wider soft strips [@MIN-10], possibly also for small applied fields. If only edge roughness is present, no core pinning should occur.
Experiments should be performed to systematically study the scale dependence of $P_{pin}$ and $v_{exp}$. Finally, we point out that the observation that disorder tends to stabilize the DW internal structure and increase the maximum DW velocity by suppressing the Walker breakdown in the current-driven case suggests that it could be desirable to deliberately engineer disorder in the system, for instance as a replacement for the notches used to pin DWs in various technological applications [@BAS-12]. [**Acknowledgments**]{}. Stefano Zapperi is thanked for numerous interesting discussions on DW dynamics and disorder, and Mikko J. Alava for useful comments on the manuscript. We thank Luc Dupré and Daniël De Zutter for supporting this research. LL has been supported by the Academy of Finland through a Postdoctoral Researcher’s Project (project no. 139132) and through the Centres of Excellence Program (project no. 251748). BVdW has been supported by the Flanders Research Foundation FWO. [10]{} S. S. P. Parkin, M. Hayashi, and L. Thomas, Science [**320**]{}, 190 (2008). M. Hayashi [*et al.*]{}, Science [**320**]{}, 290 (2008). D. A. Allwood [*et al.*]{}, Science [**309**]{}, 1688 (2005). V. Lecomte, S. E. Barnes, J.-P. Eckmann, and T. Giamarchi, Phys. Rev. B [**80**]{}, 054413 (2009). G. Durin and S. Zapperi, [*The Barkhausen effect*]{} in The Science of Hysteresis, edited by G. Bertotti and I. Mayergoyz, vol. II pp 181-267 (Academic Press, Amsterdam, 2006). Y. Nakatani, A. Thiaville, and J. Miltat, Nature Mater. [**2**]{}, 521 (2003). E. Martinez, J. Phys.: Condens. Matter [**24**]{}, 024206 (2012). H. Min [*et al.*]{}, Phys. Rev. Lett. [**104**]{}, 217201 (2010). L. Laurson, A. Mughal, G. Durin, and S. Zapperi, IEEE Trans. Magn. [**46**]{}, 262 (2010). L. Laurson, C. Serpico, G. Durin, and S. Zapperi, J. Appl. Phys. [**109**]{}, 07D345 (2011). G. Meier [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 187202 (2007). M. Kläui [*et al.*]{}, Phys. Rev. Lett. [**95**]{}, 026601 (2005). H.
Tanigawa [*et al.*]{}, Phys. Rev. Lett. [**101**]{}, 207203 (2008). X. Jiang [*et al.*]{}, Nat. Commun. [**1**]{}, 25 (2010). A. Vansteenkiste and B. Van de Wiele, J. Magn. Magn. Mater. [**323**]{}, 2585 (2011); arXiv:1102.3069. S. Zhang and Z. Li, Phys. Rev. Lett. [**93**]{}, 127204 (2004). Y. Nakatani, A. Thiaville, and J. Miltat, J. Magn. Magn. Mater. [**290-291**]{}, 750 (2005). A. Thiaville and Y. Nakatani, [*Domain-Wall Dynamics in Nanowires and Nanostrips*]{} in Spin Dynamics in Confined Magnetic Structures III, edited by B. Hillebrands and A. Thiaville, Appl. Physics [**101**]{}, 161 (Springer-Verlag Berlin Heidelberg 2006). To avoid any effect related to the initial acceleration and of the demagnetizing fields at the end of the wire, we actually calculated the average speed between a point at 0.5 $\mu m$ after the initial position and at 0.5 $\mu m$ before the end of the wire. We have checked that our main conclusions remain the same if instead of voids one considers small areas with half the saturation magnetization $M_s$, suggesting that our results are not limited to the specific kind of disorder we study here. Z. Li and S. Zhang, Phys. Rev. B [**70**]{}, 024417 (2004). In fact, studying the true thermally activated creep motion by micromagnetic simulations is very challenging, due to the long time scales needed to observe several repeated pinning-depinning events, a requirement for reliable estimation of the DW velocities in the creep regime. A. Mougin [*et al.*]{}, EPL [**78**]{}, 57007 (2007). M. A. Basith, S. McVitie, D. McGrouther, and J. N. Chapman, Appl. Phys. Lett. [**100**]{}, 232402 (2012).
--- abstract: 'The motion of doped electrons or holes in an antiferromagnetic lattice with strong on-site Coulomb interactions touches one of the most fundamental open problems in contemporary condensed matter physics. The doped charge may strongly couple to elementary spin excitations resulting in a dressed quasiparticle which is subject to confinement. This ’spin-polaron’ possesses internal degrees of freedom with a characteristic ’ladder’ excitation spectrum. Despite its fundamental importance for understanding high-temperature superconductivity, clear experimental spectroscopic signatures of these internal degrees of freedom are scarce. Here we present scanning tunneling spectroscopy results of the spin-orbit-induced Mott insulator Sr$_2$IrO$_{4}$. Our spectroscopy data reveal distinct shoulder-like features for occupied and unoccupied states beyond a measured Mott gap of $\Delta\approx620$ meV. Using the self-consistent Born approximation we assign the anomalies in the unoccupied states to the spin-polaronic ladder spectrum with excellent quantitative agreement and estimate the Coulomb repulsion $U = 2.05\ldots2.28$ eV in this material. These results confirm the strongly correlated electronic structure of this compound and underpin the previously conjectured paradigm of emergent unconventional superconductivity in doped Sr$_2$IrO$_{4}$.' author: - 'Jose M. Guevara' - Zhixiang Sun - 'Ekaterina M. Pärschke' - Steffen Sykora - Kaustuv Manna - Johannes Schoop - Andrey Maljuk - Sabine Wurmehl - Jeroen van den Brink - Bernd Büchner - Christian Hess bibliography: - 'ir214\_spin\_polaron\_14.bib' title: 'Spin-polaron ladder spectrum of the spin-orbit-induced Mott insulator Sr$_2$IrO$_{4}$ probed by scanning tunneling spectroscopy' --- The so-called spin polaron describes the motion of a single charge (hole or doublon) added to an antiferromagnetic and insulating ground state of an effective correlated background medium.
Thereby, the magnetic excitations of the antiferromagnetic (AF) background can be theoretically described by a system of bosons (magnons) which couple to the introduced charge carrier by creating virtual bosonic fluctuations. In this way the charge interacts strongly with its environment of ordered spins and forms a new quasiparticle – the spin polaron. Its excitations have been investigated by, e.g., the self-consistent Born approximation (SCBA) [@Martinez1991; @Kane1989], quantum wave function methods [@Reiter1994] for a single hole within the $t$-$J$ model, and exact diagonalization [@Hamad2008]. These studies show that the spin polaron is characterized by an environment of misaligned spins (Fig. \[Fig:spinpol1\](a-d)) forming an effective confinement potential, where the charge can occupy excited states of different orbital character [@Wrobel2008] (see Fig. \[Fig:spinpol1\](e-h)). In the one-particle spectral function these excitations manifest themselves by the occurrence of a rather flat and ladder-like structure (Fig. \[Fig:spinpol1\](i)). Due to quantum fluctuations the spin defects can relax and the quasiparticle becomes dispersive. Despite the proven fundamental importance for rationalizing many open problems in the physics of correlated electron systems, a direct measurement of the internal degrees of freedom of the spin polaron which proves its peculiar confined nature is still lacking. Here we use scanning tunneling spectroscopy (STS) to specifically probe the excited states of the spin polaron in a correlated material. Thereby the employed tunneling current serves to introduce an extra charge (hole or electron, depending on the polarity) into the antiferromagnetic background. We compare the tunneling spectra with theoretical calculations based on the self-consistent Born approximation for the spin-polaronic ladder spectrum and find excellent agreement.
![image](fig1.pdf) A hallmark of the spin polaron is its connection to the nature of unconventional superconductivity in the cuprates which is believed to emerge from a quasi two-dimensional correlated Mott-insulating antiferromagnetic parent state upon charge doping [@Martinez1991]. The spin polaron and its itinerancy straightforwardly explains the rapid destruction of the antiferromagnetic parent state of the cuprates upon hole doping. Furthermore, it is one key ingredient in many theoretical models which address superconductivity as well as competing phases such as stripe correlations in the underdoped regime of the cuprates’ phase diagram [@Chernyshev99]. In recent years, it has been increasingly noticed that quasi-2D iridium oxides exhibit correlated physics that is quite similar to that of the cuprates. In particular, Sr$_2$IrO$_4$ shares many parallels with the isostructural La$_2$CuO$_4$, a prominent Mott-insulating parent compound of the cuprate high-temperature superconductors [@damascelli2003angle]. The intricate interplay of strong spin-orbit coupling (SOC) and Coulomb repulsion causes the 5$d$ electrons to localize in a state with $J_\mathrm{eff}=1/2$ pseudospins forming the lower Hubbard band of the material with a strong AF exchange interaction [@kim2008novel; @kim2012magnetic]. AF order occurs below 240 K ($T_N$) [@cao1998weak], and quasi-2D magnon excitations have been detected [@kim2012magnetic; @Steckel2016]. In view of these similarities it is reasonable to expect spin polaron physics to be relevant in Sr$_2$IrO$_4$ [@Paerschke2017] and it has been argued that a proper doping scheme can drive the material into a high-temperature superconducting phase [@wang2011twisted]. 
Low-temperature STM/STS on stoichiometric Sr$_2$IrO$_4$ at $T<10$ K, enabling enhanced spectroscopic resolution, is challenging because samples become too insulating at cryogenic temperatures [@dai2014local; @Nichols2014; @yan2015electron; @chen2015influence; @Battisti2017; @Battisti18]. We therefore used as-grown single crystals of Sr$_2$IrO$_{4-\delta}$ for which a reduced resistivity as compared to the stoichiometric parent compound allows for high-resolution STS measurements even at very low temperature ($T<10$ K) for the first time. Note that the sample is still close to the Mott insulator regime and far away from metallicity, which occurs for high oxygen deficiency [@korneta2010electron], because the resistivity shows a semiconductor-like temperature dependence (see Supplementary Material (SM) Fig. S1). STM data obtained at the cold-cleaved (about 10 K) crystals’ surface (Fig. \[Fig: topo\](a)) yield atomically resolved SrO-terminated flat terraces with several local defects (about 2% with respect to Ir). ![\[Fig: topo\]Topography and the tunneling conductance of the clean area. (a) Topography of the sample surface measured at $T=8.8~$K, with bias voltage $U_\mathrm{bias}=1.0~$V and tunneling current $I_{T}=200~$pA. (b) Representative large scale tunneling conductance spectrum taken on a clean place at $T=8.8~$K, with the tip stabilized at $U_\mathrm{bias}=-900~$mV and $I_{T}=200~$pA. Blue arrows indicate the coarse position of fine structure peaks. ](fig2.pdf){width="\columnwidth"} Fig.  \[Fig: topo\] (b) shows a representative tunneling conductance ($dI/dU$) spectrum taken at 8.8 K at a place free of defects[@note2]. The data reveal a sizeable gap $\Delta\approx620$ meV where $dI/dU\approx0$ between about $-110$ mV and $510$ mV, which we identify as the signature of the Mott gap of the material in agreement with earlier findings for stoichiometric samples at elevated temperatures [@dai2014local; @yan2015electron].
At bias voltages below and above this gap, the tunneling conductance reveals a shoulder-like increase which can straightforwardly be associated with the density of states of the lower and the upper Hubbard band. At $|U_\mathrm{bias}|\gtrsim 1.3$ V, the $dI/dU$ increases sharply, which we attribute to further high-energy states, in qualitative agreement with optical spectroscopy [@propper2016optical], and a possible energy dependence of the tunneling matrix element. We therefore restrict the following discussion to $U_\mathrm{bias}\lesssim 1.2$ V. Remarkably, closer inspection of the $dI/dU$ reveals a distinct fine structure of peak- or shoulder-like anomalies (indicated by arrows in Fig. \[Fig: topo\] (b)), corroborating earlier studies where signatures of the first peak at positive bias have already been reported [@chen2015influence]. These interesting features indicate a direct coupling of the tunneling electrons/holes to specific excitations of the Mott state. They are ubiquitously present in all spectra taken at clean areas of the surface (see SM). Indeed, the theoretical investigation of spin polaron physics of Sr$_2$IrO$_4$, which we discuss in the next section, brings to light that this signature is directly related to the excited states of the confined spin polaron quasiparticles including their inherent ladder spectrum. ![image](fig3.pdf) The clear signatures of the AF Mott physics in the quasi-2D Sr$_2$IrO$_4$ [@kim2012magnetic; @kim2014excitonic] suggest that the underlying electron system can be modeled by a multi-orbital 2D Hubbard model with spin-orbit coupling which has an AF ground state, as is known from the usual one-band Hubbard model [@Kane1989]. This property simplifies the theoretical treatment by allowing the application of the well-known single-hole problem to describe the relevant excitations of the magnetically ordered pseudospin ground state [@Schmitt1988; @Ramsak1993].
Constructing a polaronic model to calculate $dI/dU$ spectra, we address separately the positive and negative bias regions since the strong on-site correlations render these two cases very different [@Paerschke2017]. When a negative bias voltage is applied, the electrons are removed from the sample, tunneling towards the tip. This creates an excitation in the $d^5$ configuration, which can be locally described as a $d^4$ configuration with its complicated intrinsic multiplet structure [@Chaloupka2016]. In the lowest energy subspace of the Hilbert space this charge excitation would form a singlet and a triplet state. Therefore, to describe a charge excitation on the negative bias side, we introduce a charge excitation creation operator $\textbf{h}$ with an additional degree of freedom $ |\textbf{h} \rangle \equiv \{ |J=0 \rangle, |J=1, J_z=1 \rangle, |J=1, J_z=0\rangle, |J=1, J_z=-1\rangle \}$. In contrast, applying a positive bias voltage results in adding an electron to the local Ir site with $d^5$ configuration. Hence, the charge excitation that one has to consider would resemble the filled-shell $d^6$ configuration and can be described by the polaronic excitations shown in Fig. \[Fig:spinpol1\]. The motion of the charge excitation on the positive(+) and negative(-) sides of the $dI/dU$ spectra is described by the Hamiltonian: $$\begin{aligned} \label{Ham_full_STS} {\mathcal{H}}^{+,-}={\mathcal{H}}_{\rm mag}+{\mathcal{H}}_{t}^{+,-}, \end{aligned}$$ where ${\mathcal{H}}_{\rm mag}$ is the part which includes the low energy excitations of the AF $J_{\mathrm{eff}}=1/2$ ground state. The hopping part of the Hamiltonian, ${\mathcal{H}}_{t}^{+,-}$, describes the kinetic energy of the charge coupled to the magnons, which gives rise to the polaron quasiparticle.
The low-energy effective polaron model described here has the same operator structure as the effective polaron model of the *t-J* model but has more components than the latter due to the multiplet structure of the polaron. Interestingly, this additional degree of freedom also allows for more hopping channels, including free (i.e., not coupled to magnons) hopping between first neighbors, see SM for details. The Hamiltonian and its parametrization used here also give a very good quantitative description of the measured ARPES spectra on Sr$_2$IrO$_4$ [@Paerschke2017]. Specifically, we have evaluated the one-particle Green’s function $G ({\bf k},\omega)$ within the self-consistent Born approximation. To relate the described modeling to our measurements we exploit the usual proportionality between the tunneling differential conductance $dI/dU$ and the density of states and calculate the $dI/dU$ using the relation $$\begin{aligned} \label{SW} \frac{dI}{dU}(\omega)\backsim -\frac{1}{2\pi} \sum\limits_{{\bf k}} {\rm Im}G ({\bf k},\omega),\end{aligned}$$ where the time evolution in $G ({\bf k},\omega)$ is determined by the Hamiltonian Eq. (\[Ham\_full\_STS\]). Fig. \[Fig: comp\] shows the results in direct comparison with the experimental data. In the inset of Fig. \[Fig: comp\], one clearly sees that the calculated spectra on the positive bias side possess ladder spectral features which are remarkably similar to those of the much simpler spin polaron of the *t-*$J_z$ model (see Fig. \[Fig:spinpol1\](i)). Indeed, the ladder structure on the positive bias side can be clearly identified in the experimental data by the shoulder-like anomalies at $730 \pm 100$ meV and $970 \pm 100$ meV (see SM for more information about the determination of the position of the structures). We point out that our calculations do not include free parameters. Thus, the almost perfect match between experiment and theory concerning the distance between the ladder peaks corroborates our assignment.
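The way a discrete ladder emerges from the Born self-consistency can be illustrated with a deliberately minimal toy model (not the multi-orbital calculation of the text): a single localized charge coupled with strength $g$ to a dispersionless magnon of energy $\omega_0$. The SCBA self-energy then closes into a continued fraction whose poles form the ladder; $g$, $\omega_0$, and the broadening below are illustrative choices only:

```python
import numpy as np

# Toy SCBA sketch (illustrative; the paper's calculation is multi-orbital):
# a localized charge couples with strength g to a dispersionless magnon of
# energy w0. The Born self-consistency Sigma(w) = g^2 * G(w - w0) closes
# into a continued fraction whose poles form a discrete "ladder".
def G_ladder(w, g=1.0, w0=0.5, depth=200, eta=0.02):
    sigma = 0.0
    for n in range(depth, 0, -1):        # build the continued fraction inward
        sigma = g**2 / (w + 1j * eta - n * w0 - sigma)
    return 1.0 / (w + 1j * eta - sigma)  # G(w) = 1/(w - Sigma(w))

w = np.linspace(-6.0, 6.0, 6000)
A = -G_ladder(w).imag / np.pi            # spectral function, A(w) >= 0
```

Plotting $A(\omega)$ shows a sequence of sharp resonances above the lowest quasiparticle peak, qualitatively mirroring the flat ladder structure of Fig. \[Fig:spinpol1\](i); the peak positions are controlled by $g$ and $\omega_0$.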
Note further that the experimental tunneling spectra are subject to significant broadening, presumably by electron-phonon scattering processes and higher-order tunneling processes, which accounts for the differences between the theoretical and the experimental results. Not surprisingly, however, such a ladder spectrum is not present on the negative bias side – the polaron motion at negative bias is additionally complicated by the internal degrees of freedom of the charge excitation, which not only creates additional interaction channels but also provides a possibility for a nearest-neighbor free hopping of the polaronic quasiparticle. Therefore, the polaron quasiparticle becomes more dispersive and a considerable amount of spectral weight is transferred to the incoherent part of the one-particle spectrum. Altogether these two effects lead, in the momentum summation in Eq. (\[SW\]), to a more complex $dI/dU$ on the negative bias side (black line in Fig. \[Fig: comp\]). Nevertheless, one can study the different contributions to the total Green’s function which are carried by spin polarons of the two different values of the total angular momentum, $J=0$ and $J=1$. The calculated contributions are shown in Fig. \[Fig: comp\] in blue and red, correspondingly. Apparently, unlike on the positive bias side, the most salient spectral features correspond separately to singlet and triplet polarons. More specifically, the lowest energy sharp peak is of singlet character whereas the peak at higher energies arises from the triplet polaron. Accordingly, in the experimental data we assign the shoulder at about $-300$ meV a primary singlet and the peak at about $-600$ meV a primary triplet character. After having assigned these most salient aspects of the tunneling spectrum of Sr$_2$IrO$_4$ to essential spin polaron physics, we finally mention that a careful analysis of the experimental and theoretical spectra shown in Figs.
\[Fig: topo\] (b) and \[Fig: comp\] also allows one to extract the value of the Coulomb repulsion $U$ [@note1]. It is connected to the Mott gap value $\Delta^{\mathrm{Mott}}$ as $$\begin{aligned} \label{CoulombMott} \Delta^{\mathrm{Mott}} = U - E^{\mathrm{pol}}_{\mathrm{hole}} - E^{\mathrm{pol}}_{\mathrm{electron}},\end{aligned}$$ where $E^{\mathrm{pol}}_{\mathrm{hole}}$ ($E^{\mathrm{pol}}_{\mathrm{electron}}$) is the binding energy of the polaron formed when a hole (electron) is added to the ground state of the system. We estimate the polaron binding energies by performing SCBA calculations with the hopping part of the Hamiltonian Eq. (\[Ham\_full\_STS\]) set to zero, separately for the positive and negative bias cases. In this way the polaron is artificially fully localized and its spectral function is simply a delta function. The binding energies are then given by the relative shift between these delta-function peaks and the quasiparticle peaks of the full calculation (Fig. \[Fig: comp\]). From this consideration the particular energy values are estimated to be $$\begin{aligned} \label{bindingEn} E^{\mathrm{pol}}_{\mathrm{hole}} = 0.57\,\mathrm{eV}, \; \; \; E^{\mathrm{pol}}_{\mathrm{electron}} = 0.81\,\mathrm{eV}.\end{aligned}$$ The Coulomb repulsion $U$ then takes a value between $2.05\,\mathrm{eV}$ and $2.18\,\mathrm{eV}$, since the Mott gap $\Delta^{\mathrm{Mott}}\approx\Delta$ holds only up to corrections of the order of the lowest quasiparticle peak bandwidth (on both the positive and negative bias sides). Our findings experimentally and theoretically confirm the important role of the spin-polaronic quasiparticle for the physics of Sr$_2$IrO$_4$. More specifically, our data for positive bias voltage, which correspond to electron doping of the AF, reveal clear-cut signatures of the prototypical spin polaron ladder spectrum, i.e., the fingerprints of the internal degrees of freedom of the electron being confined within the AF background.
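The arithmetic behind this estimate is a one-line rearrangement of Eq. (\[CoulombMott\]): taking the measured gap $\Delta^{\mathrm{Mott}} \approx \Delta = 0.62$ eV at face value gives a bare lower estimate, which the quasiparticle-bandwidth correction then shifts into the quoted range (values from the text):

```python
# Rearranging Eq. (CoulombMott): U = Delta_Mott + E_pol_hole + E_pol_electron.
# Taking Delta_Mott ~ Delta = 0.62 eV literally gives the bare estimate; the
# quoted range 2.05-2.18 eV follows once the lowest quasiparticle peak
# bandwidth is accounted for in the gap.
delta = 0.62                       # measured Mott gap [eV]
e_hole, e_electron = 0.57, 0.81    # polaron binding energies [eV]
U_bare = delta + e_hole + e_electron
print(f"bare estimate U = {U_bare:.2f} eV")  # 2.00 eV, just below the quoted range
```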
Thus, in the very low electron doping regime our study reveals yet another similarity between Sr$_2$IrO$_4$ and La$_2$CuO$_4$ concerning electronic correlations and their inherent consequences, in particular, a possible proximity to superconductivity. More specifically, one may expect that at higher doping levels the electron-doped regime of Sr$_2$IrO$_4$ exhibits phenomena similar to those of the hole-doped cuprates [@wang2011twisted]. Indeed, the reported signatures of a $d$-wave gap [@kim2016observation; @yan2015electron] and of stripe-like correlations [@Battisti2017] indicate a phenomenology that can be traced back to the spin polaron physics [@Chernyshev99; @kim2014excitonic]. Thus, the current experimental and theoretical efforts [@wang2011twisted; @kim2012magnetic; @de2015collapse; @yan2015electron; @kim2016observation; @Battisti2017] to find a route to unconventional superconductivity in this material are strongly supported by our study. The situation is, however, less clear for the hole-doped regime where the physics is more complicated. Since here the spectral features are dominated by both singlet and triplet polarons, with the singlet states being of lower energy with respect to the Fermi level, the analogy to the cuprates is not present. Nevertheless, if chemically achievable, an intricate and fascinating doping evolution governed by the interplay of the singlet and triplet polarons may be expected in the hole-doped regime, too. Finally, we point out that the ubiquity of spin polaron physics in all families of Mott insulators makes a pertinent investigation of the high-energy spectra of other iridate [@lupke15] and cuprate systems [@ye2013] particularly interesting. We thank K. Wohlfeld, C. Renner, B.J. Kim and M. Allan for helpful discussions and D. Baumann and U. Nitzsche for technical assistance.
The project is supported by the Deutsche Forschungsgemeinschaft through SFB 1143 (projects C05, C07, A03, and B01) and by the Emmy Noether programme (S.W. project WU595/3-3). Furthermore, this project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 647276 – MARS – ERC-2014-CoG).

Supplementary Materials
=======================

Sample preparation
------------------

SrCO$_{3}$ and IrO$_{2}$ powders (with 4N purity) were taken in stoichiometric ratio and mixed with SrCl$_{2}$ flux with a 1:5 sample-to-flux weight ratio. The mixture was heated to 1210 $^\circ$C for 12 h and then slowly cooled to 1000 $^\circ$C with a cooling rate of 4 $^\circ$C/h, followed by a rapid cool-down to room temperature at 150 $^\circ$C/h. Crystals of Sr$_2$IrO$_{4-\delta}$ with diameter up to 5 mm were filtered from the residue after dissolving the flux in hot water. The crystals were grown in a Pt crucible (50 ccm) with a lid to reduce flux evaporation. The crystals were characterized regarding microstructure (scanning electron microscopy), composition (energy dispersive x-ray analysis), crystallographic structure (single crystal diffraction), magnetic properties (magnetometry), and resistivity.

Resistivity data of the Sr$_2$IrO$_{4}$ samples
-----------------------------------------------

The in-plane resistance of the as-grown Sr$_2$IrO$_{4-\delta}$ single crystals was measured using a standard 4-probe technique (5 K - 300 K). Here, $\delta$ accounts for a possible oxygen deficiency. In the as-grown single crystals, $\delta$ is not controlled, which is known to lead to a variation of the resistivity [@korneta2010electron]. Fig. \[Fig:S1\] shows representative resistivity data for samples with very different temperature dependencies of the resistivity, where the more insulating characteristics (labelled ’Insulating Sample’ in Fig.
\[Fig:S1\]) can be attributed to an almost stoichiometric oxygen content ($\delta\approx0$) of the corresponding sample [@de2015collapse]. On the other hand, the only weakly insulating character of the other sample implies a small amount of oxygen vacancies [@korneta2010electron]. In order to be able to perform high-resolution tunneling spectroscopy at low temperature, we took advantage of the reduced resistivity of this sample (labelled ’STM Sample’ in Fig. \[Fig:S1\]) and used it for the tunneling experiments in our study. Note that the observed impurity amount of about 2% with respect to Ir in the topographic data shown in Fig. 2(a) is consistent with an oxygen deficiency $\delta$ of the same order [@korneta2010electron].

Hamiltonian of the motion of the charge excitation
--------------------------------------------------

The motion of the charge excitation on the positive(+) and negative(-) sides of the $dI/dU$ spectra is described by the Hamiltonian: $$\begin{aligned} \label{Ham_full_STS_methods} {\mathcal{H}}^{+,-}={\mathcal{H}}_{\rm mag}+{\mathcal{H}}_{t}^{+,-}, \end{aligned}$$ where ${\mathcal{H}}_{\rm mag}$ describes the low-energy excitations of the AF $J_{eff}=1/2$ ground state. It is given by $$\begin{aligned} \label{HamHeisenberg} {\mathcal{H}}_{\rm mag} = \sum\limits_{{\bf k}} \omega_{\bf k} (\alpha^\dag_{\bf k} \alpha_{\bf k} + \beta^\dag_{\bf k} \beta_{\bf k}),\end{aligned}$$ where $\omega_{\bf k}$ is the dispersion of the (iso)magnons represented by the quasiparticle states $| \alpha_{\bf k} \rangle$ and $ | \beta_{\bf k} \rangle $. The hopping part of the Hamiltonian, ${\mathcal{H}}_{t}^{+,-}$, describes the transfer of the charge excitation in the bulk coupled to the magnons, which we will also refer to as the polaron quasiparticle.
It is given by: $$\begin{aligned}
\label{Hamdparts1}
{\mathcal{H}}^{+}_{t} = \sum\limits_{{\bf k}} V^{0}_{{\bf k}} \left( d^{\dagger}_{{\bf k} A} d_{{\bf k} A} + d^{\dagger}_{{\bf k} B} d_{{\bf k} B} \right) + \nonumber\\
\sum\limits_{{\bf k},{\bf q}} V_{{\bf k},{\bf q}} \left( d^{\dagger}_{{\bf k-q} B} d_{{\bf k} A}\, \alpha_{\bf q}^{\dagger} + d^{\dagger}_{{\bf k-q} A} d_{{\bf k} B}\, \beta_{\bf q}^{\dagger} + {\rm h.c.} \right),\end{aligned}$$ $$\begin{aligned} \label{Hamdparts2} {\mathcal{H}}^{-}_{t}\!= \! \sum\limits_{{\bf k}} \left( {\bf h}_{{\bf k} A}^{\dagger}\hat{V}^{0}_{{\bf k}} {\bf h}_{{\bf k} A}\! +\!{\bf h}_{{\bf k} B}^{\dagger}\hat{V}^{0}_{\bf k} {\bf h}_{{\bf k} B} \right)\! +\! \nonumber\\ \sum\limits_{{\bf k}, {\bf q}} \left( {\bf h}_{{\bf k-q} B}^{\dagger} \hat{V}^{\alpha}_{{\bf k},{\bf q}} {\bf h}_{{\bf k} B} \alpha_{\bf q}^{\dagger} \!+\! {\bf h}_{{\bf k-q} A}^{\dagger} \hat{V}^{\beta}_{{\bf k},{\bf q}} {\bf h}_{{\bf k} B} \beta_{\bf q}^{\dagger} \!+\! h.c. \right)\!,\end{aligned}$$ where $A, B$ are the two AF sublattices. The dispersions $\propto V^{0}_{{\bf k}}$ ($\propto \hat{V}^{0}_{{\bf k}}$) describe the nearest-, next-nearest-, and third-neighbor free hopping. The terms $\propto V_{{\bf k},{\bf q}}$ ($\propto \hat{V}^{\alpha}_{{\bf k},{\bf q}}$ and $\propto \hat{V}^{\beta}_{{\bf k},{\bf q}}$) are vertices describing the polaronic hopping of the charge excitation on the positive (negative) side and are given explicitly in Ref. [@Paerschke2017]. All the vertices were obtained analytically in the limit of strong on-site Coulomb repulsion and depend on the five hopping parameters of the minimal tight-binding model obtained as the best fit of the latter to the LDA calculations. The model used here is based on the polaronic model we used to calculate ARPES spectra on Sr$_2$IrO$_4$, see Ref. [@Paerschke2017] for details.
Ladder spectrum in the *t-J*$_{z}$ model
----------------------------------------

Assuming that the ground state at half-filling can be described by a classical Néel state with spin excitations, one can describe the motion of the hole in the AF background by an effective Hamiltonian which naturally follows from an anisotropic *t-J* model [@Kane1989] by assuming a finite ratio of the exchange parameters, $\alpha = J_\perp / J_z$: $$\begin{aligned}
\label{anisotropictJmodel}
{\mathcal{H}} = \sum\limits_{\bf q} \omega_{\bf q}\, a_{\bf q}^{\dagger} a_{\bf q} + \frac{zt}{\sqrt{N}} \sum\limits_{{\bf k},{\bf q}} \left[ a_{\bf q}^{\dagger}\, h_{\bf k}^{\dagger} h_{{\bf k} + {\bf q}} \left( u_{\bf q}\,\gamma_{{\bf k}-{\bf q}} + v_{\bf q}\,\gamma_{\bf k} \right) + {\rm h.c.} \right],\end{aligned}$$ where a charge excitation is represented by a spinless fermion with creation operator $h_{\textbf{k}}^{\dagger}$, spin excitations are represented by the boson operators $a_{\textbf{q}}^{\dagger}$, and $N$ is the number of lattice sites. The spin-wave dispersion is $\omega_{\textbf{q}} = z s J\left(1-\delta\right)^{2} \nu_{\textbf{q}}$, where $s=1/2$ is the spin and $z$ is the coordination number of the underlying square lattice. The Bogoliubov factors are given by $$\begin{aligned}
\nu_{\bf q} &= \sqrt{1-\alpha^{2}\gamma_{\bf q}^{2}},\\
u_{\bf q} &= \sqrt{\frac{1+\nu_{\bf q}}{2\nu_{\bf q}}},\\
v_{\bf q} &= -\mathrm{sgn}(\gamma_{\bf q})\sqrt{\frac{1-\nu_{\bf q}}{2\nu_{\bf q}}}.\end{aligned}$$ The coupling of the hole to magnons is described by $\gamma_{\textbf{q}} =\frac{1}{z}\sum\limits_{\vec{\tau}}{\cos \textbf{q} \cdot \vec{\tau}}$. Substituting $\alpha = 0$ into Eq. (\[anisotropictJmodel\]) we obtain the polaron representation of the *t-J*$_{z}$ model, $$\begin{aligned}
\label{tJzmodel}
{\mathcal{H}} = \omega \sum\limits_{\bf q} a_{\bf q}^{\dagger} a_{\bf q} + \frac{zt}{\sqrt{N}} \sum\limits_{{\bf k},{\bf q}} \gamma_{\bf k} \left( a_{\bf q}^{\dagger}\, h_{\bf k}^{\dagger} h_{{\bf k} + {\bf q}} + {\rm h.c.} \right),\end{aligned}$$ where the coefficients become $\textbf{q}$-independent: $\omega = szJ_{z}$ and $\gamma_{\textbf{k}} = \frac{1}{2}(\cos k_{x} + \cos k_{y})$.
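The confinement behind the *t-J*$_z$ ladder can be illustrated with a minimal string-picture calculation: a hole hopping $n$ sites away from its origin leaves behind a string of $n$ misaligned bonds costing a roughly linear Ising energy $\sim nV_0$, so its internal excitations are those of a tight-binding chain in a linear potential. The sketch below is illustrative only; the string tension $V_0$ and the chain length are arbitrary choices, not parameters of the paper:

```python
import numpy as np

# String-picture sketch of the t-Jz ladder (illustrative; V0 and N are
# arbitrary): a hole n steps from its origin costs ~ n*V0 in Ising exchange
# energy, so its internal states are those of a hopping particle in a linear
# potential. The eigenvalues form a discrete ladder with Airy-like spacing
# ~ t*(V0/t)^(2/3), the well-known Jz^(2/3) scaling of the string spectrum.
def ladder_levels(V0, t=1.0, N=400):
    H = (np.diag(V0 * np.arange(N))            # linear string potential
         - t * np.diag(np.ones(N - 1), 1)      # hopping along the string
         - t * np.diag(np.ones(N - 1), -1))
    return np.linalg.eigvalsh(H)

E = ladder_levels(V0=0.4)
print("lowest ladder levels:", np.round(E[:4], 3))
```

Reducing $V_0$ compresses the level spacing by the factor $(V_0'/V_0)^{2/3}$, mirroring how the ladder steps in Fig. \[Fig:spinpol1\](i) are set by the Ising exchange.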
To show that indeed different excitations in the ladder spectrum of the *t-J*$_{z}$ model can be directly observed in the tunneling spectroscopy experiment, we map the polaronic Hamiltonian (\[anisotropictJmodel\]) onto an effective system of free fermions and bosons, $$\begin{aligned} \label{freeel} \tilde{\mathcal{H}} = \sum\limits_{\bf q} \tilde{\omega}_{\bf q}\, \tilde{a}_{\bf q}^{\dagger} \tilde{a}_{\bf q} + \sum\limits_{\bf k} \tilde{\varepsilon}_{\bf k}\, f_{\bf k}^{\dagger} f^{\phantom{\dagger}}_{\bf k},\end{aligned}$$ where the new effective Hamiltonian $\tilde{\mathcal{H}}$ is related to the original Hamiltonian via a unitary transformation, $$\begin{aligned} \label{UT} \tilde{\mathcal{H}} = e^{X} \mathcal{H} e^{-X}.\end{aligned}$$ Such a method makes it possible to study the polaron excitations as projected on the effective free particle, which can directly couple to the tunneling electrons in an STS experiment. The unitary transformation in general renormalizes the spin excitations of the background (first term of Eq. (\[freeel\])) and generates the second term of Eq. (\[freeel\]). Since Eq. (\[freeel\]) has a quadratic diagonal form and the transformation is unitary, the energy quantities $\tilde{\varepsilon}_{\textbf{k}}$ and $\tilde{\omega}_{\textbf{q}}$ can be seen as eigenenergies of the original model. The transformation can be constructed and carried out numerically by using the projective renormalization method (PRM) (see Ref. [@Cho2016]). Within this method the polaronic term of the Hamiltonian is integrated out in steps (1500 in the actual calculation), leading to the renormalization of the fermion and boson energy parameters.
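The statement that $\tilde{\varepsilon}_{\bf k}$ and $\tilde{\omega}_{\bf q}$ are eigenenergies of the original model can be illustrated in a deliberately simplified two-state toy (an assumption for illustration only, not the PRM itself): a bare hole state coupled by a polaronic vertex $g$ to a hole-plus-magnon state, diagonalized exactly.

```python
import numpy as np

# Toy illustration (NOT the actual PRM): a bare hole |h_k> at energy eps
# couples through a polaronic vertex g to the state |h_{k-q} a_q> at
# energy eps + omega.  Diagonalizing this 2x2 block yields the
# renormalized eigenenergies carried by the effective quadratic model.
def renormalized_energies(eps, omega, g):
    h = np.array([[eps, g],
                  [g, eps + omega]])
    return np.linalg.eigvalsh(h)   # eigenvalues in ascending order

lo, hi = renormalized_energies(eps=0.0, omega=1.0, g=0.2)
```

The PRM performs this elimination of the vertex stepwise over the full many-body Hilbert space, but the net effect per step is the same: off-diagonal coupling is traded for shifted diagonal energies.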
Using the unitary transformation (\[UT\]), the one-particle spectral function can be calculated immediately, $$\begin{aligned} \label{PRM} A_{\bf k}(E) = |\tilde{a}^{0}_{\bf k}|^{2}\, \delta(E - \tilde{\varepsilon}_{\bf k}) + \sum\limits_{\bf q} |\tilde{a}^{1}_{{\bf k},{\bf q}}|^{2}\, \delta(E - \tilde{\varepsilon}_{{\bf k}-{\bf q}} - \tilde{\omega}_{\bf q}) + \dots,\end{aligned}$$ where $\tilde{a}^{0}_{\textbf{k}}$ and $\tilde{a}^{1}_{\textbf{k},\textbf{q}}$ are calculated in the renormalization process described above and represent the spectral weight of the particular polaron excitation. The internal excitations of the polaron can also be visualized by its wave function. Following [@Reiter1994], we write it in the form $$\begin{aligned} \label{Reiter} |\Psi_{\bf k}\rangle = a^{0}({\bf k})\, f_{\bf k}^{\dagger}\, |0\rangle + \sum\limits_{\bf q} a^{1}({\bf k},{\bf q})\, f_{{\bf k}-{\bf q}}^{\dagger}\, \tilde{a}_{\bf q}^{\dagger}\, |0\rangle + \dots.\end{aligned}$$ Here, $|0 \rangle$ is the product of the hole vacuum and the spin-wave vacuum, and $a^{0}(\textbf{k})$, $a^{1}(\textbf{k},\textbf{q})$ are the so-called Reiter coefficients. We see that the wave function of the doped hole can in principle be approximated by a superposition of the wave function of a free hole and $m$ wave functions of the hole dressed with $m$ magnons. Fig. \[fig:s3pol\] shows the calculated effective dispersion of the spin polaron for different values of the ratio $\alpha = J_{\perp} / J_z$. For small values of $\alpha$, i.e. close to the Ising limit, one clearly sees that the energy of the polaron increases as a function of its momentum $\textbf{k}$ in stair-step fashion: as soon as the momentum $\textbf{k}$ is sufficiently large to produce a magnon, the polaron is raised to the next excited level. The ground state of the polaron is characterized by momentum states around $\textbf{k} = 0$, where only a finite range of momentum values is occupied. Since the values of $\alpha$ examined in Fig. \[fig:s3pol\] are small ($\alpha<0.25$), the magnon dispersion $\tilde{\omega}_{\textbf{q}}$ is almost momentum-independent and approximately equal to the magnon dispersion in the *t-J*$_{z}$ model (\[tJzmodel\]).
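A spectral function of this delta-peak form is easy to evaluate once the weights are known; the sketch below broadens each delta into a Lorentzian for plotting. The weights and energies used here are illustrative placeholders, not PRM output.

```python
import numpy as np

# Sketch: evaluate a delta-peak spectral function of the form of
# Eq. (PRM), with each delta broadened into a Lorentzian of width eta.
# The (weight, energy) pairs below are illustrative placeholders.
def spectral_function(E, peaks, eta=0.02):
    """peaks: iterable of (weight, energy) pairs."""
    E = np.asarray(E, dtype=float)
    A = np.zeros_like(E)
    for w, e in peaks:
        A += w * (eta / np.pi) / ((E - e) ** 2 + eta ** 2)
    return A

E = np.linspace(-1.0, 2.0, 2001)
# coherent peak (weight 0.6) plus one hole+magnon satellite (weight 0.4)
A = spectral_function(E, peaks=[(0.6, 0.0), (0.4, 0.8)])
```

The total integrated weight equals the sum of the peak weights (up to the small Lorentzian tails cut off by the energy window), mirroring the sum rule implicit in Eq. (\[PRM\]).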
For smaller values of $\alpha=0.028$ (shown in Fig. \[fig:s3pol\] with red circles), the polaron dispersion within each rung is quite flat, whereas for larger values of $\alpha=0.25$ (green circles), the polaron becomes more dispersive. Overall, the polaron becomes less localized as the ratio $t/\omega$ decreases. The possibility to map the spin-polaron model onto an effective model of free charge carriers (dressed with a characteristic ladder-like quasiparticle dispersion) indicates that it must indeed be possible to detect internal excitations of spin polarons in an STM experiment. To get a better understanding of the nature of the polaron states shown in Fig. \[fig:s3pol\], we calculate the first two Reiter coefficients $a^{0}(\textbf{k})$ and $a^{1}(\textbf{k},\textbf{q})$ from Eq. (\[Reiter\]) using perturbation theory with respect to the parameter $t/\omega$, assuming $\omega \gg t$ (strong-coupling limit): $$\begin{aligned} a^{0}(\textbf{k}) &= 1 - \frac{1}{2}\sum\limits_{\bf q} \left|a^{1}(\textbf{k},\textbf{q})\right|^{2},\\ a^{1}(\textbf{k},\textbf{q}) &= -\frac{zt\,\gamma_{{\bf k}-{\bf q}}}{\sqrt{N}\,\omega}\,.\end{aligned}$$ In this approximation, the spectral function of the hole has the form (similar to Eq. (\[PRM\])) $$\begin{aligned} A_{\bf k}(E) = \left[a^{0}(\textbf{k})\right]^{2} \delta(E) + \sum\limits_{\bf q} \left[a^{1}(\textbf{k},\textbf{q})\right]^{2} \delta(E - \omega).\end{aligned}$$ This equation includes two different types of internal excitations. As one can see from the momentum dependence of the Reiter coefficients, the lowest excitation $a^{0}(\textbf{k})$ has $s$-wave character and represents a rather localized state of the hole. The second excitation $a^{1}(\textbf{k},\textbf{q})$ is spatially more extended due to its proportionality to $\cos$-functions, and the sign of the coefficient changes as a function of momentum, which means that it is orthogonal to the first term.
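The perturbative coefficients can be checked on a discretized Brillouin zone; a minimal sketch, assuming the first-order forms $a^{1}({\bf k},{\bf q}) = -zt\,\gamma_{{\bf k}-{\bf q}}/(\sqrt{N}\,\omega)$ and $a^{0}({\bf k}) = 1 - \frac{1}{2}\sum_{\bf q}|a^{1}|^{2}$, with illustrative values $z=4$, $t=0.3$, $\omega=3$.

```python
import numpy as np

# Check of the strong-coupling Reiter coefficients on an L x L momentum
# grid.  z, t, omega are illustrative values with omega >> t, not the
# parameters of the material-specific calculation.
z, t, omega = 4, 0.3, 3.0
L = 16
N = L * L
qs = 2.0 * np.pi * np.arange(L) / L
QX, QY = np.meshgrid(qs, qs)

def gamma(kx, ky):
    return 0.5 * (np.cos(kx) + np.cos(ky))

def a1(kx, ky):
    # first-order hole + one-magnon amplitude, sign follows the vertex
    return -z * t * gamma(kx - QX, ky - QY) / (np.sqrt(N) * omega)

def a0(kx, ky):
    # first-order normalization of the free-hole component
    return 1.0 - 0.5 * np.sum(a1(kx, ky) ** 2)
```

The exact $\textbf{k}$-independence of $a^{0}$ on the grid reflects its $s$-wave character, while $a^{1}$ changes sign with momentum through $\gamma_{{\bf k}-{\bf q}}$.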
Relevance of the *t-J*$_z$ ladder physics to Sr$_2$IrO$_4$ ------------------------------------------------------- To show the relevance of the above-discussed theory to the case of Sr$_2$IrO$_4$, we have calculated the scaling of the energy spacing between the first and second excited states on the positive side of the tunneling conductance as a function of the $J/t$ ratio for the material-specific model ${\mathcal{H}}^{+}$. It is known [@Bulaevskii1968] that for the *t-J*$_z$ model this energy spacing scales as $t(J_z/t)^{2/3}$; see Fig. \[fig:tJ\_tJz\](a). As one can see in Fig. \[fig:tJ\_tJz\](b), the energy gap calculated for this model follows the same law in the region of parameters relevant to the real material (shown in light gray). Spin-polaron ladder spectrum signatures in STS ---------------------------------------------- In order to demonstrate the representative character of the $dI/dU$ spectrum shown in Fig. 2 (b), different spectra are shown in this section. These spectra were taken at different points in different topographies. For guidance, the spin-polaron features present in the data are marked with arrows as in Fig. 2 (b). In Fig. \[fig:si1\], three spectra taken at the atomically resolved surface are shown. The spectra were taken at different locations of the surface: a darker area (yellow curve), a clean area (green curve), and on top of a frequent defect (red curve). The features of the spin-polaron spectrum can be distinguished in the clean and dark areas. In both cases the characteristic ladder-like signature is present on the positive-bias side. On top of the defect, spectral intensity is found inside the gap, in agreement with previous high-temperature findings [@yan2015electron]. In Fig. \[fig:si2\] a different topography location is shown. In this case the topography presents a terrace between steps in the cleaved surface of Sr$_{2}$IrO$_{4}$.
Even without the best experimental conditions, the $dI/dU$ spectra measured with large bias values ($-1.5$ V to $1.5$ V) show that the features of the spin polaron are still present. In Fig. \[fig:si3\] an atomically resolved surface (measured with some experimental noise) is plotted for the same location as the $dI/dU$ spectrum shown in Fig. 2 (b). We show three additional spectra referring to the darker area (yellow curve) and the clean area (green and red curves). The features of the ladder spectra can be clearly distinguished in the clean area. In general it can be noted that the signatures of the confined spin polaron are more distinguishable at positive bias. On the negative side, the dispersive character and internal degrees of freedom of the charge excitation make it harder to recognize them. In Fig. \[fig:si5\], spectroscopic-imaging STM (SI-STM) data are shown in (a) at 300 meV (inside the gap), where the defects can be atomically resolved. In (b) we plot the local density of states along the yellow line over a clean area in (a). In all spectra the first spin-polaron peak as well as the beginning of the second peak are clearly recognizable. This proves the universality of the spin polaron in the clean areas of the sample. Determination of the peak positions ------------------------------------ In order to determine the position of the peaks corresponding to the spin polaron, we use a fitting function $F(x) = F_B(x) + \sum_i F_i(x)$ for the measured spectrum, which consists of a bosonic background $$\begin{aligned} \label{fit_fucn} F_{B}(x)= \frac{a}{(e^{b/x}-1)} \end{aligned}$$ and Gauss functions for the intensity of the peaks, $$\begin{aligned} \label{gaussian} F_i(x) = A_{i} \exp \left (\frac{-(x-\bar{x}_{i})^{2}}{2 \sigma_{i}^{2}}\right),\end{aligned}$$ where $A_{i}$, $\bar{x}_{i}$, and $\sigma_{i}$ are independent parameters for each peak. The position of the spin-polaron peaks is given by $\bar{x}_{i}$. The resulting plots can be seen in Fig.
\[fig:si6\]. The function (\[fit\_fucn\]) is a phenomenological description of a bosonic background which is introduced to simulate the dissipation of the energy which is transferred to the system through the tunneling current. Thus, under the assumption that the background is fully described by a system of bosons, the background contribution to the tunneling response takes the form of a boson distribution function, where the temperature plays the role of the dissipated energy, which is set proportional to the bias voltage. In addition to this background, the intrinsic excitations of the spin polaron are fitted by Gauss functions, from which the corresponding positions, amplitudes, and widths are extracted. For the spectrum in Fig. \[fig:si6\](a) we find three Gaussian peaks, with the fitting parameters given by the following table:\
Background: $a = 277.594$ (a.u.), $b = 6.416$ (meV)\
Peak 1: Mean $= 0.730$ (meV), $\sigma = 0.100$ (meV), Amplitude $= 0.500$ (a.u.)\
Peak 2: Mean $= 0.966$ (meV), $\sigma = 0.100$ (meV), Amplitude $= 0.500$ (a.u.)\
Peak 3: Mean $= 1.403$ (meV), $\sigma = 0.031$ (meV), Amplitude $= 0.828$ (a.u.)\
The spectra (b), (c) and (d) in Fig. \[fig:si6\] were taken under the same tunneling conditions. They do not present the higher-energy peak present in spectrum (a) at 1.4 eV. Since this peak is absent, we use two Gaussian peaks for the spin-polaron ground state and first excitation in the fitting procedure.
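A minimal sketch of evaluating this fit model $F(x)=F_B(x)+\sum_i F_i(x)$ with the spectrum-(a) parameters quoted above; this only evaluates the model, it does not redo the least-squares fit.

```python
import numpy as np

# Sketch: evaluate the fit model of Eqs. (fit_fucn) and (gaussian) with
# the spectrum-(a) parameters quoted in the table above.
def background(x, a, b):
    return a / (np.exp(b / x) - 1.0)

def gauss(x, A, mean, sigma):
    return A * np.exp(-(x - mean) ** 2 / (2.0 * sigma ** 2))

def model(x, bg, peaks):
    total = background(x, *bg)
    for p in peaks:
        total += gauss(x, *p)
    return total

peaks_a = [(0.500, 0.730, 0.100),   # (amplitude, mean, sigma) per peak
           (0.500, 0.966, 0.100),
           (0.828, 1.403, 0.031)]
y = model(np.linspace(0.2, 1.6, 8), (277.594, 6.416), peaks_a)
```

In an actual fit all parameters would be optimized simultaneously, e.g. with a standard nonlinear least-squares routine, using values like these as the starting guess.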
Fitting parameters:\
Spectrum (b) — Background: $a = 100.0$ (a.u.), $b = 4.502$ (meV); Peak 1: Mean $= 0.740$ (meV), $\sigma = 0.118$ (meV), Amplitude $= 0.910$ (a.u.); Peak 2: Mean $= 0.965$ (meV), $\sigma = 0.120$ (meV), Amplitude $= 2.000$ (a.u.)\
Spectrum (c) — Background: $a = 35.058$ (a.u.), $b = 3.4$ (meV); Peak 1: Mean $= 0.698$ (meV), $\sigma = 0.113$ (meV), Amplitude $= 1.373$ (a.u.); Peak 2: Mean $= 0.975$ (meV), $\sigma = 0.150$ (meV), Amplitude $= 1.780$ (a.u.)\
Spectrum (d) — Background: $a = 100.0$ (a.u.), $b = 4.675$ (meV); Peak 1: Mean $= 0.735$ (meV), $\sigma = 0.113$ (meV), Amplitude $= 1.502$ (a.u.); Peak 2: Mean $= 0.992$ (meV), $\sigma = 0.149$ (meV), Amplitude $= 1.786$ (a.u.)\
The background parameter $a$ is the amplitude of the bosonic background and $b$ plays the role of the bosonic energy in the system. The similar values of $b$ for the different spectra indicate the uniform applicability of the background fitting function. Further consistency of our fitting method is shown by the very similar positions of the same peak in different spectra. The slight variation of the peak width within the same spectrum could be caused by many-body phenomena. ![\[Fig:S1\] Normalized in-plane resistivity of selected Sr$_2$IrO$_{4-\delta}$ samples. The strong reduction of the low-temperature upturn of the resistivity of the sample labelled ’STM Sample’ evidences a significant amount of oxygen vacancies.](figS1.pdf){width="\columnwidth"} ![\[fig:tJ\_tJz\] (a,b) Comparison of the energy spacing between the first and second excitation states of the polaron as a function of the $J/t$ ratio: (a) *t-J*$_z$ model, (b) material-specific *t-J* model defined by ${\cal H}^+$ in Eq. (\[Hamdparts1\]). In light gray the region of $J/t$ values relevant for Sr$_2$IrO$_4$ is shown: $J=0.06$ eV, first-neighbor hoppings take values from $0.224$ eV to $0.373$ eV depending on the orbital character.](figS2.pdf){width="\columnwidth"} ![\[fig:s3pol\] Polaronic quasiparticle dispersion for the effective Hamiltonian of the anisotropic *t-J* model given by Eq. (\[anisotropictJmodel\])
calculated for three different values of the ratio $\alpha = J_\perp / J_z$. The value of $t$ is fixed throughout the calculation. The spectrum becomes more ladder-like as $\alpha$ approaches the Ising limit $\alpha = 0$. For the lowest value of $\alpha$ (red circles) the value of $t/\omega$ lies in the parameter region relevant for Sr$_2$IrO$_4$ (as indicated in Fig. \[fig:tJ\_tJz\](b)) and the energy spacing between the first and second excitation is of the order of $J$.](figS3.pdf){width="\columnwidth"} ![image](si_1.pdf){width="\linewidth"} ![image](si_2.pdf){width="\linewidth"} ![image](si_3.pdf){width="\linewidth"} ![image](si_55.pdf){width="\linewidth"} ![image](si_6.pdf){width="\linewidth"}
--- abstract: 'We extend the study of velocity quantization phenomena recently found in the classical motion of an idealized 1D model solid lubricant – consisting of a harmonic chain interposed between two periodic sliding potentials \[Phys. Rev. Lett. [**97**]{}, 056101 (2006)\]. This quantization is due to one slider rigidly dragging the [*commensurate*]{} lattice of kinks that the chain forms with the other slider. In this follow-up work we consider finite-size chains rather than infinite chains. The finite size (i) permits the development of robust velocity plateaus as a function of the lubricant stiffness, and (ii) allows an overall chain-length re-adjustment which spontaneously promotes single-particle [*periodic*]{} oscillations. These periodic oscillations replace the quasi-periodic motion produced by general incommensurate periods of the sliders and the lubricant in the infinite-size model. Possible consequences of these results for some real systems are discussed.' address: - '$^a$Department of Physics, University of Milan, Via Celoria 16, 20133 Milan, Italy' - '$^b$CNR-INFM National Research Center S3, and Department of Physics, University of Modena and Reggio Emilia, Via Campi 213/A, 41100 Modena, Italy' - '$^c$International School for Advanced Studies (SISSA), and INFM-CNR Democritos National Simulation Center, Via Beirut 2-4, I-34014 Trieste, Italy' - '$^d$International Centre for Theoretical Physics (ICTP), P.O.Box 586, I-34014 Trieste, Italy' author: - 'Marco Cesaratto$^{a}$' - 'Nicola Manini$^{a}$' - 'Andrea Vanossi$^{b}$' - 'Erio Tosatti$^{c,d}$, and' - 'Giuseppe E. Santoro$^{c,d}$' title: 'Kink plateau dynamics in finite-size lubricant chains' --- , , , Introduction ============ The present paper extends the study of a one-dimensional (1D) non-linear model, inspired by the tribological problem of two sliding surfaces with a thin solid lubricant layer in between [@Vanossi06], to the case of a lubricant 1D “island” of finite size. 
Previous work [@Vanossi06; @Santoro06] found robust, universal and exactly quantized asymmetric velocity plateaus in the classical dynamics of an infinite-size chain subject to two relatively sliding periodic potentials. The infinite chain size was managed – in the general incommensurate case – by means of periodic boundary conditions (PBC) and finite-size scaling. The plateaus of chain velocity as a function of several model parameters were shown to be due to the motion of kinks (topological solitons), which generally exist in a chain subjected to a periodic potential. These nonlinear excitations generated by the first, stationary periodic potential are set into motion by the external driving which is provided by the second periodic potential, sliding with velocity $v_{\rm ext}$. While the chain kinks are thus dragged with velocity $v_{\rm ext}$, the overall chain velocity is smaller, and fixed by the kinks' nature and density. That in turn depends only on the ratio of the period of one slider to that of the chain, which is dictated by the interparticle spacing, and enforced by the PBC. The present work considers a finite open-boundary chain, such as for example a hydrocarbon chain, or a graphite flake interposed between two sliding crystal faces [@Dienwiebel04]. Unlike the infinite chain, the open-boundary chain can elongate or shorten, at the cost of some harmonic potential energy, effectively modifying its linear density, and thus the kink density, which is the relevant length ratio. We find that it does indeed elongate or shorten so as to realize a precise [*commensurate*]{} relation to the other slider. This adaptive relaxation is such as to produce perfectly periodic oscillations of the single particles superposed with their average drift at the quantized velocity.
The model ========= Using the same language and notation of previously studied confined models [@Rozman96; @Rozman98; @Zaloj98; @Urbakh; @VanossiPRL], we represent a solid lubricant layer as a chain of $N$ harmonically interacting particles interposed between two rigid generally (but not necessarily) incommensurate sinusoidal substrates (the two “sliding crystals”, sketched in Fig. \[model:fig\]) externally driven at a constant relative velocity $v_{\rm ext}$. The equation of motion of the $i$-th particle is: $$\begin{aligned} \label{eqmotion:eqn} m\ddot{x}_i &=& -\frac{1}{2} \left[ F_+ \sin{k_+ (x_i-v_+t)} + F_- \sin{k_-(x_i-v_-t)}\right] \nonumber \\ &+& K (x_{i+1}+x_{i-1}-2x_i) - \gamma \sum_{\pm} (\dot{x}_i - v_{\pm}) \;,\end{aligned}$$ where $m$ is the mass of the $N$ particles, $K$ is the chain spring constant, and $k_{\pm}=2\pi/a_{\pm}$ are the wave-vector periodicities of potentials representing the two sliders, moving at velocities $v_{\pm}$. We set, in full generality, $v_+$ = 0 and $v_- = v_{\rm ext}$. $\gamma$ is a phenomenological parameter substituting for various sources of dissipation, required to achieve a stationary state, but otherwise playing no major role in the following. $F_{\pm}$ are the force amplitudes representing the sinusoidal corrugation of the two sliders (we will commonly use $F_-/F_+=1$ but we checked that our results are more general). We take $a_+=1$, $m=1$, and $F_+=1$ as our basic units, and all quantities are measured in suitable combinations thereof. The relevant length ratios [@vanErp99; @Vanossi00] are defined by $r_{\pm}=a_{\pm}/a_0$; we assume, without loss of generality, $r_->r_+$. The inter-particle equilibrium length $a_0$ enters explicitly the equations of motion (\[eqmotion:eqn\]) of the first ($i=1$) and last ($i=N$) particle whose restoring force terms in Eq. (\[eqmotion:eqn\]) are $K (x_{2}-x_1-a_0)$ and $K (x_{N-1}-x_{N}+a_0)$, respectively; this implements open boundary conditions (OBC). 
Upon sliding the substrates, $v_{\rm ext} \neq 0$, the lubricant chain slides too. Despite the apparent generic symmetry between the two sliders, the time-averaged chain velocity $w=v_{\rm cm}/v_{\rm ext}$, is generally [*asymmetric*]{}, namely different from $1/2$. In a previous study on this model [@Vanossi06; @Santoro06] it was shown that, for an infinite chain and periodic boundary conditions (PBC), $w$ is exactly quantized, for large parameter intervals, to plateau values that depend solely on the chosen commensurability ratios $(r_+,r_-)$. As the present finite-size OBC simulations will show, the PBCs are not crucial to the plateau quantization, which occurs even for a lubricant of finite and not particularly large size $N$. The main difference between OBC and PBC is that, while in PBC the chain length, and thus the length ratios $r_\pm$, are fixed, in OBC the chain can lengthen or shorten with respect to its equilibrium size. We find that during sliding the chain length gradually reaches a natural attractor value and oscillates around it. We define a new effective inter-particle length and corresponding length ratio $$\label{r+eff} a_0^{\rm eff}=\frac L{N-1}\,, \qquad r_+^{\rm eff}=\frac{a_+}{a_0^{\rm eff}} \,,$$ where $L=\langle x_{N}-x_{1}\rangle$ is the average chain length after the initial transient. The effective length ratio $r_+^{\rm eff}$ plays a central role in the understanding of the velocity plateaus of the finite-size lubrication model. Results and theory {#results:sec} ================== The driven dynamics of the lubricant is studied by integrating the equations of motion (\[eqmotion:eqn\]) starting from fully relaxed springs ($x_i=i\, a_0$, $\dot{x}_i=v_{\rm ext}/2$), using a standard fourth-order Runge-Kutta method. After an initial transient where length relaxation takes place, the system reaches its dynamical stationary state, at least so long as $\gamma$ is not exactly zero. 
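The integration procedure just described can be sketched in a few lines; the parameter values below are illustrative, not tuned to reproduce any specific plateau of the paper.

```python
import numpy as np

# Minimal sketch of the driven-chain dynamics of Eq. (eqmotion) with open
# boundary conditions and a 4th-order Runge-Kutta integrator.  All
# parameters (K, r_+, r_-, gamma, v_ext, ...) are illustrative.
a0_eq = 1.0 / 1.05          # inter-particle equilibrium length (r_+ = 1.05)

def forces(x, v, t, K, a0, F=(1.0, 1.0),
           k=(2.0 * np.pi, 2.0 * np.pi / np.sqrt(2.0)),
           vs=(0.0, 0.1), gamma=0.1):
    f = np.zeros_like(x)
    for Fi, ki, vi in zip(F, k, vs):              # corrugation of the two sliders
        f -= 0.5 * Fi * np.sin(ki * (x - vi * t))
    f[1:-1] += K * (x[2:] + x[:-2] - 2.0 * x[1:-1])  # harmonic springs (bulk)
    f[0] += K * (x[1] - x[0] - a0)                   # OBC: end springs
    f[-1] += K * (x[-2] - x[-1] + a0)
    for vi in vs:                                 # dissipation toward both sliders
        f -= gamma * (v - vi)
    return f

def run(N=8, K=4.0, a0=a0_eq, steps=4000, dt=0.01):
    x = a0 * np.arange(N, dtype=float)            # fully relaxed springs
    v = np.full(N, 0.05)                          # v_ext / 2
    t = 0.0
    for _ in range(steps):                        # standard RK4 for (x, v)
        k1x, k1v = v.copy(), forces(x, v, t, K, a0)
        k2x, k2v = v + 0.5 * dt * k1v, forces(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v, t + 0.5 * dt, K, a0)
        k3x, k3v = v + 0.5 * dt * k2v, forces(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v, t + 0.5 * dt, K, a0)
        k4x, k4v = v + dt * k3v, forces(x + dt * k3x, v + dt * k3v, t + dt, K, a0)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
    return x, v

x, v = run()
```

The velocity ratio $w=v_{\rm cm}/v_{\rm ext}$ would then be estimated by averaging the center-of-mass velocity over the steps after the initial transient.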
Figure \[w\_K:fig\] shows the resulting time-averaged center-of-mass (CM) velocity $v_{\rm cm}$ as a function of the chain stiffness $K$, for an irrational choice of $(r_+,r_-)$, and two values of $N$, defining a relatively short chain ($N=15$) and one of intermediate length ($N=100$). We find that $w$ is generally a complicated function of $K$, with flat plateaus and regimes of continuous evolution, not unlike what is found in the infinite-size limit studied through PBC simulations [@Vanossi06]. To investigate the peculiarities brought about by finite size, we analyze the dynamics for a large number of values of $(r_+,r_-)$ and $K$, and observe that: (i) one or more velocity plateaus as a function of $K$ occur for large ranges of $(r_+,r_-)$; (ii) the velocity ratio $w$ of most plateaus satisfies $$\label{weff} w=1-\frac 1{r_+^{\rm eff}} .$$ This result should be compared with the relation ($w=1-r_+^{-1}$) valid for the main plateau of PBC calculations [@Vanossi06], where the length ratios $(r_+,r_-)$ are fixed: in Eq. (\[weff\]) the new effective length ratio $r_+^{\rm eff}$ of Eq. (\[r+eff\]) replaces $r_+$. Figure \[w\_r:fig\] collects the observed plateau velocity ratios for a range of values of $r_+$ and for fixed $r_-=\sqrt{101}$. Clear trends emerge: plateau velocities depend continuously on $r_+$, thus indicating that also the effective chain length ratio $r_+^{\rm eff}$ evolves continuously with $r_+$ in finite ranges. Several plateaus appear to follow different curves. These data are conveniently organized and understood by plotting $r_+^{\rm eff}$ rather than $w$, as a function of $r_+$, as is done in Fig. \[reff:fig\].[^1] All points fit perfect straight lines through $(r_+,r_+^{\rm eff})=(0,1)$. Most lines have slope $q/r_-$ with integer $q$. Occasional plateaus fit this relation with half-integer $q$ ($-5/2$, $7/2$, and $9/2$, in the calculations of Fig. \[reff:fig\]). 
Several calculations carried out with different values of $r_-$ confirm that in general the ratio $r_+^{\rm eff}$ satisfies $$\label{reff:eq} r_+^{\rm eff}=1+q \, \frac{r_+}{r_-}$$ with $q$ taking simple fractional (often integer) values. This general behavior indicates that the plateau dynamics leads the finite-size lubricant toward a dynamical configuration where not only its velocity but also its length is quantized. We can understand this phenomenology in terms of kinks (i.e., local compressions of the chain with substrate potential minima holding more than just one particle [@BraunBook]), as described in [@Vanossi06]. Assume initially that $q$ is an integer. The basic hypotheses explaining the relation (\[reff:eq\]) are: (i) the particles tend to singly occupy the $a_+$-spaced minima of the bottom potential, with occasional kinks to release the spring tension; (ii) kinks group in bunches, each sitting in a period $a_-$ of the top substrate; (iii) kink bunches are $q$-fold, i.e. they collect $q$ individual kinks (negative $q$ indicates the number of anti-kinks). After the initial transient the chain length becomes on average very close to $L = (N-N_{kink}-1)\,a_+$. The number of kinks thus equals the number $q$ of kinks per bunch times the total number $L/a_-$ of bunches in the chain: $N_{kink}= q\,L/a_-$. By eliminating $N_{kink}$, we obtain $$L=\frac{a_+a_-(N-1)}{a_-+q\,a_+} \,.$$ This is consistent with an average inter-particle distance $$a_0^{\rm eff}=\frac{L}{N-1} = \frac{a_+a_-}{a_-+q\,a_+} =a_+\,\frac{r_-}{r_-+q\,r_+} =a_+\,\frac{1}{1+q\,\frac {r_+}{r_-}} \,,$$ and thus with the effective length ratio of Eq. (\[reff:eq\]). In general, for non-integer $q=n_k/n_-$ values, this interpretation remains valid: bunches of a total of $n_k$ kinks distribute themselves over $n_-$ minima of the $a_-$ lattice. $q$ therefore indicates the density (coverage fraction) of kinks on the $a_-$ lattice.
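The kink-counting algebra above can be verified with exact rational arithmetic; a small check, using illustrative integer values of $N$, $a_+$, $a_-$ and $q$ (note that $1+q\,r_+/r_-=1+q\,a_+/a_-$, since $a_0$ cancels in the ratio).

```python
from fractions import Fraction

# Exact check of the kink-counting algebra: solving
# L = (N - 1 - q L / a_-) a_+ for L must reproduce
# r_+^eff = a_+ / a_0^eff = 1 + q a_+ / a_-.
def r_plus_eff(N, a_plus, a_minus, q):
    L = Fraction((N - 1) * a_plus * a_minus, a_minus + q * a_plus)
    a0_eff = L / (N - 1)
    return a_plus / a0_eff

lhs = r_plus_eff(101, 1, 7, 2)      # illustrative N, a_+, a_-, q
rhs = 1 + Fraction(2 * 1, 7)        # 1 + q a_+ / a_-
```

Using `Fraction` rather than floats makes the identity exact, which is in the spirit of the quantized (rational) plateau values.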
The dynamically stable plateau attractors of the open-boundary chain are therefore characterized by a lattice of kinks perfectly commensurate to the $a_-$ lattice. In the infinite-size PBC model, a rational kink coverage corresponds to commensurate encounter frequencies $f_+$ and $f_-$ of the generic lubricant particle with the two substrates, which in turn occurs for very special values of $r_\pm$ [@Vanossi06]: these $(r_+,r_-)$ values are characterized by perfectly periodic single-particle dynamics. It is rather remarkable that the open-chain model, without the necessity of any careful fine tuning of $(r_+,r_-)$, realizes a self-organized commensurate kink lattice automatically producing perfectly periodic single-particle oscillations in a generally incommensurate context. Note that the points of Fig. \[reff:fig\] lying along the $q=0$ line indicate perfect commensuration between the lubricant chain and the bottom substrate, to which the chain remains pinned: this can be realized by paying a moderate harmonic-energy cost only close to $r_+=1$. For the same reason, all plateaus appear close to the $r_+=r_+^{\rm eff}$ line (dot-dashed line in Fig. \[reff:fig\]). For the parameters of Fig. \[w\_r:fig\], nontrivial plateaus are found only for $0.7<r_+< 1.6$. Outside this range, relation (\[reff:eq\]) produces, for small integer $q$, values of $r_+^{\rm eff}$ very different from $r_+$. Smaller values of $r_-\simeq 2$ (rather than $r_-\simeq 10$ as in Fig. \[w\_r:fig\]) generate an analogous set of plateaus for $r_+> 1.6$. Size dependence =============== As Fig. \[w\_K:fig\] suggests, strong size effects are observed, especially for large $K$. In particular, the “natural” symmetric large-$K$ limit $w=1/2$ found with PBC in Ref. [@Vanossi06] is rarely reached using OBC: for $N=100$ the chain is pinned to the bottom substrate ($v_{\rm cm}=0$), while for $N=15$ it moves at $v_{\rm cm}=v_{\rm ext}$.
Figure \[K1000:fig\] shows the dependence of the velocity ratio $w$ on the particle number $N$, for a fixed length ratio $r_+$ and two different values of $r_-$, in the large-$K$ limit. We observe that the chain is pinned to the $a_+$ substrate when $N$ happens to be (nearly) a multiple of the length ratio $r_-$. In all other cases, the chain follows the $a_-$ substrate, at velocity $v_{\rm ext}$. This changing behavior can be understood as follows. For large spring stiffness $K$, the kink dynamics is suppressed, the lubricant particles placing themselves at nearly regular distances $\simeq a_0$. It is energetically favorable for a chain shorter than $a_-$ to sit in a minimum of the top potential and then stick to it. Even if the chain is longer than $a_-$, its length is generally not an exact multiple of the periodicity of the top substrate. For this reason, a finite end part of the chain remains similarly trapped in the minima of the top substrate: this is what occurs for most $N$ values in Fig. \[K1000:fig\]. On the other hand, when $N\,a_0$ is a multiple of the $a_-$ period (i.e. when $N$ is close to a multiple of $r_-=a_-/a_0$), minima and maxima of the upper potential compensate each other, so that there is no preferential relative position of chain and top substrate. For such special sizes, the chain remains weakly pinned to the bottom substrate, as illustrated in Fig. \[K1000:fig\]. The values $N=15$ and $N=100$ of Fig. \[w\_K:fig\] represent the two situations. Other intermediate values occur for specific sizes, but the finite-size scaling for large $N$ is obviously non-trivial. While Fig. \[w\_K:fig\] indicates that for small and moderate $K$, size effects, if any, are very small, at large $K$ they affect the dynamics substantially. Further work is needed to understand the large-$K$ size-scaling in detail.
Discussion and Conclusions ========================== We have shown that chains of finite and even small size, driven in between two periodic sliders, move with characteristic quantized velocities, much like the infinite-size ones do. We find that a finite-size chain length re-adjusts in such a way as to realize a lattice of topological solitons (kinks) commensurate to the period of the smoother slider. A consequence of this self-commensuration is a periodic single-particle motion, even for incommensurate initial choices of the periods. The likely reason behind this phenomenology is as follows. Consider initially a single periodic slider of length ratio $r_+$ and periodic boundary conditions. The lubricant chain will then form a regular lattice of kinks, which repel each other. When the second slider is introduced, the lattice of kinks will be generally incommensurate with the period $a_-$, and this brings an irregular distribution of bunches of kinks, which on average reconstruct the correct density of kinks. An OBC finite chain is able to relax, by paying some extra harmonic strain, in such a way as to enforce an optimal local density of kinks which satisfies both periodic sliders. The phenomena just described for a model 1D system are unique, and it would be interesting if they could be observed in real systems. Nested carbon nanotubes [@Zhang], or confined one-dimensional nanomechanical systems [@Toudic_06], are one possible arena for the phenomena described. Though speculative at this stage, one obvious question is what aspects of the phenomenology just described might survive in two-dimensions (2D), where tribological realizations, such as the sliding of two hard crystalline faces with, e.g., an interposed graphite flake, are conceivable. 
Our results suggest that the lattice of discommensurations – a Moiré pattern – formed by the flake on a substrate could be dragged by the other sliding crystal face, in such a manner that the speed of the flake as a whole would be smaller, and quantized. This would amount to the slider “ironing” the kinks onward. Dienwiebel [*et al.*]{} [@Dienwiebel04] demonstrated how incommensurability may lead to virtually friction-free sliding in such a case, but no measurement was obtained of the flake's relative sliding velocity. Real substrates are, unlike our model, not rigid, and are subject to thermal expansion, etc. Nevertheless the ubiquity of the plateaus shown in Fig. \[w\_K:fig\], and their topological origin, suggest that these effects would not remove the phenomenon. Acknowledgments {#acknowledgments .unnumbered} =============== This research was partially supported by PRRIITT (Regione Emilia Romagna), Net-Lab “Surfaces & Coatings for Advanced Mechanics and Nanomechanics” (SUP&RMAN) and by MIUR Cofin 2004023199, FIRB RBAU017S8R, and RBAU01LX5H. [^1]: For reasons of numerical stability, this figure reports $r_+^{\rm eff}$ obtained by inversion of Eq. (\[weff\]), but the same plot could have been obtained directly by applying the definition (\[r+eff\]). This equivalence is confirmed by the comparison of the two methods shown in Fig. \[reff:fig\] for 2 points.
[Invariants of ideals generated by pfaffians]{} [**Emanuela De Negri**]{}. [Università di Genova, Dipartimento di Matematica, Via Dodecaneso 35, IT-16146 Genova, Italia. [ *email*]{}: denegri@dima.unige.it]{}\ [**Elisa Gorla**]{}. [Universität Basel, Departement Mathematik, Rheinsprung 21, CH-4051 Basel, Switzerland. [*email*]{}: elisa.gorla@unibas.ch]{} [. Ideals generated by pfaffians are of interest in commutative algebra and algebraic geometry, as well as in combinatorics. In this article we compute the multiplicity and the Castelnuovo-Mumford regularity of pfaffian ideals of ladders. We give explicit formulas for some families of ideals, and indicate a procedure that allows one to recursively compute the invariants of any pfaffian ideal of ladder. Our approach makes essential use of liaison theory. ]{} Introduction {#introduction .unnumbered} ============ Pfaffians are the natural analogue of minors when working with skew-symmetric matrices. Ideals generated by pfaffians are studied in the context of commutative algebra and algebraic geometry, as well as in combinatorics. There are many reasons for such interest: e.g., many ideals generated by pfaffians are Gorenstein (see, e.g., [@KL] and [@D]). Conversely, due to a famous result ([@BE]) of Buchsbaum and Eisenbud, any Gorenstein ideal of height $3$ of a polynomial ring over a field is generated by the maximal pfaffians of a suitable skew-symmetric matrix of homogeneous forms. Ideals generated by pfaffians arise naturally in algebraic geometry as, e.g., ideals of pfaffians in a generic skew-symmetric matrix define Schubert cells in orthogonal Grassmannians. Moreover, some Grassmannians are defined by pfaffians, as well as some of their secant varieties. In this article, we compute numerical invariants of pfaffian ideals of ladders. Pfaffian ideals of ladders are, informally speaking, ideals generated by pfaffians which only involve indeterminates in a ladder of a skew-symmetric matrix of indeterminates.
The size of the pfaffians is allowed to vary in different regions of the ladder. This family was introduced by the authors in [@DGo], and contains the classically studied ideals of $2t$-pfaffians of a matrix or of a ladder. It is a very large family, and a natural one to study from the point of view of liaison theory, since all the ideals in this family arise from ideals of $2t$-pfaffians in a ladder when performing elementary G-biliaisons. In [@DGo] we proved that these ideals are prime, normal and Cohen-Macaulay. The main result of the paper was a proof that any pfaffian ideal of ladder can be obtained from an ideal generated by indeterminates via a finite sequence of ascending G-biliaisons. In particular, they are glicci, i.e., they belong to the G-liaison class of a complete intersection. The G-biliaison steps were described very explicitly. Therefore, as a by-product, it is possible to recursively compute numerical invariants of pfaffian ideals of ladders such as the multiplicity, the Hilbert function, the $h$-vector, as well as a graded free resolution. In some cases it is also possible to compute the graded Betti numbers and in particular the Castelnuovo-Mumford regularity. Although it is possible to perform these computations in any specific example, it is in general hard to produce explicit formulas. In this paper, we derive explicit formulas for some classes of pfaffian ideals of ladders. The paper is organized as follows. In Section 1 we fix the notation and define the classes that we study. We also recall the main result of [@DGo] on which our approach is based. In Section 2 we give explicit or recursive formulas for the multiplicity of the ideals that we study. In Theorem \[prod\] we give a simple numerical condition which forces the multiplicity of a pfaffian ideal of ladder to decompose as the product of the multiplicities of two pfaffian ideals relative to subladders. In Section 3 we compute Castelnuovo-Mumford regularities.
In Section 4 we show how to use our approach to compute the graded Betti numbers of ideals of pfaffians of maximal size of a generic skew-symmetric matrix. We also give a simple proof that the $h$-vectors of these ideals are of decreasing type. The ideals generated by pfaffians of maximal size of a generic skew-symmetric matrix are Gorenstein ideals of height $3$, so the results are well-known. However, we are able to give a very simple proof, which can be easily specialized to any Gorenstein ideal of height $3$. [. The second author was supported by the Swiss National Science Foundation under grant no. 123393. Part of this work was done while the authors were attending the conference “PASI 2009 in Commutative Algebra and its Connections to Geometry, honoring Wolmer Vasconcelos”, which took place in Olinda (Brazil) in August 2009. The authors wish to thank the organizers, the speakers and the participants in the conference for the stimulating working environment that they created. ]{} Some classes of pfaffian ladder ideals ====================================== Let $X=(x_{ij})$ be an $n\times n$ skew-symmetric matrix of indeterminates. In other words, the entries $x_{ij}$ with $i<j$ are indeterminates, $x_{ij}=-x_{ji}$ for $i>j$, and $x_{ii}=0$ for all $i=1,...,n$. Let $R=K[X]=K[x_{ij} \;|\; 1\leq i<j\leq n ]$ be the polynomial ring associated to $X$. \[ladd\] A [*ladder*]{} $\mathcal Y$ of $X$ is a subset of the set $\{(i,j)\in{{\mathbb N}}^2 \;|\; 1\le i,j\le n\}$ with the following properties: 1. if $(i,j)\in {\mathcal Y}$ then $(j,i)\in {\mathcal Y}$, 2. if $i<h$, $j>k$ and $(i,j),(h,k)$ belong to $\mathcal Y$, then $(i,k),(i,h),(h,j),(j,k)$ belong to $\mathcal Y$. We do not assume that a ladder ${\mathcal{Y}}$ is connected, nor that $X$ is the smallest skew-symmetric matrix having ${\mathcal{Y}}$ as ladder. We can assume without loss of generality that the ladder ${\mathcal{Y}}$ is symmetric.
It is easy to see that any ladder can be decomposed as a union of square subladders $$\label{decomp} {\mathcal{Y}}={\mathcal{X}}_1\cup\ldots\cup {\mathcal{X}}_s$$ where $${\mathcal{X}}_k=\{(i,j)\;|\; a_k\le i,j \le b_k\},$$ for some integers $1\leq a_1\leq\ldots\leq a_s\leq n$ and $1\leq b_1\leq\ldots\leq b_s\leq n$ such that $a_k<b_k$ for all $k$. We say that ${\mathcal{Y}}$ is the ladder with [*upper corners*]{} $(a_1,b_1),\ldots,(a_s,b_s)$, and that ${\mathcal{X}}_k$ is the square subladder of ${\mathcal{Y}}$ with upper outside corner $(a_k,b_k)$. We allow two upper corners to have the same first or second coordinate, but we assume that no two upper corners coincide. Notice that with this convention a ladder does not have a unique decomposition of the form (\[decomp\]). In other words, a ladder does not correspond uniquely to a set of upper corners $(a_1,b_1),\ldots,(a_s,b_s)$. However, the upper corners determine the subladders ${\mathcal{X}}_k$, hence the ladder ${\mathcal{Y}}$ according to (\[decomp\]). Let $t$ be a positive integer. A $2t$-pfaffian is the pfaffian of a $2t\times 2t$ skew-symmetric submatrix of $X$, i.e., of the submatrix corresponding to a choice of the same $2t$ row and column indices. Given a ladder $\mathcal Y$ we set $Y=\{x_{ij}\in X\;|\; (i,j)\in {\mathcal Y},\; i<j\}$. We let $I_{2t}(Y)$ denote the ideal generated by the set of the $2t$-pfaffians of $X$ which involve only indeterminates of $Y$. In particular $I_{2t}(X)$ is the ideal generated by the $2t$-pfaffians of $X$. We regard all the ideals as ideals in $K[X]$. Whenever we consider a ladder ${\mathcal{Y}}$, we assume that it comes with its set of upper corners and the corresponding decomposition as a union of square subladders as in (\[decomp\]). The following family of ideals has been introduced and studied in [@DGo]: \[ideal\] Let ${\mathcal{Y}}={\mathcal{X}}_1\cup\ldots\cup {\mathcal{X}}_s$ be a ladder as in Definition \[ladd\]. Let $X_k=\{x_{ij}\;|\; (i,j)\in{\mathcal{X}}_k,\; i<j\}$ for $k=1,\dots,s$.
Fix a vector ${\bf t}=(t_1,\ldots,t_s)$, ${\bf t}\in \{1,\ldots,\lfloor\frac{n}{2}\rfloor\}^s$. The [*pfaffian ideal*]{} $I_{2{\bf t}}(Y)$ is by definition the sum of pfaffian ideals $I_{2t_1}(X_1)+\ldots+I_{2t_s}(X_s)\subseteq K[X]$. We refer to these ideals as [*pfaffian ideals of ladders*]{}. We can assume without loss of generality that $$2t_k\leq b_k-a_k+1,\;\;\;\mbox{for}\; 1\leq k\leq s.$$ Moreover, we can assume that $$a_k-a_{k-1}>t_{k-1}-t_k \;\;\;\mbox{and}\;\;\; b_k-b_{k-1}>t_k-t_{k-1}$$ for $2\leq k\leq s$. In [@DGo], pfaffian ideals of ladders are proved to be prime, normal, and Cohen-Macaulay. A formula for their height is given. \[ladderheight\] For a ladder ${\mathcal{Y}}$ with upper corners $(a_1,b_1),\ldots,(a_s,b_s)$ and ${\bf t}=(t_1,\ldots,t_s)$, we denote by $\tilde{{\mathcal{Y}}}$ the ladder with upper corners $(a_1+t_1-1,b_1-t_1+1),\ldots,(a_s+t_s-1,b_s-t_s+1)$. The ladder $\tilde{{\mathcal{Y}}}$ computes the height of the ideal $I_{2{\bf t}}(Y)$ as follows: Let ${\mathcal{Y}}$ be the ladder with upper corners $(a_1,b_1), \ldots,$ $(a_s,b_s)$ and ${\bf t}=(t_1,\ldots,t_s)$. Let $\tilde{{\mathcal{Y}}}$ be as in Notation \[ladderheight\]. Then the height of $I_{2{\bf t}}(Y)$ equals the cardinality of $\{(i,j)\in\tilde{{\mathcal{Y}}} \;|\; i<j\}$. We now recall the definition of biliaison. \[gbil\] Let $I,I',J$ be homogeneous, saturated ideals in $K[X]$, with $\hgt(I)=\hgt(I')=\hgt(J)+1.$ Assume that $R/J$ is Cohen-Macaulay and generically Gorenstein, i.e., $(R/J)_P$ is Gorenstein for any minimal associated prime $P$ of $J$. We say that $I$ is obtained from $I'$ by a [*G-biliaison of height $\ell$*]{} on $J$ if $I/J$ and $I'/J(\ell)$ represent the same element in the ideal class group of $K[X]/J$. In other words, $I$ is obtained from $I'$ by a G-biliaison of height $\ell$ on $J$ if there exist homogeneous polynomials $f,g\in R$ with $\deg(g)=\deg(f)+\ell$, such that $fI+J=gI'+J$ as ideals of $R$. 
The main result of [@DGo] is that ladder pfaffian ideals belong to the G-biliaison class of a complete intersection. In particular, they are glicci. We briefly recall the single G-biliaison step which is described in the proof of [@DGo Theorem 2.3]. With the notation of Definition \[ideal\], let $\mathcal Y'$ be the subladder of $\mathcal Y$ with upper corners $$(a_1,b_1),\ldots, (a_{k-1},b_{k-1}), (a_k+1,b_k-1), (a_{k+1},b_{k+1}),\ldots, (a_s,b_s),$$ and let ${\bf t'}=(t_1,\ldots,t_{k-1},t_k-1,t_{k+1},\ldots,t_s)$. Let ${\mathcal{Z}}$ be the subladder of ${\mathcal{Y}}$ obtained by removing the entry $(a_k,b_k)$ and its symmetric. Equivalently, ${\mathcal{Z}}$ is the ladder with upper corners $$(a_1,b_1),\ldots, (a_{k-1},b_{k-1}), (a_k,b_k-1), (a_k+1,b_k), (a_{k+1},b_{k+1}),\ldots, (a_s,b_s).$$ Let ${\bf u}=(t_1,\ldots,t_{k-1},t_k,t_k,t_{k+1},\ldots,t_s)$. One has: \[step\] Let $I=I_{2{\bf t}}(Y)$, $I'=I_{2{\bf t'}}(Y')$ and $J=I_{2{\bf u}}(Z)$ be ideals of $K[X]$. Then $I$ is obtained from $I'$ via an elementary G-biliaison of height $1$ on $J$. More precisely, with the above notation we have $$fI+J=gI'+J$$ where $f\in I'$ is a $2(t_k-1)$-pfaffian, $g\in I$ is a $2t_k$-pfaffian, and $f,g\not\in J$. When discussing biliaison, we will refer without distinction to the ideals and to the varieties associated to them. In this paper we deal with special classes of pfaffian ideals of ladders, and we compute some of their numerical invariants using the biliaison step described in Theorem \[step\]. The same technique gives a recursive procedure to determine such invariants for any pfaffian ideal of ladder. However, it is in general hard to deduce explicit formulas. We now introduce the classes we are going to study. First we consider the ideal $L_t^n=I_{2{\bf t}}(Y)$ where ${\mathcal{Y}}$ is the ladder with upper corners $(1,n-1)$ and $(2,n)$ and ${\bf t}=(t,t)$.
Clearly $L_t^n$ is generated by the $2t$-pfaffians of the ladder obtained from $X$ by deleting the entries $(1,n)$ and $(n,1)$. [Figure: the ladder with upper corners $(1,n-1)$ and $(2,n)$, whose $2t$-pfaffians generate $L_t^n$.] Then we restrict our attention to some ideals generated by pfaffians whose size is maximal or submaximal, in a sense that we are going to specify. In particular, we consider the ideals generated by maximal and by submaximal pfaffians of a skew-symmetric matrix of indeterminates. More precisely, we denote by $M_t$ the ideal generated by the $2t$-pfaffians of a $(2t+1)\times(2t+1)$ matrix and by $SM_t$ the ideal generated by the $2t$-pfaffians of a $(2t+2)\times(2t+2)$ matrix. Moreover we consider ideals generated by pfaffians of two different sizes in different regions of a matrix. Here we regard [*nested*]{} matrices as a ladder. In particular, we consider $N_t=I_{2{\bf t}}(Y)$ where ${\mathcal{Y}}$ is the ladder with upper corners $(1,2t-1)$ and $(1,2t+1)$, and ${\bf t}=(t-1,t)$. So $N_t$ is the ideal generated by the $2t$-pfaffians of a skew-symmetric matrix of size $2t+1$ and the $(2t-2)$-pfaffians of its first $2t-1$ rows and columns. We denote by $SN_t$ the ideal $I_{2{\bf t}}(Y)$ where ${\mathcal{Y}}$ is the ladder with upper corners $(1,2t-1)$ and $(1,2t+2)$, and ${\bf t}=(t-1,t)$. This is the ideal generated by the $2t$-pfaffians of a skew-symmetric matrix of size $2t+2$ and the $(2t-2)$-pfaffians of its first $2t-1$ rows and columns.
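To make the generators above concrete, the following Python sketch (an editorial illustration, not part of the paper; the function name `pfaffian` and the row-expansion implementation are our own) computes the pfaffian of a skew-symmetric matrix by Laplace-type expansion along the first row. For a $4\times 4$ matrix it reproduces the classical expression $x_{12}x_{34}-x_{13}x_{24}+x_{14}x_{23}$, and a $2$-pfaffian is just a single entry $x_{ij}$, which is why ideals of $2$-pfaffians are generated by indeterminates.

```python
def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix A (list of lists), via the
    expansion pf(A) = sum_j (-1)^(j-1) * A[0][j] * pf(A_{0,j}),
    where A_{0,j} deletes rows and columns 0 and j."""
    n = len(A)
    if n == 0:
        return 1          # pfaffian of the empty matrix
    if n % 2 == 1:
        return 0          # odd-size skew-symmetric matrices have pf = 0
    total = 0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        minor = [[A[r][c] for c in keep] for r in keep]
        total += (-1) ** (j - 1) * A[0][j] * pfaffian(minor)
    return total

# a 2-pfaffian is just the entry itself
assert pfaffian([[0, 7], [-7, 0]]) == 7

# 4x4 case: pf = a01*a23 - a02*a13 + a03*a12 = 1*6 - 2*5 + 3*4 = 8
A = [[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]]
assert pfaffian(A) == 8
```

In this notation, the generators of $M_t$ are the pfaffians of the $2t\times 2t$ principal submatrices of a generic $(2t+1)\times(2t+1)$ skew-symmetric matrix; for $t=1$ these are the three indeterminates of a $3\times 3$ matrix.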
[Figures: the ladders of $N_t$ and $SN_t$; the $(2t-2)$-pfaffians are taken in the first $2t-1$ rows and columns, the $2t$-pfaffians in the whole matrix.] We let $L_t(k)=I_{2{\bf t}}(Y)$, where ${\mathcal{Y}}$ is the ladder with upper corners $(1,2t+1),(2,2t+2),(3,2t+3),\ldots,(k,2t+k)$, and ${\bf t}=(t,\dots, t)$. Notice that $L_t(1)=M_t$, and $L_t(2)=L_t^{2t+2}.$ [Figures: the ladders of $L_t(2)$ and $L_t(k)$, with upper corners $(1,2t+1),(2,2t+2)$ and $(1,2t+1),(2,2t+2),\ldots,(k,2t+k)$ respectively; both are generated by $2t$-pfaffians.] Moreover, given two integers $j$ and $k$, we let ${\mathcal{Y}}_{jk}$ be the ladder with the $j+k$ upper outside corners $(1,2t-1),(2,2t),(3,2t+1),\dots,(j,2t+j-2),(j,2t+j),(j+1,2t+j+1),\dots,(j+k-1,2t+j+k-1).$ We consider the ideal $$L_t(j,k):=I_{2{\bf t}}(Y_{jk}), \ \mbox{ where \ \ }{\bf t} =(\underbrace{t-1,\dots,t-1}_{j},\underbrace{t, \dots,t}_k).$$ Notice that $L_t(0,k)=L_{t+1}(k,0)$. Moreover, this class contains most of the classes that we have already introduced. More precisely: $L_t(k)=L_t(0,k)$, $M_t=L_t(0,1)$, $SM_t=L_t(1,0)$, and $N_t=L_t(1,1)$.
[Figure: an example of the ladder ${\mathcal{Y}}_{jk}$ of $L_t(j,k)$, with a $(2t-2)$-pfaffian region and a $2t$-pfaffian region.] Given two integers $j$ and $k$, we let ${\mathcal{Z}}_{jk}$ be the ladder with the $j+k$ upper outside corners $(1,2t-1),(2,2t),(3,2t+1),\dots,(j,2t+j-2),(j+1,2t+j+1),\dots,(j+k,2t+j+k).$ We consider the ideal $$H_t(j,k):=I_{2{\bf t}}(Z_{jk}), \ \mbox{ where \ \ }{\bf t} =(\underbrace{t-1,\dots,t-1}_{j},\underbrace{t, \dots,t}_k).$$ Notice that $L_t(k)=H_t(0,k)=H_{t+1}(k,0)$. [Figure: an example of the ladder ${\mathcal{Z}}_{jk}$ of $H_t(j,k)$, again with a $(2t-2)$-pfaffian region and a $2t$-pfaffian region.] Multiplicity of pfaffian ladder ideals ====================================== In this section we give some formulas for the multiplicity of the ideals introduced in the previous section. Throughout the section, we denote by $e(I)$ the multiplicity of $R/I$ for any ideal $I\subset R=K[X]$.
All the formulas that we produce are obtained as a finite sum of positive contributions. Therefore they are well suited to give lower bounds for the multiplicity. In the sequel we will need the following well-known fact, which we prove for completeness. \[liaisonMultiplicity\] Let $H,I,J\subset K[X]$ be homogeneous, saturated, unmixed ideals. Assume that $H$ is Cohen-Macaulay and that $I$ is obtained from $J$ via an elementary G-biliaison of height $\ell\in{{\mathbb Z}}$ on $H$. Then $$e(I)=e(J)+\ell e(H).$$ Let $U,S,T$ be the schemes associated to $H,I,J$, respectively. Under our assumptions, $U$ is arithmetically Cohen-Macaulay and $S,T$ are generalized divisors on $U$. Moreover, $S$ is linearly equivalent to $T+\ell h$ as generalized divisors on $U$, where $h$ denotes the hyperplane section class on $U$. In particular $$e(I)=\deg(S)=\deg(T)+\ell\deg(U)=e(J)+\ell e(H).$$ We denote by $I_{t}^n$ the ideal generated by the $2t$-pfaffians of an $n\times n$ skew-symmetric matrix of indeterminates. In [@K Theorem 7] Krattenthaler proved that $$\label{krattenthaler} e(I_{t}^n )=\prod_{1\le i \le j \le n-2t+1} \frac{2(t-1)+i+j}{i+j}.$$ In particular for the ideals $M_t$ and $SM_t$ one has: $$e(M_t) =\prod_{1\le i \le j \le 2} \frac{2(t-1)+i+j}{i+j}, \ \ \ e(SM_t) =\prod_{1\le i\le j \le 3} \frac{2(t-1)+i+j}{i+j}.$$ From the results in [@DGo] one can easily deduce a formula for the multiplicity of the ideal $L_t^n$. $$e(L_t^n)=\frac{(n-2t+2)!}{(2n-4t+4)!}\left[\frac{(2n-2t+2)!}{n!}-\frac{(n-1)!}{(2t-3)!}\right] \prod_{1\le i\le j\le n-2t+1} \frac{2(t-1)+i+j}{i+j}$$ By Theorem \[step\] the ideal $I_t^{n+1}$ is obtained from $I_{t-1}^{n-1}$ via an elementary G-biliaison of height $1$ on $L_t^n$.
Hence by Proposition \[liaisonMultiplicity\] $$e(L_t^n)=e(I_t^{n+1})-e(I_{t-1}^{n-1}).$$ Substituting (\[krattenthaler\]) we obtain $e(L_t^n)=$ $$\displaystyle\prod_{1\le i \le j \le n-2t+2} \frac{1}{i+j} \Big[\displaystyle\prod_{1\le i \le j \le n-2t+2}(2t-2+i+j)- \displaystyle\prod_{1\le i \le j \le n-2t+2}(2t-4+i+j)\Big]. $$ Since $$\displaystyle\prod_{1\le i \le j \le n-2t+2}(2t-4+i+j)= \prod_{0\le i \le j \le n-2t+1}(2t-2+i+j)$$ by means of direct computation one gets $$\begin{array}{l} \displaystyle\prod_{1\le i \le j \le n-2t+2}(2t-2+i+j)-\displaystyle\prod_{1\le i \le j \le n-2t+2}(2t-4+i+j)= \\ \displaystyle\prod_{1\le i \le j \le n-2t+1} (2(t-1)+i+j)\Big[\displaystyle\prod_{1\le i\le n-2t+2}(n+i)-\displaystyle\prod_{0\le j\le n-2t+1}(2t-2+j)\Big] \end{array}.$$ The result now follows from the equality $$\begin{array}{l} \displaystyle\prod_{1\le i \le j \le n-2t+2} \frac{1}{i+j}\displaystyle\prod_{1\le i \le j \le n-2t+1} (2(t-1)+i+j)= \\ \displaystyle\prod_{1\le i \le j \le n-2t+1} \frac{(2(t-1)+i+j)}{i+j}\displaystyle\prod_{1\le i \le n-2t+2} \frac{1}{n-2t+2+i}. \end{array}$$ The case of ideals generated by maximal pfaffians of a matrix has been extensively studied. In particular it is well known that $$\label{gorcod3} e(L_t(1))=e(M_t)=1+2^2+3^2+\cdots+t^2$$ (see [@HTV Section 6], and [@HT Theorem 5.6 and the following example]). We deduce the following formulas from Theorem \[step\]. \[f\_2(t)\] $$e(L_t(2))=1+\sum_{s=2}^t [ 2s(1+2^2+\cdots +s^2)-s^3]$$ and $$e(N_t)= 1+\sum_{s=2}^{t-1}[2s(1+2^2+\cdots +s^2)-s^3]+t(1+2^2+\cdots +(t-1)^2).$$ By Theorem \[step\] the ideal $L_t(2)$ is obtained from $N_t$ via an elementary G-biliaison of height $1$ on $M_t+(f)$, where $f$ is a $2t$-pfaffian which is regular modulo $M_t$. 
Thus by Proposition \[liaisonMultiplicity\] one has $$\label{lt2_first} e(L_t(2))=e(N_t)+e(M_t+(f))=e(N_t)+te(M_t).$$ Moreover the ideal $N_t$ is obtained from $L_{t-1}(2)$ via an elementary G-biliaison of height $1$ on $M_{t-1}+(g)$, where $g$ is a $2t$-pfaffian which is regular modulo $M_{t-1}$. Therefore $$\label{combin} e(N_t)=e(L_{t-1}(2))+te(M_{t-1})$$ and combining (\[lt2\_first\]) and (\[combin\]) one gets $$\label{lt2} e(L_t(2))=e(L_{t-1}(2))+te(M_{t-1})+te(M_t).$$ Finally by (\[lt2\]) and (\[gorcod3\]), after solving the recursion one obtains $$e(L_t(2))=1+\sum_{s=2}^t [s(e(M_{s-1})+e(M_s))]=1+\sum_{s=2}^t [2s(1+2^2+\cdots +s^2)-s^3].$$ The formula for $e(N_t)$ follows from substituting the formula for $e(L_t(2))$ and (\[gorcod3\]) in (\[combin\]). We now deduce a formula for the multiplicity of ideals generated by submaximal pfaffians. \[e(t)\] $$e(SM_t)=t+\sum_{r=2}^t\sum_{s=2}^r [ 2s(1+2^2+\cdots +s^2)-s^3].$$ Since $SM_t$ is obtained from $SM_{t-1}$ via an elementary G-biliaison of height $1$ on $L_t(2)$, one has $e(SM_t)=e(SM_{t-1})+e(L_t(2))$. By solving the recursion and using Proposition \[f\_2(t)\], one obtains the result. Let ${\mathcal{Y}}={\mathcal{Y}}_1\cup{\mathcal{Y}}_2$ be a ladder which is union of two smaller ladders. Let $I_1=I_{2{\bf t_1}}(Y_1)$ and $I_2=I_{2{\bf t_2}}(Y_2)$ be pfaffian ideals associated to the ladders ${\mathcal{Y}}_1$ and ${\mathcal{Y}}_2$, and let the upper corners of ${\mathcal{Y}}$ be the union of the upper corners of ${\mathcal{Y}}_1$ and ${\mathcal{Y}}_2$. Let ${\bf t}={\bf t_1}\oplus {\bf t_2}$ be the vector obtained by appending the vector ${\bf t_2}$ to the vector ${\bf t_1}$ and let $I=I_{2t}(Y)=I_1+I_2$ be the pfaffian ideal associated to the ladder ${\mathcal{Y}}$. If ${\mathcal{Y}}_1\cap{\mathcal{Y}}_2=\emptyset$, one can easily show that $$\label{product} e(I)=e(I_1)e(I_2).$$ The following theorem gives a sufficient condition on the ladder so that (\[product\]) holds. 
\[prod\] Let ${\mathcal{Y}},{\mathcal{Y}}_1,{\mathcal{Y}}_2$ be ladders, ${\mathcal{Y}}={\mathcal{Y}}_1\cup{\mathcal{Y}}_2$. Let $I_1=I_{2{\bf t_1}}(Y_1)$ and $I_2=I_{2{\bf t_2}}(Y_2)$ be pfaffian ideals of ladders associated to ${\mathcal{Y}}_1$ and ${\mathcal{Y}}_2$. Let ${\bf t}={\bf t_1}\oplus {\bf t_2}$ and let $I=I_{2{\bf t}}(Y)=I_1+I_2$ be the corresponding pfaffian ideal of ladder. Let $\tilde{{\mathcal{Y}}},\tilde{{\mathcal{Y}}_1},\tilde{{\mathcal{Y}}_2}$ be defined as in Notation \[ladderheight\], and let $\tilde{Y},\tilde{Y_1},\tilde{Y_2}$ be the corresponding sets of indeterminates. If $\tilde{Y_1}\cap\tilde{Y_2}=\emptyset$, then $$e(I)=e(I_1)e(I_2).$$ Let ${\mathcal{Z}}={\mathcal{Y}}_1\cap{\mathcal{Y}}_2$, $R_1=K[Y_1]/I_1$, and $R_2=K[Y_2]/I_2$. We have $$K[Y]/I\cong R_1\otimes_K R_2/J$$ where $J$ is generated by $|Z|$ linear forms (which identify the corresponding indeterminates in $Y_1$ and $Y_2$). If $\tilde{Y_1}\cap\tilde{Y_2}=\emptyset$, then $$\hgt I=\hgt I_1+\hgt I_2$$ hence $$\hgt J=\dim R_1\otimes R_2-\dim K[Y]/I=|Y_1|-\hgt I_1+|Y_2|-\hgt I_2-|Y|+\hgt I=|Z|.$$ Since $R_1\otimes R_2$ is a Cohen-Macaulay ring, $J$ is generated by a regular sequence and $$e(I)=e(R_1\otimes_K R_2/J)=e(I_1)e(I_2).$$ We now give an example of a family of pfaffian ideals of ladders whose multiplicity can be computed directly from Theorem \[prod\]. \[htjk\] $$e(H_t(j,k))=e(L_{t-1}(j))e(L_t(k)).$$ Let ${\mathcal{Y}}={\mathcal{Z}}_{jk}$ be the ladder with the $j+k$ upper corners $(1,2t-1),(2,2t),$ $\ldots,(j,2t+j-2),(j+1,2t+j+1),\ldots,(j+k,2t+j+k).$ Let ${\mathcal{Y}}_1$ be the ladder with the $j$ upper corners $(1,2t-1),\ldots,(j,2t+j-2)$ and let ${\mathcal{Y}}_2$ be the ladder with the $k$ upper corners $(j+1,2t+j+1),\ldots,(j+k,2t+j+k).$ Clearly ${\mathcal{Y}}={\mathcal{Y}}_1\cup{\mathcal{Y}}_2$. 
Let $${\bf t_1}=(\underbrace{t-1,\dots,t-1}_{j}),\; {\bf t_2}=(\underbrace{t,\dots,t}_k),\; {\bf t}={\bf t_1\oplus t_2}=(\underbrace{t-1,\dots,t-1}_{j},\underbrace{t, \dots,t}_k).$$ Then $\tilde{{\mathcal{Y}}_1}$ is the ladder with upper outside corners $(t-1,t+1),(t,t+2),\ldots,(t+j-2,t+j)$ and $\tilde{{\mathcal{Y}}_2}$ is the ladder with upper outside corners $(t+j,t+j+2),\ldots,(t+j+k-1,t+j+k+1).$ Hence $\tilde{{\mathcal{Y}}_1}\cap\tilde{{\mathcal{Y}}_2}= \{(t+j,t+j)\}$ and $\tilde{Y_1}\cap\tilde{Y_2}=\emptyset$. By Theorem \[prod\] it follows that $$e(H_t(j,k))=e(I_1)e(I_2)$$ where $I_1=I_{2{\bf t_1}}(Y_1)$ and $I_2=I_{2{\bf t_2}}(Y_2)$. The thesis follows from the observation that $I_1=L_{t-1}(j)$ and $I_2=L_t(k)$. Combining Proposition \[htjk\] and Proposition \[f\_n(t)\], we obtain a formula for the multiplicity of the ideals $L_t(j,k)$. \[ltjk\] For $j,k\geq 1$ we have $$e(L_t(j,k))=e(L_{t-1}(j+k))+te(L_{t-1}(j+k-1))+\sum_{l=1}^{k-1}e(L_{t-1}(j+k-1-l))e(L_t(l)).$$ We proceed by induction on $k\geq 1$. By Theorem \[step\], $L_t(j,1)$ is obtained from $L_{t-1}(j+1)$ via an elementary G-biliaison on $L_{t-1}(j)+(f)$, where $f$ is a $2t$-pfaffian which does not belong to $L_{t-1}(j)$. Hence by Proposition \[liaisonMultiplicity\] $$e(L_t(j,1))=e(L_{t-1}(j+1))+te(L_{t-1}(j)).$$ This proves the thesis for $k=1$. To establish the formula for $k\geq 2$, observe that $L_t(j,k)$ is obtained from $L_t(j+1,k-1)$ via an elementary G-biliaison of height $1$ on $H_t(j,k-1)$. Hence by Proposition \[liaisonMultiplicity\] and Proposition \[htjk\] $$\label{lh} e(L_t(j,k))=e(L_t(j+1,k-1))+e(L_{t-1}(j))e(L_t(k-1)).$$ By induction hypothesis $e(L_t(j+1,k-1))=$ $$e(L_{t-1}(j+k))+te(L_{t-1}(j+k-1))+ \sum_{l=1}^{k-2}e(L_{t-1}(j+k-1-l))e(L_t(l))$$ and the thesis follows. Explicit formulas for $e(L_t(1))$ and $e(L_t(2))$ were given in (\[gorcod3\]) and in Proposition \[f\_2(t)\]. Since $L_1(k)$ is generated by indeterminates, $e(L_1(k))=1$ for any $k$.
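The closed formulas above are easy to cross-check numerically. The following Python sketch (our own verification script, not part of the paper) implements Krattenthaler's product formula (\[krattenthaler\]) together with the closed formulas (\[gorcod3\]) for $e(M_t)$, Proposition \[f\_2(t)\] for $e(L_t(2))$, and Proposition \[e(t)\] for $e(SM_t)$, and checks them against each other for small $t$.

```python
from fractions import Fraction

def e_pf(t, n):
    """Krattenthaler's formula for e(I_t^n), the multiplicity of the
    ideal of 2t-pfaffians of an n x n skew-symmetric matrix."""
    e = Fraction(1)
    for i in range(1, n - 2 * t + 2):
        for j in range(i, n - 2 * t + 2):
            e *= Fraction(2 * (t - 1) + i + j, i + j)
    return e

def e_M(t):    # e(M_t) = 1 + 2^2 + ... + t^2
    return sum(s * s for s in range(1, t + 1))

def e_L2(t):   # e(L_t(2)), Proposition [f_2(t)]
    return 1 + sum(2 * s * e_M(s) - s**3 for s in range(2, t + 1))

def e_SM(t):   # e(SM_t), Proposition [e(t)]
    return t + sum(2 * s * e_M(s) - s**3
                   for r in range(2, t + 1) for s in range(2, r + 1))

for t in range(1, 8):
    assert e_pf(t, 2 * t + 1) == e_M(t)    # M_t:  2t-pfaffians, size 2t+1
    assert e_pf(t, 2 * t + 2) == e_SM(t)   # SM_t: 2t-pfaffians, size 2t+2
    if t >= 2:
        # the biliaison recursion from the proof of Proposition [e(t)]
        assert e_SM(t) == e_SM(t - 1) + e_L2(t)
```

Each check passes for $1\le t\le 7$; for instance $e(M_2)=5$, $e(L_2(2))=13$ and $e(SM_2)=14$.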
The following formula allows us to calculate $e(L_t(k))$ recursively, for $t\geq 2$ and $k\ge 3$. \[f\_n(t)\] For $t,k\geq 2$ we have $e(L_t(k))=$ $$e(L_{t-1}(k))+t[e(L_t(k-1))+e(L_{t-1}(k-1))]+ \sum_{l=1}^{k-2}e(L_{t-1}(k-1-l))e(L_t(l)).$$ By Theorem \[step\], $L_t(k)$ is obtained from $L_t(1,k-1)$ via an elementary G-biliaison of height $1$ on $L_t(k-1)+(f)$, where $f$ is a $2t$-pfaffian which does not belong to $L_t(k-1)$. Hence by Proposition \[liaisonMultiplicity\] and Proposition \[ltjk\] $$e(L_t(k))=e(L_t(1,k-1))+te(L_t(k-1))=$$ $$e(L_{t-1}(k))+t[e(L_t(k-1))+e(L_{t-1}(k-1))]+\sum_{l=1}^{k-2}e(L_{t-1}(k-1-l))e(L_t(l)).$$ 1. Proposition \[f\_n(t)\] allows us to compute the multiplicity of the ideals $L_t(k)$ for any values of $t$ and $k$. This can in fact be done recursively, using as a starting point that $e(L_1(k))=1$ for any $k$, and the explicit formulas for the multiplicities of $L_t(1)=M_t$ and $L_t(2)$ which appear in (\[gorcod3\]) and in Proposition \[f\_2(t)\], respectively. 2. Proposition \[ltjk\] allows us to compute the multiplicity of the ideals $L_t(j,k)$ for any values of $t,j,k$. One can in fact use Proposition \[f\_n(t)\] to compute the multiplicities of $L_t(1),\ldots,L_t(k-1)$ and $L_{t-1}(j),\ldots,L_{t-1}(j+k)$. 3. Since $L_t(k)=L_t(0,k)$, the multiplicity of $L_t(j,k)$ for $j=0$ is computed in Proposition \[f\_n(t)\]. In fact, the formula obtained in Proposition \[f\_n(t)\] corresponds to the formula computed in Proposition \[ltjk\] for $j=0$, taken “cum grano salis”. 4. The formula given in Proposition \[ltjk\] is false for $k=0$. Finally, we express the multiplicity of $SN_t$ in terms of the multiplicities of $SM_t$ and $L_t(1,2)$. The latter two can be computed by Proposition \[e(t)\] and Proposition \[ltjk\]. \[snt\] For $t\geq 1$ we have $$e(SN_t)=\sum_{s=2}^t e(L_s(1,2))+\sum_{s=2}^{t-1} s\,e(SM_s)+1.$$ We proceed by induction on $t$. If $t=1$, then $SN_1$ is generated by indeterminates and $e(SN_1)=1$.
Let ${\mathcal{Y}}$ denote the ladder with upper corners $(1,2t-1)$ and $(2,2t+1)$. Then $I_{2(t-1)}(Y)$ is the ideal generated by the $2(t-1)$-pfaffians of ${\mathcal{Y}}$. By Theorem \[step\], $SN_t$ is obtained from $I_{2(t-1)}(Y)$ via an elementary G-biliaison of height $1$ on $L_t(1,2)$. In turn, $I_{2(t-1)}(Y)$ is obtained from $SN_{t-1}$ via an elementary G-biliaison of height $1$ on $SM_{t-1}+(f)$, where $f$ is a $2(t-1)$-pfaffian which does not belong to $SM_{t-1}$. Therefore, by Proposition \[liaisonMultiplicity\] $$e(SN_t)=e(L_t(1,2))+(t-1)e(SM_{t-1})+e(SN_{t-1})$$ and the thesis follows by induction hypothesis. From the proof of Proposition \[snt\] it also follows that $$e(I_{2(t-1)}(Y))=e(SN_t)-e(L_t(1,2))=\sum_{s=2}^{t-1}[e(L_s(1,2))+s\,e(SM_s)]+1.$$ Castelnuovo-Mumford regularity ============================== In this section we use biliaison to compute the Castelnuovo-Mumford regularity of some of the ideals considered in the previous section. For an ideal $I$ of $R=K[X]$, we denote by $\beta_{i,j}(I)$ the $(i,j)$-th graded Betti number of $I$, regarded as an $R$-module. The Castelnuovo-Mumford regularity of a Cohen-Macaulay ideal $I$ of height $h=\hgt(I)$ is $$\reg(I)=\max\{ j\mid\beta_{h-1,j}(I)\neq 0\}-h+1.$$ It is well known that $\reg(M_t)=2t-1$. The following result allows us to recursively compute the Castelnuovo-Mumford regularity of ideals obtained one from the other by biliaison. \[liaisonRegularity\] Let $H,I,J\subset R$ be homogeneous, Cohen-Macaulay ideals. Assume that $I$ is obtained from $J$ via an elementary G-biliaison of height $\ell\in{{\mathbb Z}}$ on $H$. If $\reg(J)<\reg(H)$, then $$\reg(I)=\reg(H)+\ell-1.$$ Since $I$ is obtained from $J$ via an elementary G-biliaison of height $\ell\in{{\mathbb Z}}$ on $H$, there are homogeneous polynomials $f,g$ with $\deg(f)+\ell=\deg(g)=:t$ such that $$fI+H=gJ+H\subset R.$$ Let $h=\hgt I=\hgt J=\hgt H+1$.
Applying the Mapping Cone construction to the short exact sequence $$0{\longrightarrow}H[-t]{\longrightarrow}H\oplus J[-t] {\longrightarrow}gJ+H{\longrightarrow}0$$ we have that $$\reg(gJ+H)=\max\{j\mid \beta_{h-1,j}(gJ+H)\neq 0\}-h+1=$$ $$\max\{\reg(H)+h-2,\reg(J)+h-1\}+t-h+1=\reg(H)+t-1.$$ The last equality follows from the assumption that $\reg(J)<\reg(H)$. The previous equality follows from the observation that, since $J$ and $H$ are Cohen-Macaulay ideals, $$\max\{j\mid \beta_{h-2,j}(H)\neq 0\}=\reg(H)+h-2\geq$$ $$\reg(J)+h-1>\max\{j\mid \beta_{h-2,j}(J)\neq 0\}$$ therefore no cancellation involving a direct summand $R[-\reg(H)-h+2]$ can take place in the free resolution of $gJ+H$. In an analogous fashion, we can produce a free resolution for $gJ+H=fI+H$ by applying the Mapping Cone construction to the short exact sequence $$0{\longrightarrow}H[-t+\ell]{\longrightarrow}H\oplus I[-t+\ell] {\longrightarrow}fI+H{\longrightarrow}0.$$ Since $$\max\{j\mid \beta_{h-1,j}(fI+H)\neq 0\}=\reg(H)+t+h-2>$$ $$\reg(H)+h-2+t-\ell=\max\{j\mid \beta_{h-2,j}(H[-t+\ell])\neq 0\},$$ we must have $$\reg(H)+t+h-2=\max\{j\mid \beta_{h-1,j}(I[-t+\ell])\neq 0\}= \reg(I)+h-1+t-\ell,$$ hence $$\reg(I)=\reg(H)+\ell-1.$$ We now derive formulas for the Castelnuovo-Mumford regularity of some pfaffian ideals of ladders. They are all easy consequences of Theorem \[liaisonRegularity\]. \[lt2\_reg\] For $t\geq 1$ we have $$\reg(L_t(2))=3t-2$$ and for $t\geq 2$ $$\reg(N_t)=3t-4.$$ We compute the regularity of $L_{t-1}(2)$ and $N_t$ for $t\geq 2$. We proceed by induction on $t\geq 2$. If $t=2$, $L_1(2)$ is generated by indeterminates, hence $\reg(L_1(2))=1.$ By Theorem \[step\], $N_2$ is obtained from $L_1(2)$ via an ascending G-biliaison of height $1$ on $M_1+(p)$, where $p$ is a $4$-pfaffian which is regular modulo $M_1$.
Since $\reg(L_1(2))=1<2=\reg(M_1+(p))$, by Theorem \[liaisonRegularity\] we have $$\reg(N_2)=2.$$ We now assume by induction hypothesis that $\reg(L_{t-2}(2))=3t-8$ and $\reg(N_{t-1})=3t-7$, and compute the regularity of $L_{t-1}(2)$ and $N_t$. By Theorem \[step\], the ideal $L_{t-1}(2)$ is obtained from $N_{t-1}$ via an elementary G-biliaison of height $1$ on $M_{t-1}+(f)$, where $f$ is a $2(t-1)$-pfaffian which is regular modulo $M_{t-1}$. Since $\reg(N_{t-1})=3t-7<3t-5=\reg(M_{t-1}+(f))$, by Theorem \[liaisonRegularity\] $$\reg(L_{t-1}(2))=3t-5.$$ By Theorem \[step\], the ideal $N_t$ is obtained from $L_{t-1}(2)$ via an elementary G-biliaison of height $1$ on $M_{t-1}+(g)$, where $g$ is a $2t$-pfaffian which is regular modulo $M_{t-1}$. Since $\reg(L_{t-1}(2))=3t-5<3t-4=\reg(M_{t-1}+(g))$, by Theorem \[liaisonRegularity\] we have $$\reg(N_t)=3t-4.$$ For $t\geq 1$ we have $$\reg(SM_t)=3t-2.$$ We proceed by induction on $t\geq 1$. If $t=1$, $SM_1$ is generated by indeterminates, hence $\reg(SM_1)=1.$ By Theorem \[step\] the ideal $SM_t$ is obtained from $SM_{t-1}$ via an elementary G-biliaison of height $1$ on $L_t(2)$. By induction hypothesis and Proposition \[lt2\_reg\] $$\reg(SM_{t-1})=3t-5<3t-2=\reg(L_t(2)).$$ Therefore, by Theorem \[liaisonRegularity\] $$\reg(SM_t)=\reg(L_t(2))=3t-2.$$ The Gorenstein height $3$ case ============================== The ideal $M_t$ generated by the $2t$-pfaffians of a generic skew-symmetric matrix of size $2t+1$ is a Gorenstein ideal of height $3$. A classical result due to Buchsbaum and Eisenbud [@BE] states that any Gorenstein ideal of height $3$ is obtained by specialization from $M_t$, for some $t$. An alternative proof for many classically known results on Gorenstein ideals of height $3$ can therefore be given by combining specialization with a liaison approach analogous to what we have done in the previous sections. In this section we wish to give a taste of what can be obtained following such an approach.
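Before specializing to the Gorenstein case, we note that the regularity computations of the previous section reduce to simple arithmetic, which can be checked mechanically. The following Python sketch (our own check, not part of the paper) encodes the two facts used in the proofs: an elementary G-biliaison of height $1$ preserves regularity when $\reg(J)<\reg(H)$ (Theorem \[liaisonRegularity\] with $\ell=1$), and adding a form $f$ of degree $d$ which is regular modulo a Cohen-Macaulay ideal $I$ gives $\reg(I+(f))=\reg(I)+d-1$; recall that a $2t$-pfaffian has degree $t$.

```python
def reg_M(t):
    # classical: reg(M_t) = 2t - 1
    return 2 * t - 1

def reg_section(reg_I, d):
    # reg(I + (f)) = reg(I) + deg(f) - 1 for f regular modulo I
    return reg_I + d - 1

reg_L2 = {1: 1}    # L_1(2) is generated by indeterminates
reg_N = {2: 2}     # computed in the proof of Proposition [lt2_reg]
reg_SM = {1: 1}    # SM_1 is generated by indeterminates
for t in range(2, 11):
    # L_t(2) is obtained from N_t on M_t + (f), with f a 2t-pfaffian
    # of degree t; a height-1 G-biliaison preserves the regularity:
    reg_L2[t] = reg_section(reg_M(t), t)
    if t >= 3:
        # N_t is obtained from L_{t-1}(2) on M_{t-1} + (g), deg(g) = t:
        reg_N[t] = reg_section(reg_M(t - 1), t)
    # SM_t is obtained from SM_{t-1} on L_t(2):
    reg_SM[t] = reg_L2[t]

for t in range(2, 11):
    assert reg_L2[t] == 3 * t - 2    # Proposition [lt2_reg]
    assert reg_N[t] == 3 * t - 4
    assert reg_SM[t] == 3 * t - 2
```

The assertions pass for $2\le t\le 10$, reproducing $\reg(L_t(2))=\reg(SM_t)=3t-2$ and $\reg(N_t)=3t-4$.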
In particular, we use G-biliaison to compute the graded Betti numbers of the ideal $M_t$ and to prove that its $h$-vector is of decreasing type. We start by recalling some definitions and fixing the notation. Let $I$ be a homogeneous ideal of $R=K[X]$. The [*Hilbert function*]{} of $R/I$ is defined as $$\HF_I(m)=\dim_K(R/I)_m$$ for every integer $m$. Clearly $\HF_I(m)=0$ for $m<0$. The formal power series $$\HS_I(z)=\sum_{m\in{{\mathbb Z}}}\HF_I(m)z^m$$ is called the [*Hilbert series*]{} of $R/I$. It is well known that the Hilbert series of $R/I$ is of the form $$\HS_I(z)=\frac {h_I(0)+h_I(1)z+\ldots+h_I(s) z^s}{(1-z)^d},$$ where $d=\dim (R/I)$ and $h_I(i)\in{{\mathbb Z}}$ for every $i$. The vector $$h_I=(h_I(0),\dots,h_I(s))\in {{\mathbb Z}}^{s+1}$$ is called the [*$h$-vector*]{} of $I$. Moreover, we denote by $\Delta H_I$ the [*first difference*]{} of the Hilbert function $H_I=\HF_I$, that is $$\Delta H_I(m)= H_I(m)-H_I(m-1).$$ Let $h=(h_0,h_1,\dots,h_s)\in {{\mathbb Z}}^{s+1}$. [a)]{} $h$ is [*unimodal*]{} if there exists $t\in\{1,\dots, s\}$ such that $h_0\le h_1 \le \dots \le h_t\ge h_{t+1}\ge \dots \ge h_s$. [b)]{} $h$ is of [*decreasing type*]{} if whenever $h_t> h_{t+1},$ then $h_j>h_{j+1}$ for every $j>t$. Notice that every $h$-vector of decreasing type is unimodal. The $h$-vector of $M_t$ is of decreasing type. Let $X$ be a $(2t+1)\times (2t+1)$ skew-symmetric matrix of indeterminates and let $R=K[X]$ be the corresponding polynomial ring. Denote by $h_{(t,t)}(m)$ the $m$-th entry of the $h$-vector of a complete intersection generated by two forms of degree $t$. We follow the notation of Section 1 and consider the ideals $M_t$, $M_{t-1}$ and $L_t^{2t+1}=:I$. It is clear that $I$ is generated by two $2t$-pfaffians which form a complete intersection. By Theorem \[step\], $M_{t-1}$ is obtained from $M_t$ via an elementary G-biliaison of height $1$ on $I$.
In other words, there are homogeneous polynomials $f,g$ of degrees $t-1$ and $t$, respectively, such that $$fM_t+I=gM_{t-1}+I\subset R.$$ By the additivity of the Hilbert function on the two short exact sequences $$0{\longrightarrow}I[-t+1]{\longrightarrow}I\oplus M_t[-t+1] {\longrightarrow}fM_t+I{\longrightarrow}0$$ $$0{\longrightarrow}I[-t]{\longrightarrow}I\oplus M_{t-1}[-t] {\longrightarrow}gM_{t-1}+I {\longrightarrow}0$$ one obtains that $H_{M_t}(d-t+1)-H_I(d-t+1)=H_{M_{t-1}}(d-t)-H_I(d-t)$ for any $d\in{{\mathbb Z}}$. By setting $m=d-t+1$, we get $$H_{M_t}(m)=H_{M_{t-1}}(m-1)+\Delta H_I(m).$$ Since $\dim R/I-1=\dim R/M_{t-1}=\dim R/M_t$, one has $$h_{M_t}(m)=h_{M_{t-1}}(m-1)+h_{(t,t)}(m).$$ Solving the recursion, one obtains $$h_{M_t}(m)=h_{M_1}(m-t+1)+\sum_{j=2}^{t}h_{(j,j)}(m-t+j).$$ This proves that the $h$-vector of $M_t$ is obtained by summing the $h$-vectors of suitable complete intersections. Notice that the $h$-vectors involved in the summation are shifted in such a way that the maximum is always attained at the same point. Therefore, their sum $h_{M_t}$ is of decreasing type. We can easily compute the graded Betti numbers of $M_t$ as follows. A minimal free resolution of $M_t$ has the form $$0{\longrightarrow}R[-2t-1]{\longrightarrow}R[-t-1]^{2t+1}{\longrightarrow}R[-t]^{2t+1}{\longrightarrow}M_t{\longrightarrow}0.$$ We prove the statement by induction on $t\geq 1$. If $t=1$, the ideal $M_1$ is generated by three distinct indeterminates, hence a minimal free resolution has the form $$0{\longrightarrow}R[-3]{\longrightarrow}R[-2]^3{\longrightarrow}R[-1]^3{\longrightarrow}M_1{\longrightarrow}0.$$ Assume now that $t\geq 2$ and consider the ideals $M_t$, $M_{t-1}$ and $L_t^{2t+1}$. We denote $L_t^{2t+1}$ by $I$ for brevity. It is clear that $I$ is generated by two $2t$-pfaffians which form a complete intersection. By Theorem \[step\], $M_{t-1}$ is obtained from $M_t$ via an elementary G-biliaison of height $1$ on $I$.
Moreover, there are homogeneous polynomials $f,g$ of degrees $t-1$ and $t$, respectively, such that $$fM_t+I=gM_{t-1}+I\subset R.$$ By induction hypothesis $M_{t-1}$ has a minimal free resolution of the form $$0{\longrightarrow}R[-2t+1]{\longrightarrow}R[-t]^{2t-1}{\longrightarrow}R[-t+1]^{2t-1}{\longrightarrow}M_{t-1}{\longrightarrow}0.$$ Let $$0{\longrightarrow}{{\mathbb F}}_3{\longrightarrow}{{\mathbb F}}_2{\longrightarrow}{{\mathbb F}}_1{\longrightarrow}M_t{\longrightarrow}0$$ be a minimal free resolution of $M_t$. Applying the Mapping Cone construction to the two short exact sequences $$0{\longrightarrow}I[-t+1]{\longrightarrow}I\oplus M_t[-t+1] {\longrightarrow}fM_t+I{\longrightarrow}0$$ $$0{\longrightarrow}I[-t]{\longrightarrow}I\oplus M_{t-1}[-t] {\longrightarrow}gM_{t-1}+I {\longrightarrow}0$$ one obtains free resolutions for the ideal $J=fM_t+I=gM_{t-1}+I$ of the form $$\begin{array}{ccccc} & R[-3t+1] & & R[-t]^2 & \\ 0{\longrightarrow}& \oplus & {\longrightarrow}R[-2t]^{2t+2}{\longrightarrow}& \oplus & {\longrightarrow}J{\longrightarrow}0 \\ & R[-3t] & & R[-2t+1]^{2t-1} & \end{array}$$ and $$\begin{array}{ccccccc} & R[-3t+1] & & R[-2t]\oplus R[-2t+1]^2 & & R[-t]^2 & \\ 0{\rightarrow}& \oplus & {\rightarrow}& \oplus & {\rightarrow}& \oplus & {\rightarrow}J{\rightarrow}0.\\ & {{\mathbb F}}_3[-t+1] & & {{\mathbb F}}_2[-t+1] & & {{\mathbb F}}_1[-t+1] & \end{array}$$ The first free resolution must be minimal, since no two consecutive modules have a shift in common; hence $$\label{mfr}{{\mathbb F}}_3\supseteq R[-2t-1], {{\mathbb F}}_2\supseteq R[-t-1]^{2t+1}, \mbox{ and } {{\mathbb F}}_1\supseteq R[-t]^{2t+1}.$$ Since no cancellation is possible among ${{\mathbb F}}_1[-t+1],{{\mathbb F}}_2[-t+1]$ and ${{\mathbb F}}_3[-t+1]$ in the second free resolution of $J$, we deduce that all the containments in (\[mfr\]) must be equalities. L. Avramov. “A class of factorial domains”, [*Serdica*]{} [**5**]{} (1979), 378–379. D. Buchsbaum, D. Eisenbud.
“Algebra structures for finite free resolutions, and some structure theorems for ideals of codimension $3$”, [*Amer. J. Math.*]{} [**99**]{} (1977), no. 3, 447–485. N. Budur, M. Casanellas, E. Gorla. “Hilbert functions of irreducible arithmetically Gorenstein schemes”, [*J. Algebra*]{} [**272**]{} (2004), no. 1, 292–310. A. Conca. “Gröbner bases of ideals of minors of a symmetric matrix”, [*J. Algebra*]{} [**166**]{} (1994), no. 2, 406–421. A. Corso, U. Nagel. “Monomial and toric ideals associated to Ferrers graphs”, [*Trans. Amer. Math. Soc.*]{} [**361**]{} (2009), no. 3, 1371–1395. E. De Negri. “Pfaffian ideals of ladders”, [*J. Pure Appl. Alg.*]{} [**125**]{} (1998), 141–153. E. De Negri. “Some results on Hilbert series and $a$-invariant of Pfaffian ideals”, [*Math. J. Toyama Univ.*]{} [**24**]{} (2001), 93–106. E. De Negri, E. Gorla. “G-Biliaison of ladder Pfaffian varieties”, [*J. Algebra*]{} [**321**]{} (2009), no. 9, 2637–2649. C. Krattenthaler. “The major counting of nonintersecting lattice paths and generating functions for tableaux”, [*Mem. Amer. Math. Soc.*]{} [**115**]{} (1995). J. Herzog, N. V. Trung. “Gröbner bases and multiplicity of determinantal and Pfaffian ideals”, [*Adv. Math.*]{} [**96**]{} (1992), 1–37. J. Herzog, N. V. Trung, G. Valla. “On hyperplane sections of reduced irreducible varieties of low codimension”, [*J. Math. Kyoto Univ.*]{} [**34**]{} (1994), no. 1, 47–72. H. Kleppe, D. Laksov. “The algebraic structure and deformation of Pfaffian schemes”, [*J. Algebra*]{} [**64**]{} (1980), 167–189.
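The recursion $h_{M_t}(m)=h_{M_1}(m-t+1)+\sum_{j=2}^{t}h_{(j,j)}(m-t+j)$ used above for the $h$-vector of $M_t$ is easy to check numerically. The following is a minimal Python sketch (the helper names are ours, not from the paper): it builds $h_{M_t}$ by summing shifted complete-intersection $h$-vectors and verifies the decreasing-type property; for $t=2$ it recovers the $h$-vector $(1,3,1)$ of the cone over the Grassmannian $G(2,5)$.

```python
import numpy as np

def ci_h_vector(t):
    # h-vector of a complete intersection of two forms of degree t:
    # coefficients of ((1 - z^t)/(1 - z))^2 = (1 + z + ... + z^{t-1})^2
    block = np.ones(t, dtype=int)
    return np.convolve(block, block)

def pfaffian_h_vector(t):
    # h_{M_t}(m) = h_{M_1}(m-t+1) + sum_{j=2}^t h_{(j,j)}(m-t+j),
    # where h_{M_1} = (1) since M_1 is generated by 3 indeterminates
    h = np.zeros(2 * t - 1, dtype=int)  # degrees 0, ..., 2t-2
    h[t - 1] += 1                       # the shifted h_{M_1}
    for j in range(2, t + 1):
        cij = ci_h_vector(j)            # degrees 0, ..., 2j-2
        h[t - j:t - j + len(cij)] += cij
    return h

def is_decreasing_type(h):
    # once the vector starts to strictly decrease, it must keep doing so
    decreasing = False
    for a, b in zip(h, h[1:]):
        if decreasing and a <= b:
            return False
        decreasing = decreasing or a > b
    return True

print(pfaffian_h_vector(2))  # [1 3 1]
print(all(is_decreasing_type(pfaffian_h_vector(t)) for t in range(1, 8)))
```

Because every summand in the recursion peaks at degree $t-1$, the sum is automatically of decreasing type, which is exactly what the proof exploits.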
--- abstract: 'A multi–state life insurance model is naturally described in terms of the intensity matrix of an underlying (time–inhomogeneous) Markov process which describes the dynamics of the states of an insured person. Between and at transitions, benefits and premiums are paid, defining a payment process, and the technical reserve is defined as the present value of all future payments of the contract. Classical methods for finding the reserve and higher order moments involve the solution of certain differential equations (Thiele and Hattendorff, respectively). In this paper we present an alternative matrix–oriented approach based on general reward considerations for Markov jump processes. The matrix approach provides a general framework for effortlessly setting up general and even complex multi–state models, where moments of all orders are then expressed explicitly in terms of so–called product integrals (matrix–exponentials) of certain matrices. As Thiele and Hattendorff type theorems can be retrieved immediately from the matrix formulae, this method also provides a quick and transparent approach to proving these classical results. Methods for obtaining distributions and related properties of interest (e.g. quantiles or survival functions) of the future payments are presented from both a theoretical and practical point of view (via Laplace transforms and methods involving orthogonal polynomials).' author: - | Mogens Bladt$^1$, Søren Asmussen$^2$, and Mogens Steffensen$^{3}$,\ 1. University of Copenhagen, Department of Mathematical Sciences; bladt@math.ku.dk\ 2. Aarhus University, Department of Mathematics; asmus@math.au.dk\ 3.
University of Copenhagen, Department of Mathematical Sciences; mogens@math.ku.dk bibliography: - 'PHBib.bib' title: 'Matrix calculations for inhomogeneous Markov reward processes, with applications to life insurance and point processes' --- Introduction ============ In this paper we consider the distribution and moments of the total reward generated by a time–inhomogeneous Markov process with a finite state space. Rewards may be earned in three different ways. During sojourns in a fixed state, rewards can be earned at a deterministic (time–dependent) rate and as deterministic (time–dependent) lump sums which arrive according to a non–homogeneous Poisson process. Finally, at the times of transition, deterministic (time–dependent) lump sums, which also depend on the type of transition, may be earned with probability one or only with some probability. We are particularly interested in the case of discounted rewards which have applications in life insurance. Here the rewards (premiums and benefits) are discounted by a deterministic (though time–dependent) interest rate. This setting is slightly more general than the standard life insurance set–up, and is inspired by the parametrisation of the general Markovian Arrival Process (MAP). In this way we also achieve the calculation of rewards and moments in the MAP and its time–inhomogeneous extension. Our method for deriving distributions and moments uses probabilistic (sample path) arguments and matrix algebra. In particular, the matrices of interest are readily derived from the intensity matrix of the underlying Markov process and a matrix of payments. This is true for both pure and discounted rewards, where in the latter case the interest rate may be accommodated conveniently into the intensity matrix. The Laplace transform for the total (discounted) reward is obtained as a product integral (which is a matrix exponential in the time–homogeneous case) involving these matrices.
Concerning the moments, all moments up to order $k$ are obtained by a product integral of a $(k+1)\times (k+1)$ block matrix built from the aforementioned matrices. The product integrals may be evaluated in a number of ways. Generally, the product integral satisfies a system of linear differential equations of the Kolmogorov type from which we retrieve both Thiele’s differential equation and Hattendorff type of theorems. If the intensities and payments are piecewise constant, which may often be the case in practical implementations, the product integral reduces to a product of matrix–exponentials which may be evaluated numerically by efficient methods such as uniformisation. While higher order moments are rarely used in life insurance, the fact that our approach makes moments of all orders accessible and numerically computable up to quite high orders suggests that they can also be used for approximating the cumulative distribution function (c.d.f.) of the distribution, and thereby for the calculation of quantiles (values at risk or confidence intervals) which could provide valuable information concerning the actual risk. We provide a first example along this line by proposing a Gram-Charlier expansion to approximate both the density (p.d.f.) and the c.d.f. of the discounted future payment distribution when this is absolutely continuous. While this requires at least one continuous payment stream (e.g. a premium) to be present, we will see that the shape of the distribution can be challenging, particularly in the case of the p.d.f. The idea of using multi-state (time inhomogeneous) Markov processes as a model in life insurance dates back at least to the 1960’s and was put into a modern context by [@Hoem1969], in which Thiele’s differential equations for the state–wise reserves are also derived. Variance formulas for the future payments can be found in e.g. [@Ramlau1988], whereas for higher order moments we refer to [@NorbergRagnar1994Defm].
A differential equation (Thiele) approach to calculating the c.d.f. of the discounted future payments has been considered in [@HESSELAGER1996]. If one considers only unit rewards on all jumps and Poisson arrivals (no continuous rewards) in a time–homogeneous Markov jump process, then the total undiscounted reward up to time $t$ defines a point process which is known as a Markovian Arrival Process (MAP). The moments in the MAP satisfy certain integral equations (which are equivalent to the differential equations from life insurance), as shown in [@bo-Uffe-2007]. Apart from defining a tractable class of point processes with numerous applications in applied probability, the MAPs form a simple dense class of point processes on the positive reals (see [@Asmussen:1993tn]). The specific contributions of the paper are as follows. The Laplace transform of the total rewards (Theorem \[Th:4.5a\]) generalises a similar result of [@bla:02] for time–homogeneous Markov processes. The matrix representation of moments depends on a crucial result given in Lemma \[lemma:van-loan\], which generalises a similar result proved in [@VanLoan:1978tq] for the case of constant matrices. The reserves and moments we deal with in the analysis are the so–called partial reserves and moments, which are defined as the expected value of (powers of) the future (discounted) payments contingent on the terminal state. These reserves and moments may well be of interest on their own. The matrix representation of all moments provides a unifying approach to the explicit solution of Thiele’s and Hattendorff type of theorems. Working with solutions rather than the corresponding differential equations may greatly simplify subsequent analysis and applications. The rest of the paper is organised as follows. In Section \[sec:background\] we review the basic properties of the product integral, which will play an important role in the paper.
The basic model and notation is set up in Section \[sec:model\] and in Section \[sec:Thiele\] we use a probabilistic argument to prove a slightly extended version of Thiele’s differential equation. An important technical result regarding the calculation of certain ordinary integrals via product integrals is proved in Section \[sec:matrix-reserves\]. The main construction takes place in Sections \[sec:laplace-transform\] and \[sec:moments\] where we derive explicit matrix representations for the Laplace transform and higher order moments of the discounted future payments (total reward). A slightly extended version of Hattendorff’s theorem is derived as a consequence at the end of Section \[sec:moments\]. As an example, we also calculate the higher order moments and factorial moments in a (time–homogeneous) Markovian Arrival Process. Since moments up to high orders are easily calculated, in Section 9 we explore the possibility of calculating the p.d.f. and c.d.f. for the total discounted future payments by means of orthogonal polynomial expansions based on central moments. In Section 9 we provide a numerical example, and in Section 10 we conclude the paper. Some relevant background {#sec:background} ======================== Let ${\boldsymbol{\bm A}}(x)=\{ a_{ij}(x) \}_{i,j=1,...,p}$ be a $p\times p$ matrix function. The product integral of ${\boldsymbol{\bm A}}(x)$, written as $${\boldsymbol{\bm F}}(s,t) = \prod_s^t ({\boldsymbol{\bm I}}+ {\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) ,$$ where ${\boldsymbol{\bm I}}$ denotes the identity matrix, may be defined in a number of equivalent ways. It is e.g.
the solution to the Kolmogorov backward differential equation $$\frac{\partial}{\partial s}{\boldsymbol{\bm F}}(s,t)=-{\boldsymbol{\bm A}}(s){\boldsymbol{\bm F}}(s,t), \ \ \ {\boldsymbol{\bm F}}(t,t)={\boldsymbol{\bm I}},$$ which by integration and repeated substitutions also yields the Peano–Baker series representation $$\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x)={\boldsymbol{\bm I}} + \sum_{n=1}^\infty \int_s^t\int_s^{x_n}\cdots \int_s^{x_{2}}{\boldsymbol{\bm A}}(x_1){\boldsymbol{\bm A}}(x_2)\cdots {\boldsymbol{\bm A}}(x_n)\,{\mathrm{d}}x_1\,{\mathrm{d}}x_2\cdots \,{\mathrm{d}}x_n . \label{eq:Peano-Baker}$$ The product integral exists if ${\boldsymbol{\bm A}}(x)$ is Riemann integrable, which will henceforth be assumed. From the Peano–Baker representation one may prove that $$\begin{aligned} \prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x)&=&\prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(u)\,{\mathrm{d}}u)\prod_x^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(u)\,{\mathrm{d}}u) \label{eq:product-rule}\end{aligned}$$ holds true for any order of $s,t$ and $x$ (not only $s\leq x\leq t$). In particular, the inverse of a product integral then exists and is given by $$\left( \prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x)\right)^{-1} =\prod_t^s ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) . \label{eq:inverse-prod-int}$$ If the matrices ${\boldsymbol{\bm A}}(x)$ all commute, then $$\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) =\exp \left( \int_s^t {\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x \right) .\label{eq:prod-int-commute}$$ In particular, if ${\boldsymbol{\bm A}}(x)={\boldsymbol{\bm A}}$ for all $x$, then $$\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) =\exp \left( {\boldsymbol{\bm A}}(t-s) \right) .
\label{eq:prod-int-constant}$$ This last observation may be useful in connection with piecewise constant matrices ${\boldsymbol{\bm A}}(x)$, $${\boldsymbol{\bm A}}(x) = {\boldsymbol{\bm A}}_i,\ \ \ x_{i-1}\leq x < x_i$$ for $i=1,2,...$ and where $x_0=0$. Then, using (\[eq:prod-int-constant\]), $$\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) = \exp \left( {\boldsymbol{\bm A}}_i(x_i-s) \right) \left( \prod_{k=i+1}^{j-1} \exp \left( {\boldsymbol{\bm A}}_k(x_k-x_{k-1}) \right) \right) \exp \left( {\boldsymbol{\bm A}}_{j}(t-x_{j-1}) \right) , \label{eq:piecewise-constant}$$ where $i$ and $j$ are such that $s\in [x_{i-1},x_{i}]$ and $t\in [x_{j-1},x_{j}]$. Matrix–exponentials can be calculated in numerous ways (see [@Moler:1978vp] and [@Loan:2003un]) and are typically available in standard software packages, though at varying levels of sophistication. If the exponent is an intensity or sub–intensity matrix (i.e. row sums are non–positive), then both Runge–Kutta methods and uniformisation (see e.g. [@bladt2017matrix], p. 51) are competitive and among the most efficient. Product integrals may also be used in the construction of time–inhomogeneous Markov processes if the integrand consists of intensity matrices. Indeed, if the ${\boldsymbol{\bm A}}(x)$ are intensity matrices (i.e. off–diagonal elements are non–negative and rows sum to $0$), then their product integrals are transition matrices, and by (\[eq:product-rule\]), the Chapman–Kolmogorov equations are satisfied, which implies the Markov property. For further details we refer to [@johansen-1986]. The general model {#sec:model} ================= Consider a time–inhomogeneous Markov process $\{ Z(t)\}_{t\geq 0}$ with state space $E=\{1,2,...,p\}$ and intensity matrices ${\boldsymbol{\bm M}}(t)=\{ \mu_{ij}(t)\}_{i,j\in E}$. Let ${\boldsymbol{\bm P}}(s,t) = \{ p_{ij}(s,t) \}_{i,j=1,..,p}$ denote the corresponding transition matrix.
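For piecewise constant data, the evaluation of the product integral via (\[eq:piecewise-constant\]) is straightforward to implement. A minimal Python sketch (the helper names are ours; a plain scaling-and-squaring Taylor evaluation of the matrix exponential is used only to keep the example self-contained) computes the product integral on a grid and checks that intensity matrices indeed yield a transition matrix satisfying the Chapman–Kolmogorov equations:

```python
import numpy as np

def expm(A, squarings=20, terms=18):
    # plain scaling-and-squaring Taylor evaluation of exp(A);
    # adequate for small, well-scaled matrices
    B = A / 2.0**squarings
    E, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ B / k
        E = E + T
    for _ in range(squarings):
        E = E @ E
    return E

def prod_int(As, grid):
    # product integral of the piecewise constant matrix function
    # A(x) = As[i] on [grid[i], grid[i+1]); returns F(grid[0], grid[-1])
    F = np.eye(len(As[0]))
    for A, a, b in zip(As, grid, grid[1:]):
        F = F @ expm(A * (b - a))
    return F

# two-state intensity matrices (rows sum to zero)
A1 = np.array([[-1.0, 1.0], [0.5, -0.5]])
A2 = np.array([[-2.0, 2.0], [1.0, -1.0]])

P = prod_int([A1, A2], [0.0, 0.7, 1.5])
print(P.sum(axis=1))  # rows of a transition matrix sum to one

# Chapman-Kolmogorov: F(0, 1.5) = F(0, 0.7) F(0.7, 1.5)
P1 = prod_int([A1], [0.0, 0.7])
P2 = prod_int([A2], [0.7, 1.5])
print(np.allclose(P, P1 @ P2))  # True
```

For production use one would rather rely on a library routine such as `scipy.linalg.expm`, or on uniformisation when the exponent is a (sub-)intensity matrix, as the text notes.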
Assume that $${\boldsymbol{\bm M}}(s) = {\boldsymbol{\bm C}}(s)+{\boldsymbol{\bm D}}(s) ,$$ where ${\boldsymbol{\bm D}}(s)=\{ d_{ij}(s)\}_{i,j\in E}$ denotes a $p\times p$ matrix with $d_{ij}(s)\geq 0$ and ${\boldsymbol{\bm C}}(s)=\{ c_{ij}(s)\}_{i,j\in E}$ is a sub–intensity matrix, i.e. its row sums are non–positive. We define a reward structure on the Markov process in the following way. At jumps from $i$ to $j$, lump sums of $b^{ij}(s)$ are obtained with probability $d_{ij}(s)/(d_{ij}(s)+c_{ij}(s))$. When $Z(s)=i$ there may be two kinds of rewards: a continuous rate of $b^i(s)$, so that $b^i(s)\,{\mathrm{d}}s$ is earned during $[s,s+{\mathrm{d}}s)$, and lump sums of $b^{ii}(s)$ at the events of a Poisson process with rate $d_{ii}(s)$ while in state $i$. The total reward obtained during $[s,t]$ is then given by $$\begin{aligned} R^T(s,t) \ &=\ \sum_i \bar{R}^i(s,t) + \sum_{i, j} \bar{N}^{ij}(s,t) \label{eq:def:total-reward} \\ \intertext{where} \bar{N}^{ij}(s,t)\ &=\ \int_s^t b^{ij}(x)\,{\mathrm{d}}N^{ij}(x) \label{def:Nbar}, \\ \bar{R}^i(s,t)\ &=\ \int_s^t b^i(x)1\{ Z(x)=i\}\,{\mathrm{d}}x \label{def:Rbar}\end{aligned}$$ and where $N^{ij}(x)$ is the counting process which increases by $+1$ upon transition from $i$ to $j$ in $Z(t)$ for $i\neq j$, or a Poisson process with rate $d_{ii}(t)$ if $i=j$. Our principal application is to life insurance, where the states $i\in E$ are the different conditions of an insured individual (e.g. active, unemployed, disabled or dead). Here we are interested in studying the discounted rewards, $U(s,t)$, during a time interval $[s,t]$ defined by $$U(s,t) = \int_s^t {\mathrm{e}}^{-\int_s^u r(x)\,{\mathrm{d}}x} \,{\mathrm{d}}B(u) ,$$ where $r(x)$ is a deterministic (instantaneous) interest rate at time $x$ and $B$ is a payment process $$\begin{aligned} {\mathrm{d}}B (t)&=&b^{Z(t)}(t)\,{\mathrm{d}}t + \sum_{j=1}^p b^{Z(t-)j}(t)\,{\mathrm{d}}N^{Z(t-)j}(t) . \label{def:payment-process}\end{aligned}$$ Here the continuous rates may e.g.
be premiums (negative) or compensations (periodic unemployment payments). Lump sums $b^{ij}(t)$ will be paid out at transitions from $i$ to $j$ at time $t$ with probability $d_{ij}(t)/(d_{ij}(t)+c_{ij}(t))$, while other lump sums of $b^{ii}(t)$ will be paid out while the insured is in condition $i$, at random times that appear according to a time–inhomogeneous Poisson process with rate $d_{ii}(t)$. If $c_{ij}(s)=0$ for all $i\neq j$ and $d_{ii}(s)=0$ for all $i$, then we recover the standard multi–state Markov model in life insurance as in e.g. [@Hoem1969] or [@norberg1991]. We are interested in calculating the moments of $R^T(s,t)$ and more generally of $U(s,t)$. To this end we define the slightly more general quantities $$m_{ij}^{(k)}(s,t)\ =\ {\mathds{E}}\Bigl[1\{ Z(t)=j \} R^T(s,t)^k \,\Big|\, Z(s)=i \Bigr]$$ and $${\boldsymbol{\bm m}}^{(k)}(s,t) = \big\{m_{ij}^{(k)}(s,t) \big\}_{i,j\in E} \label{eq:def-m}$$ and more generally, $$v_{ij}^{(k)}(s,t)\ =\ {\mathds{E}}\Bigl[ 1\{ Z(t)=j\} U(s,t)^k \,\Big|\, Z(s)=i \Bigr] \label{eq:moments-future-payments}$$ and $${\boldsymbol{\bm V}}^{(k)}(s,t)\ =\ \bigl\{ v_{ij}^{(k)}(s,t) \bigr\}_{i,j\in E} \label{eq:moments-future-payments-matrix}$$ for $k\in \mathbb{N}$. Define $$\begin{aligned} {\boldsymbol{\bm B}}(t) &=& \{ b^{ij}(t) \}_{i,j\in E} \\ {\boldsymbol{\bm b}}(t)&=&(b^1(t),...,b^{p}(t))^\prime \\ {\boldsymbol{\bm R}}(t)&=&{\boldsymbol{\bm D}}(t)\bullet {\boldsymbol{\bm B}}(t) + {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(t)) \\ {\boldsymbol{\bm C}}^{(k)}(t)&=&{\boldsymbol{\bm D}}(t)\bullet {\boldsymbol{\bm B}}^{\bullet k}(t),\ \ k\geq 2 \label{def-symbol:C}\end{aligned}$$ where ${\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(t))$ denotes the diagonal matrix with the vector ${\boldsymbol{\bm b}}(t)$ as diagonal, $\bullet$ is the Schur (entrywise) matrix product, i.e.
$\{ a_{ij} \}\bullet \{ b_{ij} \} = \{ a_{ij}b_{ij} \}$ and ${\boldsymbol{\bm B}}^{\bullet k}(t) = {\boldsymbol{\bm B}}(t)\bullet \cdots \bullet {\boldsymbol{\bm B}}(t)$ ($k$ factors). The state–wise prospective reserves are defined as ${\mathds{E}}[U(s,t)|Z(s)=i]$ for all $i\in E$, which are then the elements of the vector ${\boldsymbol{\bm V}}^{(1)}(s,t){\boldsymbol{\bm e}}$, where ${\boldsymbol{\bm e}}$ is the column vector of ones. We shall say that the matrix ${\boldsymbol{\bm V}}^{(1)}(s,t)$ contains the [*partial (state–wise prospective) reserves*]{} and refer to the matrix itself as such. Though the partial reserve may have its own merit, it is introduced primarily for mathematical convenience. Partial reserves and Thiele’s differential equations {#sec:Thiele} ==================================================== We start with an integral representation of the first order moment $ {\boldsymbol{\bm m}}^{(1)}(s,t)$. \[lemma:rewards\] $\displaystyle {\boldsymbol{\bm m}}^{(1)}(s,t) \ =\ \int_s^t {\boldsymbol{\bm P}}(s,u){\boldsymbol{\bm R}}(u){\boldsymbol{\bm P}}(u,t)\,{\mathrm{d}}u $. The probability that there is a jump from $k$ to $\ell$ in $[u,u+{\mathrm{d}}u)$ given that $Z(s)=i$ is $$\begin{aligned} p_{ik}(s,u)\mu_{k\ell}(u)\,{\mathrm{d}}u &=& {\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm P}}(s,u){\boldsymbol{\bm e}}_k \mu_{k\ell}(u)\,{\mathrm{d}}u . \end{aligned}$$ Hence, the probability that there is a jump from $k$ to $\ell$ in $[u,u+{\mathrm{d}}u)$ and that $Z(t)=j$, given that $Z(s)=i$, is $$\begin{aligned} p_{ik}(s,u)\mu_{k\ell}(u)\,{\mathrm{d}}u\, p_{\ell j}(u,t) &=& {\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm P}}(s,u){\boldsymbol{\bm e}}_k \mu_{k\ell}(u)\,{\mathrm{d}}u\, {\boldsymbol{\bm e}}_\ell^\prime {\boldsymbol{\bm P}}(u,t) {\boldsymbol{\bm e}}_j. \end{aligned}$$ This serves as the density for a jump at $u$.
The reward $b^{k\ell}(u)$ is the value to be paid out with probability $$\frac{d_{k\ell}(u)}{d_{k\ell}(u)+c_{k\ell}(u)} = \frac{d_{k\ell}(u)}{\mu_{k\ell}(u)} .$$ Hence the expected reward in $[s,t]$ due to jumps from $k$ to $\ell$ amounts to $$\int_s^t {\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm P}}(s,u){\boldsymbol{\bm e}}_k \mu_{k\ell}(u) \frac{d_{k\ell}(u)}{\mu_{k\ell}(u)} b^{k\ell}(u) {\boldsymbol{\bm e}}_\ell^\prime {\boldsymbol{\bm P}}(u,t) {\boldsymbol{\bm e}}_j \,{\mathrm{d}}u = \int_s^t {\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm P}}(s,u){\boldsymbol{\bm e}}_k d_{k\ell}(u) b^{k\ell}(u) {\boldsymbol{\bm e}}_\ell^\prime {\boldsymbol{\bm P}}(u,t) {\boldsymbol{\bm e}}_j \,{\mathrm{d}}u .$$ This settles the case of the jumps. Concerning the continuous reward rates, consider state $k$. Here its contribution amounts to $$\begin{aligned} \lefteqn{{\mathds{E}}\left[ \int_s^t\left. 1\{ Z(t)=j \} b^k(u)1\{ Z(u)=k \}\,{\mathrm{d}}u \right| Z(s)=i \right]}~~~~~~~~~~~~~~~~~~\\ &=& \int_s^t b^{k}(u) {\mathds{P}}(Z(t)=j, Z(u)=k | Z(s)=i)\,{\mathrm{d}}u \\ &=& \int_s^t b^{k}(u){\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm P}}(s,u){\boldsymbol{\bm e}}_k {\boldsymbol{\bm e}}_k^\prime {\boldsymbol{\bm P}}(u,t){\boldsymbol{\bm e}}_j \,{\mathrm{d}}u . \end{aligned}$$ For the case of lump sums from Poisson arrivals in state $k$, the contribution is $$\int_s^t {\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm P}}(s,u){\boldsymbol{\bm e}}_k d_{kk}(u) b^{kk}(u) {\boldsymbol{\bm e}}_k^\prime {\boldsymbol{\bm P}}(u,t) {\boldsymbol{\bm e}}_j \,{\mathrm{d}}u .$$ The total reward is then obtained by summing over the three different types of reward, which in matrix notation exactly yields the result. Next we consider the partial reserve. For a fixed time horizon (the terminal date) $T$ we define $${\boldsymbol{\bm V}}(t) = {\boldsymbol{\bm V}}^{(1)}(t,T) .
\label{def:partial-reserve-1}$$ From Lemma \[lemma:rewards\] we have that $${\mathds{E}}\bigl[{\mathrm{d}}B(u)1\{ Z(T)=j \} \,\big|\, Z(t)=i\bigr]\ =\ {\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm P}}(t,u){\boldsymbol{\bm R}}(u){\boldsymbol{\bm P}}(u,T){\boldsymbol{\bm e}}_j \,{\mathrm{d}}u .$$ We denote the elements of ${\boldsymbol{\bm V}}(t)$ by $v_{ij}(t)$. Then we have the following Thiele type of differential equation for the partial reserve. \[th:thiele-partial\] ${\boldsymbol{\bm V}}(t)={\boldsymbol{\bm V}}^{(1)}(t,T)$ satisfies $$\frac{\partial}{\partial t} {\boldsymbol{\bm V}}(t) = r(t) {\boldsymbol{\bm V}}(t)-{\boldsymbol{\bm M}}(t) {\boldsymbol{\bm V}}(t) - {\boldsymbol{\bm R}}(t){\boldsymbol{\bm P}}(t,T)$$ with terminal condition ${\boldsymbol{\bm V}}(T)={\boldsymbol{\bm 0}}$. First we see that $$\begin{aligned} v_{ij}(t) &=&\int_t^T {\mathrm{e}}^{-\int_t^u r(x)\,{\mathrm{d}}x} {\mathds{E}}\bigl[1\{ Z(T)=j \} \,{\mathrm{d}}B(u) \,\big|\, Z(t)=i \bigr] \nonumber \\ &=&\int_t^T {\mathrm{e}}^{-\int_t^u r(s)\,{\mathrm{d}}s} {\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm P}}(t,u){\boldsymbol{\bm R}}(u){\boldsymbol{\bm P}}(u,T){\boldsymbol{\bm e}}_j \,{\mathrm{d}}u\nonumber \\ &=&{\boldsymbol{\bm e}}_i^\prime \int_t^T \prod_t^u (1 -r(s)\,{\mathrm{d}}s) \prod_t^u ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s) {\boldsymbol{\bm R}}(u) \prod_u^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s)\,{\mathrm{d}}u\, {\boldsymbol{\bm e}}_j \nonumber\\ &=& {\boldsymbol{\bm e}}_i^\prime \int_t^T \prod_t^u ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(s)-r(s){\boldsymbol{\bm I}})\,{\mathrm{d}}s) {\boldsymbol{\bm R}}(u) \prod_u^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s)\,{\mathrm{d}}u\, {\boldsymbol{\bm e}}_j .
\nonumber\end{aligned}$$ In matrix notation, $${\boldsymbol{\bm V}}(t) = \int_t^T \prod_t^u ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(s)-r(s){\boldsymbol{\bm I}})\,{\mathrm{d}}s) {\boldsymbol{\bm R}}(u) \prod_u^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s)\,{\mathrm{d}}u . \label{eq:extend-reserve-int-rep}$$ Thus $$\begin{aligned} \lefteqn{\frac{\partial}{\partial t} {\boldsymbol{\bm V}}(t)=\frac{\partial}{\partial t} \int_t^T \prod_t^u ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(s)-r(s){\boldsymbol{\bm I}})\,{\mathrm{d}}s) {\boldsymbol{\bm R}}(u) \prod_u^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s)\,{\mathrm{d}}u }~~~~\\ &=& \int_t^T \left[\frac{\partial}{\partial t} \prod_t^u ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(s)-r(s){\boldsymbol{\bm I}})\,{\mathrm{d}}s)\right] {\boldsymbol{\bm R}}(u) \prod_u^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s)\,{\mathrm{d}}u \\ &&- {\boldsymbol{\bm I}}\cdot {\boldsymbol{\bm R}}(t) \prod_t^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s) \\ &=& - ({\boldsymbol{\bm M}}(t)-r(t){\boldsymbol{\bm I}}) \int_t^T \prod_t^u ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(s)-r(s){\boldsymbol{\bm I}})\,{\mathrm{d}}s) {\boldsymbol{\bm R}}(u) \prod_u^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s)\,{\mathrm{d}}u \\ &&- {\boldsymbol{\bm R}}(t) \prod_t^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s) \\ &=&- ({\boldsymbol{\bm M}}(t)-r(t){\boldsymbol{\bm I}}) {\boldsymbol{\bm V}}(t) - {\boldsymbol{\bm R}}(t) \prod_t^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s) \\ &=&- ({\boldsymbol{\bm M}}(t)-r(t){\boldsymbol{\bm I}}) {\boldsymbol{\bm V}}(t) - {\boldsymbol{\bm R}}(t){\boldsymbol{\bm P}}(t,T) ,\end{aligned}$$ i.e.
$$\frac{\partial}{\partial t} {\boldsymbol{\bm V}}(t) =r(t){\boldsymbol{\bm V}}(t)- {\boldsymbol{\bm M}}(t){\boldsymbol{\bm V}}(t) - {\boldsymbol{\bm R}}(t){\boldsymbol{\bm P}}(t,T), \label{eq:ext-thiele}$$ with terminal condition ${\boldsymbol{\bm V}}(T)={\boldsymbol{\bm 0}}$. As an immediate consequence, using that ${\boldsymbol{\bm P}}(t,T){\boldsymbol{\bm e}} ={\boldsymbol{\bm e}}$ since ${\boldsymbol{\bm P}}(t,T)$ is a transition probability matrix, we recover the usual Thiele differential equation. The vector of prospective reserves $ {\boldsymbol{\bm V}}_{th}(t)={\boldsymbol{\bm V}}^{(1)}(t,T){\boldsymbol{\bm e}}$ satisfies $$\begin{aligned} \frac{\partial}{\partial t}{\boldsymbol{\bm V}}_{th}(t) &=&r(t) {\boldsymbol{\bm V}}_{th}(t) - {\boldsymbol{\bm M}}(t) {\boldsymbol{\bm V}}_{th}(t) - {\boldsymbol{\bm R}}(t){\boldsymbol{\bm e}}, \label{eq:thiele}\end{aligned}$$ with terminal condition $ {\boldsymbol{\bm V}}_{th}(T)={\boldsymbol{\bm 0}}$. Matrix representation of the reserve {#sec:matrix-reserves} ==================================== In this section we provide a matrix representation of the reserve. We start with an important general result which extends a result by [@VanLoan:1978tq] from matrix–exponentials to product integrals. Here the matrices ${\boldsymbol{\bm A}}(x)$ and ${\boldsymbol{\bm C}}(x)$ must be square but not necessarily of the same dimension.
\[lemma:van-loan\] For matrix functions ${\boldsymbol{\bm A}}(x)$, ${\boldsymbol{\bm B}}(x)$ and ${\boldsymbol{\bm C}}(x)$, we have that $$\prod_s^t \left( {\boldsymbol{\bm I}} + \begin{pmatrix} {\boldsymbol{\bm A}}(x) & {\boldsymbol{\bm B}}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm C}}(x) \end{pmatrix} \,{\mathrm{d}}x \right) = \begin{pmatrix} \displaystyle\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) & \displaystyle \int_s^t \prod_s^u ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x){\boldsymbol{\bm B}}(u)\prod_u^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x)\,{\mathrm{d}}u \\ {\boldsymbol{\bm 0}} & \displaystyle\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x) \end{pmatrix} .$$ First notice that $$\begin{aligned} \lefteqn{ \frac{\partial}{\partial t}\int_s^t \prod_s^u ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x){\boldsymbol{\bm B}}(u)\prod_u^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x)\,{\mathrm{d}}u}~~~\\ &=& \int_s^t \prod_s^u ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x){\boldsymbol{\bm B}}(u)\frac{\partial}{\partial t}\prod_u^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x)\,{\mathrm{d}}u + \prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x){\boldsymbol{\bm B}}(t) \prod_t^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x) \\ &=& \int_s^t \prod_s^u ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x){\boldsymbol{\bm B}}(u)\prod_u^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x)\,{\mathrm{d}}u\ {\boldsymbol{\bm C}}(t) + \prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x){\boldsymbol{\bm B}}(t) . \end{aligned}$$ Let ${\boldsymbol{\bm B}}(s,t)$ denote the matrix on the right hand side in the Lemma. 
Then $$\begin{aligned} \frac{\partial}{\partial t}{\boldsymbol{\bm B}}(s,t)&=& \begin{pmatrix} \displaystyle\frac{\partial}{\partial t} \prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) & \displaystyle\frac{\partial}{\partial t}\int_s^t \prod_s^u ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x){\boldsymbol{\bm B}}(u)\prod_u^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x)\,{\mathrm{d}}u \\ {\boldsymbol{\bm 0}} & \displaystyle\frac{\partial}{\partial t} \prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x) \end{pmatrix} \\ &=& \begin{pmatrix} \displaystyle\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) & \displaystyle \int_s^t \prod_s^u ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x){\boldsymbol{\bm B}}(u)\prod_u^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x)\,{\mathrm{d}}u \\ {\boldsymbol{\bm 0}} & \displaystyle\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm C}}(x)\,{\mathrm{d}}x) \end{pmatrix} \begin{pmatrix} {\boldsymbol{\bm A}}(t) & {\boldsymbol{\bm B}}(t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm C}}(t) \end{pmatrix} \\ &=& {\boldsymbol{\bm B}}(s,t) \begin{pmatrix} {\boldsymbol{\bm A}}(t) & {\boldsymbol{\bm B}}(t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm C}}(t) \end{pmatrix} .\end{aligned}$$ Also, ${\boldsymbol{\bm B}}(t,t)={\boldsymbol{\bm I}}$. 
Therefore, $${\boldsymbol{\bm B}}(s,t) = \prod_s^t \left( {\boldsymbol{\bm I}}+ \begin{pmatrix} {\boldsymbol{\bm A}}(u) & {\boldsymbol{\bm B}}(u) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm C}}(u) \end{pmatrix}\,{\mathrm{d}}u\right) .$$ Consider an $np\times np$ block matrix of the form $${\boldsymbol{\bm A}}(x) = \begin{pmatrix} {\boldsymbol{\bm A}}_{11}(x) & {\boldsymbol{\bm A}}_{12}(x) & \cdots & {\boldsymbol{\bm A}}_{1n}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm A}}_{22}(x) & \cdots & {\boldsymbol{\bm A}}_{2n}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm A}}_{3n}(x) \\ \vdots & \vdots & \vdots\vdots\vdots & \vdots \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm A}}_{nn}(x) \end{pmatrix}$$ and write $${\boldsymbol{\bm B}}(s,t) =\prod_s^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm A}}(x)\,{\mathrm{d}}x) = \begin{pmatrix} {\boldsymbol{\bm B}}_{11}(s,t) & {\boldsymbol{\bm B}}_{12}(s,t) & \cdots & {\boldsymbol{\bm B}}_{1n}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm B}}_{22}(s,t) & \cdots & {\boldsymbol{\bm B}}_{2n}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm B}}_{3n}(s,t) \\ \vdots & \vdots & \vdots\vdots\vdots & \vdots \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm B}}_{nn}(s,t) \end{pmatrix} .$$ Then Lemma \[lemma:van-loan\] implies that $$\begin{aligned} \lefteqn{\big( {\boldsymbol{\bm B}}_{12}(s,t),{\boldsymbol{\bm B}}_{13}(s,t),...,{\boldsymbol{\bm B}}_{1n}(s,t) \big)=}~~~\\ && \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{11}(u)\,{\mathrm{d}}u) \left[ {\boldsymbol{\bm A}}_{12}(x),...,{\boldsymbol{\bm A}}_{1n}(x) \right] \prod_x^t \left({\boldsymbol{\bm I}} + \begin{pmatrix} {\boldsymbol{\bm A}}_{22}(u) & {\boldsymbol{\bm A}}_{23}(u) & \cdots & {\boldsymbol{\bm A}}_{2n}(u) \\ {\boldsymbol{\bm 0}}& {\boldsymbol{\bm A}}_{33}(u) & \cdots & {\boldsymbol{\bm A}}_{3n}(u) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & 
\cdots & {\boldsymbol{\bm A}}_{4n}(u) \\ \vdots & \vdots & \vdots\vdots\vdots & \vdots \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm A}}_{nn}(u) \end{pmatrix} \,{\mathrm{d}}u \right)\,{\mathrm{d}}x\end{aligned}$$ so that $${\boldsymbol{\bm B}}_{1n}(s,t) = \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{11}(u)\,{\mathrm{d}}u) \left[ {\boldsymbol{\bm A}}_{12}(x),...,{\boldsymbol{\bm A}}_{1n}(x) \right] \begin{pmatrix} {\boldsymbol{\bm B}}_{2n}(x,t)\\ {\boldsymbol{\bm B}}_{3n}(x,t)\\ \vdots \\ {\boldsymbol{\bm B}}_{nn}(x,t) \end{pmatrix}\ \,{\mathrm{d}}x$$ which can be written as $${\boldsymbol{\bm B}}_{1n}(s,t) = \sum_{i=2}^n \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{11}(u)\,{\mathrm{d}}u) {\boldsymbol{\bm A}}_{1i}(x){\boldsymbol{\bm B}}_{in}(x,t)\,{\mathrm{d}}x .$$ Applying Lemma \[lemma:van-loan\] to \[eq:extend-reserve\] from the bottom block row and up, we then get that $$\begin{aligned} {\boldsymbol{\bm B}}_{nn}(s,t)&=&\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{nn}(x)\,{\mathrm{d}}x) \\ {\boldsymbol{\bm B}}_{n-1,n}(s,t)&=&\int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{n-1,n-1}(u)\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{n-1,n}(x)\prod_x^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{nn}(u)\,{\mathrm{d}}u)\,{\mathrm{d}}x\\ &=&\int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{n-1,n-1}(u)\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{n-1,n}(x){\boldsymbol{\bm B}}_{nn}(x,t)\,{\mathrm{d}}x \end{aligned}$$ and more generally, $${\boldsymbol{\bm B}}_{i,n}(s,t) = \sum_{j=i+1}^n \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{ii}(u)\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{ij}(x){\boldsymbol{\bm B}}_{jn}(x,t)\,{\mathrm{d}}x . 
\label{eq:recursion}$$ The partial reserve defined in , ${\boldsymbol{\bm V}}(t)$, (and the transition matrix) can be calculated through the product integral $$\begin{aligned} \prod_t^T \left( {\boldsymbol{\bm I}} + \begin{pmatrix} {\boldsymbol{\bm M}}(u)-r(u){\boldsymbol{\bm I}} & {\boldsymbol{\bm R}}(u) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(u) \end{pmatrix} \,{\mathrm{d}}u \right) &=&\begin{pmatrix} \displaystyle\prod_t^T ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(u)-r(u){\boldsymbol{\bm I}})\,{\mathrm{d}}u) & {\boldsymbol{\bm V}}(t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(t,T) \end{pmatrix} .\end{aligned}$$ Applying Lemma \[lemma:van-loan\] to we get that $$\begin{aligned} \lefteqn{ \prod_t^T \left( {\boldsymbol{\bm I}} + \begin{pmatrix} {\boldsymbol{\bm M}}(u)-r(u){\boldsymbol{\bm I}} & {\boldsymbol{\bm R}}(u) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(u) \end{pmatrix} \,{\mathrm{d}}u \right)} \nonumber \\ &=& \begin{pmatrix} \displaystyle\prod_t^T ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(u)-r(u){\boldsymbol{\bm I}})\,{\mathrm{d}}u) & \displaystyle\int_t^T \prod_t^u ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(s)-r(s){\boldsymbol{\bm I}})\,{\mathrm{d}}s) {\boldsymbol{\bm R}}(u) \prod_u^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(s)\,{\mathrm{d}}s)\,{\mathrm{d}}u \\ {\boldsymbol{\bm 0}} & \displaystyle\prod_t^T ({\boldsymbol{\bm I}} + {\boldsymbol{\bm M}}(u)\,{\mathrm{d}}u) \end{pmatrix}\nonumber \\ &=&\begin{pmatrix} \displaystyle\prod_t^T ({\boldsymbol{\bm I}} + ({\boldsymbol{\bm M}}(u)-r(u){\boldsymbol{\bm I}})\,{\mathrm{d}}u) & {\boldsymbol{\bm V}}(t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(t,T) \end{pmatrix} . \label{eq:Thiele-explicit-sol}\end{aligned}$$ So both the partial reserve ${\boldsymbol{\bm V}}(t)$ and ${\boldsymbol{\bm P}}(t,T)=\{ p_{ij}(t,T)\}$ are obtained through a single evaluation of the product integral. 
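In the time-homogeneous case with a constant interest rate, the product integral above reduces to an ordinary matrix exponential, which makes the block structure easy to verify numerically. The following sketch (a hypothetical two-state alive/dead model paying a continuous annuity at rate 1 while alive; `numpy`/`scipy` assumed) computes ${\boldsymbol{\bm V}}(0)$ and ${\boldsymbol{\bm P}}(0,T)$ in one matrix exponential and cross-checks the state-wise reserve both against the closed-form annuity value and against a crude Euler integration of Thiele's differential equation:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state model: alive (1) -> dead (2) with rate mu,
# constant interest r, payment rate 1 while alive (continuous annuity).
mu, r, T = 1.0, 0.05, 10.0
M = np.array([[-mu, mu], [0.0, 0.0]])   # intensity matrix
R = np.diag([1.0, 0.0])                 # sojourn payment rates
I2, Z = np.eye(2), np.zeros((2, 2))

# One matrix exponential yields both V(0) (upper-right block) and P(0,T).
G = expm(np.block([[M - r * I2, R], [Z, M]]) * T)
V = G[:2, 2:]             # partial reserves {v_ij(0)}
P = G[2:, 2:]             # transition matrix P(0,T)
V1 = (V @ np.ones(2))[0]  # state-wise reserve starting in state 1

# Closed-form annuity value for this model.
exact = (1 - np.exp(-(r + mu) * T)) / (r + mu)

# Cross-check: Euler integration of Thiele, dV/dt = r V - M V - R e,
# stepping backwards from V(T) = 0.
Vt, h = np.zeros(2), T / 20_000
for _ in range(20_000):
    Vt = Vt - h * (r * Vt - M @ Vt - R @ np.ones(2))

print(V1, exact, Vt[0])
```

All three numbers agree up to discretisation error, and the row sums of the recovered ${\boldsymbol{\bm P}}(0,T)$ equal one, as they must for a transition probability matrix.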
This is convenient if we are interested in calculating the expected future payments conditional on $Z(t)=i$ and $Z(T)=j$, since $${\mathds{E}}\bigl[B(T)\,\big|\, Z(t)=i,Z(T)=j\bigr]\ =\ \frac{v_{ij}(t)}{p_{ij}(t,T)} .$$ Laplace transform of rewards and future payments {#sec:laplace-transform} ================================================ Recall the definition of the total reward $R^T(s,t)$ which is the undiscounted future payments in an insurance context. Let $$F_{ij}(x;s,t) = {\mathds{P}}(Z(t)=j, R^T(s,t)\leq x \ | \ Z(s)=i)$$ and $$F^*_{ij}(\theta;s,t) = \int_{-\infty}^\infty {\mathrm{e}}^{-\theta x}\,{\mathrm{d}}F_{ij}(x;s,t)$$ which is assumed to exist. Define $${\boldsymbol{\bm F}}^*(\theta;s,t) = \{ F^*_{ij}(\theta;s,t) \}_{i,j=1,...,p} .$$ \[Th:4.5a\] The distribution of the total reward $R^T(s,t)$ has Laplace–Stieltjes transform given by $${\boldsymbol{\bm F}}^*(\theta;s,t) = \prod_{s}^t \left( {\boldsymbol{\bm I}} + \left[ {\boldsymbol{\bm D}}(u)\bullet \big\{ {\mathrm{e}}^{-\theta b^{k\ell}(u)} \big\}_{k,\ell}+{\boldsymbol{\bm C}}(u) - \theta {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(u)) \right] \,{\mathrm{d}}u \right) .$$ Conditioning on the time $u$ of the first jump, if any, in $[s,t]$, we get $$\begin{aligned} F_{ij}(x;s,t)&=&\delta_{ij}\exp \left( \int_s^t \mu_{ii}(u)\,{\mathrm{d}}u \right)1\left\{ \int_s^t b^i(u)\,{\mathrm{d}}u\leq x \right\} \\ &&+ \sum_{k=1}^p \int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(u)F_{kj}\left(x-\int_s^ub^i(r)\,{\mathrm{d}}r-b^{ik}(u);u,t\right) \,{\mathrm{d}}u \\ &&+ \sum_{k\neq i} \int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)c_{ik}(u)F_{kj}\left(x-\int_s^ub^i(r)\,{\mathrm{d}}r;u,t\right) \,{\mathrm{d}}u . 
\end{aligned}$$ Here the first line corresponds to the case where there are no Poisson arrivals or jumps in $[s,t]$, the second line corresponds to a jump with reward for $k\neq i$ or a Poisson arrival with reward for $k=i$, while the third line corresponds to a jump without rewards. Consider the Laplace transform $$F^0_{ij}(\theta ; s,t)= \int_{-\infty}^\infty {\mathrm{e}}^{-\theta x}F_{ij}(x;s,t)\,{\mathrm{d}}x .$$ Then the contribution from the first term amounts to $$\begin{aligned} \int_{-\infty}^\infty {\mathrm{e}}^{-\theta x} \delta_{ij}\exp \left( \int_s^t \mu_{ii}(u)\,{\mathrm{d}}u \right)1\left\{ \int_s^t b^i(u)\,{\mathrm{d}}u\leq x \right\} \,{\mathrm{d}}x &=& \delta_{ij}\exp \left( \int_s^t \mu_{ii}(u)\,{\mathrm{d}}u \right)\int_{\int_s^t b^i(u)\,{\mathrm{d}}u}^\infty {\mathrm{e}}^{-\theta x} \,{\mathrm{d}}x\\ &=&\frac{\delta_{ij}}{\theta}\exp \left( \int_s^t \mu_{ii}(u)\,{\mathrm{d}}u - \theta\int_s^t b^i(u)\,{\mathrm{d}}u \right) . \end{aligned}$$ The second term contributes to the Laplace transform by $$\begin{aligned} \lefteqn{\int_{-\infty}^\infty {\mathrm{e}}^{-\theta x}\sum_{k=1}^p\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(u)F_{kj}\left(x-\int_s^ub^i(r)\,{\mathrm{d}}r-b^{ik}(u);u,t\right) \,{\mathrm{d}}u\ \,{\mathrm{d}}x}~~\\ &=&\sum_{k=1}^p\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(u) \int_{-\infty}^\infty {\mathrm{e}}^{-\theta x} F_{kj}\left(x-\int_s^ub^i(r)\,{\mathrm{d}}r-b^{ik}(u);u,t\right)\,{\mathrm{d}}x \ \,{\mathrm{d}}u \\ &=&\sum_{k=1}^p\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(u) \exp \left( -\theta\int_s^ub^i(r)\,{\mathrm{d}}r -\theta b^{ik}(u) \right) \int_{-\infty}^\infty {\mathrm{e}}^{-\theta x} F_{kj}\left(x;u,t\right)\,{\mathrm{d}}x \ \,{\mathrm{d}}u \\ &=&\sum_{k=1}^p\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(u) \exp \left( -\theta \int_s^ub^i(r)\,{\mathrm{d}}r-\theta b^{ik}(u) \right) F^0_{kj}(\theta;u,t) \,{\mathrm{d}}u . 
\end{aligned}$$ The third term contributes similarly by $$\begin{aligned} \lefteqn{\int_{-\infty}^\infty {\mathrm{e}}^{-\theta x}\sum_{k\neq i}\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)c_{ik}(u)F_{kj}\left(x-\int_s^ub^i(r)\,{\mathrm{d}}r;u,t\right) \,{\mathrm{d}}u\ \,{\mathrm{d}}x}~~\\ &=&\sum_{k\neq i}\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)c_{ik}(u) \exp \left( -\theta \int_s^u b^i(r)\,{\mathrm{d}}r \right) F^0_{kj}(\theta;u,t) \,{\mathrm{d}}u . \end{aligned}$$ Thus $$\begin{aligned} F^0_{ij}(\theta;s,t)&=&\frac{\delta_{ij}}{\theta}\exp \left( \int_s^t \mu_{ii}(r)\,{\mathrm{d}}r - \theta \int_s^t b^i(r)\,{\mathrm{d}}r \right) \\ &&+\sum_{k=1}^p\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(u) \exp \left( -\theta \int_s^ub^i(r)\,{\mathrm{d}}r-\theta b^{ik}(u) \right) F^0_{kj}(\theta;u,t) \,{\mathrm{d}}u \\ &&+ \sum_{k\neq i}\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)c_{ik}(u) \exp \left( -\theta \int_s^u b^i(r)\,{\mathrm{d}}r \right) F^0_{kj}(\theta;u,t) \,{\mathrm{d}}u . 
\end{aligned}$$ For the corresponding Laplace–Stieltjes transform $$F^*_{ij}(\theta;s,t) = \theta F^0_{ij}(\theta;s,t)$$ we then get $$\begin{aligned} F^*_{ij}(\theta;s,t)&=&\delta_{ij}\exp \left( \int_s^t \mu_{ii}(u)\,{\mathrm{d}}u - \theta \int_s^t b^i(r)\,{\mathrm{d}}r \right) \\ &&+\sum_{k=1}^p\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(u) \exp \left( -\theta \int_s^ub^i(r)\,{\mathrm{d}}r-\theta b^{ik}(u) \right) F^*_{kj}(\theta;u,t) \,{\mathrm{d}}u \\ &&+ \sum_{k\neq i}\int_s^t \exp \left( \int_s^u \mu_{ii}(r)\,{\mathrm{d}}r \right)c_{ik}(u) \exp \left( -\theta \int_s^u b^i(r)\,{\mathrm{d}}r \right) F^*_{kj}(\theta;u,t) \,{\mathrm{d}}u \end{aligned}$$ so $$\begin{aligned} F^*_{ij}(\theta;s,t)\exp \left( -\int_s^t \mu_{ii}(u)\,{\mathrm{d}}u + \theta \int_s^t b^i(r)\,{\mathrm{d}}r \right)\\&& \hspace{-6cm}=\delta_{ij} +\sum_{k=1}^p\int_s^t \exp \left( -\int_u^t \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(u) \exp \left( \theta \int_u^t b^i(r)\,{\mathrm{d}}r-\theta b^{ik}(u) \right) F^*_{kj}(\theta;u,t) \,{\mathrm{d}}u \\ &&\hspace{-6cm}+ \sum_{k\neq i}\int_s^t \exp \left( -\int_u^t \mu_{ii}(r)\,{\mathrm{d}}r \right)c_{ik}(u) \exp \left( \theta \int_u^t b^i(r)\,{\mathrm{d}}r \right) F^*_{kj}(\theta;u,t) \,{\mathrm{d}}u . \end{aligned}$$ Now differentiate with respect to $s$ to get $$\begin{aligned} \lefteqn{\frac{\partial F^*_{ij}}{\partial s} (\theta;s,t)\exp \left( -\int_s^t \mu_{ii}(u)\,{\mathrm{d}}u + \theta \int_s^t b^i(r)\,{\mathrm{d}}r \right)}~~~\\ &&+ F^*_{ij}(\theta;s,t)\exp \left( -\int_s^t \mu_{ii}(u)\,{\mathrm{d}}u + \theta \int_s^t b^i(r)\,{\mathrm{d}}r \right) (\mu_{ii}(s)-\theta b^i(s)) \\&& =- \sum_{k=1}^p \exp \left( -\int_s^t \mu_{ii}(r)\,{\mathrm{d}}r \right)d_{ik}(s) \exp \left( \theta \int_s^t b^i(r)\,{\mathrm{d}}r-\theta b^{ik}(s) \right) F^*_{kj}(\theta;s,t) \\ &&- \sum_{k\neq i}\exp \left( -\int_s^t \mu_{ii}(r)\,{\mathrm{d}}r \right)c_{ik}(s) \exp \left( \theta \int_s^t b^i(r)\,{\mathrm{d}}r \right) F^*_{kj}(\theta;s,t) 
\end{aligned}$$ which implies that $$\begin{aligned} \frac{\partial F^*_{ij}}{\partial s} (\theta;s,t)&=& -\sum_{k} \left[ {\boldsymbol{\bm D}}(s) \bullet \{ {\mathrm{e}}^{-\theta b^{mn}(s)}\}_{m,n} + {\boldsymbol{\bm C}}(s) - \theta {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(s)) \right]_{ik} F^*_{kj}(\theta;s,t) , \end{aligned}$$ or, in matrix notation, $$\frac{\partial}{\partial s} {\boldsymbol{\bm F}}^*(\theta;s,t) = - \left[ {\boldsymbol{\bm D}}(s) \bullet \{ {\mathrm{e}}^{-\theta b^{mn}(s)} \}_{m,n} + {\boldsymbol{\bm C}}(s)- \theta {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(s)) \right] {\boldsymbol{\bm F}}^*(\theta;s,t) \label{eq:diff-eq-transform}$$ with boundary condition ${\boldsymbol{\bm F}}^*(\theta;t,t)={\boldsymbol{\bm I}}$. Thus the solution is given by $${\boldsymbol{\bm F}}^*(\theta;s,t) = \prod_{s}^t \left( {\boldsymbol{\bm I}} + \left[ {\boldsymbol{\bm D}}(u)\bullet \{ {\mathrm{e}}^{-\theta b^{mn}(u)} \}_{m,n} + {\boldsymbol{\bm C}}(u) -\theta {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(u)) \right] \,{\mathrm{d}}u \right) .$$ Higher order moments {#sec:moments} ==================== Define the matrices $${\boldsymbol{\bm F}}^{(k)}(x)= \arraycolsep=3.8pt\def\arraystretch{2.2} \left(\begin{array}{ccccccc} {\boldsymbol{\bm M}}(x) & {k\choose 1}{\boldsymbol{\bm R}}(x) & {k\choose 2}{\boldsymbol{\bm C}}^{(2)}(x) & \cdots & {k\choose k-1}{\boldsymbol{\bm C}}^{(k-1)}(x) & {\boldsymbol{\bm C}}^{(k)}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(x) & {k-1 \choose 1}{\boldsymbol{\bm R}}(x) & \cdots & {k-1\choose k-2}{\boldsymbol{\bm C}}^{(k-2)}(x) &{\boldsymbol{\bm C}}^{(k-1)}(x) \\ \vdots & \vdots & \vdots & \vdots \vdots \vdots & \vdots &\vdots \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm M}}(x) &{\boldsymbol{\bm R}}(x) \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(x) \end{array}\right) . 
\label{eq:F-gen-res}$$ and $$\begin{aligned} {\boldsymbol{\bm H}}^{(k)}(s,t)= \arraycolsep=3.8pt\def\arraystretch{2.2} \left(\begin{array}{ccccccc} {\boldsymbol{\bm P}}(s,t) & {k\choose 1}{\boldsymbol{\bm m}}^{(1)}(s,t) & {k\choose 2}{\boldsymbol{\bm m}}^{(2)}(s,t) & \cdots & {k\choose k-1}{\boldsymbol{\bm m}}^{(k-1)}(s,t) & {\boldsymbol{\bm m}}^{(k)}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(s,t) & {k-1 \choose 1}{\boldsymbol{\bm m}}^{(1)}(s,t) & \cdots & {k-1\choose k-2}{\boldsymbol{\bm m}}^{(k-2)}(s,t) &{\boldsymbol{\bm m}}^{(k-1)}(s,t)\\ \vdots & \vdots & \vdots & \vdots \vdots \vdots & \vdots &\vdots \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm P}}(s,t) &{\boldsymbol{\bm m}}^{(1)}(s,t) \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(s,t) \end{array}\right) . \label{eq:H-moments-of-F}\end{aligned}$$ Then we have the following main result. \[th:main-moments\] $$\prod_s^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm F}}^{(k)}(x)\,{\mathrm{d}}x) = {\boldsymbol{\bm H}}^{(k)}(s,t) .$$ From the Laplace–Stieltjes transform it is now possible to derive higher order moments. 
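For constant model matrices the product integral in Theorem \[th:main-moments\] is again an ordinary matrix exponential, so the theorem admits a direct numerical check. A minimal sketch for $k=1$ (a hypothetical two-state model; the accumulated reward is the occupation time of state 1, whose mean has a simple closed form):

```python
import numpy as np
from scipy.linalg import expm

mu, T = 1.0, 10.0
M = np.array([[-mu, mu], [0.0, 0.0]])   # intensity matrix
R = np.diag([1.0, 0.0])                 # reward rate 1 in state 1
Z = np.zeros((2, 2))

# exp(F^{(1)} T) has P(0,T) on the diagonal blocks and the
# first-moment matrix m^{(1)}(0,T) in the upper-right block.
H = expm(np.block([[M, R], [Z, M]]) * T)
m1 = H[:2, 2:]

# Starting in state 1, the expected time spent there before T
# is (1 - exp(-mu T)) / mu; starting in state 2 it is 0.
means = m1 @ np.ones(2)
exact = (1 - np.exp(-mu * T)) / mu
print(means, exact)
```

The same block matrix with more block rows, as in Theorem \[th:main-moments\], yields the higher order moments in one evaluation.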
First we notice that ${\boldsymbol{\bm F}}^*(0;s,t)=\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm M}}(u)\,{\mathrm{d}}u)$, and we can obtain (recall ) $${\boldsymbol{\bm m}}^{(k)}(s,t) \ =\ \Bigl\{ {\mathds{E}}\Bigl[1\{ Z(t)=j\} R^T(s,t)^k \,\Big|\, Z(s)=i\Bigr]\Bigr\}_{i,j}$$ by $${\boldsymbol{\bm m}}^{(k)}(s,t)= (-1)^k\frac{\partial^k}{\partial \theta^k} {\boldsymbol{\bm F}}^*(\theta;s,t)\biggr\rvert_{\theta=0} .$$ Now $$\frac{\partial^k}{\partial \theta^k} \{ {\mathrm{e}}^{-\theta b^{ij}(t)} \}_{i,j}\biggr\rvert_{\theta=0} = (-{\boldsymbol{\bm B}}(t))^{\bullet k}= (-{\boldsymbol{\bm B}}(t))\bullet \cdots \bullet (-{\boldsymbol{\bm B}}(t))$$ ($k$ factors) whereas for $k=0$ (no differentiation, only evaluation at $\theta =0$) it equals the matrix which has all entries equal to one. From we get by differentiation with respect to $\theta$ that $$\begin{aligned} \lefteqn{(-1)^k\frac{\partial^k}{\partial \theta^k}\frac{\partial}{\partial s}{\boldsymbol{\bm F}}^*(\theta;s,t) = -(-1)^{k}\frac{\partial^k}{\partial \theta^k}\left( \left[ {\boldsymbol{\bm D}}(s) \bullet \{ {\mathrm{e}}^{-\theta b^{ij}(s)} \}_{i,j}+{\boldsymbol{\bm C}}(s)- \theta {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(s)) \right] {\boldsymbol{\bm F}}^*(\theta;s,t)\right)}~~~ \\ &=&-\sum_{m=0}^k \begin{pmatrix} k \\ m \end{pmatrix} (-1)^m\frac{\partial^m}{\partial \theta^m}\left[ {\boldsymbol{\bm D}}(s) \bullet \{ {\mathrm{e}}^{-\theta b^{ij}(s)} \}_{i,j}+{\boldsymbol{\bm C}}(s)- \theta {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(s)) \right] (-1)^{k-m}\frac{\partial^{k-m}}{\partial \theta^{k-m}}{\boldsymbol{\bm F}}^*(\theta;s,t) . 
\end{aligned}$$ Recalling that $${\boldsymbol{\bm R}}(t) = {\boldsymbol{\bm D}}(t)\bullet {\boldsymbol{\bm B}}(t) + {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(t))$$ and since $$\left[ {\boldsymbol{\bm D}}(s) \bullet \{ {\mathrm{e}}^{-\theta b^{ij}(s)} \}_{i,j}+{\boldsymbol{\bm C}}(s)- \theta {\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(s)) \right] \biggr\rvert_{\theta=0} = {\boldsymbol{\bm D}}(s)+{\boldsymbol{\bm C}}(s) = {\boldsymbol{\bm M}}(s)$$ we get that $$\begin{aligned} \frac{\partial}{\partial s}{\boldsymbol{\bm m}}^{(k)}(s,t)=-\Big[ {\boldsymbol{\bm M}}(s) {\boldsymbol{\bm m}}^{(k)}(s,t) + k{\boldsymbol{\bm R}}(s){\boldsymbol{\bm m}}^{(k-1)}(s,t) +\sum_{m=2}^k \begin{pmatrix} k \\ m \end{pmatrix} {\boldsymbol{\bm D}}(s)\bullet {\boldsymbol{\bm B}}^{\bullet m}(s) {\boldsymbol{\bm m}}^{(k-m)}(s,t) \Big],\ \label{eq:moment-differential} \end{aligned}$$ where $${\boldsymbol{\bm m}}^{(0)}(s,t) = {\boldsymbol{\bm F}}^*(0;s,t)=\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm M}}(u)\,{\mathrm{d}}u) = {\boldsymbol{\bm P}}(s,t) .$$ Multiplying from the left on both sides with $\prod_t^s ({\boldsymbol{\bm I}}+{\boldsymbol{\bm M}}(u)\,{\mathrm{d}}u)$ (see also ) we get $$\begin{aligned} \frac{\partial}{\partial s}\left(\prod_t^s ({\boldsymbol{\bm I}}+{\boldsymbol{\bm M}}(u)\,{\mathrm{d}}u) {\boldsymbol{\bm m}}^{(k)}(s,t) \right)&=&- k\prod_t^s ({\boldsymbol{\bm I}}+{\boldsymbol{\bm M}}(u)\,{\mathrm{d}}u){\boldsymbol{\bm R}}(s){\boldsymbol{\bm m}}^{(k-1)}(s,t) \\ &&- \sum_{m=2}^k \begin{pmatrix} k \\ m \end{pmatrix} \prod_t^s ({\boldsymbol{\bm I}}+{\boldsymbol{\bm M}}(u)\,{\mathrm{d}}u)\left( {\boldsymbol{\bm D}}(s)\bullet {\boldsymbol{\bm B}}^{\bullet m}(s)\right) {\boldsymbol{\bm m}}^{(k-m)}(s,t) . 
\end{aligned}$$ Integrating the equation then gives $$\begin{aligned} {\boldsymbol{\bm m}}^{(k)}(s,t)&=&k \int_s^t {\boldsymbol{\bm P}}(s,x){\boldsymbol{\bm R}}(x){\boldsymbol{\bm m}}^{(k-1)}(x,t)\,{\mathrm{d}}x \nonumber \\ &&+ \sum_{m=2}^k \begin{pmatrix} k \\ m \end{pmatrix} \int_s^t {\boldsymbol{\bm P}}(s,x)\left( {\boldsymbol{\bm D}}(x)\bullet {\boldsymbol{\bm B}}^{\bullet m}(x) \right) {\boldsymbol{\bm m}}^{(k-m)}(x,t)\,{\mathrm{d}}x \nonumber \\ &\stackrel{\eqref{def-symbol:C}}{=}& k \int_s^t {\boldsymbol{\bm P}}(s,x){\boldsymbol{\bm R}}(x){\boldsymbol{\bm m}}^{(k-1)}(x,t)\,{\mathrm{d}}x \nonumber \\ &&+ \sum_{m=2}^k \begin{pmatrix} k \\ m \end{pmatrix} \int_s^t {\boldsymbol{\bm P}}(s,x) {\boldsymbol{\bm C}}^{(m)}(x) {\boldsymbol{\bm m}}^{(k-m)}(x,t)\,{\mathrm{d}}x . \label{eq:moment-integrals} \end{aligned}$$ Now we employ an induction argument to prove that the product integral of the above matrix indeed equals ${\boldsymbol{\bm H}}^{(k)}(s,t)$. For $k=1$ the result amounts to $$\prod_s^t \left({\boldsymbol{\bm I}}+ \begin{pmatrix} {\boldsymbol{\bm M}}(u) & {\boldsymbol{\bm R}}(u) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(u) \end{pmatrix} \,{\mathrm{d}}u\right) = \begin{pmatrix} {\boldsymbol{\bm P}}(s,t) & {\boldsymbol{\bm m}}^{(1)}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(s,t) \end{pmatrix}$$ which indeed holds true since Lemma \[lemma:van-loan\] implies that $${\boldsymbol{\bm m}}^{(1)}(s,t) = \int_s^t {\boldsymbol{\bm P}}(s,x) {\boldsymbol{\bm R}}(x) {\boldsymbol{\bm P}}(x,t) \,{\mathrm{d}}x , \label{eq:first-moment-int}$$ which has been previously established in Lemma \[lemma:rewards\]. Assume that the result holds true for $k-1$. 
Partition the matrix ${\boldsymbol{\bm F}}^{(k)}(u)$ as $${\boldsymbol{\bm F}}^{(k)}(u) = \begin{pmatrix} {\boldsymbol{\bm M}}(u) & {\boldsymbol{\bm x}}^{(k)}(u) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm F}}^{(k-1)}(u) \end{pmatrix} ,$$ where $${\boldsymbol{\bm x}}^{(k)}(u) = \left( {k\choose 1}{\boldsymbol{\bm R}},\ {k\choose 2}{\boldsymbol{\bm C}}^{(2)},\ {k\choose 3}{\boldsymbol{\bm C}}^{(3)}, \cdots ,\ {k\choose k-1}{\boldsymbol{\bm C}}^{(k-1)},\ {k \choose k}{\boldsymbol{\bm C}}^{(k)} \right) .$$ Then use Lemma \[lemma:van-loan\], the induction hypothesis and to verify the correct form of the first block row. The central moments $${\mathds{E}}\Bigl[\bigl(U(s,t)-{\mathds{E}}\bigl[U(s,t)\,\big|\,Z(s)=i\bigr]\bigr)^k\,\Big|\, Z(s)=i \Bigr]$$ can be obtained by Theorem \[th:main-moments\] by a simple reparametrisation, replacing ${\boldsymbol{\bm b}}(t)=(b^1(t),...,b^p(t))$ by $${\boldsymbol{\bm b}}(t)-\frac{{\mathds{E}}\bigl[U(s,t)\,\big|\,Z(s)=i\bigr]}{t-s} {\boldsymbol{\bm e}},$$ where the expected values ${\mathds{E}}\bigl[U(s,t)\,\big|\,Z(s)=i\bigr]$ are then first calculated by Theorem \[th:main-moments\] in the usual way with $k=1$. Let $\{ Z(t)\}_{t\geq 0}$ be a time–homogeneous Markov jump process with state–space $E=\{1,2,...,p\}$ and intensity matrix $${\boldsymbol{\bm \Lambda}}={\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} ,$$ where ${\boldsymbol{\bm C}}$ is a sub–intensity matrix and ${\boldsymbol{\bm D}}$ a non–negative matrix. A Markovian Arrival Process $N$ is a point process which is constructed in the following way. Upon transitions from $i$ to $j$ of $Z(t)$, an arrival of $N$ is produced with probability $d_{ij}/(c_{ij}+d_{ij})$, and during sojourns in state $i$, there are Poisson arrivals at rate $d_{ii}$. Let $N(0,t)$ denote the number of arrivals in the MAP during $[0,t]$. Then we may calculate the moments of $N(0,t)$ by the use of Theorem \[th:main-moments\]. 
We identify ${\boldsymbol{\bm M}}(s)={\boldsymbol{\bm C}}+{\boldsymbol{\bm D}}$, ${\boldsymbol{\bm B}}(t) = {\boldsymbol{\bm E}}$ (the matrix of ones), ${\boldsymbol{\bm b}}(s)={\boldsymbol{\bm 0}}$, ${\boldsymbol{\bm R}}(s)={\boldsymbol{\bm D}}$, ${\boldsymbol{\bm C}}^{(k)}(s)={\boldsymbol{\bm D}}$ and $${\boldsymbol{\bm F}}^{(k)}={\boldsymbol{\bm F}}^{(k)}(x)= \arraycolsep=3.8pt\def\arraystretch{2.2} \left(\begin{array}{ccccccc} {\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} & {k\choose 1}{\boldsymbol{\bm D}} & {k\choose 2}{\boldsymbol{\bm D}}& \cdots & {k\choose k-1}{\boldsymbol{\bm D}} & {\boldsymbol{\bm D}} \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} & {k-1 \choose 1}{\boldsymbol{\bm D}} & \cdots & {k-1\choose k-2}{\boldsymbol{\bm D}} &{\boldsymbol{\bm D}} \\ \vdots & \vdots & \vdots & \vdots \vdots \vdots & \vdots &\vdots \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} &{\boldsymbol{\bm D}} \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} \end{array}\right) .$$ Then the moments $${\boldsymbol{\bm m}}^{(k)}(s,t)=\bigg\{ {\mathds{E}}\left. 
\left( 1\{ Z(t)=j \} N(s,t)^k \right|Z(s)=i \right) \bigg\}_{i,j\in E}$$ are obtained through the matrix–exponential of ${\boldsymbol{\bm F}}^{(k)}$ as $$\begin{aligned} \arraycolsep=3.8pt\def\arraystretch{2.2} \left(\begin{array}{ccccccc} {\boldsymbol{\bm P}}(s,t) & {k\choose 1}{\boldsymbol{\bm m}}^{(1)}(s,t) & {k\choose 2}{\boldsymbol{\bm m}}^{(2)}(s,t) & \cdots & {k\choose k-1}{\boldsymbol{\bm m}}^{(k-1)}(s,t) & {\boldsymbol{\bm m}}^{(k)}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(s,t) & {k-1 \choose 1}{\boldsymbol{\bm m}}^{(1)}(s,t) & \cdots & {k-1\choose k-2}{\boldsymbol{\bm m}}^{(k-2)}(s,t) &{\boldsymbol{\bm m}}^{(k-1)}(s,t)\\ \vdots & \vdots & \vdots & \vdots \vdots \vdots & \vdots &\vdots \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm P}}(s,t) &{\boldsymbol{\bm m}}^{(1)}(s,t) \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(s,t) \end{array}\right) = \exp \left( {\boldsymbol{\bm F}}^{(k)} (t-s) \right) .\end{aligned}$$ In particular, the conditional moments $$\begin{pmatrix} {\mathds{E}}\left. \left( N(s,t)^k \right|Z(s)=1 \right) \\ {\mathds{E}}\left. \left( N(s,t)^k \right|Z(s)=2 \right) \\ \vdots \\ {\mathds{E}}\left. \left( N(s,t)^k \right|Z(s)=p \right) \end{pmatrix} = \exp \left( {\boldsymbol{\bm F}}^{(k)} (t-s) \right){\boldsymbol{\bm e}} .$$ The factorial moments $${\boldsymbol{\bm fm}}^{(k)}(s,t) = \bigg\{ {\mathds{E}}\left. 
\left( 1\{ Z(t)=j \} N(s,t)(N(s,t)-1)\cdots (N(s,t)-k+1) \right|Z(s)=i \right) \bigg\}_{i,j\in E}$$ are similarly obtained by the formula $$\begin{aligned} \lefteqn{\arraycolsep=3.8pt\def\arraystretch{2.2} \left(\begin{array}{ccccccc} {\boldsymbol{\bm P}}(s,t) & {k\choose 1}{\boldsymbol{\bm fm}}^{(1)}(s,t) & {k\choose 2}{\boldsymbol{\bm fm}}^{(2)}(s,t) & \cdots & {k\choose k-1}{\boldsymbol{\bm fm}}^{(k-1)}(s,t) & {\boldsymbol{\bm fm}}^{(k)}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(s,t) & {k-1 \choose 1}{\boldsymbol{\bm fm}}^{(1)}(s,t) & \cdots & {k-1\choose k-2}{\boldsymbol{\bm fm}}^{(k-2)}(s,t) &{\boldsymbol{\bm fm}}^{(k-1)}(s,t)\\ \vdots & \vdots & \vdots & \vdots \vdots \vdots & \vdots &\vdots \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm P}}(s,t) &{\boldsymbol{\bm fm}}^{(1)}(s,t) \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm P}}(s,t) \end{array}\right) }~~\\ &=& \exp \left( \begin{pmatrix} {\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} & {k\choose 1}{\boldsymbol{\bm D}} & {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} & {k-1\choose 1}{\boldsymbol{\bm D}} & {\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} \\ \vdots & \vdots & \vdots & \vdots & \vdots \vdots \vdots & \vdots &\vdots \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} &\cdots & {\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} &{\boldsymbol{\bm D}} \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} &\cdots & {\boldsymbol{\bm 0}} &{\boldsymbol{\bm C}}+{\boldsymbol{\bm D}} \\ \end{pmatrix} (t-s) \right) ,\end{aligned}$$ where the binomial coefficients ${k\choose 1},{k-1\choose 1},\ldots,1$ on the superdiagonal mirror those in the first block row. Integral representations of the moments and factorial moments in a MAP have been considered in [@bo-Uffe-2007]. Next we turn to the case of discounted rewards (future payments). 
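Before doing so, we note that the simplest possible MAP gives a quick sanity check of the moment formulas: with $p=1$, ${\boldsymbol{\bm C}}=(-\lambda)$ and ${\boldsymbol{\bm D}}=(\lambda)$, the MAP is a Poisson process with rate $\lambda$, for which ${\mathds{E}}[N(0,t)]=\lambda t$ and ${\mathds{E}}[N(0,t)^2]=\lambda t+(\lambda t)^2$ are known in closed form. A short sketch (`numpy`/`scipy` assumed):

```python
import numpy as np
from scipy.linalg import expm

# One-state MAP = Poisson process with rate lam: M = C + D = 0 and
# R = C^{(k)} = D = lam, so F^{(2)} is the 3x3 matrix below.
lam, t = 2.0, 3.0
F2 = np.array([[0.0, 2 * lam, lam],
               [0.0, 0.0,     lam],
               [0.0, 0.0,     0.0]])
H = expm(F2 * t)
m1 = H[0, 1] / 2    # top row holds P, (2 choose 1) m^{(1)}, m^{(2)}
m2 = H[0, 2]

# Poisson moments: E[N] = lam t, E[N^2] = lam t + (lam t)^2, and the
# second factorial moment E[N(N-1)] = m2 - m1 = (lam t)^2.
print(m1, m2, m2 - m1)
```

Since $F^{(2)}$ here is strictly upper triangular apart from its (zero) diagonal, the matrix exponential is a finite sum, and the recovered moments match the Poisson values exactly up to floating-point error.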
In principle we may calculate the moments of the future discounted payments by applying Theorem \[th:main-moments\] to the discounted rewards $${\mathrm{e}}^{-\int_s^ur(x)\,{\mathrm{d}}x}b^i(u) \ \ \mbox{and}\ \ {\mathrm{e}}^{-\int_s^ur(x)\,{\mathrm{d}}x}b^{ij}(u) .$$ We may however obtain an explicit matrix representation which also involves the interest rate $r(x)$ in a more convenient way and which is closer to standard intuition in life insurance (e.g. Hattendorff's theorem). We define ${\boldsymbol{\bm F}}_U^{(k)}(x) $ as the matrix $$\arraycolsep=3.8pt\def\arraystretch{2.2} \left(\begin{array}{ccccccc} {\boldsymbol{\bm M}}(x)-kr(x){\boldsymbol{\bm I}} & {k\choose 1}{\boldsymbol{\bm R}}(x) & {k\choose 2}{\boldsymbol{\bm C}}^{(2)}(x) & \cdots & {k\choose k-1}{\boldsymbol{\bm C}}^{(k-1)}(x) & {\boldsymbol{\bm C}}^{(k)}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(x)-(k-1)r(x){\boldsymbol{\bm I}} & {k-1 \choose 1}{\boldsymbol{\bm R}}(x) & \cdots & {k-1\choose k-2}{\boldsymbol{\bm C}}^{(k-2)}(x) &{\boldsymbol{\bm C}}^{(k-1)}(x) \\ \vdots & \vdots & \vdots & \vdots \vdots \vdots & \vdots &\vdots \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots & {\boldsymbol{\bm M}}(x)-r(x){\boldsymbol{\bm I}} &{\boldsymbol{\bm R}}(x) \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & \cdots &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(x) \end{array}\right) \label{eq:F-gen-res}$$ and let $${\boldsymbol{\bm G}}^{(k)}(s,t)= \prod_s^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm F}}_U^{(k)}(x)\,{\mathrm{d}}x) .$$ The matrix ${\boldsymbol{\bm F}}_U^{(k)}(x)$ is a $(k+1)p\times (k+1)p$ block–partitioned matrix with blocks of sizes $p\times p$. 
Thus ${\boldsymbol{\bm G}}^{(k)}(s,t)$ is also a $(k+1)p\times (k+1)p$ matrix, and we define a similar block partitioning as for ${\boldsymbol{\bm F}}_U^{(k)}(x)$, letting ${\boldsymbol{\bm G}}_{ij}^{(k)}(s,t)$ denote the $ij$’th block which corresponds to the $ij$’th block of ${\boldsymbol{\bm F}}_U^{(k)}(x)$. Then we have the following main result. \[th:main-reserve\] For $j=1,...,k$ we have that $${\boldsymbol{\bm V}}^{(j)}(s,t) = {\boldsymbol{\bm G}}_{k+1-j,k+1}^{(k)}(s,t) ,$$ whereas $${\boldsymbol{\bm P}}(s,t) = {\boldsymbol{\bm G}}_{k+1,k+1}^{(k)}(s,t) .$$ The theorem states that the right block–column of ${\boldsymbol{\bm G}}^{(k)}(s,t)$ contains the moments $ {\boldsymbol{\bm V}}^{(j)}(s,t)$, starting with the highest moment in the upper right corner and finishing with the transition matrix in the lower right corner (which may be considered as the zeroth moment). Symbolically, $$\prod_s^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm F}}_U^{(k)}(x)\,{\mathrm{d}}x) = \begin{pmatrix} * & * & * & \cdots & * & {\boldsymbol{\bm V}}^{(k)}(s,t) \\ * & * & * & \cdots & * & {\boldsymbol{\bm V}}^{(k-1)}(s,t) \\ * & * & * & \cdots & * & {\boldsymbol{\bm V}}^{(k-2)}(s,t) \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ * & * & * & \cdots & * & {\boldsymbol{\bm V}}^{(1)}(s,t) \\ * & * & * & \cdots & * & {\boldsymbol{\bm P}}(s,t) \\ \end{pmatrix} . \label{eq:symbolically}$$ The idea of the general proof is most easily explained through the following example, which establishes the result of Theorem \[th:main-reserve\] in the lowest non–trivial case, $k=2$. 
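Before turning to the proof, note that when the parameters are constant the product integral collapses to $\exp\bigl({\boldsymbol{\bm F}}_U^{(k)}(t-s)\bigr)$, so Theorem \[th:main-reserve\] can be implemented in a few lines. The sketch below assembles ${\boldsymbol{\bm F}}_U^{(k)}$ from its block structure and reads off the moment blocks; the two-state matrices ${\boldsymbol{\bm C}},{\boldsymbol{\bm D}},{\boldsymbol{\bm B}},{\boldsymbol{\bm b}}$ are hypothetical stand-ins:

```python
import numpy as np
from math import comb
from scipy.linalg import expm

def build_F(M, R, Cs, r, k):
    """Assemble the (k+1)p x (k+1)p block matrix F_U^(k), assuming constant
    parameters. Cs[m] holds C^(m) for m >= 2; the m = 1 block is R itself."""
    p = M.shape[0]
    Cm = {1: R, **Cs}
    F = np.zeros(((k + 1) * p, (k + 1) * p))
    for i in range(k + 1):              # block row i carries moment order k - i
        ki = k - i
        F[i*p:(i+1)*p, i*p:(i+1)*p] = M - ki * r * np.eye(p)
        for m in range(1, ki + 1):      # superdiagonal blocks with binomials
            F[i*p:(i+1)*p, (i+m)*p:(i+m+1)*p] = comb(ki, m) * Cm[m]
    return F

# hypothetical 2-state stand-in model (C, D, B, b are illustrative)
C = np.array([[-1.0, 0.0], [0.0, -0.5]])
D = np.array([[ 0.0, 1.0], [0.5,  0.0]])
B = np.array([[ 0.0, 2.0], [0.0,  0.0]])     # lump sums b^{ij}
b = np.array([1.0, 0.0])                     # payment rates b^i
M, r, p, k = C + D, 0.05, 2, 3

R  = D * B + np.diag(b)                      # R = D o B + Delta(b)
Cs = {m: D * B**m for m in range(2, k + 1)}  # C^(m) = D o B^(o m), entrywise
F  = build_F(M, R, Cs, r, k)

s, t = 0.0, 10.0
G = expm(F * (t - s))                        # the product integral
V = {j: G[(k - j)*p:(k - j + 1)*p, k*p:] for j in range(1, k + 1)}
P = G[k*p:, k*p:]                            # transition matrix P(s,t)
```

Post-multiplying the blocks `V[j]` by a vector of ones gives the state-wise moments ${\mathds{E}}(U(s,t)^j\,|\,Z_s=i)$.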
\[ex:2-moment\] First we consider the product integral $${\boldsymbol{\bm G}}(s,t) = \prod_s^t \left( {\boldsymbol{\bm I}} + \begin{pmatrix} {\boldsymbol{\bm A}}_{11}(x) & {\boldsymbol{\bm A}}_{12}(x) & {\boldsymbol{\bm A}}_{13}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm A}}_{22}(x) & {\boldsymbol{\bm A}}_{23}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & {\boldsymbol{\bm A}}_{33}(x) \end{pmatrix} \,{\mathrm{d}}x \right) = \begin{pmatrix} {\boldsymbol{\bm G}}_{11}(s,t) & {\boldsymbol{\bm G}}_{12}(s,t) & {\boldsymbol{\bm G}}_{13}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm G}}_{22}(s,t) & {\boldsymbol{\bm G}}_{23}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & {\boldsymbol{\bm G}}_{33}(s,t) \end{pmatrix} .$$ Employing Lemma \[lemma:van-loan\] inductively, by first partitioning the matrix as $$\left(\begin{array}{c|cc} {\boldsymbol{\bm A}}_{11}(x) & {\boldsymbol{\bm A}}_{12}(x) & {\boldsymbol{\bm A}}_{13}(x) \\ \hline {\boldsymbol{\bm 0}} & {\boldsymbol{\bm A}}_{22}(x) & {\boldsymbol{\bm A}}_{23}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & {\boldsymbol{\bm A}}_{33}(x) \end{array}\right) ,$$ we see that $$\begin{aligned} {\boldsymbol{\bm G}}_{11}(s,t)&=&\prod_s^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm A}}_{11}(x)\,{\mathrm{d}}x) \\ \begin{pmatrix} {\boldsymbol{\bm G}}_{22}(s,t) & {\boldsymbol{\bm G}}_{23}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm G}}_{33}(s,t) \end{pmatrix} &=& \prod_s^t \left( {\boldsymbol{\bm I}} + \begin{pmatrix} {\boldsymbol{\bm A}}_{22}(x) & {\boldsymbol{\bm A}}_{23}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm A}}_{33}(x) \end{pmatrix}\,{\mathrm{d}}x \right) \\ &=& \begin{pmatrix} \prod_s^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm A}}_{22}(x)\,{\mathrm{d}}x) & \int_s^t \prod_s^x ({\boldsymbol{\bm I}} + {\boldsymbol{\bm A}}_{22}(u)\,{\mathrm{d}}u) {\boldsymbol{\bm A}}_{23}(x) \prod_x^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm A}}_{33}(u)\,{\mathrm{d}}u)\,{\mathrm{d}}x \\ {\boldsymbol{\bm 0}} & \prod_s^t 
({\boldsymbol{\bm I}} + {\boldsymbol{\bm A}}_{33}(x)\,{\mathrm{d}}x) \end{pmatrix}\end{aligned}$$ whereas $$\begin{aligned} \left({\boldsymbol{\bm G}}_{12}(s,t), {\boldsymbol{\bm G}}_{13}(s,t)\right)&=& \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{11}(u)\,{\mathrm{d}}u) \left( {\boldsymbol{\bm A}}_{12}(x),\ {\boldsymbol{\bm A}}_{13}(x) \right) \prod_{x}^t \left( {\boldsymbol{\bm I}} + \begin{pmatrix} {\boldsymbol{\bm A}}_{22}(u) & {\boldsymbol{\bm A}}_{23}(u) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm A}}_{33}(u) \end{pmatrix}\,{\mathrm{d}}u \right) \,{\mathrm{d}}x\end{aligned}$$ so that $$\begin{aligned} {\boldsymbol{\bm G}}_{12}(s,t)&=&\int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{11}(u)\,{\mathrm{d}}u) {\boldsymbol{\bm A}}_{12}(x)\prod_x^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{22}(u)\,{\mathrm{d}}u)\,{\mathrm{d}}x\end{aligned}$$ and $$\begin{aligned} {\boldsymbol{\bm G}}_{13}(s,t)&=&\int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{11}(u)\,{\mathrm{d}}u) {\boldsymbol{\bm A}}_{13}(x)\prod_x^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{33}(u)\,{\mathrm{d}}u)\,{\mathrm{d}}x \\ &&+ \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{11}(u)\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{12}(x) \int_x^t \prod_x^y ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{22}(u)\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{23}(y) \prod_y^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{33}(u)\,{\mathrm{d}}u)\,{\mathrm{d}}y .\end{aligned}$$ Now assume that we are concerned with the discounted prices. 
Then at any time $x\in [s,t]$, we discount the price by $$\exp \left( -\int_s^x r(u)\,{\mathrm{d}}u \right) .$$ In the above expression for ${\boldsymbol{\bm G}}_{13}(s,t)$, $${\boldsymbol{\bm A}}_{13}(x) = {\boldsymbol{\bm C}}^{(2)}(x) = {\boldsymbol{\bm D}}(x)\bullet {\boldsymbol{\bm B}}(x)\bullet {\boldsymbol{\bm B}}(x)$$ while $${\boldsymbol{\bm A}}_{12}(x)={\boldsymbol{\bm A}}_{23}(x)={\boldsymbol{\bm R}}(x)={\boldsymbol{\bm D}}(x)\bullet {\boldsymbol{\bm B}}(x)+{\boldsymbol{\bm \Delta}}({\boldsymbol{\bm b}}(x)).$$ In the expression for ${\boldsymbol{\bm G}}_{13}(s,t)$, ${\boldsymbol{\bm A}}_{13}(x)$ produces a discount of $$\exp \left( -2\int_s^x r(u)\,{\mathrm{d}}u \right) ,$$ ${\boldsymbol{\bm A}}_{12}(x)$ a discount of $$\exp \left( -\int_s^x r(u)\,{\mathrm{d}}u \right)$$ while ${\boldsymbol{\bm A}}_{23}(y)$ produces a discount of $$\exp \left( -\int_s^y r(u)\,{\mathrm{d}}u \right) = \exp \left( -\int_s^x r(u)\,{\mathrm{d}}u \right)\exp \left( -\int_x^y r(u)\,{\mathrm{d}}u \right) .$$ Thus we may write $$\begin{aligned} {\boldsymbol{\bm G}}_{13}(s,t)&=& \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+[{\boldsymbol{\bm A}}_{11}(u)-2r(u){\boldsymbol{\bm I}}]\,{\mathrm{d}}u) {\boldsymbol{\bm A}}_{13}(x)\prod_x^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{33}(u)\,{\mathrm{d}}u)\,{\mathrm{d}}x \\ &&\hspace{-2cm}+ \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+[{\boldsymbol{\bm A}}_{11}(u)-2r(u){\boldsymbol{\bm I}}]\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{12}(x) \int_x^t \prod_x^y ({\boldsymbol{\bm I}}+[{\boldsymbol{\bm A}}_{22}(u)-r(u){\boldsymbol{\bm I}}]\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{23}(y) \prod_y^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{33}(u)\,{\mathrm{d}}u)\,{\mathrm{d}}y .\end{aligned}$$ Let $${\boldsymbol{\bm H}}^{(2)}(x) = \begin{pmatrix} {\boldsymbol{\bm M}}(x)-2r(x){\boldsymbol{\bm I}} & 2{\boldsymbol{\bm R}}(x) & {\boldsymbol{\bm C}}^{(2)}(x) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(x)-r(x){\boldsymbol{\bm I}} & {\boldsymbol{\bm R}}(x) \\ {\boldsymbol{\bm 0}} 
& {\boldsymbol{\bm 0}} & {\boldsymbol{\bm M}}(x) \end{pmatrix} ,$$ and $${\boldsymbol{\bm V}}^{(2)}(s,t)=\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm H}}^{(2)}(x)\,{\mathrm{d}}x) = \begin{pmatrix} {\boldsymbol{\bm V}}^{(2)}_{11}(s,t) & {\boldsymbol{\bm V}}^{(2)}_{12}(s,t) & {\boldsymbol{\bm V}}^{(2)}_{13}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm V}}^{(2)}_{22}(s,t) & {\boldsymbol{\bm V}}^{(2)}_{23}(s,t) \\ {\boldsymbol{\bm 0}} &{\boldsymbol{\bm 0}} & {\boldsymbol{\bm V}}^{(2)}_{33}(s,t) \end{pmatrix} .$$ Then $$\begin{aligned} {\boldsymbol{\bm V}}^{(2)}_{33}(s,t)&=& {\boldsymbol{\bm P}}(s,t) \\ {\boldsymbol{\bm V}}^{(2)}_{23}(s,t)&=& \left\{ {\mathds{E}}\left. \left(1\{ Z(t)=j \} U(s,t) \right| Z(s)=i \right)\right\}_{i,j} \\ {\boldsymbol{\bm V}}^{(2)}_{13}(s,t)&=&\left\{ {\mathds{E}}\left. \left(1\{ Z(t)=j \} U^2(s,t) \right| Z(s)=i \right)\right\}_{i,j} .\end{aligned}$$ In particular, $$\begin{aligned} {\mathds{E}}(U(s,t)| Z(s)=i)&=&{\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm V}}^{(2)}_{23}(s,t){\boldsymbol{\bm e}} \\ {\mathds{E}}(U^2(s,t)| Z(s)=i)&=&{\boldsymbol{\bm e}}_i^\prime {\boldsymbol{\bm V}}^{(2)}_{13}(s,t){\boldsymbol{\bm e}} .\end{aligned}$$ We now turn to the general proof. We apply Theorem \[th:main-moments\] to the discounted prices $${\mathrm{e}}^{-\int_s^ur(x)\,{\mathrm{d}}x}b^i(u) \ \ \mbox{and}\ \ {\mathrm{e}}^{-\int_s^ur(x)\,{\mathrm{d}}x}b^{ij}(u) .$$ This will indeed provide us with the correct result (for fixed $s$), and as in Example \[ex:2-moment\] we redistribute the discounted terms into the block diagonal matrices. 
For simplicity of identification of the individual blocks of the matrix, we write ${\boldsymbol{\bm F}}^{(k)}(u)$ in a block–partitioned way as $${\boldsymbol{\bm F}}^{(k)}(u) = \begin{pmatrix} {\boldsymbol{\bm A}}_{11}(u) & {\boldsymbol{\bm A}}_{12}(u) & \cdots &{\boldsymbol{\bm A}}_{1,k+1}(u) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm A}}_{22}(u) & \cdots &{\boldsymbol{\bm A}}_{2,k+1}(u) \\ \vdots & \vdots & \ddots & \vdots \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & \cdots &{\boldsymbol{\bm A}}_{k+1,k+1}(u) \\ \end{pmatrix}$$ and $$\prod_s^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm F}}^{(k)}(u)\,{\mathrm{d}}u) = \begin{pmatrix} {\boldsymbol{\bm B}}_{11}(s,t) & {\boldsymbol{\bm B}}_{12}(s,t) & \cdots &{\boldsymbol{\bm B}}_{1,k+1}(s,t) \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm B}}_{22}(s,t) & \cdots &{\boldsymbol{\bm B}}_{2,k+1}(s,t) \\ \vdots & \vdots & \ddots & \vdots \\ {\boldsymbol{\bm 0}} & {\boldsymbol{\bm 0}} & \cdots &{\boldsymbol{\bm B}}_{k+1,k+1}(s,t) \\ \end{pmatrix} .$$ For example, ${\boldsymbol{\bm A}}_{ii}(u)={\boldsymbol{\bm M}}(u)$ and ${\boldsymbol{\bm A}}_{i,i+1}(u)=(k-i+1){\boldsymbol{\bm R}}(u)$. The matrix ${\boldsymbol{\bm A}}_{i,i+m}(x)$ is then scaled by $$\exp (-m\int_s^x r(u)\,{\mathrm{d}}u) .$$ For $k=1$ it is clear that $${\boldsymbol{\bm B}}_{k,k+1}(s,t) = \int_s^t \prod_{s}^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{kk}(u)\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{k,k+1}(x) \prod_x^t ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{k+1,k+1}(u)\,{\mathrm{d}}u)\ \,{\mathrm{d}}x$$ has scaling factor $\exp (-\int_s^x r(u)\,{\mathrm{d}}u)$, while in Example \[ex:2-moment\] we saw that $ {\boldsymbol{\bm B}}_{k-1,k+1}(s,t)$ has scaling factor $\exp (-2\int_s^x r(u)\,{\mathrm{d}}u)$. Now assume (induction) that all ${\boldsymbol{\bm B}}_{j,k+1}(s,t)$, $j=i+1,...,k$, have scaling factors $\exp (-(k-j+1)\int_s^x r(u)\,{\mathrm{d}}u)$. 
From the recursion , $${\boldsymbol{\bm B}}_{i,k+1}(s,t) = \sum_{j=i+1}^{k+1} \int_s^t \prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{ii}(u)\,{\mathrm{d}}u){\boldsymbol{\bm A}}_{ij}(x){\boldsymbol{\bm B}}_{j,k+1}(x,t)\,{\mathrm{d}}x ,$$ we see that ${\boldsymbol{\bm A}}_{ij}(x)$ will produce a scaling factor $\exp (-(j-i)\int_s^x r(u)\,{\mathrm{d}}u)$, while ${\boldsymbol{\bm B}}_{j,k+1}(x,t)$ can be written as another integral from $x$ to $t$ with integration variable $y$, say, which will then have scaling factors (induction) of size $\exp (-(k-j+1)\int_s^y r(u)\,{\mathrm{d}}u)$. Now write $$\exp \left(-(k-j+1)\int_s^y r(u)\,{\mathrm{d}}u\right) = \exp \left(-(k-j+1)\int_s^x r(u)\,{\mathrm{d}}u\right)\exp \left(-(k-j+1)\int_x^y r(u)\,{\mathrm{d}}u\right)$$ and pull out the factor $\exp (-(k-j+1)\int_s^x r(u)\,{\mathrm{d}}u)$ to get a scaling factor of $$\exp \left(-(j-i)\int_s^x r(u)\,{\mathrm{d}}u\right)\exp \left(-(k-j+1)\int_s^x r(u)\,{\mathrm{d}}u\right) = \exp \left(-(k-i+1)\int_s^x r(u)\,{\mathrm{d}}u\right) .$$ This scaling factor can then be combined with $\prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm A}}_{ii}(u)\,{\mathrm{d}}u)=\prod_s^x ({\boldsymbol{\bm I}}+{\boldsymbol{\bm M}}(u)\,{\mathrm{d}}u)$ into $\prod_s^x ({\boldsymbol{\bm I}}+[{\boldsymbol{\bm M}}(u)-(k-i+1)r(u){\boldsymbol{\bm I}}]\,{\mathrm{d}}u)$, which is exactly the $(i,i)$’th diagonal block produced by ${\boldsymbol{\bm F}}_U^{(k)}$. We may then obtain a slightly generalised version of Hattendorff’s theorem. $$\frac{\partial}{\partial s}{\boldsymbol{\bm V}}^{(k)}(s,t) = \left( kr(s){\boldsymbol{\bm I}} -{\boldsymbol{\bm M}}(s) \right) {\boldsymbol{\bm V}}^{(k)}(s,t) - k{\boldsymbol{\bm R}}(s){\boldsymbol{\bm V}}^{(k-1)}(s,t) - \sum_{i=2}^k {k \choose i} {\boldsymbol{\bm C}}^{(i)}(s){\boldsymbol{\bm V}}^{(k-i)}(s,t) ,$$ with terminal condition ${\boldsymbol{\bm V}}^{(k)}(t,t)={\boldsymbol{\bm 0}}$. 
Follows from differentiation of $\prod_s^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm F}}_U^{(k)}(x)\,{\mathrm{d}}x)$ with respect to $s$, obtaining a Kolmogorov type of differential equation, with ${\boldsymbol{\bm F}}_U^{(k)}(x)$ given by , and comparing to . We only need the first block row of ${\boldsymbol{\bm F}}_U^{(k)}(x)$ and the last block column of $\prod_s^t ({\boldsymbol{\bm I}} + {\boldsymbol{\bm F}}_U^{(k)}(x)\,{\mathrm{d}}x)$. This theorem reduces to the state–wise standard Hattendorff theorem for $k$th order moments, which is achieved by post–multiplying the differential equation by the vector ${\boldsymbol{\bm e}}=(1,1,...,1)^\prime$. Gram-Charlier expansions of the full distribution ================================================= The c.d.f. or density of $X=U(s,T)$ can of course be evaluated by Laplace transform inversion from Theorem \[Th:4.5a\], implemented e.g. via the Euler or Post–Widder methods (see [@Abate1995]). However, the procedure is somewhat tedious and, given the availability of all moments, an attractive alternative is a Gram-Charlier expansion via orthogonal polynomials. The method can briefly be summarized as follows. Consider a reference density $f_0(x)$ having all moments $\int x^kf_0(x)\,{\mathrm{d}}x$ well-defined and finite, and a target density $f(x)$ for which all moments ${\mathds{E}}X^k=$ $\int x^kf(x)\,{\mathrm{d}}x$ can be computed. Consider $L_2(f_0)$ with inner product $\langle g,h\rangle=$ $\int g(x)h(x)f_0(x)\,{\mathrm{d}}x$ and let $p_0(x), p_1(x),\ldots$ be a set of orthonormal polynomials, i.e. $\langle p_n,p_m\rangle=$ $\delta_{nm}$. 
If this set is complete in $L_2(f_0)$ and $$\label{5.4a} f/f_0\in L_2(f_0)\,,\quad\text{i.e.}\ \ \int \frac{f^2(x)}{f_0(x)}\,{\mathrm{d}}x\,<\,\infty\quad\text{or equivalently }f^2/f_0\in L_1(\text{Leb})\,,$$ we can then expand $f/f_0$ in the $p_n$ to get $$\label{5.4b} f(x)\ =\ f_0(x)\Bigl\{1+\sum_{n=1}^\infty c_np_n(x)\Bigr\}\quad\text{where}\ \ c_n\,=\, \langle f/f_0,p_n\rangle \,=\,{\mathds{E}}p_n(X)\,.$$ If the emphasis is on the c.d.f. $F$ or the quantiles, simply integrate this to get an expansion of $F(x)$. For fast convergence of the series , $f_0$ should be chosen as similar to $f$ as possible. The most popular choice is the normal density with the same mean $\mu$ and variance $\sigma^2$ as $f$, in which case $c_1=c_2=0$ (one has always $c_0= 1$). This implies $p_n(x)=$ $H_n\bigl((x-\mu)/\sigma\bigr)/\sqrt{n!}$ for $n\ge 1$ where $H_n$ is the $n$th (probabilistic) Hermite polynomial defined by $({\mathrm{d}}^n/{\mathrm{d}}x^n)\,{\mathrm{e}}^{-x^2/2}$ $=(-1)^nH_n(x){\mathrm{e}}^{-x^2/2}$. In particular, with $$d_n=\frac{1}{n!}\int_{-\infty}^\infty H_n\bigl((x-\mu)/\sigma\bigr)f(x)\,{\mathrm{d}}x$$ we have $$\begin{aligned} \label{5.4cf} f(x)\ &=\ \frac{1}{\sigma\sqrt{2\pi}}{\mathrm{e}}^{-(x-\mu)^2/2\sigma^2} \Bigl\{1+\sum_{n=3}^\infty d_nH_{n}\bigl((x-\mu)/\sigma\bigr)\Bigr\}\,,\\ \label{5.4c} F(x)\ &=\ \Phi\bigl((x-\mu)/\sigma\bigr)- \frac{1}{\sigma\sqrt{2\pi}}{\mathrm{e}}^{-(x-\mu)^2/2\sigma^2}\sum_{n=3}^\infty d_nH_{n-1}\bigl((x-\mu)/\sigma\bigr)\,.\end{aligned}$$ The conditions for to be a valid expansion are in fact just $$\label{4.5a} f/f_0^{1/2}\in L_1(\text{Leb})\,,\quad\text{i.e.}\ \ \int {\mathrm{e}}^{(x-\mu)^2/4\sigma^2}f(x)\,{\mathrm{d}}x\,<\,\infty\,,$$ cf. [@cramer1946 p.223]. Truncated versions of or go under the name of Edgeworth expansions; the examples with the whole series not converging simply arise when conditions or is violated (whereas completeness holds when $f_0$ is normal). See, e.g., [@Szego39], [@cramer1946 p.133,222ff.] 
and [@Barndorff2013asymptotic] for more detail. Actuarial applications of the method can be found, e.g., in [@Bowers1966], [@ALBRECHER2001345], [@GOFFARD2016499], [@Goffard2017], [@Goffard2019a], [@Goffard2019b]. The insurance implementation ---------------------------- When implementing the method in the insurance context $X=U(s,T)$, a difficulty is that absolute continuity typically fails. More precisely, the target distribution $F$ will be a mixture of atoms at $a_1,a_2,\ldots$, with probability $q_i$ for $a_i$, and a part having a density. One then has to take $f(x)$ as the density of the absolutely continuous part, $$f(x)\ =\ {\mathds{P}}\bigl(X\in {\mathrm{d}}x\,\big|\,X\ne a_1, X\ne a_2, \ldots\bigr) .$$ Most often, there is only one atom with $q_1$ easily computable. Examples of atoms:\ 1) the initial state $i$ is held throughout $(s,T]$, occurring w.p. $q_i=\exp\bigl\{\int_s^T\mu_{ii}(u)\,{\mathrm{d}}u\bigr\}$, so that $a_1=$ $\int_s^T {\mathrm{e}}^{-r(u-s)} b^i(u)\,{\mathrm{d}}u$.\ 2) No discounting and equal lump sum payments, $b^{ij}(t)\equiv b^{ij}$, $r=0$. Then $U(s,T)$ is a linear combination of the $b^{ij}$.\ These are more or less the only natural ways to get atoms that occur to us, but see Remark \[Rem:24.4a\] below.\ For simplicity of notation, we assume there is only one atom and write $a=a_1$, $q=q_1$ (the modifications in the case of several atoms are trivial). To implement the Gram-Charlier expansion of $f$, define $$m_1\,=\,\int_{-\infty}^\infty xf(x)\,{\mathrm{d}}x\,,\quad m_j\,=\,\int_{-\infty}^\infty (x-m_1)^jf(x)\,{\mathrm{d}}x\,,\quad j=2,3,\ldots$$ Obviously, $$\label{23.4a} {\mathds{E}}\bigl[U(s,T)-m_1\bigr]^j\ =\ q(a-m_1)^j+(1-q)m_j, \ \ \ j\geq 2$$ whereas for $j=1$, $$\label{23.4b} {\mathds{E}}\bigl[U(s,T)-m_1\bigr]\ =\ q(a-m_1) .$$ The program for computing an $\alpha$-quantile $z_\alpha$ is then the following: 1. Compute ${\mathds{E}}U(s,T)$ via Theorem 7.1 with $k=1$ and compute $$m_1\,=\,\frac{{\mathds{E}}U(s,T) -qa}{1-q}\,.$$ 2. 
Choose $k>1$ and compute ${\mathds{E}}\bigl[U(s,T)-m_1\bigr]^j$ for $j=2,\ldots,k$ via Theorem 7.1. To this end, replace the drift parameter $a_i(t)$ by $a_i(t)-m_1/(T-s)$. Solve next for the $m_j$ to get $$m_j= \frac{{\mathds{E}}\bigl[U(s,T)-m_1\bigr]^j-q(a-m_1)^j}{1-q} .$$ 3. Take $f_0$ as the normal density with mean $m_1$ and variance $\sigma_f^2=m_2$. Write $H_n(x)=\sum_0^n a_{j;n}x^j$ and compute $$d_n\ =\ \frac{1}{n!}\sum_{j=0}^n\frac{a_{j;n}m_j}{\sigma_f^j}\,,\quad n=3,\ldots,k$$ 4. Approximate the conditional density $f$ and the unconditional c.d.f. $F$ by $$\begin{aligned} \widehat f_k(x)\ &=\ f_0(x)\Bigl\{1+\sum_{n=3}^k d_nH_{n}\bigl((x-m_1)/\sigma_f\bigr)\Bigr\} \,,\\ \widehat F_k(x)\ &=\ q1_{x\ge a}+(1-q)\Biggl[\Phi\bigl((x-m_1)/\sigma_f\bigr)-f_0(x)\sum_{n=3}^k d_nH_{n-1}\bigl((x-m_1)/\sigma_f\bigr)\Biggr]\,.\end{aligned}$$ 5. Solve $\widehat F_k(z_\alpha^k)=\alpha$ to get a candidate $z_\alpha^k$ for $z_\alpha$. 6. Repeat from step 2) with a larger $k$ until $z_\alpha^k$ stabilizes. At the formal mathematical level, one needs to verify . This seems easier for feed-forward systems, since then $U(s,T)$ has finite support, and because a normal $f_0$ is bounded below on compact intervals, it would suffice that $f$ is bounded. This may appear highly plausible, but Remark \[Rem:24.4b\] below shows that it may in fact fail in complete generality. The following result seems, however, sufficient for all practical purposes. We call a model with constant intensities $\mu_{ij}(t)\equiv \mu_{ij}$ *feed-forward* if there are no loops, i.e. no chain $i_0i_1\ldots i_N$ with $i_0=i_N$ and all $\mu_{i_{n-1}i_n}>0$. \[Th:29.4a\] Assume as in Section x that all intensities and rewards are piecewise constant, that the distribution of $U(s,T)$ has an absolutely continuous part with conditional density $f$ and that $f_0$ is normal$(\mu,\sigma^2)$. 
Then the $L_2$ condition holds for $f$ if either *(i)* the model is feed-forward, *(ii)* $b^{ij}_k=0$ for all $i\ne j$ or, more generally, *(iii)* $|b^{ij}_k|>0$ implies that there is no path from $j$ to $i$, i.e. no chain $i_0i_1\ldots i_N$ with $i_0=j,i_N=i$ and all $\mu_{i_{n-1}i_n}>0$. Let $\widetilde f(x)$ be the unconditional density and $$F(A\,|\,s,t;i,j)\ =\ {\mathds{P}}\Bigl(\int_s^t {\mathrm{e}}^{-\int_s^u r(v)\,{\mathrm{d}}v} {\mathrm{d}}B(u)\in A,\, Z_t=j\,\Big|\,Z_s=i\Bigr)\ =\ F^{\mathrm{ac}}(A\,|\,s,t;i,j)+F^{\mathrm{at}}(A\,|\,s,t;i,j)$$ where $F^{\mathrm{ac}},F^{\mathrm{at}}$ are the absolutely continuous, resp. atomic, parts (due to the special structure, there can be no singular but non-atomic part). Let $g(x\,|\,s,t;i,j)$ be the density of $F^{\mathrm{ac}}$ so that $\widetilde f(x) =$ $\sum_j g(x\,|\,0,T;i,j)$ when $Z_0=i$. Let $\overline g(s,t;i,j)=$ $\sup_x g(x\,|\,s,t;i,j)$, and assume it has been shown that $\overline g(0,T-1;i,j)<\infty$, $\overline g(T-1,T;i,j)<\infty$. We then get $$\begin{aligned} g(x\,|\,0,T;i,j)\ &\le\ \sum_k\int g(x-y\,|\,0,T-1;i,k)F({\mathrm{d}}y\,|\,T-1,T;k,j)\\ &\qquad\qquad + \sum_k\int g(x-y\,|\, T-1,T;k,j)F({\mathrm{d}}y\,|\,0,T-1;i,k)\\ &\le \ \sum_k \overline g(0,T-1;i,k)\int F({\mathrm{d}}y\,|\,T-1,T;k,j)+ \sum_k\overline g(T-1,T;k,j)\int F({\mathrm{d}}y\,|\,0,T-1;i,k)\end{aligned}$$ so that $\overline g(0,T;i,j)<\infty$. Thus we may assume that $T=1$ and simply write $\mu_{ij,k}=\mu_{ij}$ etc. The stated conditions imply in all three cases that $f$ has finite support. Indeed, the support is contained in $[-A,A]$ where $A=p\max|b^i|+p(p-1)\max|b^{ij}|$ in cases (i) or (iii) and $A=p\max|b^i|$ in case (ii). Since $f_0$ is bounded below on $[-A,A]$ for any $A$, it thus suffices to show that $f$ is bounded. Assume first that all $b^{ij}=0$, and let $x$ be fixed. Define $S_N\subset [0,1]^N$ as $S_N=$ $\{0<t_1<\cdots<t_N<1\}$ and $h_n=t_{n+1}-t_{n}$ for $0<n<N$, $h_0=t_1$, $h_N=1-t_N$. 
A path $Z_0=i_0i_1\ldots i_N=Z_1$ contributes to $\widetilde f(x)$ only if $N>0$ and then by $$\begin{aligned} \MoveEqLeft \int_{S_N}\Bigl[\prod_{n=1}^{N} {\mathrm{e}}^{\mu_{i_{n-1}i_{n-1}}h_{n-1}}\mu_{i_{n-1}i_{n}}{\mathrm{d}}t_n\Bigr]\cdot {\mathrm{e}}^{\mu_{i_Ni_N}h_N}\cdot {\bf 1}(a_1h_1+\cdots+a_Nh_N=x)\\ &\le \int_{S_N} \prod_{n=1}^{N}\mu_{i_{n-1}i_{n}}\,{\mathrm{d}}t_n \ \le\ \overline\mu^{\,N} \cdot\text{Leb}(S_N)\ =\ \frac{\overline\mu^{\,N}}{N!}\end{aligned}$$ where $\overline\mu=\max_{i\ne j} \mu_{ij}$. Thus the contribution from all paths of length $N$ is at most $p^N\overline\mu^{\,N}/N!$. Summing over $N$ gives the bound ${\mathrm{e}}^{p\overline\mu}$ for $f(x)$ which is independent of $x$. \[Rem:28.4a\] Condition (iii) is not far from being necessary. Consider as an example the disability model with states 0:active, 1:disabled, 2:dead and recovery, in the time interval $[0,1]$, with the same intensity $\lambda>0$ for transitions from 0 to 1 as from 1 to 0 and (for simplicity) mortality rate $0$ and discounting rate $r=0$. The benefits are a lump sum $b^{01}=1$ upon transition from 0 to 1 and the contributions a constant payment rate $b^0<0$ when active. When $Z_0=0$, the total number $N$ of transitions in $[0,1]$ is Poisson$(\lambda)$, the total benefits $U_1(0,1)$ are $M=\lceil N/2\rceil$, and the total contributions $U_0(0,1)$ equal $|b^0|$ times the total time $T_0$ spent in state 0. Thus $U(0,1)=$ $U_1(0,1)-U_0(0,1)$ $=M-U_0(0,1)$. Obviously, $U_0(0,1)$ is concentrated on the interval $[0,|b^0|]$ with a density $h(x)$ which is bounded away from 0, say $h(x)\ge c_1>0$. Assuming, again for simplicity, that $|b^0|>1$, the intervals $[m-|b^0|,\,m]$ overlap and so for a given $x$, at least one of them contributes to $f(x)$. One candidate is the one with $m=\lceil x\rceil$. 
This gives $$\begin{aligned} f(x)\ &\ge\ {\mathds{P}}(M=\lceil x\rceil)h(\lceil x\rceil-x)\ \ge\ {\mathds{P}}(N=2\lceil x\rceil)c_1\,.\end{aligned}$$ But using Stirling’s approximation to estimate the Poisson probability, it follows after a little calculus that ${\mathds{P}}(N=2\lceil x\rceil)\ge$ $c{\mathrm{e}}^{-4x\log x}$ for all large $x$, say $x\ge x_0$, and some $c>0$ (in fact the 4 can be replaced by any $c_3>2$). This gives for a normal$(\mu,\sigma^2)$ $f_0$ that $$\int\frac{f^2}{f_0}\ \ge\ \int_{x_0}^\infty\frac{f^2(x)}{f_0(x)}\,{\mathrm{d}}x \ \ge\ c_4\int_{x_0}^\infty\exp\{(x-\mu)^2/2\sigma^2-8x\log x\}\,{\mathrm{d}}x\ =\ \infty\,.$$ That is, fails. The obvious way out is of course to take $f_0$ with a heavier tail than the normal, say doubly exponential (Laplacian) with density $\frac{1}{2}{\mathrm{e}}^{-|x|}$ for $-\infty<x<\infty$. Given that this example and other cases where condition (iii) is violated do not seem very realistic, we have not pursued this further. \[Ex:29.4a\] [Figure: timelines over $[s,T]$, with an intermediate time point $S$, illustrating the partition of the sample space into the events $F_0=\{\tau_0^1\wedge\tau_0^2>T\}$, $F_{2-}=\{0<\tau_0^2\le S\}$, $F_{2+}=\{S<\tau_0^2\le T\}$, $F_{1-}=\{0<\tau_0^1\le S,\, \tau_1^2>T\}$, $F_{1+}=\{S<\tau_0^1 ,\, \tau_1^2>T\}$, $F_{1_-2_-}=\{0<\tau_0^1<\tau_1^2\le S\}$, $F_{1_-2_+}=\{0<\tau_0^1\le S<\tau_1^2\le T\}$ and $F_{1_+2_+}=\{S<\tau_0^1<\tau_1^2\le T\}$.]{data-label="disabfig"} Consider again the disability model with states 0:active, 1:disabled, 2:dead and no recovery. As in BuchM, we assume that the payment stream has the form $b^0(t)=-b_-^0$ for $t\le S$ and $b^0(t)=b_+^0$ for $t>S$,[^1] $b^1(t)\equiv b^1$ for all $t$. For example $S$ could be the retirement age, say 65. Without recovery, the only non-zero transition rates are the $\mu_{01}(t)$, $\mu_{02}(t)$, $\mu_{12}(t)$. With the values used in BuchM, these are bounded away from 0 and $\infty$ on $[0,T]$ for any $T$ (say 75 or 80), and this innocent assumption is all that matters for the following. Define the stopping times $$\begin{aligned} \tau_0^1\ &=\ \inf\{t>s:\, Z_t=1,\,Z_u=0\text{ for }u<t\}\\ \tau_1^2\ &=\ \inf\{t>\tau_0^1:\, Z_t=2\}\\ \tau_0^2\ &=\ \inf\{t>s:\, Z_t=2,\,Z_u=0\text{ for }u<t\}\end{aligned}$$ with the usual convention that the stopping time is $\infty$ if there is no $t$ meeting the requirement in the definition. 
One then easily checks that the sets $F_0,\ldots,F_{1_+2_+}$ defined in Fig. \[disabfig\] define a partition of the sample space. Here $F_0$ (corresponding to zero transitions in $[s,T]$) contributes an atom at $$a=-b_-^0(1-{\mathrm{e}}^{-rS })/r+b_+^0({\mathrm{e}}^{- rS}-{\mathrm{e}}^{- rT})/r\text{\ \ with probability\ \ } q=\exp\Bigl\{\int_0^T\mu_{00}(u)\,{\mathrm{d}}u\Bigr\}$$ and the remaining 7 events with absolutely continuous parts, say with (defective) densities $g_{2_-},\ldots,g_{1_+2_+}$. It is therefore sufficient to show that each of these is bounded. In obvious notation, the contributions to $U(s,T)$ of the first 4 of these 7 events (corresponding to precisely one transition in $[s,T]$) are $$\begin{gathered} A_{2_-}\ =\ \bigl[-b^0_-(1-{\mathrm{e}}^{-r\tau_0^2})/r\bigr]\cdot1_{\tau_0^2\le S}\,,\\ A_{2_+}\ =\ \bigl[-b^0_-(1-{\mathrm{e}}^{-rS})/r+b_+^0({\mathrm{e}}^{-rS}-{\mathrm{e}}^{-r\tau_0^2})/r\bigr]\cdot1_{S<\tau_0^2\le T}\,,\\ A_{1_-}\ =\ \bigl[-b^0_-(1-{\mathrm{e}}^{-r\tau_0^1})/r+b^1({\mathrm{e}}^{-r\tau_0^1}-{\mathrm{e}}^{-rT})/r\bigr]\cdot1_{\tau_0^1\le S,\,\tau_1^2>T}\,,\\ A_{1_+}\ =\ \bigl[-b^0_-(1-{\mathrm{e}}^{-rS})/r+b_+^0({\mathrm{e}}^{-rS}-{\mathrm{e}}^{-r\tau_0^1})/r+b^1({\mathrm{e}}^{-r\tau_0^1}-{\mathrm{e}}^{-rT})/r\bigr]\cdot1_{S<\tau_0^1\le T,\,\tau_1^2>T}\,.\end{gathered}$$ The desired boundedness of $g_{2_-},g_{2_+},g_{1_-},g_{1_+}$ therefore follows from the following lemma, where $\tau$ may be improper (${\mathds{P}}(\tau=\infty)>0$) so that $\int h<1$: \[Lemma:24.4a\] If a r.v. $\tau$ has a bounded density $h$ and the function $\varphi$ is monotone and differentiable with $\varphi'$ bounded away from 0, then the density of $\varphi(\tau)$ is bounded as well. The assumptions imply the existence of $\psi=\varphi^{-1}$. Now just note that the density of $\varphi(\tau)$ is $h(\psi(x))/\varphi'(\psi(x))$ if $\varphi$ is increasing and $h(\psi(x))/\bigl|\varphi'(\psi(x))\bigr|$ if it is decreasing. The cases of $g_{1_-2_-},g_{1_-2_+},g_{1_+2_+}$ 
(corresponding to precisely two transitions in $(s,T]$) are slightly more intricate. Consider first $g_{1_-2_-}$. The contribution to $U(0,T)$ is here $$A_{1_-2_-}\ =\ -b_-^0\int_0^{\tau_0^1}{\mathrm{e}}^{-ru}\,{\mathrm{d}}u\,+\, b^1\int_{\tau_0^1}^{\tau_1^2}{\mathrm{e}}^{-ru}\,{\mathrm{d}}u\ =\ c_0+c_1{\mathrm{e}}^{-r\tau_0^1}-c_2{\mathrm{e}}^{-r\tau_1^2}\,.$$ Now the joint density $h(t_1,t_2)$ of $(\tau_0^1,\tau_1^2)$ at a point $(t_1,t_2)$ with $0<t_1<t_2\le T$ is $$\exp\Bigl\{\int_0^{t_1}\mu_{00}(u)\,{\mathrm{d}}u\Bigr\}\mu_{01}(t_1) \cdot\exp\Bigl\{\int_{t_1}^{t_2}\mu_{11}(v)\,{\mathrm{d}}v\Bigr\}\mu_{12}(t_2)$$ so that $h(t_1,t_2)$ is bounded. Consider now the transformation taking $(\tau_0^1,\tau_1^2)$ into $(\tau_0^1,A_{1_-2_-})$. The inverse of the Jacobian $J$ is $$\begin{vmatrix} 1 & 0 \\ -rc_1{\mathrm{e}}^{-r\tau_0^1}&rc_2{\mathrm{e}}^{-r\tau_1^2} \end{vmatrix}\ =\ rc_2{\mathrm{e}}^{-r\tau_1^2}$$ which is uniformly bounded away from 0 when $\tau_1^2\le T$. Therefore also the joint density $k(z_1,z_2)$ of $(\tau_0^1,A_{1_-2_-})$ is bounded. Integrating $k(z_1,z_2)$ w.r.t. $z_1$ over the finite region $0\le z_1\le S$ finally gives that $g_{1_-2_-}$ is bounded. The argument did not use that $\tau_1^2\le S$ and hence also applies to $g_{1_-2_+}$. Finally note that the contribution to $g_{1_+2_+}$ from $[0,S]$ is just a constant, whereas the one from $(S,T]$ has the same structure as used for $g_{1_-2_-}$. Hence also $g_{1_+2_+}$ is bounded. \[Rem:24.4a\] The calculations show that $F_{1+}$ also is an atom if $b_+^0=b^1$. However, since the contribution rate $b_-^0$ is calculated via the equivalence principle given the benefits $b^0_+,b^1$ and the transition intensities, this would be a very special case. The somewhat less special situation $b^0_+=b^1$ (the same annuity to a disabled person as to someone retired as active) also gives an atom, now at $F_{1+}$. In fact, $b_-^0=b^1$ is assumed in BuchM. 
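The change-of-variables step in Lemma \[Lemma:24.4a\] is easy to sanity-check by simulation. In the sketch below (all parameter values hypothetical), $\tau$ is a truncated exponential and $\varphi(t)={\mathrm{e}}^{-rt}$, so that $|\varphi'|$ is bounded away from 0 on the compact support $[0,S]$ and the density of $\varphi(\tau)$ is bounded by $\sup h/\inf|\varphi'|$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance of the lemma (all parameter values hypothetical):
# tau ~ Exp(lam) conditioned on tau <= S, and phi(t) = exp(-r t),
# whose derivative is bounded away from 0 on the compact support [0, S].
lam, r, S = 1.0, 0.08, 10.0
n = 200_000
tau = rng.exponential(1.0 / lam, size=3 * n)
tau = tau[tau <= S][:n]                       # truncated Exp(lam) on [0, S]
y = np.exp(-r * tau)                          # phi(tau), supported on [e^{-rS}, 1]

Zc  = 1.0 - np.exp(-lam * S)                  # truncation constant
psi = lambda v: -np.log(v) / r                # inverse of phi
h   = lambda t: lam * np.exp(-lam * t) / Zc   # (bounded) density of tau
f_y = lambda v: h(psi(v)) / (r * v)           # h(psi)/|phi'(psi)|, |phi'| = r*phi

# Monte Carlo check of the closed-form density against a histogram
edges = np.linspace(np.exp(-r * S), 1.0, 41)
hist, _ = np.histogram(y, bins=edges, density=True)
mid = (edges[:-1] + edges[1:]) / 2
```

The histogram should track `f_y(mid)` closely, and `f_y` never exceeds the lemma's bound $\sup h/\inf|\varphi'|$.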
\[Rem:24.4b\] For a counterexample to , consider again the disability model with the only benefit being a lump sum of size $b^{01}(t)=$ ${\mathrm{e}}^{rt}\varphi(t)$ paid out at $\tau_0^1$, where $\varphi(t)=$ $(t-a)^21_{t\le a}+b$, cf. Fig. \[varphiFig\]. [Figure: the function $y=\varphi(t)=(t-a)^2 1_{t\le a}+b$, decreasing from $b+a^2$ at $t=0$ to the constant level $b$ for $t\ge a$, with the inverse $t=\psi(y)$ indicated.]{data-label="varphiFig"} Then $U(s,T)=\varphi(\tau_0^1)$ has an atom at $b$ (corresponding to $\tau_0^1>a$) and an absolutely continuous part on $(b,b+a^2]$. Letting $\psi:\,[b,b+a^2]\to [0,a]$ be the inverse of $\varphi$, $t=\psi(y)$ satisfies $$y=(t-a)^2+b\ \Rightarrow (y-b)^{1/2}=-(t-a)\ \Rightarrow \ t=\psi(y)=a-(y-b)^{1/2}\,,$$ where the minus sign after the first $\Rightarrow$ follows since $\psi$ is decreasing. With $h$ the density of $\tau_0^1$, we get as in Lemma \[Lemma:24.4a\] that the conditional density $f$ of the absolutely continuous part is given by $$f(y)\ =\ \frac{1}{{\mathds{P}}(\tau_0^1\le a)}\frac{h(\psi(y))}{\bigl|\varphi'(\psi(y))\bigr|}\ =\ \frac{1}{{\mathds{P}}(\tau_0^1\le a)}\frac{h(a-(y-b)^{1/2})}{2(y-b)^{1/2}}\,.$$ But there are $c_1,c_2>0$ such that $f_0(y)\le c_1$ on $[b,b+a^2]$ and $h(t)\ge c_2$ on $[0,a]$, and so we get $$\int\frac{f^2}{f_0}\ = \int_{b}^{b+a^2}\frac{f^2(y)}{f_0(y)}\,{\mathrm{d}}y\ \ge\ \frac{c_2^2}{4c_1{\mathds{P}}(\tau_0^1\le a)^2}\int_{b}^{b+a^2}\frac{{\mathrm{d}}y}{y-b}\ =\ \infty\,,$$ meaning that does not hold. Our second main example is the number $X=N(t)$ of events before time $t$ in the Markovian arrival process. This is a discrete r.v., but for the general theory one just needs to replace Lebesgue measure ${\mathrm{d}}x$ in , etc. above by counting measure on $\{0,1,2,\ldots\}$. For $X=N(t)$, a candidate for the reference distribution could be the Poisson distribution with the same mean $\lambda$ as $N(t)$ in stationarity. 
The orthogonal polynomials are then the Charlier-Poisson polynomials $$\label{8.4a} p_n(x)\ =\ \lambda^{n/2}(n!)^{-1/2}\sum_{k=0}^n(-1)^{n-k}{n\choose k}k!\lambda^{-k}{x\choose k}\ =\ \lambda^{n/2}(n!)^{1/2}L_n^{(x-n)}(\lambda)$$ where $L_n$ is the $n$th Laguerre polynomial, cf. Szegö [@Szego39] pp. 34–35 and Schmidt [@Schmidt33]. A numerical example =================== By , we may (essentially) assume without loss of generality that the parameters are constant. Thus we shall consider a disability–unemployment model, defined in terms of a time–homogeneous Markov process $\{ Z(t) \}_{t\geq 0}$, with state space $E=\{ 1,2,3,4,5\}$, where state 1 corresponds to active (premium paying), 2 to disabled, 3 to unemployed, 4 to re–employed and 5 to death. Death rates from all states are assumed to be the same and equal to $0.5$, and all other possible transitions happen at rate $0.1$ as illustrated in Figure \[examfig\]. The interest rate is $r=0.08$, and the only lump sum payment (of size 2) is when entering the disability state from either state $1$, $3$ or $4$. In state $1$ premium is paid at rate $1$ and in state $3$ a benefit of rate $1$ is obtained. A parametrisation of the model is given by $${\boldsymbol{\bm C}}= \begin{pmatrix} -0.7 & 0 & 0 & 0 & 0 \\ 0 & -0.5 & 0 & 0 & 0 \\ 0 & 0 & -0.7 & 0 & 0\\ 0 & 0 & 0 & -0.6 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \ \ {\boldsymbol{\bm D}} = \begin{pmatrix} 0 & 0.1 & 0.1 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 0.5 \\ 0 & 0.1 & 0 & 0.1 & 0.5 \\ 0 & 0.1 & 0 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \ \ {\boldsymbol{\bm B}}= \begin{pmatrix} 0 & 2 & 0 & 0 & 0 \\ 0 & 0& 0& 0& 0\\ 0 & 2 & 0& 0& 0 \\ 0 & 2& 0& 0 & 0 \\ 0& 0& 0& 0& 0 \end{pmatrix}$$ and $${\boldsymbol{\bm b}} = (-1,0,1,0,0) .$$ It is clear from this example that parametrisations may be ambiguous. Indeed, for all $d_{ij}>0$ with a corresponding $b^{ij}=0$ we could equally have moved these values into the ${\boldsymbol{\bm C}}$ matrix instead.
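As a quick consistency check of the parametrisation, one may verify numerically that ${\boldsymbol{\bm C}}+{\boldsymbol{\bm D}}$ is a proper intensity matrix (zero row sums, nonnegative off-diagonal intensities) and that lump sums in ${\boldsymbol{\bm B}}$ sit only on feasible transitions. A minimal sketch (illustrative, not part of the paper's implementation):

```python
import numpy as np

C = np.diag([-0.7, -0.5, -0.7, -0.6, 0.0])
D = np.array([[0, 0.1, 0.1, 0,   0.5],
              [0, 0,   0,   0,   0.5],
              [0, 0.1, 0,   0.1, 0.5],
              [0, 0.1, 0,   0,   0.5],
              [0, 0,   0,   0,   0  ]])
B = np.zeros((5, 5))
B[[0, 2, 3], 1] = 2.0                 # lump sum of 2 on entering state 2 (column 2 of B)
b = np.array([-1.0, 0.0, 1.0, 0.0, 0.0])   # premium rate in state 1, benefit rate in state 3

Q = C + D                              # full intensity matrix of the Markov process
assert np.allclose(Q.sum(axis=1), 0)   # rows of an intensity matrix sum to zero
assert (D >= 0).all()                  # jump intensities are nonnegative
assert ((B == 0) | (D > 0)).all()      # lump sums only where a transition is possible
print(Q)
```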
*(Figure \[examfig\]: transition diagram of the five-state model. Transitions into state $2$ from states $1$, $3$ and $4$ occur at rate $\lambda$, the transition $1\to 3$ at rate $\theta$, the transition $3\to 4$ at rate $\eta$, and transitions into the death state $5$ at rate $\mu$.)* ![Left: densities based on $k=20,30,40,50$ moments plotted against a histogram based on 300,000 simulations. Right: Empirical CDF of the 300,000 simulations vs. the CDF obtained by $k$ moments[]{data-label="fig:Ex1-hist-vs-dens"}](test2-1.pdf) ![Left: densities based on $k=20,30,40,50$ moments plotted against a histogram based on 300,000 simulations. Right: Empirical CDF of the 300,000 simulations vs. the CDF obtained by $k$ moments[]{data-label="fig:Ex1-hist-vs-dens"}](test1-1.pdf) A comparison of the first eight moments calculated respectively by a matrix exponential and by simulation is given in Table \[tab:moments\]. **Order** **Matrix exp** **Simulation** ----------- ---------------- ---------------- 1 -0.8240 -0.8247 2 2.8630 2.8639 3 -6.751 -6.797 4 33.21 33.33 5 -122.4 -124.5 6 708.9 716.6 7 -3233 -3323 8 20633 21024 : Moments calculated by the exact method and by simulation[]{data-label="tab:moments"} In Figure \[fig:Ex1-hist-vs-dens\] we show a histogram of 300,000 simulated data as compared to estimated densities (and one cdf) based on different numbers of moments. It is clear that the challenging shape of the density will in general require a high number of moments in order to obtain a good approximation.
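The moment-based density approximation used here can be sketched generically. The toy example below (an illustration of the expansion idea, not the code behind the figures) expands a standardized target density against orthonormal probabilists' Hermite polynomials with a standard normal reference $f_0$, computing the coefficients $c_n={\mathds{E}}\,p_n(X)$ purely from raw moments; for a normal target, $c_0=1$ and all higher coefficients vanish, so the expansion reproduces $f_0$ exactly:

```python
from math import factorial
import numpy as np
from numpy.polynomial import hermite_e as H

K = 8
# Raw moments m_k of the standardized target; here a standard normal,
# for which m_k = (k-1)!! for even k and 0 for odd k.
m = [1.0] + [0.0 if k % 2 else float(np.prod(np.arange(k - 1, 0, -2)))
             for k in range(1, K + 1)]

# c_n = E[p_n(X)] for the orthonormal p_n = He_n / sqrt(n!), computed
# from the moments via the power-basis coefficients of He_n.
c = []
for n in range(K + 1):
    he = H.herme2poly([0.0] * n + [1.0])          # coefficients of He_n
    c.append(sum(a * mk for a, mk in zip(he, m)) / np.sqrt(factorial(n)))

def f_hat(x):
    """Expansion f0(x) * sum_n c_n p_n(x) with N(0,1) reference f0."""
    f0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    pn = np.array([H.hermeval(x, [0.0] * n + [1.0]) / np.sqrt(factorial(n))
                   for n in range(K + 1)])
    return f0 * np.dot(c, pn)

x = 0.7
print(f_hat(x), np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
```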
The cdf, which has a smoother behaviour and therefore is easier to approximate, seems to provide a good fit to the simulated data. How sensitive the cdf approximation is to the number of moments used is shown in Figure \[fig:cdfs\]. The method seems at first sight to be quite robust, which is also confirmed in Table \[tab:quantiles\], where the quantiles obtained from the approximating cdfs for different numbers of moments all lie within a sensible range of each other. **Number of moments** **$2.5\%$ quantile** **$97.5\%$ quantile** ----------------------- ---------------------- ----------------------- 10 -4.05 2.15 20 -3.95 2.10 30 -4.00 2.05 40 -3.95 2.05 50 -4.00 2.05 : Quantiles calculated using the approximating cdf based on different numbers of moments. The simulated quantiles are $-4.10$ and $1.95$ respectively.[]{data-label="tab:quantiles"} ![Approximation of the CDF based on different numbers of moments.[]{data-label="fig:cdfs"}](cdfs.pdf) Conclusion ========== In this paper we have established a matrix–oriented approach for calculating the (discounted) rewards of time–inhomogeneous Markov processes with finite state–space. In particular, for applications to multi–state Markov models in life insurance our approach provides an alternative to standard derivations in the literature, which are usually based on case–by–case derivations involving differential or integral equations. In the slightly more general set–up in this paper we provide a unifying approach to deriving reserves and moments (of, in principle, arbitrary orders) which has a simple numerical implementation. The Laplace–Stieltjes transform of the (discounted) future payments, which plays an important role in the derivation of the moments and whose derivation is based on probabilistic (sample path) arguments, has a strikingly simple form which would allow for a numerical inversion in order to obtain the cdf or density of the future payments as well.
However, since the moments of all orders are, in principle, available, we propose an alternative method involving approximation of the cdf and densities via orthogonal polynomial expansions based on central moments. While this method seems to be very robust concerning the cdf, the approximation of the density itself is more involved, which stems from the fact that the presence of lump sums mixed with continuous payment rates implies that the densities can have a very challenging form. [^1]: Here and in the following, a $-$ subscript indicates ‘before $S$’ and a $+$ ‘after $S$’.
--- abstract: | Galerkin and Petrov-Galerkin methods are some of the most successful solution procedures in numerical analysis. Their popularity is mainly due to the optimality properties of their approximate solution. We show that these features carry over to the (Petrov-)Galerkin methods applied to the solution of linear matrix equations. Some novel considerations about the use of Galerkin and Petrov-Galerkin schemes in the numerical treatment of general linear matrix equations are expounded and the use of constrained minimization techniques in the Petrov-Galerkin framework is proposed. author: - 'Davide Palitta[^1]' - 'Valeria Simoncini[^2]' bibliography: - 'galerkin.bib' title: 'Optimality properties of Galerkin and Petrov-Galerkin methods for linear matrix equations[^3]' --- [*Dedicated to Volker Mehrmann on the occasion of his 65th birthday*]{} Linear matrix equations. Large scale equations. Sylvester equation. 65F10, 65F30, 15A06 Introduction ============ Many state-of-the-art solution procedures for algebraic linear systems of the form $$\label{eqn:mainkron} {\cal M} x = f,$$ where ${\cal M}\in{{\mathbb{R}}}^{N\times N}$ and $f\in{{\mathbb{R}}}^N$, are based on projection. Given a subspace ${\cal K}_m$ of dimension $m$, and a matrix ${\cal V}_m$ whose orthonormal columns span ${\cal K}_m$, these methods seek an approximate solution $x_m = {\cal V}_m y_m$ for some $y_m\in{{\mathbb{R}}}^m$ by imposing certain conditions. The most successful projection procedures impose either a [*Galerkin*]{} or a [*Petrov-Galerkin*]{} condition on the residual $r_m = f - {\cal M} x_m$. See, e.g., [@Saad2003]. These conditions are very general, and they are at the basis of many approximation methods, beyond the algebraic context of interest here; any approximation strategy associated with an inner product can determine the projected solution by one such condition.
Finite element methods, both at the continuous and discrete levels, strongly rely on this methodology; see, e.g., [@Strang.Fix.73], but also eigenvalue problems [@Saad1992]. It is very important to realize that this is a methodology, not a single method: the approximation space can be generated independently of the condition, and in a way to make the computation of $y_m$ more effective, while obtaining a sufficiently accurate approximation with the smallest possible space dimension. A fundamental property of the Galerkin methodology is obtained whenever the coefficient matrix ${\cal M}$ is symmetric and positive definite (spd): the Galerkin condition on the residual corresponds to minimizing the error vector in the norm associated with ${\cal M}$ over the approximation space. This property is at the basis of the convergence analysis of methods such as the Conjugate Gradient (CG) [@Hestenes1952], and it ensures monotonic convergence, in addition to finite termination, in exact arithmetic. When $\mathcal{M}$ is not spd, the application of the Galerkin method does not automatically imply a minimization of the error norm. Nevertheless, a certain family of Petrov-Galerkin procedures still fulfills an optimality property. Indeed, these methods minimize the residual norm over the approximation space by imposing orthogonality of the residual to the space $\mathcal{MK}_m$. See, e.g., [@Saad2003]. Some of the most popular solvers for linear systems such as MINRES [@Paige1975] and GMRES [@Schultz1986] belong to this collection of methods.
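This residual-minimization property is easy to check numerically. The sketch below (random data and a plain Krylov space, purely illustrative) computes the minimum-residual solution over ${\cal K}_m$ by least squares and verifies the Petrov-Galerkin condition, i.e., that the residual is orthogonal to $\mathcal{MK}_m$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 40, 8
M = rng.standard_normal((N, N)) + N * np.eye(N)   # nonsymmetric, well conditioned
f = rng.standard_normal(N)

# Orthonormal basis of the Krylov space K_m = span{f, Mf, ..., M^{m-1} f}
K = np.column_stack([np.linalg.matrix_power(M, j) @ f for j in range(m)])
V, _ = np.linalg.qr(K)

# Minimum-residual (GMRES-type) solution over K_m: min_y || f - M V y ||
y, *_ = np.linalg.lstsq(M @ V, f, rcond=None)
r = f - M @ (V @ y)

# Petrov-Galerkin condition: the residual is orthogonal to the test space M K_m
assert np.allclose((M @ V).T @ r, 0)
print(np.linalg.norm(r))
```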
In the past decades, projection techniques have been successfully used to solve linear matrix equations of the form $$\label{eqn:matrixeq} A_1 X B_1 + A_2 X B_2 + \ldots + A_\ell X B_\ell = F,$$ that have arisen as a natural algebraic model for discretized partial differential equations (PDEs), possibly including stochastic terms or parameter dependent coefficient matrices [@Baumann2018; @Benner.Damm.11; @Powell.Silvester.Simoncini.17; @Palitta2016], for PDE-constrained optimization problems [@Stoll2015], data assimilation [@Freitag2018], and many other applied contexts, including building blocks of other numerical procedures [@Lin2013]; see also [@Simoncini2014; @Benner2013] for further references. The general matrix equation (\[eqn:matrixeq\]) covers two well known cases, the (generalized) Sylvester equation (for $\ell = 2$), and the Lyapunov equation $$\begin{aligned} \label{eqn:Lyap} A X + X A^T = F,\end{aligned}$$ which plays a crucial role in many applications such as control and system theory [@Benner2005; @Antoulas.05], and in the solution of Riccati equations by the Newton method, in which a Lyapunov equation needs to be solved at each Newton step. See, e.g., [@Mehrmann1991]. The aim of this paper is to generalize the optimality properties of the Galerkin and Petrov-Galerkin methods to matrix equations, and to extend other convergence properties of CG and some related schemes to the matrix setting. Some of the proposed results are new, some others can be found in articles scattered in the literature in different contexts. We thus provide a more uniform presentation of these results. To introduce a matrix version of the error and residual minimization, we first recall the relation between matrix-matrix operations and Kronecker products. 
Indeed, if $\otimes$ denotes the Kronecker product and ${\cal T} := B^T \otimes A$, then $$Y = A X B \quad \Leftrightarrow \quad y={\cal T} x, \quad x = {\rm vec}(X),\; y={\rm vec}(Y) ,$$ where the usual “vec($\cdot$)” operator stacks the columns of the argument matrix one after the other into a long vector. The Galerkin condition ====================== In this section we first recall the result connecting the Galerkin condition on the residual with the minimization of the error norm when this is applied to the solution of linear systems, and then we show that similar results can be obtained also in the matrix equation setting. For the rest of the section we assume that ${\cal M}$ in is symmetric and positive definite. The linear system setting ------------------------- Let $x_m=V_m y_m$ be an approximation to the true solution of (\[eqn:mainkron\]), and let $e_m = x - x_m$, $r_m = f - {\cal M} x_m$ be the associated error and residual, respectively. We recall that imposing the Galerkin condition yields $$\begin{aligned} \label{eqn:gal_cond} V_m^T r_m =0 \quad \Leftrightarrow \quad V_m^T {\cal M} V_m y_m = V_m^T f.\end{aligned}$$ Note that the coefficient matrix $V_m^T {\cal M} V_m$ is symmetric and positive definite. Solving this system yields the “projected” vector $y_m$, so as to completely define $x_m$. Let $\|e_m\|_{\cal M}^2 := e_m^T {\cal M} e_m$ be the ${\cal M}$-norm associated with the spd matrix ${\cal M}$. For the error we thus have $$\begin{aligned} \label{eqn:err_M} \|e_m\|_{\cal M}^2 = \| {\cal M}^{1/2}(x-x_m)\|^2 = \|{\cal M}^{1/2}x - {\cal M}^{1/2} V_m y_m\|^2.\end{aligned}$$ The minimization of the error ${\cal M}$-norm thus corresponds to solving the least squares problem on the right, which gives $$({\cal M}^{1/2} V_m)^T {\cal M}^{1/2} V_m y_m = ({\cal M}^{1/2} V_m)^T {\cal M}^{1/2}x,$$ which, upon simplification of the transpositions and using ${\cal M}x=f$, yields $V_m^T {\cal M} V_m y_m = V_m^T f$, that is, using (\[eqn:gal\_cond\]), $V_m^T r_m =0$.
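The chain of equalities above can be verified in a few lines (an illustrative sketch with random data): the Galerkin solution of the projected system coincides with the least squares solution of ${\cal M}^{1/2}V_m y\approx{\cal M}^{1/2}x$, and its residual satisfies $V_m^Tr_m=0$:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(4)
N, m = 30, 6
Q = rng.standard_normal((N, N))
M = Q @ Q.T + N * np.eye(N)                    # spd coefficient matrix
f = rng.standard_normal(N)
x = np.linalg.solve(M, f)                      # exact solution

V, _ = np.linalg.qr(rng.standard_normal((N, m)))   # orthonormal basis of K_m

# Galerkin: V^T M V y = V^T f
y_gal = np.linalg.solve(V.T @ M @ V, V.T @ f)

# Error minimization in the M-norm: least squares for M^{1/2} V y ~ M^{1/2} x
Mh = sqrtm(M).real
y_ls, *_ = np.linalg.lstsq(Mh @ V, Mh @ x, rcond=None)

assert np.allclose(y_gal, y_ls)                # same projected solution
r = f - M @ (V @ y_gal)
assert np.allclose(V.T @ r, 0)                 # Galerkin condition V^T r = 0
print(np.linalg.norm(y_gal - y_ls))
```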
Galerkin method and error minimization for matrix equations {#sec:galerkin_ME} ----------------------------------------------------------- To simplify the presentation, we first discuss Galerkin projection with the Lyapunov equation. Given the equation (\[eqn:Lyap\]) with $A$ spd and $F=F^T$, it can be shown that $X$ is symmetric. Letting ${\rm range}(V_k)$ be an approximation space, we can determine an approximation to $X$ as $X_k = V_k Y_k V_k^T$, which in vector notation is written as $\text{vec}(X_k) = (V_k\otimes V_k) {\rm vec}(Y_k)$. The matrix $Y_k$ is obtained by imposing the Galerkin condition in a matrix sense to the residual matrix $R_k = F-(AX_k+X_kA)$, that is $$V_k^T R_k V_k = 0 \quad \Leftrightarrow \quad (V_k\otimes V_k)^T r_k = 0,$$ where $r_k = {\rm vec}(R_k)$. Therefore, if one writes the Lyapunov equation by means of the Kronecker formulation, the obtained approximation space is ${\cal K}_m={\rm range}(V_k\otimes V_k)$. We explicitly notice that range$(X_k)\subseteq{\rm range}(V_k)$, whose dimension is much smaller than that of range$(V_k\otimes V_k)$. Therefore, by sticking to the matrix equation formulation, we expect to build a much smaller approximation space than with a blind use of the Kronecker form. In other words, by exploiting the original matrix structure, no redundant information is sought after. In Section \[Comparisons with the Kronecker formulation\] we provide a rigorous analysis of this argument. See also [@Kressner.Tobler.10]. To be able to exploit the derivation in (\[eqn:err\_M\]) we will define an error matrix and the associated inner product. The generalization to the multiterm linear equation (\[eqn:matrixeq\]) requires the definition of two approximation spaces, since the right and left coefficient matrices are not necessarily the same. Therefore, let range$(V_k)$ and range$(W_k)$ be two approximation spaces of dimension $k$ each[^4], and let us write the approximation to $X$ as $X_k = V_k Y_k W_k^T$.
With the residual matrix $R_k = F - \sum_{j=1}^\ell A_j X_k B_j$, the Galerkin condition now takes the form $$V_k^T R_k W_k = 0 \quad \Leftrightarrow \quad (W_k\otimes V_k)^T r_k = 0,$$ where $r_k = {\rm vec}(R_k)$, so that ${\cal K}_m={\rm range}(W_k\otimes V_k)$ with $m=k^2$ in the Kronecker formulation. To adapt the error minimization procedure to the matrix equation setting we first introduce a matrix norm, that allows us to make a connection with the ${\cal M}$-norm of the error vector. A corresponding derivation for $\ell=2$ can be found, for instance, in [@Vandere_Vandew.10 p. 2557] and [@Benner2014 p. 149]. \[def:S\] Let $$\label{defS} \begin{array}{lrll} {\cal S} :& \mathbb{R}^{n\times p}&\rightarrow&\mathbb{R}^{n\times p}\\ & X &\mapsto& \displaystyle\sum_{j=1}^\ell A_j X B_j,\\ \end{array}$$ and ${\cal S}_\ell =\sum_{j=1}^\ell B_j^T \otimes A_j$. We say that the operator ${\cal S}$ is symmetric and positive definite if for any $0 \ne x\in {{\mathbb{R}}}^{np}$, $x={\rm vec}(X)$, with $X\in{{\mathbb{R}}}^{n\times p}$, it holds that ${\cal S}_\ell = {\cal S}_\ell^T$ and $x^T {\cal S}_\ell x >0$, where $$x^T {\cal S}_\ell x = {\rm trace}\left( \sum_{j=1}^\ell X^T A_j X B_j\right).$$ The norm induced by this operator will be denoted by $\|X\|_{\cal S}$. Note that any linear operator $\mathcal{L}:\mathbb{R}^{n\times p}\rightarrow\mathbb{R}^{n\times p}$ can be written in the form with a uniquely defined minimum number of terms $\ell$ called the *Sylvester index*. See [@Konstantinov2000]. Assuming $\mathcal{S}$ to be spd, in the following proposition we show that the error matrix is minimized in the ${\cal S}$-norm. \[th\_errormin\] Let ${\cal S}(X) = F$ with ${\cal S}: X \mapsto \sum_j A_jXB_j$ spd, and let $\text{range}(V_k)$, $\text{range}(W_k)$ be the constructed approximation spaces, so that $X_k = V_k Y_k W_k^T$ is the Galerkin approximate solution. 
Then $$\|X-X_k\|_{\cal S} = \min_{Z=V_k Y W_k^T\atop Y\in{{\mathbb{R}}}^{k\times k}} \|X-Z\|_{\cal S} .$$ Let $e_k = {\rm vec}(X-X_k)$ be the error vector, $r_k = {\rm vec}(F-\sum_j A_j X_kB_j)$ the residual vector, ${\cal K}_m = \text{range}(W_k \otimes V_k)$ the approximation space and ${\cal S}_\ell = \sum_j B_j^T \otimes A_j$ the coefficient matrix. Then, since $\mathcal{S}$ is spd by assumption, also ${\cal S}_\ell$ is spd, and the Galerkin condition ${\cal V}_m^T r_k = 0$, $\mathcal{V}_m=W_k \otimes V_k$, corresponds to the minimization of the error. More precisely, it holds $$\|e_k\|_{\cal S}^2 = e_k^T {\cal S}_\ell e_k = {\rm trace}((X-X_k)^T {\cal S}(X-X_k)) = \|X-X_k\|_{\cal S}^2,$$ and the proof is completed. Proposition \[th\_errormin\] states that as the approximation spaces are expanded, the error decreases monotonically in the considered norm. A Galerkin approach for a multiterm linear matrix equation was for instance employed in [@Powell.Silvester.Simoncini.17]; the proposition above thus ensures that under the stated hypotheses on the data the method will minimize the error as the approximation spaces grow. See also Example \[Ex.3\]. A result similar to the one stated in Proposition \[th\_errormin\] can be found in [@Kressner.Tobler.10] where the authors consider specific approximation spaces and assume $\mathcal{S}$ to be a so-called *Laplace-like* operator. Proposition \[th\_errormin\] shows the strength of the Galerkin method, also in the general matrix equation setting. Indeed, the optimality condition of the Galerkin method depends neither on the adopted approximation spaces nor on the definition of $\mathcal{S}$, as long as this is spd.
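The monotonic error decrease stated in Proposition \[th\_errormin\] can be observed in a few lines. The sketch below is purely illustrative: it takes the Lyapunov case with a plain Krylov space $K_k(A,b)$ and a dense Lyapunov solver, neither of which is prescribed by the proposition, and checks that $\|X-X_k\|_{\cal S}=\sqrt{2\,{\rm trace}(E_k^TAE_k)}$ is non-increasing in $k$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 40
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)                    # spd
b = rng.standard_normal(n); b /= np.linalg.norm(b)
F = np.outer(b, b)
X = solve_continuous_lyapunov(A, F)            # reference solution of AX + XA = F

def s_norm(E):                                 # ||E||_S = sqrt(2 trace(E^T A E))
    return np.sqrt(2 * np.trace(E.T @ A @ E))

def krylov_basis(k):
    """Orthonormal (nested) basis of K_k(A, b) via reorthogonalized Gram-Schmidt."""
    V = np.zeros((n, k)); V[:, 0] = b
    for j in range(1, k):
        w = A @ V[:, j - 1]
        for _ in range(2):
            w -= V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    return V

errs = []
for k in range(1, 11):
    Vk = krylov_basis(k)
    Yk = solve_continuous_lyapunov(Vk.T @ A @ Vk, Vk.T @ F @ Vk)
    Xk = Vk @ Yk @ Vk.T                        # Galerkin approximate solution
    errs.append(s_norm(X - Xk))

assert all(e2 <= e1 + 1e-10 for e1, e2 in zip(errs, errs[1:]))
print(errs[0], errs[-1])
```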
Given a general linear matrix equation written in the form $\mathcal{S}(X)=F$, one would like to characterize the symmetry and positive definiteness of $\mathcal{S}$ by looking only at the properties of the matrices $A_j$ and $B_j$ and avoid the construction of the large matrix $\mathcal{S}_\ell$. Assuming $\ell$ to be the Sylvester index of $\mathcal{S}$, it is easy to show that $\mathcal{S}$ is a symmetric operator if and only if the matrices $A_j$ and $B_j$ are symmetric for all $j=1,\ldots,\ell$, whereas, in general, it is not possible to identify the positive definiteness of $\mathcal{S}$ by examining the spectral distributions of $A_j$ and $B_j$, even when these are completely known. See, e.g., [@Lancaster1970]. Note that for $\mathcal{S}$ to be spd it is not necessary for all the $A_j$’s and $B_j$’s to be positive definite. Nevertheless, if $A_j$, $B_j$ are positive definite for all $j=1,\ldots,\ell$, then $\mathcal{S}$ is positive definite; see, e.g., [@Vandere_Vandew.10 Proposition 3.1] for $\ell =2$. Therefore, in the case of the Lyapunov equation with $A$ spd, also the operator $\mathcal{S}$ is spd and it holds that $$\|X\|_{\cal S}^2 = 2\, {\rm trace}( X^T A X).$$ Another case where the properties of $\mathcal{S}$ can be determined in terms of the (symmetric) coefficient matrices $A_j$ and $B_j$ is the Sylvester operator ${\cal S} : X \mapsto A X + X B$. By exploiting the properties of the Kronecker product, it holds that $\mathcal{S}$ is positive definite if and only if $\lambda_i(A)+\lambda_j(B) >0$ for all $i$ and $j$. Moreover, the norm $\|\cdot\|_{\cal S}$ can be written as $\|X\|_{\cal S}^2 = {\rm trace}( X^T A X) + {\rm trace}( X B X^T)$. Consider the Lyapunov equation with the spd coefficient matrix $A$, and let $E_k := X-X_k$ be the corresponding error matrix. Then, the previous discussion shows that $$\|E_k\|_{\cal S}^2 = \min_{Z=V_k Y W_k^T\atop Y\in{{\mathbb{R}}}^{k\times k}} \|X-Z\|_{\cal S}^2 = 2\, {\rm trace}( E_k^T A E_k). 
$$ In the remark above we have not specified whether the known term $F$ in (\[eqn:Lyap\]) needs to be symmetric. If $F$ is symmetric, then indeed the two spaces can coincide, and $E_k$ is also symmetric. On the other hand, if $F$ has the form $F=F_1 F_2^T$, possibly low rank, natural choices as approximation spaces are such that ${\rm range}(F_1)\subseteq {\rm range}(V_k)$ and ${\rm range}(F_2)\subseteq {\rm range}(W_k)$, so that the (vector) residual is orthogonal to $F_2\otimes F_1$. A possible alternative could use $V_k=W_k$ such that ${\rm range}(F_1), {\rm range}(F_2) \subseteq {\rm range}(V_k)$, where however in general we expect ${\rm range}(V_k)$ to have larger dimension than in the previous case. \[Ex.3\] [By applying the stochastic Galerkin methodology for the discretization of elliptic stochastic PDEs [@Babuska2004], the resulting algebraic formulation can be written as the linear matrix equation  with typically $\ell >2$. When dealing with the stochastic steady-state diffusion problem with homogeneous Dirichlet boundary conditions, the symmetric matrices $A_j$ and $B_j$ may not all be positive definite; nonetheless, the associated operator $\mathcal{S}$ is symmetric and indeed [*positive definite*]{} (see, e.g., [@Powell2009]), so that the previous theory applies. In the following we consider the Galerkin approach developed in [@Powell.Silvester.Simoncini.17] – based on the rational Krylov subspace – to illustrate the monotonic decrease of the error $\mathcal{S}$-norm as the approximation space increases[^5]. We generate $A_j$ and $B_j$ as the second test case in the S-IFISS package [@Silvester2015] with the default setting for all the requested parameters. This yields]{} a linear matrix equation of the form  with $\ell=6$, $A_j\in\mathbb{R}^{n\times n}$, $n=225$, and $B_j\in\mathbb{R}^{p\times p}$, $p=56$. The right-hand side $F$ has rank 1. 
Thanks to the small problem dimension, we could compute the vectorized solution $x\in\mathbb{R}^{np}$ as $x = {\rm vec}(X)= {\cal S}_\ell^{-1}f$ (Matlab function “$\setminus$”), to be used as a reference “exact” solution. In particular, if $X_k$ denotes the approximate solution obtained after $k$ iterations of the Galerkin method, we compute $\|X-X_k\|_{\mathcal{S}}/\|X\|_{\mathcal{S}}$ until this falls below $10^{-6}$. Figure \[Fig.Ex.3\] displays the history of this relative error $\mathcal{S}$-norm, illustrating the expected monotonically non-increasing curve. Convergence properties {#Convergence properties} ====================== In the previous section we have shown that the Galerkin condition leads to a minimization of the error $\mathcal{S}$-norm and this property does not depend on the selected space $\mathcal{K}_k=\text{range}(V_k)$. In actual computations, a measurable estimate of the error is needed and in [@Simoncini.Druskin.09] an upper bound on the Euclidean norm of the error is provided in the case of the Lyapunov equation with rank-one right-hand side $F=b b^T$ with $\|b\|=1$, and a positive definite but not necessarily symmetric $A$. By exploiting the closed-form of the solution $X$, the authors showed that $$\|X - X_k\|_2 \leq 2 \int_0^\infty e^{-t\alpha_{\min}(A)} \|x-x_k\|_2 dt, \quad \alpha_{\min}(A) = \lambda_{\min}((A+A^T)/2),$$ where $x = e^{-t A} b$, $x_k = V_k e^{-t A_k} e_1$, $A_k:=V_k^TAV_k$, and $\|\cdot\|_2$ denotes the Euclidean norm. This led to the following proposition when the selected approximation space is the Krylov subspace $\text{range}(V_k)=K_k(A,b) = {\rm span}\{b, Ab, \ldots, A^{k-1}b\}$ and $A$ is symmetric. \[prop:sym\] Let $A$ be spd, and let $\lambda_{\max}$ and $\lambda_{\min}$ be the largest and the smallest eigenvalue of $A$, respectively.
Denoting by $\hat\kappa=(\lambda_{\max}+\lambda_{\min})/(2\lambda_{\min})$ the condition number of the spd matrix $A+\lambda_{\min} I$, the Galerkin approximate solution $X_k=V_k Y_k V_k^T$ satisfies $$\begin{aligned} \|X- X_k\|_2 & \leq & 2\frac { \sqrt{\hat\kappa}+1 } { \lambda_{\min}\sqrt{\hat\kappa}} \left ( \frac{\sqrt{\hat \kappa} -1}{\sqrt{\hat\kappa}+1}\right )^k . \label{eqn:bound1} \end{aligned}$$ This bound, in terms of slope as $k$ increases, was shown to be sharp in [@Simoncini.Druskin.09]. Notice that the bound holds also for the Frobenius norm of the error, namely $\|X-X_k\|_F$. Indeed, we can still write $ \|X - X_k\|_F \leq 2 \int_0^\infty e^{-t\alpha_{\min}(A)} \|x-x_k\|_F dt$ and the rest of the proof of Proposition \[prop:sym\] makes use of bounds for norms of vectors only, for which the Euclidean and the Frobenius norms coincide. See [@Simoncini.Druskin.09 Proposition 3.1] for more details. The bound can be generalized to the use of other spaces, such as rational Krylov subspaces, see, e.g., [@Beckermann.11; @Beckermann.Kressner.Tobler.13; @Knizhnerman.Simoncini.11; @Druskin.Knizhnerman.Simoncini.11]. We generalize the bound presented in Proposition \[prop:sym\] to the case of the Sylvester equation, $$\label{eq.Sylv} AX + X B = b_1 b_2^T,$$ with $A$ and $B$ symmetric and positive definite; without loss of generality, we can assume that $\|b_1\|_\star=\|b_2\|_\star=1$ where $\|\cdot\|_\star$ denotes either the Euclidean or the Frobenius norms. We first recall the Cauchy representation of the solution matrix $X$ to . Let us for now only assume that $A$ and $B$ are positive definite, and not necessarily symmetric. We can write (see, e.g., [@Lancaster1970]) $$X=\int_0^\infty e^{-t A} b_1 b_2^T e^{-tB} dt.$$ Consider the approximation $X_k=V_kY_kW_k^T$ where $V_k$ and $W_k$ span suitable subspaces and both have orthonormal columns.
The matrix $Y_k$ is obtained by imposing the Galerkin condition on $R_k = A X_k + X_k B - b_1b_2^T$, that is $$V_k^T R_k W_k = 0 \quad \Leftrightarrow \quad (V_k^T A V_k) Y_k + Y_k (W_k^T B W_k) - (V_k^Tb_1)(b_2^T W_k) = 0.$$ Let $A_k := V_k^T A V_k,\;B_k := W_k^T B W_k$. Thus $Y_k$ is obtained by solving a reduced Sylvester equation, whose size depends on the approximation space dimensions. Since the spectrum of $A_k$ ($B_k$) is contained in the spectral region of $A$ ($B$), we have that $\Lambda(A_k)+\Lambda(B_k)\subset\mathbb{C}_+$ and the matrix $Y_k$ can be written in integral form as $Y_k=\int_0^\infty e^{-t A_k} (V_k^Tb_1)(b_2^T W_k) e^{-tB_k} dt$ so that $$X_k = V_k \int_0^\infty e^{-t A_k} (V_k^Tb_1)(b_2^T W_k) e^{-tB_k} dt\, W_k^T.$$ Let $x:=e^{-t A} b_1, x_k := V_k e^{-t A_k} (V_k^Tb_1)$, $y:=e^{-t B} b_2, y_k := W_k e^{-t B_k} (W_k^Tb_2)$. Then, using $\|x\|_\star\leq e^{-t\alpha_{\min}(A)}$ (see, e.g., [@Corless2003 Lemma 3.2.1]), and since $\alpha_{\min}(A_k)\geq\alpha_{\min}(A)$, it holds that $\|x_k\|_\star\leq e^{-t\alpha_{\min}(A)}$. Similarly, $\|y\|_\star,\|y_k\|_\star\leq e^{-t\alpha_{\min}(B)}$. Therefore, (see also [@Kressner.Tobler.10 Lemma 4.7]) $$\begin{aligned} \label{eq.integral.Sylv} \|X - X_k\|_\star &= & \left\|\int_0^\infty (xy^T - x_k y_k^T) dt\right\|_\star \notag \\ &= & \frac{1}{2}\left\|\int_0^\infty \bigl((x+x_k)(y-y_k)^T + (x-x_k)(y+y_k)^T\bigr) dt\right\|_\star \notag\\ & \leq & \frac{1}{2}\int_0^\infty \Big((\|x\|_\star+\|x_k\|_\star) \|y-y_k\|_\star + \|x-x_k\|_\star (\|y\|_\star+\|y_k\|_\star)\Big) dt\notag \\ &\leq& \int_0^\infty \Big(e^{-t\alpha_{\min}(A)}\|y-y_k\|_\star + e^{-t\alpha_{\min}(B)} \|x-x_k\|_\star\Big) dt \notag \\ & = & \int_0^\infty (\|\hat y-\hat y_k\|_\star + \|\hat x-\hat x_k\|_\star ) dt ,\end{aligned}$$ where $\hat y = e^{-t (B+\lambda_{\min}(A) I)} b_2$, $\hat x = e^{-t (A+\lambda_{\min}(B) I)} b_1$, and analogously for $\hat y_k, \hat x_k$.
The inequality in  states that the $\star$-norm of the error associated with the Galerkin solution can be bounded by integrating over $[0,\infty)$ the errors obtained in the approximation of the exponential of the shifted matrices $B+\lambda_{\min}(A) I$ and $A+\lambda_{\min}(B) I$. In the next proposition we specialize the bound above when the Krylov subspaces $\text{range}(V_k)=K_k(A,b_1)$ and $\text{range}(W_k)=K_k(B,b_2)$ are adopted as approximation spaces and $A$, $B$ are both symmetric and positive definite. To this end, let us define $\lambda_{\min}(A)$, $\lambda_{\max}(A)$, $\lambda_{\min}(B)$, and $\lambda_{\max}(B)$ to be the extreme eigenvalues of $A$ and $B$, respectively, and $$\hat \kappa_A=\frac{\lambda_{\max}(A)+\lambda_{\min}(B)}{\lambda_{\min}(A)+\lambda_{\min}(B)}, \qquad \hat \kappa_B=\frac{\lambda_{\max}(B)+\lambda_{\min}(A)}{\lambda_{\min}(B)+\lambda_{\min}(A)},$$ the condition numbers of $A+\lambda_{\min}(B)I$ and $B+\lambda_{\min}(A)I$, respectively. Let $A$ and $B$ be spd and $\text{range}(V_k)=K_k(A,b_1)$, $\text{range}(W_k)=K_k(B,b_2)$. Then the Galerkin approximate solution $X_k=V_kY_kW_k^T$ to is such that [$$\|X-X_k\|_\star \leq\frac{2}{\lambda_{\min}(A)+\lambda_{\min}(B)}\left(\frac{\sqrt{\hat \kappa_A}+1}{\sqrt{\hat \kappa_A}}\left(\frac{\sqrt{\hat \kappa_A}-1}{\sqrt{\hat \kappa_A}+1}\right)^k+ \frac{\sqrt{\hat \kappa_B}+1}{\sqrt{\hat \kappa_B}}\left(\frac{\sqrt{\hat \kappa_B}-1}{\sqrt{\hat \kappa_B}+1}\right)^k\right),$$]{} where $\|\cdot\|_\star$ denotes either the Euclidean or the Frobenius norm. The proof can be obtained by applying the same arguments as in the proof of [@Simoncini.Druskin.09 Proposition 3.1] to the single integrals $\int_0^\infty \|\hat y-\hat y_k\|_\star dt$, $\int_0^\infty \|\hat x-\hat x_k\|_\star dt$ in . Convergence results for generic matrix equations of the form are difficult to derive as no easy-to-handle closed-form solution is known in general.
The main difficulty is given by the fact that the exponential of a Kronecker sum $\sum_{j=1}^\ell B_j^T\otimes A_j$ cannot be separated into the product of the exponentials of the single terms if no further assumptions on $A_j$ and $B_j$ are considered. By adapting the reasoning proposed in this section, one may be able to deduce error estimates for some special equations of the form $$\sum_{j,k=0}^\ell \alpha_{j,k}A^jXB^k=F,$$ where the coefficient matrices are given as powers of two *seed* matrices $A$ and $B$, and $\alpha_{j,k}\in\mathbb{R}$ for all $j,k$. Indeed, in this case, the exact solution $X$ can be written in integral form as illustrated in [@Lancaster1970 Theorem 4]. However, such derivations deserve a separate analysis. Comparison with the Kronecker formulation {#Comparisons with the Kronecker formulation} ========================================= Given a linear matrix equation of the form , the most straightforward numerical procedure for its solution consists in applying well-established iterative schemes to the [*vector*]{} linear system obtained from by Kronecker transformations, namely $$\label{eq.Kron} \left(\sum_{j=1}^\ell B_j^T\otimes A_j\right)\text{vec}(X)=\text{vec}(F).$$ Sometimes this is the only option as effective algorithms to solve in its *natural* matrix equation form are still lacking in the literature in the most general case. The methods developed so far require some additional assumptions on the coefficient matrices $A_j$, $B_j$; see, e.g., [@Benner2013a; @Shank2016; @Jarlebring2018; @kressner2015truncated; @Powell.Silvester.Simoncini.17]. In this section we show that exploiting the matrix structure of equation  not only leads to numerical algorithms with lower computational costs per iteration and modest storage demands, but also avoids some spectral redundancy encoded in the problem formulation . Such a redundancy often leads to a delay in the convergence of the adopted solution scheme when iterative procedures are applied to .
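For small dimensions the Kronecker form can be assembled and solved directly. The sketch below (illustrative sizes and random data) builds $\sum_j B_j^T\otimes A_j$ for a three-term equation, manufactures $F$ from a known $X$, and recovers $X$ from the vectorized system:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, ell = 6, 5, 3
As = [rng.standard_normal((n, n)) for _ in range(ell)]
Bs = [rng.standard_normal((p, p)) for _ in range(ell)]

X_true = rng.standard_normal((n, p))
F = sum(A @ X_true @ B for A, B in zip(As, Bs))   # manufactured right-hand side

# Kronecker (vectorized) form: (sum_j B_j^T kron A_j) vec(X) = vec(F)
S = sum(np.kron(B.T, A) for A, B in zip(As, Bs))
x = np.linalg.solve(S, F.flatten(order='F'))      # vec stacks columns
X = x.reshape((n, p), order='F')

assert np.allclose(X, X_true)
print(np.max(np.abs(X - X_true)))
```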
A similar discussion can be found in [@Kressner.Tobler.10 Remark 4.5] for more general tensor structured problems. To illustrate this phenomenon we consider a Lyapunov equation of the form with $A\in\mathbb{R}^{n\times n}$ spd and $F=bb^T$, $b\in\mathbb{R}^n$, $\|b\|=1$. We compare the Galerkin method applied to the matrix equation  with the CG method applied to the linear system $$\label{eq.Kron_lyap} \mathcal{A}\text{vec}(X)=\text{vec}(bb^T),\quad \mathcal{A}=A\otimes I+I\otimes A\in\mathbb{R}^{n^2\times n^2}.$$ Notice that since $A$ is spd, $\mathcal{A}$ is also spd. Let $x = {\rm vec}(X)$ be the exact solution to (\[eq.Kron\_lyap\]). Let the CG initial guess be equal to the zero vector, and let $x_k^{cg}$ be the approximate solution to $x$ obtained after $k$ CG iterations. Then the following classical bound for the energy-norm of the error $x - x_k^{cg}$ holds $$\label{eq.CGbound1} \frac{\|x-x_k^{cg}\|_\mathcal{A}}{\|x\|_\mathcal{A}}\leq 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k,$$ where $\kappa=\lambda_{\max}(\mathcal{A})/\lambda_{\min}(\mathcal{A})=\lambda_{\max}(A)/\lambda_{\min}(A)$. See, e.g., [@Golub2013 Theorem 10.2.6]. This bound may be rather pessimistic since it takes into account neither the role of the right-hand side nor the actual spectral distribution of $\mathcal{A}$. See, e.g., [@Sluis1986; @Beckermann2002; @Beckermann2001; @Liesen.Strakos.book.12]. We want to compare the bound in with the estimate proposed in Proposition \[prop:sym\], using the same norms and relative quantities. 
To this end, we recall that for any vector $v$ it holds that $$\sqrt{2\lambda_{\min}(A)}\|v\|_2 \leq \|v\|_\mathcal{A} \leq \sqrt{2\lambda_{\max}(A)}\|v\|_2.$$ In particular, letting $X_k^{cg}\in\mathbb{R}^{n\times n}$ be such that $\text{vec}(X_k^{cg})=x_k^{cg}$, we have $$\label{eq.CGbound2} \frac{\|X-X_k^{cg}\|_F}{\|X\|_F} = \frac{\|x-x_k^{cg}\|_2}{\|x\|_2}\leq \sqrt{\kappa} \frac{\|x-x_k^{cg}\|_{\cal A}}{\|x\|_{\cal A}} \leq 2\sqrt{\kappa}\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k.$$ Therefore, to obtain a relative error (in Frobenius norm) of less than $\varepsilon$, a sufficient number $k_*^{(cg)}$ of CG iterations is given by $$k_*^{(cg)} := \frac{ \log\left( \varepsilon/(2\sqrt{\kappa})\right)}{ \log\left( (\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)\right)}.$$ If $X_k$ denotes the approximate solution computed after $k$ iterations of the Galerkin-based method with $K_k(A,b)$ as approximation space, the error norm bound in can be written in relative terms as $$\label{eq.Galerkin_bound2} \frac{\|X-X_k\|_F}{\|X\|_F}\leq 4( \sqrt{\hat\kappa}+1 ) \sqrt{\hat\kappa} \left ( \frac{\sqrt{\hat \kappa} -1}{\sqrt{\hat\kappa}+1}\right )^k ,$$ where we used $\|x\|_2\geq \lambda_{\min}(\mathcal{A}^{-1})\,\|\text{vec}(bb^T)\|_F= 1/(\lambda_{\max}+\lambda_{\min})$. Once again, to obtain a relative error (in Frobenius norm) of less than $\varepsilon$, a sufficient number $k_*^{(G)}$ of iterations is given by $$k_*^{(G)} := \frac{ \log\left( \varepsilon/(4\sqrt{\hat\kappa} (\sqrt{\hat\kappa}+1))\right)}{ \log\left( (\sqrt{\hat\kappa}-1)/(\sqrt{\hat\kappa}+1)\right)}.$$ The bounds – show that the asymptotic behavior of the relative error norms of CG and the Galerkin method are guided by $\kappa$ and $ \hat \kappa$, respectively, where $\hat \kappa$ is always smaller than $\kappa$, for $\kappa>1$. 
Indeed, $$\hat \kappa= \frac{\lambda_{\max}(A)+\lambda_{\min}(A)}{2 \lambda_{\min}(A)}=\frac{1}{2}\kappa+\frac{1}{2} .$$ The worse conditioning of the linear system formulation may lead to a delay in the convergence of CG so that, for a fixed threshold, CG may require more iterations to converge than the Galerkin method applied to the matrix equation . This is numerically illustrated in the examples below. We once again stress that the similarities of the two formulations (matrix equation and Kronecker form) highlight the fact that what makes the matrix equation context more efficient than CG on ${\cal A}x =b$ is the special choice of the approximation space, that is ${\cal K}_m = \text{range}(V_k \otimes V_k)$, which heavily takes into account the Kronecker sum structure of ${\cal A}$. On the other hand, CG applied blindly on ${\cal A}$ generates a redundant approximation space. \[Ex.1\] We consider the spd matrix $A=QD Q^T\in\mathbb{R}^{n\times n}$, where $D$ is a diagonal matrix whose diagonal entries are uniformly distributed (in logarithmic scale) values between 1 and 100, and $Q$ is orthogonal. This means that $\kappa=100$ and $\hat \kappa=50.5$ for any $n$. The vector $b\in\mathbb{R}^n$ is a random vector with unit norm. For $\varepsilon=10^{-6}$, a direct computation shows that $k_*^{(G)}=68$ iterations of the Galerkin method are sufficient to get $\|X-X_{k_*^{(G)}}\|_F/\|X\|_F\leq \varepsilon$, whereas according to the bound , $k_*^{(cg)}=84$ iterations are required for CG to reach the same accuracy when solving ${\cal A} x = b$. In practice, the number of actual iterations can be lower, since this estimate is obtained from a bound. Figure \[Ex.1\_Fig.1\] reports the error convergence history of the two iterations, using logarithmic scale for $n=1000$. The two methods are stopped as soon as the relative error norm becomes smaller than $\varepsilon$. 
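As a sanity check on these estimates, the two sufficient iteration counts can be evaluated directly (plain Python, rounding the formulas up to an integer); with $\kappa=100$ and hence $\hat\kappa=50.5$, as in the example above, they reproduce the quoted values $k_*^{(cg)}=84$ and $k_*^{(G)}=68$:

```python
import math

def k_star_cg(kappa, eps):
    # sufficient CG iteration count from the relative Frobenius-norm bound
    s = math.sqrt(kappa)
    return math.ceil(math.log(eps / (2.0 * s)) /
                     math.log((s - 1.0) / (s + 1.0)))

def k_star_galerkin(kappa_hat, eps):
    # sufficient Galerkin iteration count from the relative error bound
    s = math.sqrt(kappa_hat)
    return math.ceil(math.log(eps / (4.0 * s * (s + 1.0))) /
                     math.log((s - 1.0) / (s + 1.0)))

kappa = 100.0
kappa_hat = 0.5 * kappa + 0.5           # = 50.5
eps = 1e-6
k_cg = k_star_cg(kappa, eps)            # 84, as quoted in Example Ex.1
k_g = k_star_galerkin(kappa_hat, eps)   # 68, as quoted in Example Ex.1
```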
The “exact” solution $X$ was computed with the Bartels-Stewart method [@Bartels1972], which was feasible due to the small problem size. (Figure \[Ex.1\_Fig.1\]: relative error norm history of CG and of the Galerkin method.) Both methods require slightly fewer iterations than predicted by the bounds. Nonetheless, we can still appreciate that CG applied to the linear system  requires more iterations than the Galerkin method applied to the matrix equation  to achieve the same prescribed accuracy. \[Ex.4\] We modify the data of Example \[Ex.1\] by replacing $\lambda_{\min}(A)=1$ with $\lambda_{\min}(A)=\lambda_1=0.001$, while the other eigenvalues are such that $\lambda_2=2,\ldots,\lambda_n=n$. Here $b\in\mathbb{R}^n$ is the vector of all ones, normalized. The relative error energy norm obtained by CG and the Galerkin method is reported in the left plot of Figure \[Ex.4\_Fig.1\] for $n=100$. Notice that with such an $n$ we have $\kappa = 10^5, \hat\kappa \approx 5\cdot 10^4$. 
\[fig:lapl2D\_rpk\] (Figure \[Ex.4\_Fig.1\]: left panel, relative error norm history of the Galerkin method and of CG; right panel, convergence of the extreme Ritz values towards the target eigenvalues.) Both methods stagnate in the initial phase of the solution process, followed by a rapid convergence afterwards. The stagnation phase is significantly longer for CG, contributing to the overall CG delay. A closer look at the convergence history of the Ritz values towards the eigenvalues of the corresponding coefficient matrices provides a better understanding. In this setting, the Ritz values for the Galerkin and CG methods are the eigenvalues of $V_k^TAV_k$ and of $Q_k^T {\cal A}Q_k$, respectively, where the columns of $Q_k$ form the orthonormal basis of the space generated by CG (here we used the Arnoldi procedure to compute $Q_k$). 
Recalling that $\lambda_{\min}(A)=0.001$, $\lambda_{\max}(A)=100$ so that $\lambda_{\min}(\mathcal{A})=0.002$, $\lambda_{\max}(\mathcal{A})=200$, the right plot of Figure \[Ex.4\_Fig.1\] reports the convergence history of the extreme Ritz values computed at the $k$-th iteration of both CG and the Galerkin method for $k=1,\ldots,82$ (the dashed lines indicate the target eigenvalues). The Ritz value tending to the largest eigenvalue converges in very few iterations. For each approach, the Ritz value approximating the smallest eigenvalue takes many more iterations to converge, and these iterations seem to match the stagnation phase observed in the left plot. It appears that the matrix Galerkin approximation space is able to implicitly capture the Kronecker structure of the eigenvector associated with $\lambda_{\min}({\cal A})$ much earlier than what CG can do by using the unstructured basis $Q_k$. Once again, this emphasizes the importance of the Kronecker basis determined by the matrix Galerkin method. Petrov-Galerkin method and residual minimization {#Petrov-Galerkin methods and residual minimization} ================================================ Whenever the linear operator $\mathcal{S}$ is not spd, the Galerkin method does not necessarily lead to a minimization of the error norm. As in the linear system setting, a numerical procedure fulfilling an optimality condition can be obtained by imposing a Petrov-Galerkin condition on the residual also when solving linear matrix equations. For the case of Lyapunov and Sylvester equations, this strategy has been already explored in, e.g., [@Lin2013; @Hu1992], and in this section we are going to present some considerations about the application of Petrov-Galerkin methods to the solution of generic linear matrix equations of the form . We first recall the Petrov-Galerkin framework applied to the solution of the linear system . 
If the columns of $\mathcal{V}_m\in\mathbb{R}^{N\times m}$ constitute an orthonormal basis for the selected trial space $\mathcal{K}_m$, we want to compute a solution $x_m=\mathcal{V}_my_m$, where $y_m\in\mathbb{R}^m$ is calculated by imposing a Petrov-Galerkin condition on the residual vector $r_m=f-\mathcal{M}x_m$. In its full generality, such a condition reads $$\label{PetrovGalerkin1} r_m\,\bot\,\mathcal{L}_m,\;\text{i.e.,}\; \mathcal{W}_m^Tr_m=0,\;\text{range}(\mathcal{W}_m)=\mathcal{L}_m,$$ where $\mathcal{L}_m$ is the chosen test space. See, e.g., [@Saad2003 Chapter 5]. For the particular choice $\mathcal{L}_m=\mathcal{M}\mathcal{K}_m$, the condition in is equivalent to computing $x_m$ as the minimizer of the residual norm over $\mathcal{K}_m$, namely $$x_m=\operatorname*{argmin}_{x\in\mathcal{K}_m}\|f-\mathcal{M}x\|.$$ See, e.g., [@Saad2003 Proposition 5.3]. With the selection $\mathcal{K}_m=K_m(\mathcal{M},f)$, the minimization problem above can be significantly simplified by exploiting the Arnoldi relation; this is the foundation of some of the most popular minimal residual methods for linear systems such as, e.g., MINRES [@Paige1975] and GMRES [@Schultz1986]. A similar approach can be pursued for the solution of linear matrix equations. Indeed, let $N=n p$ and consider $V_k\in\mathbb{R}^{n\times k}$, $W_k\in\mathbb{R}^{p\times k}$ with full column rank [^6], and let $\text{range}(V_k)$, $\text{range}(W_k)$, be the corresponding left and right approximation spaces. With $\mathcal{S}_\ell$ as in Definition \[def:S\], we can formally set $\mathcal{K}_m=\text{range}(W_k\otimes V_k)$ and $\mathcal{L}_m=\mathcal{S}_\ell \mathcal{K}_m$. An approximate solution in the form $X_k=V_kY_kW_k^T$, with $Y_k\in\mathbb{R}^{k\times k}$, can be determined by imposing the condition to the vector form of the residual matrix $R_k=\mathcal{S} (V_kY_kW_k^T)-F$. 
Petrov-Galerkin methods for thus seek a solution $X_k=V_kY_kW_k^T$ by solving $$\min_{x\in\text{range}(W_k\otimes V_k)}\|\text{vec}(F)-\mathcal{S}_\ell x\|_2= \min_{y\in\mathbb{R}^{k^2}}\|\text{vec}(F)-\mathcal{S}_\ell (W_k\otimes V_k)y\|_2,$$ that is $$\label{PetrovGalerkin_matrixeq} \min_{X=V_kYW_k^T}\|F-\mathcal{S}(X)\|_F=\min_{Y\in\mathbb{R}^{k\times k}}\|F-\mathcal{S}(V_kYW_k^T)\|_F.$$ In spite of their appealing minimization property, minimal residual methods are not very popular in the matrix equation literature. This is mainly due to the difficulty in dealing with the numerical solution of the minimization problem . In general, one can apply an operator-oriented (preconditioned) CG method to the normal equations as $$\label{normal_eq} Y_k=\operatorname*{argmin}_{Y\in\mathbb{R}^{k\times k}}\|F-\mathcal{S}(V_kYW_k^T)\|_F\qquad \Leftrightarrow\qquad \mathcal{S}^*(F-\mathcal{S}(V_kY_kW_k^T))=0,$$ where $\mathcal{S}^*$ is the adjoint of $\mathcal{S}$ with respect to the Frobenius inner product, namely $$\begin{array}{lrll} {\cal S}^* :& \mathbb{R}^{n\times p}&\rightarrow&\mathbb{R}^{n\times p}\\ & X &\mapsto& \displaystyle\sum_{j=1}^\ell A_j^T X B_j^T.\\ \end{array}$$ If $\text{range}(V_k)$ and $\text{range}(W_k)$ are general spaces, the solution of can be very expensive in terms of both computational time and memory requirements. In [@Lin2013], the authors consider in the case of the Lyapunov equation with $F$ low-rank and negative semidefinite. In particular, if $F=-bb^T,$ $b\in\mathbb{R}^{n\times q}$, $q\ll n$, they employ the approximation spaces $\text{range}(V_k)=\text{range}(W_k)$ such that $b=V_1L_b$ for some $L_b\in\mathbb{R}^{q\times q}$, $q=\text{rank}(b)$, and satisfying an Arnoldi-like relation of the form $$AV_k=[V_k,\breve{V}_{k+1}]\underline{H}_k,$$ for $[V_k,\breve{V}_{k+1}]\in\mathbb{R}^{n\times (k+1)q}$ having orthonormal columns and $\underline{H}_k\in\mathbb{R} ^{(k+1)q\times kq}$. 
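Returning for a moment to the adjoint operator $\mathcal{S}^*$ defined above: its defining property $\langle \mathcal{S}(X),Y\rangle_F=\langle X,\mathcal{S}^*(Y)\rangle_F$ is easy to verify numerically. A minimal NumPy check with random data of arbitrary small size:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, ell = 5, 4, 3
As = [rng.standard_normal((n, n)) for _ in range(ell)]
Bs = [rng.standard_normal((p, p)) for _ in range(ell)]

def S(X):
    # S(X) = sum_j A_j X B_j
    return sum(A @ X @ B for A, B in zip(As, Bs))

def S_star(X):
    # adjoint with respect to the Frobenius inner product
    return sum(A.T @ X @ B.T for A, B in zip(As, Bs))

X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, p))
# <S(X), Y>_F  ==  <X, S*(Y)>_F
gap = abs(np.sum(S(X) * Y) - np.sum(X * S_star(Y)))
```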
In this case, the minimization problem can be written as $$\label{minimal_residual} Y_k=\operatorname*{argmin}_{Y\in\mathbb{R}^{kq\times kq}}\left\|\underline{H}_kY[I_{kq},0]+\begin{bmatrix} I_{kq} \\ 0\\ \end{bmatrix}Y\underline{H}_k^T+\begin{bmatrix} L_bL_b^T & 0 \\ 0 & 0 \\ \end{bmatrix} \right\|_F,$$ and three different methods for its solution are illustrated. If the coefficient matrix $A$ in is stable (antistable) and $F$ is symmetric negative semidefinite, the exact solution $X$ is symmetric positive (negative) semidefinite. See, e.g., [@Snyders1970]. However, as reported in [@Lin2013], the numerical solution $X_k=V_kY_kV_k^T$ is not guaranteed to be semidefinite if $Y_k$ is computed as in . In [@Lin2013 Section 3.4] it is shown that is equivalent to computing $Y_k$ as the solution of the *generalized* Sylvester equation $$\label{gen_Lyap} \underline{H}_k^T\underline{H}_kY+Y\underline{H}_k^T\underline{H}_k+H_kYH_k+H_k^TYH_k^T+D=0,$$ where $$D:=\underline{H}_k^T \begin{bmatrix} L_bL_b^T & 0 \\ 0 & 0 \\ \end{bmatrix}\begin{bmatrix} I_{kq} \\ 0\\ \end{bmatrix} +[I_{kq},0]\begin{bmatrix} L_bL_b^T & 0 \\ 0 & 0 \\ \end{bmatrix} \underline{H}_k= H_k^T\begin{bmatrix} L_bL_b^T & 0 \\ 0 & 0 \\ \end{bmatrix}+\begin{bmatrix} L_bL_b^T & 0 \\ 0 & 0 \\ \end{bmatrix}H_k ,$$ so that $D$ is symmetric but indefinite. This is one of the main obstacles in proving the semidefiniteness of $Y_k$ through the matrix formulation . Without further hypotheses, the symmetric matrix $Y_k$ solving is indefinite in general, thus preventing $Y_k$ from preserving the semidefiniteness property of the solution to be approximated. From a computational viewpoint, if resorting to a Kronecker form is excluded, the generalized Sylvester equation can be solved by means of the methods described in [@Lin2013] and its references. 
In addition, setting $\mathfrak{L}(Z)=\underline{H}_k^T\underline{H}_kZ+Z\underline{H}_k^T\underline{H}_k$ and $\mathfrak{N}(Z) =H_kZH_k+H_k^TZH_k^T$, fixed point iterations can be used whenever the spectral radius of the operator $\mathfrak{L}^{-1}(\mathfrak{N}(\cdot))$ is less than one; see, e.g., [@Damm.08; @Shank2016; @Jarlebring2018] for various implementations. A constrained residual minimization approach for Lyapunov equations ------------------------------------------------------------------- To cope with the lack of semidefiniteness in the least squares problem approach, we propose to impose the semidefiniteness explicitly as a constraint. For instance, if a negative semidefinite solution is sought, the problem becomes $$\label{eqn:constr_lsqr} Y_k=\operatorname*{argmin}_{Y\in\mathbb{R}^{kq\times kq}\atop Y \leq 0} \left\|\underline{H}_kY[I_{kq},0]+ \begin{bmatrix} I_{kq} \\ 0\\ \end{bmatrix} Y\underline{H}_k^T+ \begin{bmatrix} L_bL_b^T & 0 \\ 0 & 0 \\ \end{bmatrix} \right\|_F.$$ To numerically solve this inequality constrained least squares problem, we consider a linear matrix inequalities (LMI) approach, which is well suited to the matrix equation framework [@BEFB94; @Skeltonetal.98]; other general purpose methods could also be considered [@Anjos2012; @Malick2004]. In the LMI context, (\[eqn:constr\_lsqr\]) can be stated as the following matrix inequalities $$Y \leq 0, \qquad \begin{bmatrix} I & {\rm vec}(\underline{H}_k Y J^T + J Y \underline{H}_k^T + M) \\ {\rm vec}(\underline{H}_k Y J^T + J Y \underline{H}_k^T + M)^T & \gamma \end{bmatrix} \ge 0,$$ for the unknown matrix $Y$ and scalar $\gamma >0$; here $J=[I_{kq};0]$ and $M = [L_bL_b^T,0;0, 0]$. \[Ex.2\] We consider the Lyapunov equation with $A=QD Q^{-1}\in\mathbb{R}^{n\times n}$, $D$ as in Example \[Ex.1\], $Q$ a random matrix, and $F=-bb^T$, where $b\in\mathbb{R}^n$ is a random vector with unit norm. 
Since $A$ is antistable and the right-hand side is symmetric negative semidefinite, the solution $X$ is symmetric negative semidefinite, and we thus expect the approximate solution $X_k=V_kY_kV_k^T$ to be so as well. We apply the Petrov-Galerkin method discussed in this section, adopting the Krylov subspace as approximation space, i.e., $\text{range}(V_k)=K_k(A,b)$. The matrix $Y_k$ is computed in two different ways. We first solve the unconstrained minimization problem, getting the matrix $Y_k^{\text{uncon}}$. In particular, $Y_k^{\text{uncon}}$ is computed by applying a (preconditioned) CG method to the matrix equation . See, e.g., [@Lin2013]. Then, we compute $Y_k^{\text{const}}$ by solving the constrained minimization problem . The Petrov-Galerkin method is stopped as soon as the relative residual norm becomes smaller than $10^{-6}$. In Figure \[fig:3\] we plot the intervals $[\min_j\{\lambda_j(Y_k^{\text{uncon}})\geq 0\},\max_j\{\lambda_j(Y_k^{\text{uncon}})\geq 0\}]$ of the undesired positive eigenvalues of $Y_k^{\text{uncon}}$ for all $k$ for the case $n=1000$. For $k=1,2$, $Y_k^{\text{uncon}}$ has all negative eigenvalues, while it starts being indefinite for $k\geq3$, so that $X_k^{\text{uncon}}=V_kY_k^{\text{uncon}}V_k^T$ is an indefinite approximation to the negative semidefinite $X$. 
Nonetheless, for $k=68$, the (undesired) positive eigenvalues of $X_k^{\text{uncon}}$ are small enough so as to still allow a sufficiently accurate approximation, in terms of relative residual norm. On the other hand, this problem is not encountered with $Y_k^{\text{constr}}$, thanks to the explicit negative semidefiniteness constraint in the formulation . From the legend of Figure \[fig:3\], we can see that the number of positive eigenvalues of $Y_k^{\text{uncon}}$ increases as the iterations proceed, even though they diminish in magnitude. The latter trend is not surprising. Indeed, even if $Y_k^{\text{uncon}}$ is computed by , the Petrov-Galerkin method is converging towards the negative semidefinite solution $X$ and, for an approximation space spanning the whole ${{\mathbb{R}}}^n$, the method would retrieve the exact solution, regardless of the minimization problem . We would like to point out that both tested variants of the Petrov-Galerkin method needed 68 iterations to converge, and the actual values of the residual norm provided by and were always very similar to each other during the whole convergence history. This phenomenon surely deserves further study since, in principle, leads to a residual norm that is greater than or equal to the one provided by , while the two solutions (constrained and unconstrained) do not necessarily have to be close to each other. In our computational experiments, we have used the Yalmip software [@Lofberg2004] running the algorithm Sedumi in Matlab [@Sedumi]. This algorithm is rather expensive, and computing the solution $Y_k$ to at each Krylov iteration $k$ often leads to a very time-consuming solution procedure. We think this issue can be addressed in different ways. For instance, one may compute $Y_k$, and thus check the residual norm, only periodically, say every $d\geq1$ iterations. Moreover, the explicit solution $Y_k$ is required only at convergence, while during the Krylov routine we just need the value of the residual norm. 
It may be possible to compute such a residual norm without calculating the whole $Y_k$, as is done in [@Palitta2018] for the Galerkin method and in [@Lin2013] for the Petrov-Galerkin technique equipped with the unconstrained minimization problem . The study of the aforementioned enhancements and, more generally, the employment of constrained minimization procedures in the solution of linear matrix equations will be the topic of future research. Conclusions {#Conclusions} =========== We have shown that the optimality properties of Galerkin and Petrov-Galerkin methods naturally extend to the general linear matrix equation setting. Such features do not depend on the adopted approximation spaces even though, in actual computations, fast convergence depends on a suitable subspace selection. Identifying effective subspaces for general (multiterm) linear matrix equations depends on the problem at hand, and it may seem easier to recast the solution in terms of a large vector linear system. On the other hand, the vector form can be extremely memory consuming, and the vector linear system encodes some spectral redundancy which may cause a delay in the convergence of the adopted iterative solution scheme. Petrov-Galerkin schemes require the solution of a matrix minimization problem at each iteration, and we have suggested explicitly incorporating a semidefiniteness constraint in its formulation. To the best of our knowledge, such an approach has never been proposed in the literature, and the employment of constrained optimization techniques in the context of Petrov-Galerkin methods for linear matrix equations opens many new research directions. Acknowledgements {#acknowledgements .unnumbered} ================ Both authors are members of the Italian INdAM Research group GNCS. We thank the two anonymous reviewers for their insightful remarks. 
[^1]: Research Group Computational Methods in Systems and Control Theory (CSC), Max Planck Institute for Dynamics of Complex Technical Systems, Sandtorstraß[e]{} 1, 39106 Magdeburg, Germany. `palitta@mpi-magdeburg.mpg.de` [^2]: Dipartimento di Matematica, Alma Mater Studiorum Università di Bologna, Piazza di Porta San Donato 5, I-40127 Bologna, Italy, and IMATI-CNR, Pavia, Italy. [valeria.simoncini@unibo.it]{} [^3]: Version of November 14, 2019 [^4]: In principle, we can have $\text{dim}(\text{range}(V_k))\neq\text{dim}(\text{range}(W_k))$. Here we consider $\text{dim}(\text{range}(V_k))=\text{dim}(\text{range}(W_k))=k$ for the sake of simplicity in the presentation. [^5]: The Matlab code is available at [http://www.dm.unibo.it/\~simoncin/software.html.]{} [^6]: Once again, the two matrices may have different column dimensions, that is $V_{k_1}\in\mathbb{R}^{n\times k_1}$, $W_{k_2}\in\mathbb{R}^{p\times k_2}$. For the sake of clarity in the exposition, we limit our presentation to the case $k_1=k=k_2$.
--- author: - 'Giovanni Tumolo$^{(1)}$,   Luca Bonaventura$^{(2)}$' bibliography: - 'SISLDG.bib' title: | An accurate and efficient numerical framework\ for adaptive numerical weather prediction --- [$^{(1)}$ Earth System Physics Section\ The Abdus Salam International Center for Theoretical Physics\ Strada Costiera 11, 34151 Trieste, Italy\ [gtumolo@ictp.it]{} ]{} [$^{(2)}$ MOX – Modelling and Scientific Computing,\ Dipartimento di Matematica “F. Brioschi”, Politecnico di Milano\ Via Bonardi 9, 20133 Milano, Italy\ [luca.bonaventura@polimi.it]{} ]{} [**Keywords**]{}: Discontinuous Galerkin methods, adaptive finite elements, semi-implicit discretizations, semi-Lagrangian discretizations, shallow water equations, Euler equations. [**AMS Subject Classification**]{}: 35L02, 65M60, 65M25, 76U05, 86A10 Introduction {#intro} ============ The Discontinuous Galerkin (DG) spatial discretization approach is currently being employed by an increasing number of environmental fluid dynamics models; see e.g. [@dawson:2006], [@giraldo:2002], [@lauter:2008], [@nair:2005], [@giraldo:2010], [@kelly:2012] and a more complete overview in [@bonaventura:2012]. This is motivated by the many attractive features of DG discretizations, such as high order accuracy, local mass conservation and ease of massively parallel implementation. On the other hand, DG methods imply severe stability restrictions when coupled with explicit time discretizations. One traditional approach to overcome stability restrictions in low Mach number problems is the combination of semi-implicit (SI) and semi-Lagrangian (SL) techniques. In a series of papers [@restelli:2006], [@restelli:2009], [@giraldo:2010], [@dumbser:2013], [@tumolo:2013] it has been shown that most of the computational gains traditionally achieved in finite difference models by the application of SI, SL and SISL discretization methods are also attainable in the framework of DG approaches. 
In particular, in [@tumolo:2013] we have introduced a dynamically $p-$adaptive SISL-DG discretization approach for low Mach number problems, which is quite effective in achieving high order spatial accuracy while substantially reducing the computational cost. In this paper, we apply the technique of [@tumolo:2013] to the shallow water equations in spherical geometry and to the fully compressible Euler equations, in order to show its effectiveness for model problems typical of global and regional weather forecasting. The advective form of the equations of motion is employed and the semi-implicit time discretization is based on the TR-BDF2 method, see e.g. [@hosea:1996], [@leveque:2007]. This combination of two robust ODE solvers yields a second order accurate, A-stable and L-stable method (see e.g. [@lambert:1991]) that is effective in selectively damping high frequency modes. At the same time, it achieves full second order accuracy, whereas the off-centering in the trapezoidal rule, typically necessary for realistic applications to nonlinear problems (see e.g. [@casulli:1994], [@davies:2005], [@tumolo:2013]), limits the accuracy in time to first order. Numerical results presented in this paper show that the total computational cost of one TR-BDF2 step, as well as the structure of the linear problems to be solved at each time step, is analogous to that of one step of the off-centered trapezoidal rule, thus allowing any implementation based on the off-centered trapezoidal rule to be extended naturally to this more accurate method. Numerical simulations of the shallow water benchmarks proposed in [@williamson:1992], [@lauter:2005], [@jakob:1995] and of the non-hydrostatic benchmarks proposed in [@skamarock:1994], [@carpenter:1990] have been employed to validate the method and to demonstrate its capabilities. 
In particular, it will be shown that the present approach enables the use of time steps even 100 times larger than those allowed for DG models by standard explicit schemes, see e.g. the results in [@nair:2005b]. The method presented in this paper, just as its previous version in [@tumolo:2013], can be applied in principle on arbitrarily unstructured and even nonconforming meshes. For example, a model based on this method could run on a nonconforming mesh of rectangular elements built around the nodes of a reduced Gaussian grid [@hortal:1991]. For simplicity, however, no such implementation has been developed so far. Here, only a simple Cartesian mesh has been used. If no degree adaptivity is employed, this results in very high Courant numbers in the polar regions. These do not result in any special stability problems for the present SISL discretization approach, as will be shown by the numerical results reported below. On the other hand, even with an implementation based on a simple Cartesian mesh in spherical coordinates, the flexibility of the DG space discretization makes it possible to reduce the degree of the basis and test functions employed close to the poles, thus making the effective model resolution more uniform and solving the efficiency issues related to the pole problem by static $p-$adaptivity. This is especially advantageous because the conditioning of the linear system to be solved at each time step is greatly improved and, as a consequence, the number of iterations necessary for the linear solver is reduced by approximately $80\%$, while at the same time no spurious reflections or artificial error increases are observed. Beyond these computational advantages, we believe that the present approach based on $p-$adaptivity is especially suitable for applications to numerical weather prediction, in contrast to $h-$adaptivity approaches (that is, local mesh coarsening or refinement in which the size of some elements changes in time). 
Indeed, in numerical weather prediction, information that is necessary to carry out realistic simulations (such as orography profiles, data on land use and soil type, land-sea masks) needs to be reconstructed on the computational mesh and has to be re-interpolated each time that the mesh is changed. Furthermore, many physical parameterizations are highly sensitive to the mesh size. Although devising better parameterizations that require less mesh-dependent tuning is an important research goal, more conventional parameterizations will still be in use for quite some time. As a consequence, it is useful to improve the accuracy locally by adding supplementary degrees of freedom where necessary, as done in a $p-$adaptive framework, without having to change the underlying computational mesh. In conclusion, the resulting modeling framework seems to be able to combine the efficiency and high order accuracy of traditional SISL pseudo-spectral methods with the locality and flexibility of more standard DG approaches. In section \[shwater\], two examples of governing equations are introduced. In section \[tr\_rev\], the TR-BDF2 method is reviewed. In section \[sphere\] the approach employed for the advection of vector fields in spherical geometry is described in detail. In section \[sisldg\], we introduce the SISL-DG discretization approach for the shallow water equations in spherical geometry. In section \[nhydro\], we outline its extension to the fully compressible Euler equations in a vertical plane. Numerical results are presented in section \[tests\], while in section \[conclu\] we try to draw some conclusions and outline the path towards application of the concepts introduced here in the context of a non hydrostatic dynamical core. Governing equations {#shwater} =================== We consider as a basic model problem the two-dimensional shallow water equations on a rotating sphere (see e.g. [@gill:1982]). 
These equations are a standard test bed for numerical methods to be applied to the full equations of motion of atmospheric or oceanic circulation models, see e.g. [@williamson:1992]. Among their possible solutions, they admit Rossby and inertial gravity waves, as well as the response of such waves to orographic forcing. We will use the advective, vector form of the shallow water equations: $$\begin{aligned} && \frac{D h}{ Dt} = - h \nabla \cdot {\bf u}, \label{continuityeq}\\ &&\frac{D {\bf u} }{ Dt} = - g \nabla h - f \hat{\bf k} \times {\bf u} -g \nabla b \label{vectmomentumeq}.\end{aligned}$$ Here $h$ represents the fluid depth, $b$ the bathymetry elevation, $f$ the Coriolis parameter, $\hat{\bf k}$ the unit vector locally normal to the Earth’s surface and $g$ the gravity force per unit mass on the Earth’s surface. Assuming that $x,y$ are orthogonal curvilinear coordinates on the sphere (or on a portion of it), we denote by $m_x$ and $m_y$ the components of the (diagonal) metric tensor. Furthermore, we set ${\bf u}=(u,v)^T,$ where $u$ and $v$ are the contravariant components of the velocity vector in the coordinate direction $x$ and $y$ respectively, multiplied by the corresponding metric tensor components. We also denote by $\frac{D}{Dt}$ the Lagrangian derivative $$\frac{D}{Dt} = \frac{\partial }{ \partial t} + \frac{u}{m_x} \frac{\partial }{ \partial x} + \frac{v}{m_y} \frac{\partial }{ \partial y} ,$$ so that $u = m_x \frac{D x}{Dt}, v = m_y \frac{D y}{Dt}$. In particular, in this paper standard spherical coordinates will be employed. As an example of a more complete model, we will also consider the fully compressible, non hydrostatic equations of motion. Following e.g. [@cullen:1990], [@bonaventura:2000], [@davies:2005], they can be written as $$\begin{aligned} && \frac{D \Pi}{Dt} = - \left( \frac{c_p}{c_v}-1 \right) \Pi \nabla \cdot {\bf u}, \\ && \frac{D {\bf u} }{Dt} = - c_p \Theta \nabla \Pi -g \hat{\bf k}, \\ && \frac{D \Theta}{Dt} = 0. 
\end{aligned}$$ where $p_0 $ is a reference pressure value, $\Theta = T \big( \frac{p}{p_0} \big)^{-R/c_p}$ is the potential temperature, $ \Pi = \big( \frac{p}{p_0} \big)^{R/c_p}$ is the Exner pressure, while $c_p, c_v, R $ are the constant pressure and constant volume specific heats and the gas constant of dry air, respectively. Here the Coriolis force is omitted for simplicity. Notice also that, by a slight abuse of notation, in the three-dimensional case ${\bf u}=(u,v,w)^T $ denotes the three dimensional velocity field and the $\frac{D}{Dt},$ $\nabla $ operators are also three-dimensional, while we will assume ${\bf u}=(u,w)^T $ in the description of $(x,z)$ two dimensional, vertical slice models. It is customary to rewrite such equations in terms of perturbations with respect to a steady hydrostatic reference profile, so that assuming $ \Pi(x,y,z,t) = \pi^*(z) + \pi(x,y,z,t), $ $ \Theta(x,y,z,t) = \theta^*(z) + \theta(x,y,z,t) $ with $ \hspace{2.5mm} c_p \theta^* \frac{d \pi^*}{d z} = -g, $ one obtains for a vertical plane $$\begin{aligned} && \frac{D \Pi}{Dt} = - \left( \frac{c_p}{c_v}-1 \right) \Pi \nabla \cdot { \bf u}, \label{vslice_conteq}\\ && \frac{D u }{Dt} = - c_p \Theta \frac{\partial \pi}{\partial x}, \label{vslice_ueq}\\ && \frac{D w }{Dt} = - c_p \Theta \frac{\partial \pi}{\partial z} + g \frac{\theta}{\theta^*}, \label{vslice_veq} \\ && \frac{D \theta}{Dt} = - \frac{d \theta^*}{dz} w. \label{vslice_eneq} \end{aligned}$$ It can be observed that equations (\[vslice\_conteq\])-(\[vslice\_eneq\]) are isomorphic to the shallow water equations (\[continuityeq\])-(\[vectmomentumeq\]), which will allow the discretization approach proposed for the latter to be extended almost automatically to the more general model. Review of the TR-BDF2 method {#tr_rev} ============================ We review here some properties of the so-called TR-BDF2 method, which was first introduced in [@bank:1985].
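The relations between pressure, temperature, Exner pressure and potential temperature above are easy to encode; the sketch below uses standard dry-air constants whose numerical values are our own illustrative choices, not taken from the text:

```python
R_D = 287.0    # gas constant of dry air [J/(kg K)]; illustrative value
C_P = 1004.5   # constant pressure specific heat [J/(kg K)]; illustrative value
P_0 = 1.0e5    # reference pressure [Pa]; illustrative value

def exner(p):
    """Exner pressure Pi = (p/p0)^(R/cp)."""
    return (p / P_0) ** (R_D / C_P)

def potential_temperature(T, p):
    """Potential temperature Theta = T * (p/p0)^(-R/cp), so that T = Theta * Pi."""
    return T * (p / P_0) ** (-R_D / C_P)
```

By construction $T = \Theta\,\Pi$, which is the identity the prognostic variables $(\Pi, \Theta)$ rely on.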
Given a Cauchy problem $$\begin{aligned} \label{cauchypb} {\bf y}^{\prime}&=&{\bf f}({\bf y},t) \nonumber \\ {\bf y}(0) &=& {\bf y}_0\end{aligned}$$ and considering a time discretization employing a constant time step $\Delta t,$ the TR-BDF2 method is defined by the following two implicit stages: $$\begin{aligned} \label{trbdf2} {\bf u}^{n+2\gamma} - \gamma \Delta t {\bf f}({\bf u}^{n+2\gamma},t_{n}+2\gamma\Delta t) &=& {\bf u}^n + \gamma \Delta t {\bf f}({\bf u}^{n},t_{n}), \nonumber \\ {\bf u}^{n+1} - \gamma_2 \Delta t {\bf f}({\bf u}^{n+1},t_{n+1}) &=& (1-\gamma_3 ){\bf u}^n +\gamma_3 {\bf u}^{n+2\gamma}. \end{aligned}$$ Here $\gamma \in [0,1/2] $ is an implicitness parameter and $$\gamma_2 = \frac{1-2\gamma}{2(1-\gamma)}, \ \ \ \gamma_3 =\frac{1-\gamma_2}{2\gamma} .$$ It is immediate that the first stage of (\[trbdf2\]) is simply the application of the trapezoidal rule (or Crank-Nicolson method) over the interval $[t_n,t_{n}+2\gamma\Delta t].$ It could also be substituted by an off centered Crank-Nicolson step without reducing the overall accuracy of the method. The outcome of this stage is then used to turn the two-step BDF2 method into a single-step, two-stage method. This combination of two robust stiff solvers yields a method with several interesting accuracy and stability properties, which were analyzed in detail in [@hosea:1996]. As shown in that reference, this analysis is most easily carried out by rewriting the method as $$\begin{aligned} {\bf k}_1& =& {\bf f}\left ({\bf u}^n, t_{n}\right) \nonumber \\ {\bf k}_2 &=& {\bf f}\left ({\bf u}^n +\gamma \Delta t {\bf k}_1 +\gamma \Delta t {\bf k}_2, t_{n}+2\gamma \Delta t\right)\nonumber \\ {\bf k}_3 &=& {\bf f}\left ({\bf u}^n +\frac{1-\gamma}2\Delta t{\bf k}_1+\frac{1-\gamma}2\Delta t{\bf k}_2 +\gamma\Delta t{\bf k}_3,t_{n+1} \right ) \nonumber \\ {\bf u}^{n+1} &=& {\bf u}^n +\Delta t \left (\frac{1-\gamma}2{\bf k}_1+\frac{1-\gamma}2{\bf k}_2 +\gamma{\bf k}_3\right).
\label{tr_dirk}\end{aligned}$$ In this formulation, the TR-BDF2 method is clearly a Singly Diagonal Implicit Runge Kutta (SDIRK) method, so that one can rely on the theory for this class of methods to derive stability and accuracy results (see e.g. [@lambert:1991]). Notice that the same method has been rediscovered in [@butcher:2000] and has been analyzed and applied also in [@giraldo:2013], to treat the implicit terms in the framework of an Additive Runge Kutta approach (see e.g. [@kennedy:2003]). As shown in [@hosea:1996], the TR-BDF2 method is second order accurate and A-stable for any value of $\gamma.$ Written as in (\[tr\_dirk\]), the method can also be proven to constitute a (2,3) embedded Runge-Kutta pair, with companion coefficients given by $$(1-\frac{\sqrt{2}}4)/3, \ \ \ (1+3\frac{\sqrt{2}}4)/3, \ \ \ \frac{2-\sqrt{2}}6,$$ provided that no off centering is employed in the first stage of (\[trbdf2\]). This equips the method with an extremely efficient estimator of the time discretization error. Furthermore, for $\gamma=1-\sqrt{2}/2$ it is also L-stable. Therefore, with this coefficient value it can be safely applied to problems with eigenvalues whose imaginary part is large, such as those that typically arise from the discretization of hyperbolic problems. This is not the case for the standard trapezoidal rule (or Crank-Nicolson) implicit method, whose linear stability region is exactly bounded by the imaginary axis. As a consequence, it is common to apply the trapezoidal rule with off centering, see e.g. [@casulli:1994], [@davies:2005] as well as [@tumolo:2013], which results in a first order time discretization. TR-BDF2 appears therefore to be an interesting one-step alternative to maintain full second order accuracy, especially considering that, if formulated as (\[tr\_dirk\]), it is equivalent to performing two Crank-Nicolson steps with slightly modified coefficients.
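These properties can be checked numerically on the linear test problem $y'=\lambda y$. The sketch below (an illustrative implementation, not the paper's code) applies one TR-BDF2 step in the two-stage form and evaluates the associated stability function $R(z)$, so that second order accuracy ($R(z)=e^z+O(z^3)$) and L-stability for $\gamma=1-\sqrt{2}/2$ can be verified directly:

```python
import numpy as np

GAMMA = 1.0 - np.sqrt(2.0) / 2.0  # the value yielding L-stability

def trbdf2_coeffs(gamma=GAMMA):
    """Secondary coefficients gamma_2, gamma_3 of the TR-BDF2 method."""
    g2 = (1.0 - 2.0 * gamma) / (2.0 * (1.0 - gamma))
    g3 = (1.0 - g2) / (2.0 * gamma)
    return g2, g3

def trbdf2_step_linear(y, lam, dt, gamma=GAMMA):
    """One TR-BDF2 step for the scalar linear ODE y' = lam*y:
    a trapezoidal stage over [t_n, t_n + 2*gamma*dt], followed by the
    BDF2-like correction combining y^n and the stage value."""
    g2, g3 = trbdf2_coeffs(gamma)
    y_mid = y * (1.0 + gamma * dt * lam) / (1.0 - gamma * dt * lam)
    return ((1.0 - g3) * y + g3 * y_mid) / (1.0 - g2 * dt * lam)

def stability_function(z, gamma=GAMMA):
    """R(z) such that y^{n+1} = R(lam*dt) * y^n for y' = lam*y."""
    g2, g3 = trbdf2_coeffs(gamma)
    y_mid = (1.0 + gamma * z) / (1.0 - gamma * z)
    return ((1.0 - g3) + g3 * y_mid) / (1.0 - g2 * z)
```

Sampling $|R(z)|$ along the imaginary axis reproduces the qualitative behavior discussed above: values bounded by one, with damping that grows for large imaginary eigenvalues.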
In order to highlight the advantages of the proposed method in terms of accuracy with respect to other common robust stiff solvers, we plot in figure \[stabreg\_trbdf\_noff\] the contour levels of the absolute value of the linear stability function of the TR-BDF2 method without off centering in the first stage, compared to the analogous contours of the off centered Crank-Nicolson method with averaging parameter $\theta=0.6, \theta=0.7 $ in figures \[stabreg\_theta06\], \[stabreg\_theta07\], respectively, and to those of the BDF2 method in figure \[bdf\_sreg\]. It is immediate to see that TR-BDF2 introduces less damping around the imaginary axis for moderate values of the time step. On the other hand, TR-BDF2 is more selective in damping very large eigenvalues, as clearly displayed in figure \[section\_imaxis\], where the absolute values of the linear stability functions of the same methods (with the exception of BDF2, for which an explicit representation of the stability function is not available) are plotted along the imaginary axis. ![Contour levels of the absolute value of the stability function of the TR-BDF2 method without off centering in the first stage. Contour spacing is $0.1$ from $0.5$ to $1.$[]{data-label="stabreg_trbdf_noff"}](figures/trbdf_noff.eps){height="0.35\textheight"} ![Contour levels of the absolute value of the stability function of the off centered Crank-Nicolson method with averaging parameter $\theta=0.6 $ (equivalent to an off centering parameter valued $0.05$). Contour spacing is $0.1$ from $0.5$ to $1.$[]{data-label="stabreg_theta06"}](figures/theta06_stabreg.eps){height="0.35\textheight"} ![Contour levels of the absolute value of the stability function of the off centered Crank-Nicolson method with averaging parameter $\theta=0.7 $ (equivalent to an off centering parameter valued $0.1$). 
Contour spacing is $0.1$ from $0.5$ to $1.$[]{data-label="stabreg_theta07"}](figures/theta07_stabreg.eps){height="0.35\textheight"} ![Contour levels of the absolute value of the stability function of the BDF2 method. Contour spacing is $0.1$ from $0.5$ to $1.$[]{data-label="bdf_sreg"}](figures/bdf_streg.eps){height="0.33\textheight"} ![Graph of the absolute value of the stability functions of several L-stable methods along the imaginary axis.[]{data-label="section_imaxis"}](figures/imaxis_sect_new.eps){height="0.35\textheight"} Review of Semi-Lagrangian evolution operators for vector fields on the sphere {#sphere} ============================================================================= The semi-Lagrangian method can be described by introducing the concept of evolution operator, along the lines of [@morton:1995; @morton:1998]. Indeed, let $G = G({\bf x}, t)$ denote a generic function of space and time that is the solution of $$\frac{D G}{Dt} = \frac{\partial G}{ \partial t} + \frac{u}{m_x} \frac{\partial G}{ \partial x} + \frac{v}{m_y} \frac{\partial G}{ \partial y} =0.$$ To approximate this solution on the time interval $[t^n,t^{n+1}],$ a numerical evolution operator $E$ is introduced, which approximates the exact evolution operator associated with the frozen velocity field ${\bf u}^{*}=(u^*,v^*)^T,$ which may coincide with the velocity field at time level $t^n$ or with an extrapolation derived from previous time levels. More precisely, if ${\bf X}(t;t^{n+1},{\bf x})$ denotes the solution of $$\frac{d {\bf X}(t;t^{n+1},{\bf x} )}{dt}={\bf u}^*({\bf X}(t;t^{n+1},{\bf x})) \label{lagode}$$ with initial datum ${\bf X}(t^{n+1};t^{n+1},{\bf x})={\bf x}$ at time $t=t^{n+1}$, then the expression $ [E(t^{n},\Delta t) G]({\bf x}) $ denotes a numerical approximation of $G^n({\bf x}_D)$ where ${\bf x}_D={\bf X}(t^{n};t^{n+1},{\bf x})$ and the notation $G^n({\bf x}) = G({\bf x}, t^n)$ is used.
Since ${\bf x}_D$ is nothing but the position at time $t^n$ of the fluid parcel reaching location ${\bf x}$ at time $t^{n+1}$, according to standard terminology, it is called the departure point associated with the arrival point ${\bf x}$. Different methods can be employed to approximate ${\bf x}_D$; in this paper, for simplicity, the method proposed in [@mcgregor:1993] has been employed in spherical geometry. Furthermore, to guarantee an accuracy compatible with that of the semi-implicit time discretization, an extrapolation ${\bf u}^{n+\frac 12}$ of the velocity field at the intermediate time level $t^n+\Delta t/2 $ was used as ${\bf u}^*$ in (\[lagode\]). On the other hand, in the application to Cartesian geometry (for the vertical slice discretization), a simple first order Euler method with sub-stepping was employed, see e.g. [@giraldo:1999], [@rosatti:2005]. In the case of the advection of a vector field $$\frac{D {\bf G}}{Dt} = \frac{\partial {\bf G}}{ \partial t} + \frac{u}{m_x} \frac{\partial {\bf G}}{ \partial x} + \frac{v}{m_y} \frac{\partial {\bf G}}{ \partial y} =0,$$ as in the momentum equation (\[vectmomentumeq\]), the extension of this approach has to take into account the curvature of the spherical manifold. More specifically, unit basis vectors at the departure point are not in general aligned with those at the arrival point, i.e., if $\hat{\boldsymbol{i}},\hat{\boldsymbol{j}},\hat{\boldsymbol{k}}$ represent a unit vector triad, in general $ \hspace{1mm} \hat{\boldsymbol{i}}({\bf x}) \neq \hat{\boldsymbol{i}}({\bf x}_D), \hspace{1mm} \hat{\boldsymbol{j}}({\bf x}) \neq \hat{\boldsymbol{j}}({\bf x}_D), \hspace{1mm} \hat{\boldsymbol{k}}({\bf x}) \neq \hat{\boldsymbol{k}}({\bf x}_D). $ To deal with this issue, two approaches are available.
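For the planar (vertical slice) setting mentioned above, a common way to approximate the departure point is a midpoint fixed-point iteration on the trajectory equation; the sketch below is a generic planar version of this idea (the function name and iteration count are our own choices, and this is not the spherical algorithm of [@mcgregor:1993]):

```python
import numpy as np

def departure_point(x_a, u_star, dt, n_iter=4):
    """Approximate the departure point x_D of the trajectory reaching the
    arrival point x_a over one step of size dt, for a frozen planar velocity
    field u_star(x) -> velocity array, via the midpoint fixed-point iteration
        x_D <- x_a - dt * u_star(0.5 * (x_a + x_D))."""
    x_d = x_a - dt * u_star(x_a)           # first guess: Euler backtrack
    for _ in range(n_iter):
        x_d = x_a - dt * u_star(0.5 * (x_a + x_d))
    return x_d
```

For a constant velocity field the iteration is exact after the first sweep, which provides a simple sanity check.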
The first, intrinsically Eulerian, consists in introducing the Christoffel symbols in the definition of the covariant derivatives before the SISL discretization, giving rise to the well known metric terms, and then in approximating those metric terms along the trajectories. This approach has been shown to be a source of instabilities in a semi-Lagrangian frame, see e.g. [@ritchie:1988; @cote:1988; @cote:1988b; @desharnais:1990], and therefore is not adopted in this work. The second approach, more suitable for semi-Lagrangian discretizations, takes into account the curvature of the manifold only at the discrete level, i.e. after the SISL discretization has been performed. Many variations of this idea have been proposed, see e.g. [@ritchie:1988; @cote:1988; @cote:1988b; @bates:1990; @temperton:2001]. In [@staniforth:2010], all of these variants have been derived in a unified way by the introduction of a proper rotation matrix that transforms vector components in the departure-point unit vector triad $\hat{\boldsymbol{i}}_D=\hat{\boldsymbol{i}}({\bf x}_D),$ $\hat{\boldsymbol{j}}_D=\hat{\boldsymbol{j}}({\bf x}_D),$ $\hat{\boldsymbol{k}}_D=\hat{\boldsymbol{k}}({\bf x}_D)$ into vector components in the arrival-point unit vector triad $\hat{\boldsymbol{i}}=\hat{\boldsymbol{i}}({\bf x}),$ $\hat{\boldsymbol{j}}=\hat{\boldsymbol{j}}({\bf x}),$ $\hat{\boldsymbol{k}}=\hat{\boldsymbol{k}}({\bf x})$. To see how this rotation matrix comes into play, it is sufficient to consider the action of the evolution operator $E$ on a given vector valued function of space and time $\boldsymbol{G}$, defined as an approximation of $$\left[ E(t^{n}, \Delta t) \boldsymbol{G} \right] ({\bf x}) = \boldsymbol{G}^n({\bf x}_D), \label{evoloponvectors}$$ and to write this equation componentwise.
$\boldsymbol{G}^n({\bf x}_D)$ is known through its components in the departure point unit vector triad: $$\boldsymbol{G}^n({\bf x}_D) = \mathcal{G}^n_x({\bf x}_D)\hat{\boldsymbol{i}}_D+ \mathcal{G}^n_y({\bf x}_D)\hat{\boldsymbol{j}}_D+ \mathcal{G}^n_z({\bf x}_D)\hat{\boldsymbol{k}}_D. \label{vectGexpasion}$$ Therefore, via (\[evoloponvectors\]), the components of $\left[ E(t^{n}, \Delta t) \boldsymbol{G} \right] ({\bf x})$ in the unit vector triad at the same point are given by projection of (\[vectGexpasion\]) along $ \hat{\boldsymbol{i}},$ $\hat{\boldsymbol{j}},$ $ \hat{\boldsymbol{k}}$: $$\hspace{1mm}\hat{\boldsymbol{i}} \cdot \boldsymbol{G}^n({\bf x}_D) = \mathcal{G}^n_x({\bf x}_D)\hspace{1mm}\hat{\boldsymbol{i}} \cdot\hat{\boldsymbol{i}}_D+ \mathcal{G}^n_y({\bf x}_D)\hspace{1mm}\hat{\boldsymbol{i}} \cdot\hat{\boldsymbol{j}}_D+ \mathcal{G}^n_z({\bf x}_D)\hspace{1mm}\hat{\boldsymbol{i}} \cdot\hat{\boldsymbol{k}}_D, \nonumber$$ $$\hspace{1mm}\hat{\boldsymbol{j}} \cdot \boldsymbol{G}^n({\bf x}_D) = \mathcal{G}^n_x({\bf x}_D)\hspace{1mm}\hat{\boldsymbol{j}} \cdot\hat{\boldsymbol{i}}_D+ \mathcal{G}^n_y({\bf x}_D)\hspace{1mm}\hat{\boldsymbol{j}} \cdot\hat{\boldsymbol{j}}_D+ \mathcal{G}^n_z({\bf x}_D)\hspace{1mm}\hat{\boldsymbol{j}} \cdot\hat{\boldsymbol{k}}_D, \nonumber$$ $$\hspace{1mm} \hat{\boldsymbol{k}} \cdot \boldsymbol{G}^n({\bf x}_D) = \mathcal{G}^n_x({\bf x}_D)\hat{\boldsymbol{k}} \cdot\hat{\boldsymbol{i}}_D+ \mathcal{G}^n_y({\bf x}_D)\hat{\boldsymbol{k}} \cdot\hat{\boldsymbol{j}}_D+ \mathcal{G}^n_z({\bf x}_D)\hat{\boldsymbol{k}} \cdot\hat{\boldsymbol{k}}_D, \nonumber$$ i.e., in matrix notation: $$\begin{pmatrix} \hspace{1mm}\hat{\boldsymbol{i}} \cdot \left[ E(t^{n}, \Delta t) \boldsymbol{G} \right]({\bf x}) \\ \hspace{1mm}\hat{\boldsymbol{j}} \cdot \left[ E(t^{n}, \Delta t) \boldsymbol{G} \right]({\bf x}) \\ \hspace{1mm}\hat{\boldsymbol{k}}\cdot \left[ E(t^{n}, \Delta t) \boldsymbol{G} \right]({\bf x}) \end{pmatrix} = {\bf R} \begin{pmatrix} \mathcal{G}^n_x 
\\ \mathcal{G}^n_y \\ \mathcal{G}^n_z \end{pmatrix} \ \ \ \ \ \ \ {\rm where }$$ $${\bf R} = \begin{bmatrix} \hat{\boldsymbol{i}} \cdot\hat{\boldsymbol{i}}_D & \hat{\boldsymbol{i}} \cdot\hat{\boldsymbol{j}}_D& \hat{\boldsymbol{i}} \cdot\hat{\boldsymbol{k}}_D \\ \hat{\boldsymbol{j}} \cdot\hat{\boldsymbol{i}}_D & \hat{\boldsymbol{j}}\cdot\hat{\boldsymbol{j}}_D & \hat{\boldsymbol{j}} \cdot\hat{\boldsymbol{k}}_D \\ \hat{\boldsymbol{k}} \cdot\hat{\boldsymbol{i}}_D & \hat{\boldsymbol{k}} \cdot\hat{\boldsymbol{j}}_D & \hat{\boldsymbol{k}} \cdot\hat{\boldsymbol{k}}_D \\ \end{bmatrix}.$$ Under the shallow atmosphere approximation [@thuburn:2013], $\bf R$ can be reduced to the $2\times2 $ rotation matrix $${\boldsymbol \Lambda } = {\boldsymbol \Lambda }({\bf x},{\bf x}_D) = \begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{bmatrix}, \label{lambdadef}$$ where, as shown in [@staniforth:2010], $\Lambda_{11}=\Lambda_{22}=(R_{11}+R_{22})/(1+R_{33}), \hspace{1mm} \Lambda_{12}=-\Lambda_{21}=(R_{12}-R_{21})/(1+R_{33}).$ Therefore, in the following the evolution operator for vector fields will be defined componentwise as $$\begin{pmatrix} \hspace{1mm}\hat{\boldsymbol{i}} \cdot \left[ E(t^{n}, \Delta t) \boldsymbol{G} \right]({\bf x}) \\ \hspace{1mm}\hat{\boldsymbol{j}} \cdot \left[ E(t^{n}, \Delta t) \boldsymbol{G} \right]({\bf x}) \end{pmatrix} = {\boldsymbol \Lambda } \begin{pmatrix} \mathcal{G}^n_x({\bf x}_D) \\ \mathcal{G}^n_y ({\bf x}_D) \end{pmatrix}. \label{evoloponvectors2}$$ A novel SISL time integration approach for the shallow water equations on the sphere {#sisldg} ==================================================================================== The SISL discretization of equations (\[continuityeq\])-(\[vectmomentumeq\]) based on (\[trbdf2\]) is then obtained by performing the two stages in (\[trbdf2\]) after reinterpretation of the intermediate values in a semi-Lagrangian fashion.
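The construction of $\boldsymbol\Lambda$ from the two local triads can be sketched directly, assuming the standard east/north/radial unit vectors on the unit sphere (a hypothetical helper for illustration, not the paper's code):

```python
import numpy as np

def triad(lam, phi):
    """Local east, north and radial unit vectors at longitude lam and
    latitude phi on the unit sphere."""
    i_hat = np.array([-np.sin(lam), np.cos(lam), 0.0])
    j_hat = np.array([-np.sin(phi) * np.cos(lam),
                      -np.sin(phi) * np.sin(lam), np.cos(phi)])
    k_hat = np.array([np.cos(phi) * np.cos(lam),
                      np.cos(phi) * np.sin(lam), np.sin(phi)])
    return i_hat, j_hat, k_hat

def rotation_lambda(arrival, departure):
    """2x2 matrix Lambda mapping departure-point vector components into
    arrival-point components (shallow atmosphere reduction of R)."""
    ia, ja, ka = triad(*arrival)
    id_, jd, kd = triad(*departure)
    R = np.array([[ia @ id_, ia @ jd, ia @ kd],
                  [ja @ id_, ja @ jd, ja @ kd],
                  [ka @ id_, ka @ jd, ka @ kd]])
    l11 = (R[0, 0] + R[1, 1]) / (1.0 + R[2, 2])
    l12 = (R[0, 1] - R[1, 0]) / (1.0 + R[2, 2])
    return np.array([[l11, l12], [-l12, l11]])
```

When arrival and departure points coincide, $\boldsymbol\Lambda$ reduces to the identity, and since ${\bf R}$ is a proper rotation, $\boldsymbol\Lambda$ has unit determinant.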
Furthermore, in order to avoid the solution of a nonlinear system, the dependency on $h$ in $h \nabla \cdot {\bf u}$ is linearized in time, as common in semi-implicit discretizations based on the trapezoidal rule, see e.g. [@casulli:1994], [@tumolo:2013]. Numerical experiments reported in the following show that this does not prevent second order accuracy from being achieved in the regimes of interest for numerical weather prediction. The TR stage of the SISL time semi-discretization of the equations in vector form (\[continuityeq\])-(\[vectmomentumeq\]) is given by $$\begin{aligned} \label{TR_SWEcontinuityeq} h^{n+2\gamma} &+& \gamma \Delta t \hspace{1.0mm} h^n \hspace{1.0mm} \nabla \cdot {\bf u}^{n+2\gamma}\nonumber \\ & = & E\left(t^n, 2 \gamma \Delta t\right) \left[ h -\gamma \Delta t \hspace{1.0mm} h \hspace{1.0mm} \nabla \cdot {\bf u} \right], \end{aligned}$$ $$\begin{aligned} \label{TR_SWEmomentumeq} && {\bf u}^{n+2\gamma} + \gamma \Delta t \Big[ g \nabla h^{n+2\gamma} + f \hat{\bf k} \times { \bf u}^{n+2\gamma}\Big] = -\gamma \Delta t \hspace{1.0mm} g\nabla b \nonumber \\ && + E\big(t^n, 2 \gamma \Delta t\big) \left\{ {\bf u} -\gamma \Delta t \left[ g ( \nabla h + \nabla b ) + f \hat{\bf k} \times {\bf u} \right] \right\}. \end{aligned}$$ The TR stage is then followed by the BDF2 stage: $$\begin{aligned} \label{BDF2_SWEcontinuityeq} h^{n+1} &+& \gamma_2 \Delta t \hspace{1.0mm} h^{n+2\gamma} \hspace{1.0mm} \nabla \cdot {\bf u}^{n+1} \nonumber \\ &= & \big( 1-\gamma_3\big) E\big(t^n,\Delta t\big) h \nonumber \\ &+& \gamma_3 E\big(t^n+2\gamma\Delta t,(1-2\gamma)\Delta t\big) h, \end{aligned}$$ $$\begin{aligned} \label{BDF2_SWEmomentumeq} &&{\bf u}^{n+1} + \gamma_2 \Delta t \Big[ g \nabla h^{n+1} + f \hat{\bf k} \times {\bf u}^{n+1}\Big] = - \gamma_2 \Delta t \hspace{1.0mm} g \nabla b \nonumber \\ && + \big( 1-\gamma_3\big) E\big(t^n,\Delta t\big) {\bf u} + \gamma_3 E\big(t^n+2\gamma\Delta t,(1-2\gamma)\Delta t\big) {\bf u}.
\end{aligned}$$ For each of the two stages, the spatial discretization can be performed along the lines described in [@tumolo:2013], allowing for variable polynomial order to locally represent the solution in each element. The spatial discretization approach considered is independent of the nature of the mesh and could also be implemented for fully unstructured and even non conforming meshes. For simplicity, however, in this paper only an implementation on a structured mesh in longitude-latitude coordinates has been developed. In principle, either Lagrangian or hierarchical Legendre bases could be employed. We will work almost exclusively with hierarchical bases, because they provide a natural environment for the implementation of a $p-$adaptation algorithm, see for example [@zienkiewicz:1983]. A central issue in finite element formulations for fluid problems is the choice of appropriate approximation spaces for the velocity and pressure variables (in the context of SWE, the role of the pressure is played by the free surface elevation). An inconsistent choice of the two approximation spaces may indeed result in a solution polluted by spurious modes; for the specific case of the SWE see, for example, [@leroux:2005; @walters:1983a; @walters:1983b] as well as the more recent and comprehensive analysis in [@leroux:2013]. Here, we have not investigated this issue in depth, but the model implementation allows the velocity fields to be approximated with a polynomial degree $ p^u $ higher than the degree $ p^h $ used for the height field. Even though no systematic study was performed, no significant differences were noticed between results obtained with equal or unequal degrees. In the following, only results with unequal degrees $ p^u =p^h+1 $ are reported, with the exception of an empirical convergence test for a steady geostrophic flow.
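A hierarchical (modal) Legendre basis and the corresponding degree adaptation can be sketched as follows; these are illustrative helpers built on `numpy.polynomial.legendre`, not the paper's implementation:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(xi, p):
    """Rows k = 0..p hold the Legendre polynomial P_k evaluated at the
    points xi in [-1, 1]; shape (p+1, len(xi))."""
    xi = np.asarray(xi, dtype=float)
    return np.array([legendre.legval(xi, np.eye(p + 1)[k]) for k in range(p + 1)])

def adapt_degree(coeffs, p_new):
    """p-adaptation of a modal expansion: truncating or zero-padding the
    coefficient vector leaves the lower modes untouched, which is what makes
    a hierarchical basis convenient for local degree adaptation."""
    out = np.zeros(p_new + 1)
    n = min(len(coeffs), p_new + 1)
    out[:n] = coeffs[:n]
    return out
```

The basis is orthogonal on $[-1,1]$ with $\int P_j P_k\,d\xi = \tfrac{2}{2k+1}\delta_{jk}$, which a Gauss quadrature of sufficient order reproduces exactly.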
All the integrals appearing in the elemental equations are evaluated by means of Gaussian numerical quadrature formulae, with a number of quadrature nodes consistent with the local polynomial degree being used. In particular, notice that integrals of terms in the image of the evolution operator $E,$ i.e. of functions evaluated at the departure points of the trajectories arriving at the quadrature nodes, cannot be computed exactly (see e.g. [@morton:1988; @priestley:1994]), since such functions are not polynomials. Therefore a sufficiently accurate approximation of these integrals is needed, which may entail the need to employ numerical quadrature formulae with more nodes than the minimal requirement implied by the local polynomial degree. This overhead is actually compensated by the fact that, for each Gauss node, the computation of the departure point is only to be executed once for all the quantities to be interpolated. After spatial discretization has been performed, the discrete degrees of freedom representing velocity unknowns can be replaced in the respective discrete height equations, yielding in each case a linear system whose structure is entirely analogous to that obtained in [@tumolo:2013]. The non-symmetric linear systems obtained from the TR-BDF2 stages are solved in our implementation by the GMRES method [@saad:1986]. A classical stopping criterion based on a relative error tolerance of $10^{-10} $ was employed (see e.g. [@kelley:1995]). For the GMRES solver, so far, only a block diagonal preconditioning was employed. As it will be shown in section \[tests\], the condition number of the systems to be solved can be greatly reduced if lower degree elements are employed close to the poles. 
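A minimal unrestarted GMRES with a block diagonal (block-Jacobi) preconditioner, in the spirit of the solver configuration described above, can be sketched as follows; this is a toy dense implementation for illustration only, not the production solver:

```python
import numpy as np

def block_jacobi_inverse(A, block_size):
    """Explicit inverse of the block diagonal part of A, usable as a
    simple block diagonal preconditioner."""
    n = A.shape[0]
    M = np.zeros_like(A)
    for s in range(0, n, block_size):
        e = min(s + block_size, n)
        M[s:e, s:e] = np.linalg.inv(A[s:e, s:e])
    return M

def gmres(A, b, M=None, tol=1e-10, max_iter=100):
    """Minimal unrestarted GMRES with optional left preconditioning,
    starting from the zero initial guess."""
    n = b.size
    M = np.eye(n) if M is None else M
    r0 = M @ b
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    Q[:, 0] = r0 / beta
    for k in range(max_iter):
        w = M @ (A @ Q[:, k])
        for j in range(k + 1):          # Arnoldi orthogonalization
            H[j, k] = Q[:, j] @ w
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        # least-squares problem min || beta*e1 - H y ||
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        if np.linalg.norm(H[:k + 2, :k + 1] @ y - e1) <= tol * beta:
            return Q[:, :k + 1] @ y, k + 1
        if H[k + 1, k] == 0.0:          # lucky breakdown: exact solution
            return Q[:, :k + 1] @ y, k + 1
        Q[:, k + 1] = w / H[k + 1, k]
    return Q[:, :max_iter] @ y, max_iter
```

The stopping criterion mirrors the relative tolerance of $10^{-10}$ quoted in the text, applied here to the preconditioned residual.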
In any case, the total computational cost of one TR-BDF2 step is entirely analogous to that of one step of the standard off centered trapezoidal rule employed in [@tumolo:2013], since the structure of the systems is the same but for each stage only a fraction of the time step is being computed. Once ${h}^{n+1}$ has been computed by solving this linear system, then $ {\bf u}^{n+1}$ can be recovered by back substituting into the momentum equation. Extension of the time integration approach to the Euler equations {#nhydro} ================================================================= In this section, we show that the previously proposed method can be extended seamlessly to the fully compressible Euler equations as formulated in equations (\[vslice\_conteq\]) - (\[vslice\_eneq\]). For simplicity, only the application to the $(x,z)$ two dimensional vertical slice case is presented, but the extension to three dimensions is straightforward. Again, in order to avoid the solution of a nonlinear system, the dependency on $\Pi$ in $\Pi \nabla \cdot {\bf u}$ and the dependency on $\Theta$ in $\Theta \nabla \pi$ are linearized in time, as common in semi-implicit discretizations based on the trapezoidal rule, see e.g. [@cullen:1990], [@bonaventura:2000]. 
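The elimination of the velocity unknowns described above amounts to forming a Schur complement in the height unknowns. On a generic block system it can be sketched as follows, with the matrices $B$ and $C$ standing in for the discrete divergence and gradient operators (placeholders, not the actual DG matrices):

```python
import numpy as np

def solve_semi_implicit_block(B, C, dt_g, r_h, r_u):
    """Solve the generic semi-implicit block system
        h + dt_g * B u = r_h,
        u + dt_g * C h = r_u,
    by eliminating u: (I - dt_g^2 * B C) h = r_h - dt_g * B r_u,
    then back substituting into the momentum block."""
    n = B.shape[0]
    S = np.eye(n) - dt_g**2 * (B @ C)
    h = np.linalg.solve(S, r_h - dt_g * (B @ r_u))
    u = r_u - dt_g * (C @ h)
    return h, u
```

This is the structure shared by both TR-BDF2 stages: only the coefficient multiplying $\Delta t$ and the right-hand sides change between the two.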
The semi-Lagrangian counterpart of the TR substep of (\[trbdf2\]) is first applied to (\[vslice\_conteq\]) - (\[vslice\_eneq\]), so as to obtain: $$\begin{aligned} \label{TR_VSLcontinuityeq} && \pi^{n+2\gamma} + \gamma \Delta t \hspace{1.0mm} \left( c_p/c_v - 1 \right) \Pi^n \nabla \cdot {\bf u}^{n+2\gamma} = -\pi^* \nonumber \\ && + E\left(t^n, 2 \gamma \Delta t\right) \left[ \Pi -\gamma \Delta t \left( c_p/c_v - 1 \right) \Pi \hspace{1.0mm} \nabla \cdot {\bf u} \right], \end{aligned}$$ $$\begin{aligned} \label{TR_VSLmomentumeq_x} && u^{n+2\gamma} + \gamma \Delta t \hspace{1.0mm} c_p \Theta^n \frac{\partial \pi}{\partial x}^{n+2\gamma} = \nonumber \\ && E(t^n, 2\gamma \Delta t) \left[ u - \gamma \Delta t \hspace{1.0mm} c_p \Theta \frac{\partial \pi}{\partial x} \right], \end{aligned}$$ $$\begin{aligned} \label{TR_VSLmomentumeq_z} &&w^{n+2\gamma} + \gamma \Delta t \left( c_p \Theta^n \frac{\partial \pi}{\partial z}^{n+2\gamma} - g \frac{\theta^{n+2\gamma} }{ \theta^* } \right) = \nonumber \\ && E(t^n, 2\gamma \Delta t) \left[ w - \gamma \Delta t \left( c_p \Theta \frac{\partial \pi}{\partial z} - g \frac{\theta}{ \theta^* }\right) \right], \end{aligned}$$ $$\label{TR_VSLenereq} \theta^{n+2\gamma} + \gamma \Delta t \frac{d \theta^*}{d z} w^{n+2\gamma} = E(t^n, 2\gamma \Delta t) \left[ \theta - \gamma \Delta t \frac{d \theta^*}{d z} w \right].$$ Following [@cullen:1990], the time semi-discrete energy equation (\[TR\_VSLenereq\]) can be inserted into the time semi-discrete vertical momentum equation (\[TR\_VSLmomentumeq\_z\]), in order to decouple the momentum and the energy equations as follows $$\begin{aligned} \label{TR_VSLmomentumeq_z+ener} && \left( 1 + (\gamma \Delta t)^2 \frac{g}{\theta^*} \frac{d \theta^*}{d z} \right) w^{n+2\gamma} + \gamma \Delta t c_p \Theta^n \frac{\partial \pi}{\partial z}^{n+2\gamma} = \nonumber \\ && E(t^n, 2\gamma \Delta t) \left[ w - \gamma \Delta t \left( c_p \Theta \frac{\partial \pi}{\partial z} - g \frac{\theta}{ \theta^*
}\right) \right] \nonumber \\ && + \gamma \Delta t \frac{g}{\theta^*} E(t^n, 2\gamma \Delta t) \left[ \theta - \gamma \Delta t \frac{d \theta^*}{d z} w \right]. \end{aligned}$$ Equations (\[TR\_VSLcontinuityeq\]), (\[TR\_VSLmomentumeq\_x\]) and (\[TR\_VSLmomentumeq\_z+ener\]) are a set of three equations in three unknowns only, namely $\pi, u,$ and $w$ that can be compared with equations (\[TR\_SWEcontinuityeq\]), (\[TR\_SWEmomentumeq\]) with $f=0$ and $m_x=m_y=1$ (Cartesian geometry). From the comparison it is clear that the two formulations are isomorphic under correspondence $\pi \longleftrightarrow h, u \longleftrightarrow u, w \longleftrightarrow v.$ We can then consider the semi-Lagrangian counterpart of the BDF2 substep of (\[trbdf2\]) applied to (\[vslice\_conteq\]) - (\[vslice\_eneq\]) to obtain: $$\begin{aligned} \label{BDF2_VSLcontinuityeq} \pi^{n+1} &+& {\gamma}_2 \Delta t \left( c_p/c_v - 1 \right) \Pi^{n+2\gamma} \nabla \cdot {\bf u}^{n+1} \nonumber \\ &=& -\pi^* + (1 - {\gamma}_3) [ E\left(t^n, \Delta t\right) \Pi ] \nonumber \\ &+& {\gamma}_3 [ E\left(t^n+2\gamma \Delta t, (1-2\gamma)\Delta t\right) \Pi ], \end{aligned}$$ $$\begin{aligned} \label{BDF2_VSLmomentumeq_x} u^{n+1} &+& {\gamma}_2 \Delta t \hspace{1.0mm} c_p \Theta^{n+2\gamma} \frac{\partial \pi}{\partial x}^{n+1} \nonumber \\ &=& (1 - {\gamma}_3) [ E\left(t^n, \Delta t\right) u ] \nonumber \\ &+& {\gamma}_3 [ E\left(t^n+2\gamma \Delta t, (1-2\gamma)\Delta t\right) u ], \end{aligned}$$ $$\begin{aligned} \label{BDF2_VSLmomentumeq_z} w^{n+1} &+& {\gamma}_2 \Delta t \left( c_p \Theta^{n+2\gamma} \frac{\partial \pi}{\partial z}^{n+1} - g \frac{\theta^{n+1} }{ \theta^* } \right)\nonumber \\ &=& (1 - {\gamma}_3) [ E\left(t^n, \Delta t\right) w ] \nonumber \\ &+& {\gamma}_3 [ E\left(t^n+2\gamma \Delta t, (1-2\gamma)\Delta t\right) w ], \end{aligned}$$ $$\begin{aligned} \label{BDF2_VSLenereq} \theta^{n+1} &+& {\gamma}_2 \Delta t \frac{d \theta^*}{d z} w^{n+1} \nonumber\\ &=& (1 - {\gamma}_3) [ 
E\left(t^n, \Delta t\right) \theta ] \nonumber \\ &+& {\gamma}_3 [ E\left(t^n+2\gamma \Delta t, (1-2\gamma)\Delta t\right) \theta ]. \end{aligned}$$ Again, following [@cullen:1990], the time semi-discrete energy equation (\[BDF2\_VSLenereq\]) can be inserted into the time semi-discrete vertical momentum equation (\[BDF2\_VSLmomentumeq\_z\]), in order to decouple the momentum and the energy equations: $$\begin{aligned} \label{BDF2_VSLmomentumeq_z+ener} && \left( 1 + ({\gamma}_2 \Delta t)^2 \frac{g}{\theta^*} \frac{d \theta^*}{d z} \right) w^{n+1} + {\gamma}_2 \Delta t \hspace{1.0mm} c_p \Theta^{n+2\gamma} \frac{\partial \pi}{\partial z}^{n+1} = \nonumber \\ && (1 - {\gamma}_3) [ E\left(t^n, \Delta t\right) w ] + {\gamma}_3 [ E\left(t^n+2\gamma \Delta t, (1-2\gamma)\Delta t\right) w ] + \\ && {\gamma}_2 \Delta t \frac{g}{\theta^*} \left\{ (1 - {\gamma}_3) [ E\left(t^n, \Delta t\right) \theta ] + {\gamma}_3 [ E\left(t^n+2\gamma \Delta t, (1-2\gamma)\Delta t\right) \theta ] \right\}. \nonumber\end{aligned}$$ Now equations (\[BDF2\_VSLcontinuityeq\]), (\[BDF2\_VSLmomentumeq\_x\]) and (\[BDF2\_VSLmomentumeq\_z+ener\]) are a set of three equations in three unknowns only, namely $\pi, u,$ and $w,$ which can be compared with equations (\[BDF2\_SWEcontinuityeq\]), (\[BDF2\_SWEmomentumeq\]) with $f=0$ and $m_x=m_y=1$ (Cartesian geometry). Again, it is easy to see that in this case too exactly the same structure as in equations (\[BDF2\_SWEcontinuityeq\])-(\[BDF2\_SWEmomentumeq\]) is obtained, with the correspondence $\pi \longleftrightarrow h, u \longleftrightarrow u, w \longleftrightarrow v$, so that the approach (and code) proposed for the shallow water equations can be extended to the fully compressible Euler equations in a straightforward way.
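The decoupling of the vertical momentum and energy equations can be checked numerically on scalar stand-ins: solving the coupled $2\times 2$ system for $(w,\theta)$ and evaluating the decoupled formula must give the same $w$. All numerical values below are arbitrary placeholders, and the pressure gradient term is treated as known only for the purpose of checking the algebra:

```python
import numpy as np

# illustrative scalar stand-ins for the semi-discrete quantities
g, theta_star, dtheta_dz = 9.81, 300.0, 0.01
gdt = 0.3                   # gamma * Delta t (or gamma_2 * Delta t)
cp_theta_piz = 0.7          # c_p * Theta * (d pi / d z), treated as known
rhs_w, rhs_t = 1.3, -0.4    # the E(...)[...] right-hand sides

# coupled 2x2 system for (w, theta):
#   w + gdt*(cp_theta_piz - g*theta/theta_star) = rhs_w
#   theta + gdt*dtheta_dz*w = rhs_t
A = np.array([[1.0, -gdt * g / theta_star],
              [gdt * dtheta_dz, 1.0]])
w_coupled, theta_coupled = np.linalg.solve(A, [rhs_w - gdt * cp_theta_piz, rhs_t])

# decoupled vertical momentum equation obtained by substitution
w_decoupled = (rhs_w - gdt * cp_theta_piz + gdt * (g / theta_star) * rhs_t) \
              / (1.0 + gdt**2 * (g / theta_star) * dtheta_dz)
theta_back = rhs_t - gdt * dtheta_dz * w_decoupled
```

The modified diagonal coefficient $1 + (\gamma\Delta t)^2 (g/\theta^*)\, d\theta^*/dz$ is exactly the determinant of the coupled $2\times2$ system.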
Numerical experiments {#tests} ===================== The numerical method introduced in section \[sisldg\] has been implemented and tested on a number of relevant test cases using different initial conditions and bathymetry profiles, in order to assess its accuracy and stability properties and to analyze the impact of the $p-$adaptivity strategy. Whenever a reference solution was available, the relative errors were computed in the $L^1,L^2 $ and $L^\infty $ norms at the final time $t_f$ of the simulation according to [@williamson:1992] as: $$\begin{aligned} \label{errornorms} && l_1(h) = \frac{I \left[ \hspace{1mm} \left| h(\cdot, t_f) - h_{ref}(\cdot, t_f) \right| \hspace{1mm} \right]} {I \left[ \hspace{1mm} \left| h_{ref}(\cdot, t_f) \right| \hspace{1mm} \right] }, \\ && l_2(h) = \frac{ \Bigl\{ I \Bigl[ \hspace{1mm} \bigl( \hspace{1mm} h(\cdot, t_f) - h_{ref}(\cdot, t_f) \hspace{1mm} \bigr)^2 \hspace{1mm} \Bigr] \Bigr\}^{1/2}} { \bigl\{ I \bigl[ \hspace{1mm} h_{ref}(\cdot, t_f)^2 \hspace{1mm} \bigr] \bigr\}^{1/2} }, \\ && l_{\infty}(h) = \frac{\max \hspace{1mm} \left| h(\cdot, t_f) - h_{ref}(\cdot, t_f) \right| } {\max \hspace{1mm} \left| h_{ref}(\cdot, t_f) \right| },\end{aligned}$$ where $h_{ref}$ denotes the reference solution for a model variable $h$ and $I$ is a discrete approximation of the global integral $$I(h)= \frac{\int_{\Omega} \, h \, m_x m_y \, d{\bf x}}{\int_{\Omega} \, m_x m_y \, d{\bf x}}, \label{normalizedintegral}$$ computed by an appropriate numerical quadrature rule, consistent with the numerical approximation being tested, and the maximum is computed over all nodal values. 
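With a weight vector that absorbs the quadrature weights and the metric factors $m_x m_y$, the three error norms above can be evaluated as in the following sketch (the weight vector is an assumed input):

```python
import numpy as np

def error_norms(h, h_ref, w):
    """Relative l1, l2 and l_inf error norms of a field h against a
    reference h_ref at the quadrature nodes; w holds the quadrature
    weights times the metric factors m_x * m_y."""
    diff = np.abs(h - h_ref)
    l1 = np.sum(w * diff) / np.sum(w * np.abs(h_ref))
    l2 = np.sqrt(np.sum(w * diff**2)) / np.sqrt(np.sum(w * h_ref**2))
    linf = diff.max() / np.abs(h_ref).max()
    return l1, l2, linf
```

Note that the normalization by the weighted integral of the reference field makes all three norms dimensionless.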
The test cases considered for the shallow water equations in spherical geometry are

- a steady-state geostrophic flow: in particular, we have analyzed results in test case 2 of [@williamson:1992] in the configuration least favorable for methods employing longitude-latitude meshes;

- the unsteady flow with exact analytical solution described in [@lauter:2005];

- the polar rotating low-high, introduced in [@mcdonald:1989], aimed at showing that no problems arise even in the case of strong cross polar flows;

- zonal flow over an isolated mountain and Rossby-Haurwitz wave of wavenumber 4, corresponding respectively to test cases 5 and 6 in [@williamson:1992].

For the first two tests, analytic solutions are available and empirical convergence tests can be performed. The test cases considered for the discretization of equations (\[vslice\_conteq\])-(\[vslice\_eneq\]) are

- inertia gravity waves involving the evolution of a potential temperature perturbation in a channel with periodic boundary conditions and uniformly stratified environment with constant Brunt-Väisälä frequency, as described in [@skamarock:1994];

- a rising thermal bubble given by the evolution of a warm bubble in a constant potential temperature environment, as described in [@carpenter:1990].

In all the numerical experiments performed for this paper, neither spectral filtering nor explicit diffusion of any kind was employed, the only numerical diffusion being implicit in the time discretization approach. We have not yet investigated to what extent the quality of the solutions is affected by this choice, but this should be taken into account when comparing quantitatively the results of the present method to those of reference models, such as the one described in [@jakob:1995], in which explicit numerical diffusion is added. Sensitivity of the comparison results to the amount of numerical diffusion has been highlighted in several model validation exercises, see e.g. [@ripodas:2009].
Since semi-implicit, semi-Lagrangian methods are most efficient for low Froude number flows, where the typical velocity is much smaller than that of the fastest propagating waves, all the tests considered fall in this hydrodynamical regime. Therefore, in order to assess the method's efficiency, a distinction has been made between the maximum Courant number based on the velocity and the maximum Courant numbers based on the celerity or on the sound speed, defined respectively as $$C_{vel}= \max \frac{\|{\bf u}\|_{\infty}\Delta t}{\Delta x / p}$$ $$C_{cel}= \max \frac{\sqrt{g h}\Delta t}{\Delta x / p}, \ \ C_{snd}= \max \frac{\sqrt{(c_p/c_v) R \Theta \Pi }\Delta t}{\Delta x / p},$$ where $\Delta x$ is to be interpreted as a generic value of the mesh size in either coordinate direction. For the tests in which $p-$adaptivity was employed, if $p_I^n$ denotes the local polynomial degree used at time step $t^n $ to represent a model variable inside the $I$-th element of the mesh, while $p_{max}$ is the maximum local polynomial degree considered, the efficiency of the method in reducing the computational effort has been measured by monitoring the evolution of the quantities $$\Delta_{dof}^n = \frac{ \sum_{I=1}^N (p_I^n +1)^2 }{ N (p_{max}+1)^2 }, \ \ \ \ \ \Delta_{iter}^n = \frac{{\rm ITN}^n_{adapt}}{{\rm ITN}^n_{max}},$$ where $N$ is the total number of elements, ${\rm ITN}^n_{adapt} $ denotes the total number of GMRES iterations at time step $n $ for the adapted local degree configuration, and ${\rm ITN}^n_{max}$ is the corresponding number of iterations for the configuration with maximum degree in all elements. Average values of these indicators over the simulations performed are reported in the following, denoted by $\Delta_{dof}^{average}$ and $\Delta_{iter}^{average}$ respectively.
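For concreteness, the Courant numbers and the degrees-of-freedom indicator just defined can be computed as in the following minimal sketch (hypothetical names; not part of the model code):

```python
import numpy as np

def courant_numbers(u_max, c_max, dt, dx, p):
    """Velocity- and celerity-based Courant numbers, using the
    effective resolution dx / p of a degree-p element."""
    return u_max * dt / (dx / p), c_max * dt / (dx / p)

def dof_fraction(p_local, p_max):
    """Delta_dof^n: fraction of degrees of freedom actually used, given
    the per-element local degrees p_I^n and the maximum degree p_max."""
    p_local = np.asarray(p_local)
    return np.sum((p_local + 1) ** 2) / (p_local.size * (p_max + 1) ** 2)
```

For instance, with $p_{max}=3$, lowering the local degree to 1 in half of the elements reduces `dof_fraction` from 1 to $(4^2+2^2)/(2\cdot 4^2)=0.625$.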
The error between the adaptive solution and the corresponding one obtained with uniform maximum polynomial degree everywhere has been measured in terms of (\[errornorms\]). Finally, in some cases conservation of global invariants has been monitored by evaluating at each time step the following global integral quantities: $$J(q^n) = \frac{ I(q(\cdot,t^n)) - I(q(\cdot,t^0)) }{I(q(\cdot,t^0))},$$ where $I(q)$ has been defined in (\[normalizedintegral\]) and $q^n=q(\cdot,t^n)$ is the density associated with each global invariant. According to the choice of $q$, the following invariants are considered: mass, i.e. $q=q_{mass}=h$, total energy, i.e. $q=q_{energ}=\frac{1}{2} \left( h \boldsymbol{u} \cdot \boldsymbol{u} + g (h^2 - b^2) \right)$, and potential enstrophy, i.e. $q=q_{enstr}=\frac{1}{2h} (\hat{\boldsymbol{k}} \cdot \nabla \times \boldsymbol{u}+f)^2.$

Steady-state geostrophic flow {#test2}
-----------------------------

We first consider test case 2 of [@williamson:1992], where the solution is a steady-state flow with velocity field corresponding to a zonal solid body rotation and $h$ field obtained from the velocity field through geostrophic balance. All the parameter values are taken as in [@williamson:1992]. The flow orientation parameter has been chosen here as $\alpha = \pi/2 -0.05,$ making the test more challenging on a longitude-latitude mesh. Error norms associated with the solution obtained on a mesh of $10 \times 5$ elements for different polynomial degrees are shown in tables \[t2convrate\_h\_tab\], \[t2convrate\_u\_tab\] and \[t2convrate\_v\_tab\] for $h, $ $u $ and $v,$ respectively. All the results have been computed at $t_f = 10 $ days at fixed maximum Courant numbers $C_{cel}=8, C_{vel}=2, $ so that different values of $\Delta t $ have been employed for different polynomial orders. We remark that the resulting time steps are significantly larger than those allowed by typical explicit time discretizations for analogous DG space discretizations, see e.g.
the results in [@nair:2005b]. The spectral decay in the error norms can be clearly observed, until the time error becomes dominant. For better comparison with the results in [@nair:2005b], we consider again the configuration with $p^h=6, p^u=7$ on $10 \times 5$ elements, which corresponds to the same resolution in space as for the $150 \times 8 \times 8$ grid used in [@nair:2005b]. While $\Delta t = 36 $ s is used in [@nair:2005b] giving a $l_{\infty}(h) \approx 8 \times 10^{-6},$ the proposed SISLDG formulation can be run with $\Delta t = 3600 $ s, in which case $l_{\infty}(h) \approx 3 \times 10^{-7},$ and the average number of iterations required by the linear solver is 1 for the TR substep and 4 for the BDF2 substep. $$\begin{array}{cccccc} \toprule p^h & p^u & \Delta t \ [s] & l_1(h) & l_2(h) & l_{\infty}(h) \\ \midrule 2 & 3 & 4800 & 5.558 \times 10^{-3} & 6.805 \times 10^{-3} & 1.914 \times 10^{-2} \\ 3 & 4 & 3600 & 6.017 \times 10^{-4} & 8.176 \times 10^{-4} & 2.569 \times 10^{-3} \\ 4 & 5 & 2880 & 1.743 \times 10^{-5} & 2.405 \times 10^{-5} & 9.024 \times 10^{-5} \\ 5 & 6 & 2400 & 1.586 \times 10^{-6} & 2.281 \times 10^{-6} & 1.058 \times 10^{-5} \\ 6 & 7 & 2057 & 8.829 \times 10^{-8} & 1.206 \times 10^{-7} & 4.926 \times 10^{-7} \\ 7 & 8 & 1800 & 1.246 \times 10^{-8} & 1.590 \times 10^{-8} & 4.158 \times 10^{-8} \\ 8 & 9 & 1600 & 5.641 \times 10^{-9} & 5.952 \times 10^{-9} & 6.320 \times 10^{-9} \\ \bottomrule \end{array}$$ $$\begin{array}{cccccc} \toprule p^h & p^u & \Delta t \ [s] & l_1(u) & l_2(u) & l_{\infty}(u) \\ \midrule 2 & 3 & 4800 & 6.351 \times 10^{-2} & 6.432 \times 10^{-2} & 1.143 \times 10^{-1} \\ 3 & 4 & 3600 & 9.505 \times 10^{-3} & 1.037 \times 10^{-2} & 2.106 \times 10^{-2} \\ 4 & 5 & 2880 & 4.288 \times 10^{-4} & 4.887 \times 10^{-4} & 2.393 \times 10^{-3} \\ 5 & 6 & 2400 & 4.598 \times 10^{-5} & 4.830 \times 10^{-5} & 1.706 \times 10^{-4} \\ 6 & 7 & 2057 & 2.057 \times 10^{-6} & 2.262 \times 10^{-6} & 5.879 \times 10^{-6} \\ 7 & 8 & 
1800 & 2.162 \times 10^{-7} & 2.358 \times 10^{-7} & 6.428 \times 10^{-7} \\ 8 & 9 & 1600 & 2.013 \times 10^{-8} & 2.276 \times 10^{-8} & 3.268 \times 10^{-8} \\ \bottomrule \end{array}$$ $$\begin{array}{cccccc} \toprule p^h & p^u & \Delta t \ [s] & l_1(v) & l_2(v) & l_{\infty}(v) \\ \midrule 2 & 3 & 4800 & 1.001 \times 10^{-1} & 1.016 \times 10^{-1} & 2.698 \times 10^{-1} \\ 3 & 4 & 3600 & 1.859 \times 10^{-2} & 1.823 \times 10^{-2} & 6.848 \times 10^{-2} \\ 4 & 5 & 2880 & 7.376 \times 10^{-4} & 7.428 \times 10^{-4} & 2.884 \times 10^{-3} \\ 5 & 6 & 2400 & 8.185 \times 10^{-5} & 8.307 \times 10^{-5} & 2.574 \times 10^{-4} \\ 6 & 7 & 2057 & 3.074 \times 10^{-6} & 3.173 \times 10^{-6} & 1.123 \times 10^{-5} \\ 7 & 8 & 1800 & 3.370 \times 10^{-7} & 3.432 \times 10^{-7} & 1.323 \times 10^{-6} \\ 8 & 9 & 1600 & 2.175 \times 10^{-8} & 2.317 \times 10^{-8} & 5.124 \times 10^{-8} \\ \bottomrule \end{array}$$ Another convergence test was performed for $p^h = p^u = 3, $ increasing the number of elements and correspondingly decreasing the value of the time step. In this case, the maximum Courant numbers vary because of the mesh inhomogeneity, so that $ 2 < C_{cel} < 18,$ $ 0.5 < C_{vel} < 4.$ The results are reported in tables \[t2convrate\_h\_tab\_pfix\], \[t2convrate\_u\_tab\_pfix\] and \[t2convrate\_v\_tab\_pfix\] for $h, $ $u $ and $v,$ respectively. The empirical convergence order $q_2^{emp}$ based on the $l_2 $ norm errors has also been estimated, showing that in this stationary test convergence rates above the second order of the time discretization can be achieved. 
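The empirical convergence order can be recovered from the errors at two successive resolutions; assuming a refinement ratio of 2 in both mesh size and time step, a minimal sketch (hypothetical names) reads:

```python
import math

def empirical_order(e_coarse, e_fine, ratio=2.0):
    """Empirical convergence order estimated from the errors obtained at
    two successive resolutions differing by the given refinement ratio."""
    return math.log(e_coarse / e_fine) / math.log(ratio)
```

Applied, for example, to the first two $l_2(h)$ entries of table \[t2convrate\_h\_tab\_pfix\] ($3.495 \times 10^{-4}$ and $2.889 \times 10^{-5}$), it reproduces the tabulated value $q_2^{emp} \approx 3.6$.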
$$\begin{array}{ccccccc} \toprule N_x \times N_y & \Delta t \ [s] & l_1(h) & l_2(h) & l_{\infty}(h) & q_2^{emp} \\ \midrule 10 \times 5 & 3600 & 2.557 \times 10^{-4} & 3.495 \times 10^{-4} & 1.403 \times 10^{-3} & - \\ 20 \times 10 & 1800 & 2.187 \times 10^{-5} & 2.889 \times 10^{-5} & 1.566 \times 10^{-4} & 3.6 \\ 40 \times 20 & 900 & 2.530 \times 10^{-6} & 3.353 \times 10^{-6} & 1.430 \times 10^{-5} & 3.1 \\ 80 \times 40 & 450 & 3.996 \times 10^{-7} & 5.534 \times 10^{-7} & 3.134 \times 10^{-6} & 2.6 \\ \bottomrule \end{array}$$ $$\begin{array}{ccccccc} \toprule N_x \times N_y & \Delta t \ [s] & l_1(u) & l_2(u) & l_{\infty}(u) & q_2^{emp} \\ \midrule 10 \times 5 & 3600 & 2.769 \times 10^{-3} & 3.358 \times 10^{-3} & 8.948 \times 10^{-3} & - \\ 20 \times 10 & 1800 & 2.896 \times 10^{-4} & 3.720 \times 10^{-4} & 2.414 \times 10^{-3} & 3.2 \\ 40 \times 20 & 900 & 3.647 \times 10^{-5} & 4.563 \times 10^{-5} & 2.473 \times 10^{-4} & 3.0 \\ 80 \times 40 & 450 & 6.826 \times 10^{-6} & 1.035 \times 10^{-5} & 9.525 \times 10^{-5} & 2.1 \\ \bottomrule \end{array}$$ $$\begin{array}{cccccc} \toprule N_x \times N_y & \Delta t \ [s] & l_1(v) & l_2(v) & l_{\infty}(v) & q_2^{emp} \\ \midrule 10 \times 5 & 3600 & 3.309 \times 10^{-3} & 3.346 \times 10^{-3} & 8.250 \times 10^{-3} & - \\ 20 \times 10 & 1800 & 4.016 \times 10^{-4} & 4.233 \times 10^{-4} & 1.255 \times 10^{-3} & 3.0 \\ 40 \times 20 & 900 & 5.180 \times 10^{-5} & 5.578 \times 10^{-5} & 2.329 \times 10^{-4} & 2.9 \\ 80 \times 40 & 450 & 9.405 \times 10^{-6} & 1.214 \times 10^{-5} & 7.763 \times 10^{-5} & 2.2 \\ \bottomrule \end{array}$$ Unsteady flow with analytic solution {#lauter} ------------------------------------ In a second, time dependent test, the analytic solution of (\[continuityeq\])-(\[vectmomentumeq\]) derived in [@lauter:2005] has been employed to assess the performance of the proposed discretization. More specifically, the analytic solution defined in formula (23) of [@lauter:2005] was used. 
Since the exact solution is periodic, the initial profiles also correspond to the exact solution an integer number of days later. The proposed SISLDG scheme has been integrated up to $t_f= 5 $ days with $p^h =4 $ and $ p^u = 5 $ on meshes with an increasing number of elements, while the time step has been decreased accordingly. In this case, the maximum Courant numbers vary because of the mesh inhomogeneity, so that $ 4 < C_{cel} < 26,$ $ 1.25 < C_{vel} < 8.$ Error norms for $h, u, v$ of the above-mentioned integrations have been computed at $t_f=5 $ days and are displayed in tables \[tlauterconvrate\_h\_tab\] - \[tlauterconvrate\_v\_tab\]. An empirical order estimation shows that full second order accuracy in time is attained. $N_x \times N_y$ $\Delta t \ [s]$ $l_1(h)$ $l_2(h)$ $l_{\infty}(h)$ $q_2^{emp} $ --------------------------- ------------------- ------------------------ ------------------------ ------------------------ -------------- $10 \times \hspace{2mm}5$ 3600 $5.456 \times 10^{-3}$ $6.120 \times 10^{-3}$ $9.537 \times 10^{-3}$ - $20 \times 10 $ 1800 $1.246 \times 10^{-3}$ $1.397 \times 10^{-3}$ $2.143 \times 10^{-3}$ 2.1 $40 \times 20 $ 900 $3.039 \times 10^{-4}$ $3.410 \times 10^{-4}$ $5.207 \times 10^{-4}$ 2.0 $80 \times 40 $ 450 $7.548 \times 10^{-5}$ $8.475 \times 10^{-5}$ $1.292 \times 10^{-4}$ 2.0 : Relative errors on $h $ at different resolutions, Läuter test case.[]{data-label="tlauterconvrate_h_tab"} $N_x \times N_y$ $\Delta t \ [s]$ $l_1(u)$ $l_2(u)$ $l_{\infty}(u)$ $q_2^{emp}$ --------------------------- ------------------- ------------------------ ------------------------ ------------------------ ------------- $10 \times \hspace{2mm}5$ 3600 $6.567 \times 10^{-2}$ $7.848 \times 10^{-2}$ $1.670 \times 10^{-1}$ - $20 \times 10 $ 1800 $1.665 \times 10^{-2}$ $1.994 \times 10^{-2}$ $3.931 \times 10^{-2}$ 2.0 $40 \times 20 $ 900 $4.210 \times 10^{-3}$ $5.032 \times 10^{-3}$ $9.811 \times 10^{-3}$ 2.0 $80 \times 40 $ 450 $1.057 \times 10^{-3}$ $1.261
\times 10^{-3}$ $2.452 \times 10^{-3}$ 2.0 : Relative errors on $u $ at different resolutions, Läuter test case.[]{data-label="tlauterconvrate_u_tab"} $N_x \times N_y$ $\Delta t \ [s]$ $l_1(v)$ $l_2(v)$ $l_{\infty}(v)$ $q_2^{emp}$ --------------------------- ------------------- ------------------------ ------------------------ ------------------------ ------------- $10 \times \hspace{2mm}5$ 3600 $1.174 \times 10^{-1}$ $1.198 \times 10^{-1}$ $2.316 \times 10^{-1}$ - $20 \times 10 $ 1800 $2.939 \times 10^{-2}$ $3.002 \times 10^{-2}$ $5.561 \times 10^{-2}$ 2.0 $40 \times 20 $ 900 $7.336 \times 10^{-3}$ $7.497 \times 10^{-3}$ $1.390 \times 10^{-2}$ 2.0 $80 \times 40 $ 450 $1.833 \times 10^{-3}$ $1.874 \times 10^{-3}$ $3.464 \times 10^{-3}$ 2.0 : Relative errors on $v $ at different resolutions, Läuter test case.[]{data-label="tlauterconvrate_v_tab"} For comparison, analogous errors have been computed with the same discretization parameters but employing the off-centered Crank-Nicolson method of [@tumolo:2013] with $\theta=0.6$. The improvement in the errors obtained with the TR-BDF2 scheme over the off-centered Crank-Nicolson method is achieved at an essentially equivalent computational cost in terms of total CPU time employed.
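The difference in convergence order between the two time discretizations can be reproduced on the scalar oscillation equation $y' = \mathrm{i}\omega y$: an off-centered $\theta$-method with $\theta \neq 1/2$ is only first-order accurate, while one TR-BDF2 step (trapezoidal substep to $t^n + \gamma \Delta t$, then BDF2) retains second order. The following sketch is an illustration on this toy problem only, not the model implementation:

```python
import cmath
import math

GAMMA = 2 - math.sqrt(2)  # standard TR-BDF2 off-centering parameter

def theta_step(y, z, theta=0.6):
    # One step of the off-centered trapezoidal (theta) method for y' = (z/dt)*y.
    return y * (1 + (1 - theta) * z) / (1 - theta * z)

def trbdf2_step(y, z):
    # Trapezoidal substep from t_n to t_n + GAMMA*dt ...
    yg = y * (1 + 0.5 * GAMMA * z) / (1 - 0.5 * GAMMA * z)
    # ... followed by the BDF2 substep to t_n + dt.
    a = 1.0 / (GAMMA * (2 - GAMMA))
    b = (1 - GAMMA) ** 2 / (GAMMA * (2 - GAMMA))
    c = (1 - GAMMA) / (2 - GAMMA)
    return (a * yg - b * y) / (1 - c * z)

def global_error(step, n):
    # Integrate y' = i*2*pi*y over one period with n steps;
    # the exact solution returns to y = 1.
    z = 2j * cmath.pi / n
    y = 1 + 0j
    for _ in range(n):
        y = step(y, z)
    return abs(y - 1)
```

Halving the step roughly halves the $\theta$-method error but divides the TR-BDF2 error by four, mirroring the empirical orders of about 0.9 and 2.0 reported in the tables.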
$N_x \times N_y$ $\Delta t \ [s]$ $l_1(h)$ $l_2(h)$ $l_{\infty}(h)$ $q_2^{emp} $ --------------------------- ------------------- ------------------------ ------------------------ ------------------------ -------------- $10 \times \hspace{2mm}5$ 3600 $1.444 \times 10^{-2}$ $1.633 \times 10^{-2}$ $2.398 \times 10^{-2}$ - $20 \times 10 $ 1800 $8.742 \times 10^{-3}$ $9.894 \times 10^{-3}$ $1.445 \times 10^{-2}$ 0.7 $40 \times 20 $ 900 $4.814 \times 10^{-3}$ $5.451 \times 10^{-3}$ $7.956 \times 10^{-3}$ 0.9 $80 \times 40 $ 450 $2.526 \times 10^{-3}$ $2.861 \times 10^{-3}$ $4.177 \times 10^{-3}$ 0.9 : Relative errors on $h $ at different resolutions, Läuter test case with off-centered Crank-Nicolson, $\theta=0.6$.[]{data-label="tlauterconvrate_h_tab_thet06"} $N_x \times N_y$ $\Delta t \ [s]$ $l_1(u)$ $l_2(u)$ $l_{\infty}(u)$ $q_2^{emp}$ --------------------------- ------------------- ------------------------ ------------------------ ------------------------ ------------- $10 \times \hspace{2mm}5$ 3600 $1.800 \times 10^{-1}$ $2.092 \times 10^{-1}$ $3.810 \times 10^{-1}$ - $20 \times 10 $ 1800 $1.077 \times 10^{-1}$ $1.255 \times 10^{-1}$ $2.155 \times 10^{-1}$ 0.7 $40 \times 20 $ 900 $5.895 \times 10^{-2}$ $6.880 \times 10^{-2}$ $1.186 \times 10^{-1}$ 0.9 $80 \times 40 $ 450 $3.084 \times 10^{-2}$ $3.603 \times 10^{-2}$ $6.234 \times 10^{-2}$ 0.9 : Relative errors on $u $ at different resolutions, Läuter test case with off-centered Crank-Nicolson, $\theta=0.6$.[]{data-label="tlauterconvrate_u_tab_thet06"} $N_x \times N_y$ $\Delta t \ [s]$ $l_1(v)$ $l_2(v)$ $l_{\infty}(v)$ $q_2^{emp}$ --------------------------- ------------------- ------------------------ ------------------------ ------------------------ ------------- $10 \times \hspace{2mm}5$ 3600 $3.608 \times 10^{-1}$ $3.665 \times 10^{-1}$ $5.166 \times 10^{-1}$ - $20 \times 10 $ 1800 $2.164 \times 10^{-1}$ $2.198 \times 10^{-1}$ $3.041 \times 10^{-1}$ 0.7 $40 \times 20 $ 900 $1.185 \times 10^{-1}$ $1.203 \times 10^{-1}$ $1.671 \times 10^{-1}$ 0.9 $80 \times 40 $ 450 $6.195 \times 10^{-2}$ $6.291 \times 10^{-2}$ $8.809 \times 10^{-2}$ 0.9 : Relative errors on $v $ at different resolutions, Läuter test case with off-centered Crank-Nicolson, $\theta=0.6$.[]{data-label="tlauterconvrate_v_tab_thet06"}

Zonal flow over an isolated mountain {#test5}
------------------------------------

We have then performed numerical simulations reproducing test case 5 of [@williamson:1992], given by a zonal flow impinging on an isolated mountain of conical shape. The geostrophic balance here is broken by orographic forcing, which results in the development of a planetary wave propagating all around the globe. Plots of the fluid depth $h$ as well as of the velocity components $u$ and $v$ at 15 days are shown in figures \[fig:t5\_h\]-\[fig:t5\_v\]. The resolution used corresponds to a mesh of $60 \times 30$ elements with $p^h = 4,$ $p^u = 5,$ and $\Delta t = 900 \ \text{s},$ giving a Courant number $C_{cel} \approx 58$ in elements close to the poles. It can be observed that all the main features of the flow are correctly reproduced. In particular, no significant Gibbs phenomena are detected in the vicinity of the mountain, even in the initial stages of the simulation.
![$h$ field after 15 days, isolated mountain wave test case, $C_{cel} \approx 58.$ Contour lines spacing is 50 m.[]{data-label="fig:t5_h"}](figures/wil92_t5/15_days_noada/label/eta_contours.eps){width="\textwidth"} ![$u$ field after 15 days, isolated mountain wave test case, $C_{cel} \approx 58.$ Contour lines spacing is 4 m s$^{-1}$.[]{data-label="fig:t5_u"}](figures/wil92_t5/15_days_noada/label/u_contours.eps){width="\textwidth"} ![$v$ field after 15 days, isolated mountain wave test case, $C_{cel} \approx 58.$ Contour lines spacing is 4 m s$^{-1}$.[]{data-label="fig:t5_v"}](figures/wil92_t5/15_days_noada/label/v_contours.eps){width="\textwidth"} The evolution in time of global invariants during this simulation is shown in figures \[fig:test5\_invariant\_mass\], \[fig:test5\_invariant\_energy\], \[fig:test5\_invariant\_enstrophy\], respectively. Since, as observed in [@jakob:1995], the National Center for Atmospheric Research (NCAR) spectral model incorporates diffusion terms in the governing equations, while the proposed SISLDG scheme employs no diffusion terms, no filtering, and no smoothing of the topography, it seemed more appropriate for this test to compute relative errors at an earlier time, $t_f=5$ days, when it can be assumed that the effects of diffusion have less impact. Error norms for $h$ and $u$ at different resolutions (corresponding to $C_{cel} \approx 6$) with $p^h=p^u=3$ have therefore been computed at $t_f=5$ days with respect to a reference solution given by the NCAR spectral model [@jakob:1995] at resolution T511; they are displayed in tables \[tab:t5convrate\_h\] - \[tab:t5convrate\_u\]. The second order accuracy in time of the proposed SISLDG scheme is apparent.
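The invariant histories just mentioned are the relative drifts $J(q^n)$ defined at the beginning of this section; a minimal sketch of such a diagnostic, with the shallow water energy density as an example, follows (hypothetical names; the value of $g$ is the one customarily used in these benchmarks and is an assumption here):

```python
import numpy as np

G = 9.80616  # gravitational acceleration [m s^-2], assumed benchmark value

def invariant_drift(q_n, q_0, w):
    """Relative drift J(q^n) of a conserved density q, from nodal values
    at times t^n and t^0 and quadrature weights w."""
    I = lambda f: np.sum(w * f) / np.sum(w)
    return (I(q_n) - I(q_0)) / I(q_0)

def energy_density(h, u, v, b):
    """Total energy density q_energ = (h*|u|^2 + g*(h^2 - b^2)) / 2."""
    return 0.5 * (h * (u ** 2 + v ** 2) + G * (h ** 2 - b ** 2))
```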
Error norms for $h$ and $u$ have been computed at $t_f=15$ days at different resolutions (corresponding to a $C_{cel} \approx 7$), $p^h=p^u=3,$ and displayed in tables \[tab:t5convrate\_h\_15dd\] - \[tab:t5convrate\_u\_15dd\]. $N_x \times N_y$ $\Delta t \text{[min]}$ $l_1(h)$ $l_2(h)$ $l_{\infty}(h)$ $q_2^{emp}$ --------------------------- ------------------------- ----------------------- ----------------------- ----------------------- ------------- $12 \times \hspace{2mm}6$ 20 $8.19 \times 10^{-4}$ $1.08 \times 10^{-3}$ $5.90 \times 10^{-3}$ - $24 \times 12 $ 10 $1.49 \times 10^{-4}$ $2.08 \times 10^{-4}$ $1.92 \times 10^{-3}$ 2.4 $48 \times 24 $ 5 $2.88 \times 10^{-5}$ $4.25 \times 10^{-5}$ $8.40 \times 10^{-4}$ 2.3 : Relative errors on $h $ at different resolutions, isolated mountain wave test case, $t_f=5$ days.[]{data-label="tab:t5convrate_h"} $N_x \times N_y$ $\Delta t \text{[min]}$ $l_1(u)$ $l_2(u)$ $l_{\infty}(u)$ $q_2^{emp}$ --------------------------- ------------------------- ----------------------- ----------------------- ----------------------- ------------- $12 \times \hspace{2mm}6$ 20 $4.33 \times 10^{-2}$ $5.81 \times 10^{-2}$ $1.39 \times 10^{-1}$ - $24 \times 12 $ 10 $5.70 \times 10^{-3}$ $7.33 \times 10^{-3}$ $1.06 \times 10^{-1}$ 2.9 $48 \times 24 $ 5 $1.11 \times 10^{-3}$ $1.72 \times 10^{-3}$ $1.56 \times 10^{-2}$ 2.2 : Relative errors on $u $ at different resolutions, isolated mountain wave test case, $t_f=5$ days.[]{data-label="tab:t5convrate_u"} $N_x \times N_y$ $\Delta t \text{[s]}$ $l_1(h)$ $l_2(h)$ $l_{\infty}(h)$ $q_2^{emp}$ --------------------------- ----------------------- ----------------------- ----------------------- ----------------------- ------------- $12 \times \hspace{2mm}6$ 1500 $2.34 \times 10^{-3}$ $2.92 \times 10^{-3}$ $1.49 \times 10^{-2}$ - $24 \times 12 $ 750 $5.99 \times 10^{-4}$ $7.72 \times 10^{-4}$ $3.87 \times 10^{-3}$ 1.9 $48 \times 24 $ 375 $2.00 \times 10^{-4}$ $2.74 \times 10^{-4}$ $1.87 \times 10^{-3}$ 1.5 : 
Relative errors on $h $ at different resolutions, isolated mountain wave test case, $t_f=15$ days.[]{data-label="tab:t5convrate_h_15dd"} $N_x \times N_y$ $\Delta t \text{[s]}$ $l_1(u)$ $l_2(u)$ $l_{\infty}(u)$ $q_2^{emp}$ --------------------------- ----------------------- ----------------------- ----------------------- ----------------------- ------------- $12 \times \hspace{2mm}6$ 1500 $1.12 \times 10^{-1}$ $1.29 \times 10^{-1}$ $2.97 \times 10^{-1}$ - $24 \times 12 $ 750 $2.09 \times 10^{-2}$ $2.37 \times 10^{-2}$ $5.73 \times 10^{-2}$ 2.4 $48 \times 24 $ 375 $6.37 \times 10^{-3}$ $7.92 \times 10^{-3}$ $3.39 \times 10^{-2}$ 1.6 : Relative errors on $u $ at different resolutions, isolated mountain wave test case, $t_f=15$ days.[]{data-label="tab:t5convrate_u_15dd"} Finally, the mountain wave test case has been run on the same mesh of $60 \times 30 $ elements, $\Delta t = 900 $ s, with either static or static plus dynamic adaptivity. The tolerance $\epsilon$ for the dynamic adaptivity [@tumolo:2013] has been set to $\epsilon = 10^{-2}$. Results are reported in terms of error norms with respect to a nonadaptive solution at the maximum uniform resolution and in terms of efficiency gain, measured through the average fraction of linear solver iterations required per time step, $\Delta_{iter}^{average},$ as well as the average fraction of degrees of freedom actually used per time step, $\Delta_{dof}^{average}$; these results are summarized in tables \[t5\_adaptivity\_h\_tab\] - \[t5\_adaptivity\_v\_tab\]: the use of static adaptivity only resulted in $\Delta_{iter}^{average} \approx 10.7\%$ and $\Delta_{dof}^{average} \approx 88\%,$ while the use of both static and dynamic adaptivity led to $\Delta_{iter}^{average} \approx 13\%$ and $\Delta_{dof}^{average} \approx 45\%.$ The distribution of the statically and dynamically adapted local polynomial degree used to represent the solution after 15 days is shown in figure \[fig:t5\_ph\].
It can be observed that, even after 15 days, higher polynomial degrees are still automatically concentrated around the location of the mountain. $\text{adaptivity}$ $l_1(h)$ $l_2(h)$ $l_{\infty}(h)$ --------------------------- ------------------------ ------------------------ ------------------------ $\text{static}$ $1.415 \times 10^{-4}$ $3.314 \times 10^{-4}$ $2.117 \times 10^{-3}$ $\text{static + dynamic}$ $1.660 \times 10^{-4}$ $3.419 \times 10^{-4}$ $2.038 \times 10^{-3}$ : Relative errors between (statically and statically plus dynamically) adaptive and nonadaptive solution for isolated mountain wave test case, $h $ field.[]{data-label="t5_adaptivity_h_tab"} $\text{adaptivity}$ $l_1(u)$ $l_2(u)$ $l_{\infty}(u)$ --------------------------- ------------------------ ------------------------ ------------------------ $\text{static}$ $1.289 \times 10^{-2}$ $3.275 \times 10^{-2}$ $1.524 \times 10^{-1}$ $\text{static + dynamic}$ $1.509 \times 10^{-2}$ $3.309 \times 10^{-2}$ $1.475 \times 10^{-1}$ : Relative errors between (statically and statically plus dynamically) adaptive and nonadaptive solution for isolated mountain wave test case, $u $ field.[]{data-label="t5_adaptivity_u_tab"} $\text{adaptivity}$ $l_1(v)$ $l_2(v)$ $l_{\infty}(v)$ --------------------------- ------------------------ ------------------------ ------------------------ $\text{static}$ $2.501 \times 10^{-2}$ $6.824 \times 10^{-2}$ $6.597 \times 10^{-1}$ $\text{static + dynamic}$ $2.833 \times 10^{-2}$ $7.019 \times 10^{-2}$ $6.975 \times 10^{-1}$ : Relative errors between (statically and statically plus dynamically) adaptive and nonadaptive solution for isolated mountain wave test case, $v $ field.[]{data-label="t5_adaptivity_v_tab"}

Rossby-Haurwitz wave {#rossby}
--------------------

We have then considered test case 6 of [@williamson:1992], where the initial datum consists of a Rossby-Haurwitz wave of wavenumber 4.
This case actually concerns a solution of the nondivergent barotropic vorticity equation, which is not an exact solution of the system (\[continuityeq\]) - (\[vectmomentumeq\]). For a discussion about the stability of this profile as a solution of (\[continuityeq\]) - (\[vectmomentumeq\]) see [@thuburn:2000]. Plots of the fluid depth $h$ as well as of the velocity components $u$ and $v$ at 15 days are shown in figures \[fig:t6\_h\]-\[fig:t6\_v\]. The resolution used corresponds to a mesh of $64 \times 32$ elements with $p^h = 4,$ $p^u = 5,$ and $\Delta t = 900 \ \text{s},$ giving a Courant number $C_{cel} \approx 83$ in elements close to the poles. It can be observed that all the main features of the flow are correctly reproduced. ![ $h$ field after 15 days, Rossby-Haurwitz wave test case, $C_{cel} \approx 83.$ Contour lines spacing is 100 m.[]{data-label="fig:t6_h"}](figures/wil92_t6/15_days_noada/eta_contours.eps){width="\textwidth"} ![$u$ field after 15 days, Rossby-Haurwitz wave test case, $C_{cel} \approx 83.$ Contour lines spacing is 8 m s$^{-1}$.[]{data-label="fig:t6_u"}](figures/wil92_t6/15_days_noada/u_contours.eps){width="\textwidth"} ![$v$ field after 15 days, Rossby-Haurwitz wave test case, $C_{cel} \approx 83.$ Contour lines spacing is 8 m s$^{-1}$.[]{data-label="fig:t6_v"}](figures/wil92_t6/15_days_noada/v_contours.eps){width="\textwidth"} The evolution in time of global invariants during this simulation is shown in figures \[fig:test6\_invariant\_mass\], \[fig:test6\_invariant\_energy\], \[fig:test6\_invariant\_enstrophy\], respectively. Error norms for $h$ and $u$ at different resolutions (corresponding to $C_{cel} \approx 32$) with $p^h=4, p^u=5$ have been computed at $t_f=15$ days and are displayed in tables \[tab:t6convrate\_h\] - \[tab:t6convrate\_u\], with respect to a reference solution given by the National Center for Atmospheric Research (NCAR) spectral model [@jakob:1995] at resolution T511.
The second order accuracy in time of the proposed SISLDG scheme is apparent. Unlike the NCAR spectral model, the proposed SISLDG scheme does not employ any explicit numerical diffusion. $N_x \times N_y$ $\Delta t \text{[min]}$ $l_1(h)$ $l_2(h)$ $l_{\infty}(h)$ $q_2^{emp}$ --------------------------- ------------------------- ----------------------- ----------------------- ----------------------- ------------- $10 \times \hspace{2mm}5$ 60 $2.92 \times 10^{-2}$ $3.82 \times 10^{-2}$ $6.75 \times 10^{-2}$ - $20 \times 10 $ 30 $5.50 \times 10^{-3}$ $6.80 \times 10^{-3}$ $1.11 \times 10^{-2}$ 2.4 $40 \times 20 $ 15 $1.40 \times 10^{-3}$ $1.80 \times 10^{-3}$ $3.20 \times 10^{-3}$ 2.0 : Relative errors on $h $ at different resolutions, Rossby-Haurwitz wave test case.[]{data-label="tab:t6convrate_h"} $N_x \times N_y$ $\Delta t \text{[min]}$ $l_1(u)$ $l_2(u)$ $l_{\infty}(u)$ $q_2^{emp}$ --------------------------- ------------------------- ------------------------ ------------------------ ------------------------ ------------- $10 \times \hspace{2mm}5$ 60 $4.065 \times 10^{-1}$ $3.775 \times 10^{-1}$ $2.305 \times 10^{-1}$ - $20 \times 10 $ 30 $7.79 \times 10^{-2}$ $7.33 \times 10^{-2}$ $5.67 \times 10^{-2}$ 2.4 $40 \times 20 $ 15 $2.04 \times 10^{-2}$ $1.95 \times 10^{-2}$ $1.76 \times 10^{-2}$ 1.9 : Relative errors on $u $ at different resolutions, Rossby-Haurwitz wave test case.[]{data-label="tab:t6convrate_u"} Finally, the Rossby-Haurwitz wave test case has been run on the same mesh of $64 \times 32 $ elements, $\Delta t = 900 $ s, with either static or static plus dynamic adaptivity. The tolerance $\epsilon$ for the dynamic adaptivity [@tumolo:2013] has been set to $\epsilon = 5 \times 10^{-2}$.
Results are reported in terms of error norms with respect to a nonadaptive solution at the maximum uniform resolution and in terms of efficiency gain, measured through the average fraction of linear solver iterations required per time step, $\Delta_{iter}^{average},$ as well as the average fraction of degrees of freedom actually used per time step, $\Delta_{dof}^{average}$; these results are summarized in tables \[t6\_adaptivity\_h\_tab\] - \[t6\_adaptivity\_v\_tab\]: the use of static adaptivity only resulted in $\Delta_{iter}^{average} \approx 10.7\%$ and $\Delta_{dof}^{average} \approx 88\%,$ while the use of both static and dynamic adaptivity led to $\Delta_{iter}^{average} \approx 13\%$ and $\Delta_{dof}^{average} \approx 45\%.$ The distribution of the statically and dynamically adapted local polynomial degree used to represent the solution after 15 days is shown in figure \[fig:t6\_ph\]. It can be observed that, even after 15 days, and even though the maximum allowed $p^h$ is 4, the use of the adaptivity criterion with $\epsilon = 5 \times 10^{-2}$ leads to the use of at most cubic polynomials for the local representation of $h.$ $\text{adaptivity}$ $l_1(h)$ $l_2(h)$ $l_{\infty}(h)$ --------------------------- ------------------------ ------------------------ ------------------------ $\text{static}$ $2.182 \times 10^{-4}$ $3.434 \times 10^{-4}$ $2.856 \times 10^{-4}$ $\text{static + dynamic}$ $2.358 \times 10^{-3}$ $2.963 \times 10^{-3}$ $5.157 \times 10^{-3}$ : Relative errors between (statically and statically plus dynamically) adaptive and nonadaptive solution for Rossby-Haurwitz wave test case, $h $ field.[]{data-label="t6_adaptivity_h_tab"} $\text{adaptivity}$ $l_1(u)$ $l_2(u)$ $l_{\infty}(u)$ --------------------------- ------------------------ ------------------------ ------------------------ $\text{static}$ $7.041 \times 10^{-3}$ $1.236 \times 10^{-2}$ $2.834 \times 10^{-2}$ $\text{static + dynamic}$ $3.639 \times 10^{-2}$ $3.387 \times 10^{-2}$ $2.678 \times
10^{-2}$ : Relative errors between (statically and statically plus dynamically) adaptive and nonadaptive solution for Rossby-Haurwitz wave test case, $u $ field.[]{data-label="t6_adaptivity_u_tab"} $\text{adaptivity}$ $l_1(v)$ $l_2(v)$ $l_{\infty}(v)$ --------------------------- ------------------------ ------------------------ ------------------------ $\text{static}$ $3.158 \times 10^{-3}$ $3.250 \times 10^{-3}$ $1.148 \times 10^{-2}$ $\text{static + dynamic}$ $2.723 \times 10^{-2}$ $2.432 \times 10^{-2}$ $2.646 \times 10^{-2}$ : Relative errors between (statically and statically plus dynamically) adaptive and nonadaptive solution for Rossby-Haurwitz wave test case, $v $ field.[]{data-label="t6_adaptivity_v_tab"}

Nonhydrostatic inertia gravity waves {#nh_igw}
------------------------------------

In this section we consider the test case proposed in [@skamarock:1994]. It consists of a set of inertia-gravity waves propagating in a channel with a uniformly stratified reference atmosphere characterized by a constant Brunt-Väisälä frequency $N = 0.01$ s$^{-1}$. The domain and the initial and boundary conditions are identical to those of [@skamarock:1994]. The initial perturbation in potential temperature radiates symmetrically to the left and to the right but, because of the superimposed mean horizontal flow ($u=20 $ m/s), does not remain centered around the initial position. Contours of the potential temperature perturbation, horizontal velocity and vertical velocity at time $t_f=3000 $ s are shown in figures \[fig:igw\_pottemp\], \[fig:igw\_u\], \[fig:igw\_w\], respectively. The computed results compare well with the structure displayed by the analytical solution of the linearized equations proposed in [@baldauf:2013] and with numerical results obtained with other numerical methods, see e.g. [@bonaventura:2000].
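For reference, the initial potential temperature perturbation of this test is commonly specified as $\theta'(x,z) = \theta_c \sin(\pi z/H) \, / \, \bigl(1 + (x-x_c)^2/a^2\bigr)$; the parameter values in the sketch below are the standard ones quoted in the literature for this setup, not taken from the present text, and should be treated as assumptions:

```python
import math

# Standard Skamarock-Klemp inertia-gravity wave parameters (assumed):
# channel of 300 km x 10 km, perturbation amplitude 0.01 K,
# half-width 5 km, perturbation centered at x_c = 100 km.
THETA_C = 0.01   # perturbation amplitude [K]
A = 5.0e3        # half-width [m]
X_C = 100.0e3    # center of the perturbation [m]
H = 10.0e3       # channel height [m]

def theta_perturbation(x, z):
    """Initial potential temperature perturbation theta'(x, z) [K]."""
    return THETA_C * math.sin(math.pi * z / H) / (1.0 + ((x - X_C) / A) ** 2)
```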
It is to be remarked that for this experiment $300\times 10$ elements, $p^{\pi}=4,$ $p^{u}=5$ and a time step $\Delta t = 15 $ s were used, corresponding to a Courant number $C_{snd} \approx 25.$

Rising thermal bubble {#warm_bubble}
---------------------

As a nonlinear, nonhydrostatic, time-dependent experiment, we consider in this section the test case proposed in [@carpenter:1990]. It consists of the evolution of a warm bubble placed in an isentropic atmosphere at rest. All data are as in [@carpenter:1990]. Contours of the potential temperature perturbation at different times are shown in figure \[fig:bubble\_w\_bn\_nofill\]. These results were obtained using $64\times 80$ elements, $p^{\pi}=4,$ $p^{u}=5$ and a time step $\Delta t = 0.5 $ s, corresponding to a Courant number $C_{snd} \approx 17.$ [![Contours (every 0.2 K and the zero contour is omitted) of perturbation potential temperature in the rising thermal bubble test at time 10 min, 14 min, 15 min and 16 min respectively in clockwise sense.[]{data-label="fig:bubble_w_bn_nofill"}](figures/wbubbleMsl60_Tpt_600s_15m_bn.eps "fig:"){width=".55\linewidth"}]{} [![Contours (every 0.2 K and the zero contour is omitted) of perturbation potential temperature in the rising thermal bubble test at time 10 min, 14 min, 15 min and 16 min respectively in clockwise sense.[]{data-label="fig:bubble_w_bn_nofill"}](figures/wbubbleMsl60_Tpt_840s_15m_bn.eps "fig:"){width=".55\linewidth"}]{}\ [![Contours (every 0.2 K and the zero contour is omitted) of perturbation potential temperature in the rising thermal bubble test at time 10 min, 14 min, 15 min and 16 min respectively in clockwise sense.[]{data-label="fig:bubble_w_bn_nofill"}](figures/wbubbleMsl60_Tpt_900s_15m_bn.eps "fig:"){width=".55\linewidth"}]{} [![Contours (every 0.2 K and the zero contour is omitted) of perturbation potential temperature in the rising thermal bubble test at time 10 min, 14 min, 15 min and 16 min respectively in clockwise
sense.[]{data-label="fig:bubble_w_bn_nofill"}](figures/wbubbleMsl60_Tpt_960s_15m_bn.eps "fig:"){width=".55\linewidth"}]{}

Conclusions and future perspectives {#conclu}
===================================

We have introduced an accurate and efficient discretization approach for typical model equations of atmospheric flows. We have extended to spherical geometry the techniques proposed in [@tumolo:2013], combining a semi-Lagrangian approach with the TR-BDF2 semi-implicit time discretization method and with a spatial discretization based on adaptive discontinuous finite elements. The resulting method is unconditionally stable and has full second order accuracy in time, thus improving on standard off-centered trapezoidal rule discretizations without any major increase in computational cost or loss of stability, while allowing the use of time steps up to 100 times larger than those required by stability for explicit methods applied to corresponding DG discretizations. The method also has arbitrarily high order accuracy in space and can effectively adapt the number of degrees of freedom employed in each element in order to balance accuracy and computational cost. The $p-$adaptivity approach employed does not require remeshing and is especially suitable for applications, such as numerical weather prediction, in which a large number of physical quantities is associated with a given mesh. Furthermore, although the proposed method can be implemented on arbitrary unstructured and nonconforming meshes, like the reduced Gaussian grids employed by spectral transform models, even in applications on simple Cartesian meshes in spherical coordinates the $p-$adaptivity approach can effectively cure the pole problem by reducing the polynomial degree in the polar elements, yielding a reduction in the computational cost that is comparable to that achieved with reduced grids.
Numerical simulations of classical shallow water and non-hydrostatic benchmarks have been employed to validate the method and to demonstrate its capability to achieve accurate results even at large Courant numbers, while reducing the computational cost thanks to the adaptivity approach. The proposed numerical framework can thus provide the basis for an accurate and efficient adaptive weather prediction system.

Acknowledgements {#acknowledgements .unnumbered}
================

This research work has been supported financially by The Abdus Salam International Center for Theoretical Physics, Earth System Physics Section. We are extremely grateful to Filippo Giorgi of ICTP for his strong interest in our work and his continuous support. Financial support has also been provided by the INDAM-GNCS 2012 project *Sviluppi teorici ed applicativi dei metodi Semi-Lagrangiani* and by Politecnico di Milano. We would also like to acknowledge useful conversations on the topics of this paper with C. Erath, F. X. Giraldo, M. Restelli, and N. Wood.
---
abstract: 'Motivated by the ring of integers of cyclic number fields of prime degree, we introduce the notion of Lagrangian lattices. Furthermore, given an arbitrary non-trivial lattice $\L$ we construct a family of full-rank sub-lattices $\{\L_{\alpha}\}$ of $\L$ such that whenever $\L$ is Lagrangian it can be easily checked whether or not $\L_{\alpha}$ has a basis of minimal vectors. In this case, a basis of minimal vectors of $\L_{\alpha}$ is given.'
author:
- 'Mohamed Taoufiq Damir[^1],  Guillermo Mantilla-Soler[^2]'
bibliography:
- 'ref.bib'
title: 'Bases of Minimal Vectors in Lagrangian Lattices.'
---

Introduction and Background
============================

In [@conway1995lattice] Conway and Sloane constructed the first example of an $11$-dimensional lattice that is generated by its minimal vectors but in which no set of $N$ minimal vectors forms a basis. This construction implies that such lattices exist in every dimension $N\geq 11$. In [@martinet1] and [@martinet2], Martinet showed that a lattice of dimension $N\leq 8$ that is generated by its minimal vectors also has a basis of minimal vectors. This study was completed in [@martinet3] by Martinet and Schürmann, where the authors showed a similar result for $9$-dimensional lattices and provided a counter-example in dimension $10$. Thus, the set $S(\L)$ of minimal vectors of a lattice $\L$ can span $\L$ over ${\mathbb{Z}}$ without containing a subset of $N$ vectors that forms a basis of $\L$. If ${\mathbb{R}}^N=\spn_{{\mathbb{R}}}(S(\L))$ then $\L$ is called *well-rounded*. A stronger condition is that $\L=\spn_{{\mathbb{Z}}}(S(\L))$; in this case $\L$ is called *strongly well-rounded*. A still stronger condition is that $\L$ has a basis of minimal vectors. Computationally, checking the weakest of these conditions, i.e., well-roundedness, for an arbitrary lattice is an $NP$-hard problem [@khot2005hardness].
Note that this is equivalent to finding $S(\L)$, the set of minimal vectors of $\L$. Up to dimension $4$, the three conditions are equivalent. Thus, we will first give a short overview of the weakest of the previously mentioned properties, namely, the well-roundedness property. Then we will proceed to the study of Lagrangian lattices and their full-rank sub-lattices with a minimal basis. Well-rounded lattices appear in various arithmetic and geometric problems, in particular in discrete geometry, number theory and topology. For example, a classical theorem due to Voronoi [@voronoi] implies that the local maxima of the sphere packing function are all realized at well-rounded lattices. In [@mcmullen] well-rounded lattices have been investigated in the context of the Minkowski conjecture. Furthermore, topological properties of the set of well-rounded lattices have also been of interest. To name just a few examples, in [@ash], Ash proved that the space of all (unimodular) lattices retracts to the space of well-rounded lattices. More recently, a result by Solan [@solan2019stable] states that for any lattice $L$ there exists a unimodular diagonal real matrix $a$ with positive entries such that $a\cdot L$ is a well-rounded lattice. In addition to their arithmetic and geometric appeal, well-rounded lattices are studied in communication theory, in particular in physical layer communication reliability [@gnilke] and security [@Damir]. It is also worth mentioning that most lattice-based cryptographic protocols rely on the hardness of the shortest vector problem. On the other hand, the problem of determining all the successive minima of an arbitrary lattice is believed to be strictly harder [@micciancio]. However, if the lattice is well-rounded, these two problems are equivalent.
Hence, from a theoretical point of view as well as a practical one, it is of interest to explicitly construct well-rounded lattices and to study when a given lattice has a well-rounded sub-lattice or a sub-lattice generated by its minimal vectors. It turns out, see §\[TopoLattices\] for details, that among all lattices well-rounded lattices are scarce. So in a probabilistic sense, such lattices are difficult to find. Studying the geometric structure of well-rounded sub-lattices, strongly well-rounded sub-lattices, and sub-lattices with a minimal basis is a non-trivial question that has been investigated by several authors; for instance, in dimension $2$ work in this direction has been done in [@fukshansky], [@baake] and [@kuhnlein]. One example of an infinite family of well-rounded lattices, in fact lattices with a minimal basis, comes from the study of the ring of integers of Galois number fields of prime degree. Suppose that $p$ is an odd prime and that $K$ is a degree $p$ tame Galois number field. In [@sueli] and [@oliviera] the authors give a set of conditions on positive integers $m \equiv 1 \pmod{p}$ so that the sub-lattice of $O_{K}$ given by $\{x \in O_{K}: {\rm Tr}_{K/{\mathbb{Q}}}(x) \equiv 0\pmod{m}\}$ has a minimal basis. One of the key properties of $O_{K}$ behind such a result is a theorem of Conner and Perlis showing that $O_{K}$ has a [*Lagrangian basis*]{}.[^3] Motivated by the results of Conner and Perlis, see [@CoPe IV.8], we define the notion of [*Lagrangian lattice*]{} (see Definition \[Lagrangian\]). Moreover, given an arbitrary non-trivial lattice $\L$ we construct a family of sub-lattices of it such that whenever $\L$ is Lagrangian we can give easy conditions to decide if a given sub-lattice in the family has a minimal basis. More explicitly, let $\L \subseteq {\mathbb{R}}^{N}$ be a Lagrangian lattice with Lagrangian basis $\{e_{1},...,e_{N}\}$.
Let $a:=\langle e_{1}, e_{1}\rangle$ and $h=-\langle e_{1}, e_{2}\rangle.$ Let $r, s$ be integers such that $0 \neq |r| <N$ and let $m:=r+sN.$ Suppose that $$\frac{Na-1}{N^2-1}\leq \left(\frac{m}{r}\right)^2\leq \frac{(aN-1)(N+1)}{N-1}.$$ Then the lattice $\L_{\v_1}^{(r,s)}$ is a sub-lattice of $\L$ of index $m|r|^{N-1}$, minimum $$\lambda_1(\L_{\v_1}^{(r,s)})=ar^2+\frac{m^2-r^2}{N}$$ and basis of minimal vectors $$\{re_1+s\v_1,re_2+s\v_1,\dots,re_N+s\v_1\}.$$ Suppose that $K$ is a Galois number field of prime degree $p$, unramified at $p$. If one uses the theorem above with $r=1$, over the Lagrangian lattice $O_{K}$, then the results of [@sueli] and [@oliviera] are recovered.

General background on Lattices {#BackgroundLattices}
==============================

A *lattice* $\L$ in $\mathbb{R}^N$ is a discrete additive subgroup of $\mathbb{R}^N$. If $t$ is the dimension of the sub-space generated by $\L$, it can be shown that $\L$ is a free ${\mathbb{Z}}$-module of rank $t$. In other words, a rank $t$ lattice in ${\mathbb{R}}^{N}$ is a set of the form $$\label{def} \L =\Big\{\sum_{i=1}^{t}a_i e_i~|~a_i\in\mathbb{Z}\Big\},$$ where $\{e_1,\dots,e_t\} \subseteq {\mathbb{R}}^{N}$ is a set of linearly independent vectors. We call $ M:= [e_1 | \cdots | e_t]$ a *generator matrix* of $\L$, where the vectors $e_1,\dots,e_t$ are considered as column vectors, i.e., $\L=M{\mathbb{Z}}^{t}$. The matrix $ G=M^T M$ is called a *Gram matrix* of $\L$. The lattice $\L$ is said to be [*integral*]{} if the matrix $G$ has all its entries in ${\mathbb{Z}}$. We say that $\L$ [*has full rank*]{} if $t=N$. If $\L$ is a full lattice its [*volume*]{}, or more precisely co-volume, is defined as $\operatorname{vol}(\L):=\sqrt{\det(G)}$. It can be shown that this definition is independent of the choice of Gram matrix. If $\L'$ is a full sub-lattice of $\L$ it can be shown that $$[\L:\L']=\frac{\operatorname{vol}(\L')}{\operatorname{vol}(\L)}.$$ In this paper we will deal mostly with full lattices.
Thus, from this point on, whenever we say lattice we mean a full lattice unless we explicitly state the contrary. Given a lattice $\L$, the quantity $\displaystyle \lambda_1(\L):= \min_{\bx \in \L \setminus \{\bo\}} \|\bx\|^2$ is called *the minimum of* $\L$. An important invariant in the study of the sphere packing problem is the *center density* of $\L$, defined by $$\delta(\L):=\frac{\lambda_1(\L)^{N/2}}{2^N\operatorname{vol}(\L)}.$$ The set of [*minimal vectors*]{} in $\L$ is the set of vectors of minimum norm, i.e., $$S(\L) := \{x\in \L: ||x||^2 = \lambda_1(\L) \}.$$ The cardinality of $S(\L)$ is known as the *kissing number* of $\L$. For a detailed exposition on lattices we refer the reader to [@conway2013sphere].

Well-rounded lattices {#TopoLattices}
---------------------

Now we are ready to proceed to the study of well-rounded lattices and lattices with a minimal basis. Let $n$ be a positive integer and let $\L \subset{\mathbb{R}}^n$ be a lattice.

- The lattice $\L$ is *well-rounded* (abbreviated [WR]{}) if $\spn_{\mathbb{R}}(S(\L)) = {\mathbb{R}}^n.$

- We say that $\L$ is [*strongly well-rounded*]{} (abbreviated [SWR]{}) if $\L=\spn_{{\mathbb{Z}}}(S(\L))$.

- The lattice $\L$ is said to have a minimal basis if $\L =\spn_{{\mathbb{Z}}} \{ \bv_1,\dots,\bv_k \},$ for some ${\mathbb{R}}$-linearly independent vectors $\bv_1,\dots,\bv_k$ in $S(\L)$.

Every SWR lattice is WR. The following example illustrates that the converse does not always hold. Let $n, k$ be positive integers with $n>k^2>1$. Let $\L$ be the lattice generated by ${\mathbb{Z}}^n$ and the vector $v=(1/k,\dots,1/k)$. By the hypothesis on $n$ and $k$, $S(\L)$ consists of the standard basis vectors, i.e., $\pm e_i=(0,\dots,\pm 1,\dots,0)$, and ${\mathbb{Z}}^{n} \neq \L$. In particular, $\L$ is WR. On the other hand $\spn_{{\mathbb{Z}}}(S(\L))={\mathbb{Z}}^n\neq \L$, so $\L$ is not SWR. We define two lattices $\L$ and $\L'$ to be *similar* if we can obtain $\L$ from $\L'$ using a rotation and a real dilation.
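For concreteness, the WR-but-not-SWR example above can be checked by brute force in a small instance. This is a plain-Python sketch, not a general short-vector algorithm: the instance $n=5$, $k=2$ and the coefficient window searched below are chosen by hand for this particular lattice.

```python
from itertools import product

# The lattice L is generated by Z^5 together with v = (1/2, ..., 1/2).
# Every lattice point is u + c*v with u in Z^5 and c in {0, 1}; the window
# {-1, 0, 1} for the entries of u suffices to find all minimal vectors here.
n, k = 5, 2
v = [1.0 / k] * n

def norm2(x):
    return sum(t * t for t in x)

vectors = []
for c in range(k):
    for u in product(range(-1, 2), repeat=n):
        x = [u[i] + c * v[i] for i in range(n)]
        if any(abs(t) > 1e-12 for t in x):      # skip the zero vector
            vectors.append(x)

lam1 = min(norm2(x) for x in vectors)
S = [x for x in vectors if abs(norm2(x) - lam1) < 1e-12]

print(lam1)    # 1.0 : the minimum is attained by the +-e_i
print(len(S))  # 10  : the 2n signed standard basis vectors
```

Since the ${\mathbb{Z}}$-span of these ten vectors is ${\mathbb{Z}}^5 \neq \L$, the lattice is WR but not SWR, as claimed.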
It is clear that the well-roundedness property is invariant under similarity, so it is natural to consider the space of lattices of fixed volume $1$, namely, $$\mathcal{S}_N= \SO_N({\mathbb{R}})\backslash \SL_N(\mathbb{R})/\SL_N(\mathbb{Z}).$$ It is well-known that $\mathcal{S}_N$ has a unique measure $\mu_N$ (Haar measure) that is right $\SL_N(\mathbb{R})$-invariant. Unfortunately, the set of WR lattices in $\mathcal{S}_N$ has measure zero with respect to $\mu_N$. Given a lattice $\L$, the $i^{th}$ successive minimum is $$\lambda_i(\L) = \inf\{r~|~ \dim({\rm span}(\L\cap \overline{B}(0, r)))\geq i\},$$ where $\overline{B}(0, r)$ is the closed ball of radius $r$ around $0$.\
It follows from the definitions that a lattice $\L$ is WR if and only if all its successive minima are equal. With this in mind, we can see the set of WR lattices of dimension $N$ as a set defined by $N-1$ (successive minima) equalities; this shows that the space of WR lattices is not a full-dimensional space in $\mathcal{S}_N$, hence the vanishing measure. The following figure illustrates $\mathcal{S}_2$ and $\mathcal{W}_2$, the sets of similarity classes of planar lattices and of well-rounded planar lattices, respectively.

\[w2\] [Figure: the fundamental domain for $\mathcal{S}_2$ in the upper half-plane, with the locus $\mathcal{W}_2$ of well-rounded planar lattices marked along the unit arc.]

Classical lattices from algebraic number theory
-----------------------------------------------

One of the most common sources of lattices coming from number theory is the family of so-called ideal lattices.
We briefly recall their construction since it is precisely a subset of such lattices that motivated the definitions of this paper. Let $N$ be a positive integer and let $K$ be a degree $N$ number field. Let ${\rm Hom}_{{\mathbb{Q}}-alg}(K,\mathbb{C})=\{\sigma_1,\dots,\sigma_{r_1},\tau_{1},\overline{\tau_{1}},\dots,\tau_{r_2},\overline{\tau_{r_2}}\}$ be the set of complex embeddings of $K$, where the $\sigma_{i}$ are the embeddings such that $\sigma_{i}(K) \subseteq {\mathbb{R}}$ and the $\{\tau_{j}, \overline{\tau_{j}}\}$ are the pairs with images outside ${\mathbb{R}}$. Let $\Re$ and $\Im$ be the real part and imaginary part functions on complex numbers. An element $\alpha \in K$ is called [*totally positive*]{} if $\sigma(\alpha)>0$ for all $\sigma \in {\rm Hom}_{{\mathbb{Q}}-alg}(K,\mathbb{C})$. Let $\alpha \in K$ be a totally positive element. The [*$\alpha$-twisted Minkowski embedding*]{} from $K$ into ${\mathbb{R}}^{N}$ is the ${\mathbb{Q}}$-linear map $j_{K,\alpha}: K \to {\mathbb{R}}^{N}$ defined by mapping $x$ to $$\left(\sqrt{\sigma_1(\alpha)}\sigma_{1}(x),\dots,\sqrt{\sigma_{r_1}(\alpha)}\sigma_{r_1}(x),\sqrt{\tau_{1}(\alpha)}\Re(\tau_{1}(x)),\sqrt{\tau_{1}(\alpha)}\Im(\tau_{1}(x)),\dots,\sqrt{\tau_{r_{2}}(\alpha)}\Re(\tau_{r_2}(x)),\sqrt{\tau_{r_{2}}(\alpha)}\Im(\tau_{r_2}(x))\right).$$ In the case that $\alpha=1$ the map $j_{K,\alpha}$ is denoted by $j_{K}$ and it is called the Minkowski embedding. It is a classical fact from the geometry of numbers that $j_{K,\alpha}(O_{K})$ is a full lattice in ${\mathbb{R}}^{N}$, hence the same is true for $j_{K,\alpha}(I)$ for any abelian sub-group $I$ of $O_{K}$ of finite index. In particular, this is the case for $I$ a non-zero ideal of $O_{K}$.
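As a concrete sketch of the (untwisted) Minkowski embedding, take the totally real field $K={\mathbb{Q}}(\sqrt{2})$ with ring of integers ${\mathbb{Z}}[\sqrt{2}]$; this specific field and the plain-Python helper below are our illustrative choices, not taken from the text.

```python
import math

SQRT2 = math.sqrt(2.0)

def j_K(x, y):
    """Image of x + y*sqrt(2) under the two real embeddings of Q(sqrt(2))."""
    return (x + y * SQRT2, x - y * SQRT2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Images of the Z-basis {1, sqrt(2)} of O_K; they span the full lattice
# j_K(O_K) in R^2.
b1 = j_K(1, 0)
b2 = j_K(0, 1)

# Gram matrix of the lattice.  Its entries agree (up to rounding) with the
# traces of products of basis elements: Tr(1)=2, Tr(sqrt(2))=0, Tr(2)=4.
G = [[dot(b1, b1), dot(b1, b2)],
     [dot(b2, b1), dot(b2, b2)]]
print([[round(g, 10) for g in row] for row in G])   # [[2.0, 0.0], [0.0, 4.0]]
```

That the Gram entries are trace pairings is special to totally real fields.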
If the field $K$ is totally real we have, as a simple calculation shows, that for all $x,y \in K$ $${\rm Tr}_{K/{\mathbb{Q}}}(\alpha xy)=\langle j_{K,\alpha}(x), j_{K,\alpha}(y) \rangle.$$ Hence, in this case, for any non-zero ideal $I \lhd O_{K}$ we can identify the lattice $j_{K,\alpha}(I) \subseteq {\mathbb{R}}^{N}$ with the ideal $I$ together with the bilinear form on it given by ${\rm Tr}_{K/{\mathbb{Q}}}(\alpha xy)$. The latter is sometimes denoted by $\langle I_{\alpha}, {\rm Tr}_{K/{\mathbb{Q}}}\rangle$ and it is called [*an ideal lattice.*]{}

Sub-lattices from co-dimension one linear maps {#sub}
==============================================

Let $N$ be a positive integer and let $\L\subset{\mathbb{R}}^N$ be a full lattice. Suppose that ${\rm T}: \L \to {\mathbb{Z}}$ is a non-trivial linear map. Let $\v_{1} \in \L \setminus \ker {\rm T}$.

\[A\] Let $r, s$ be integers and let $m:=r +s{\rm T}(\v_{1})$. The map $$\begin{aligned} \Phi_{(r,s)}: \L&\rightarrow \L\\ x &\mapsto rx+s{\rm T}(x)\v_1\end{aligned}$$ is linear. Moreover, $T \circ \Phi_{(r,s)} = [m] \circ {\rm T}$ where $[m]: {\mathbb{Z}}\to {\mathbb{Z}}$ is the usual multiplication by $m$ map. In particular, if $r$ and $m$ are different from zero the map $\Phi_{(r,s)}$ is an injection.

Since [T]{} is linear, $\Phi_{(r,s)}$ is a composition of linear maps, hence it is linear as well. To verify the second claim let $x \in \L$. Then, $${\rm T}(\Phi_{(r,s)}(x))={\rm T}(rx +s{\rm T}(x)\v_{1})=r{\rm T}(x)+s{\rm T}(\v_{1}){\rm T}(x)= m {\rm T}(x).$$ For the last claim let $x \in \ker(\Phi_{(r,s)})$. Since $m \neq 0$ it follows from the second claim that ${\rm T}(x)=0$; hence, by definition, $\Phi_{(r,s)}(x)=rx$, and since $\Phi_{(r,s)}(x)=0$ and $r\neq 0$ it follows that $x=0$.

\[CoroA\] Let $\L \subseteq {\mathbb{R}}^N$ be a full lattice, let ${\rm T} \in {\rm Hom}_{{\mathbb{Z}}}(\L, {\mathbb{Z}}) \setminus \{0\} $ and let $\v_{1} \in \L \setminus \ker {\rm T}$.
Let $r, s$ be integers, and define $m:=r +s{\rm T}(\v_{1})$. Suppose that $r$ satisfies

- $r \neq 0$

- $|r| < | {\rm T}(\v_{1}) |.$

Then $\Phi_{(r,s)}(\L)$ is a full sub-lattice of $\L$. Furthermore, $$\Phi_{(r,s)}(\L) \subseteq \L^{(m)}_{{\rm T}}:= \{x \in \L : {\rm T}(x) \equiv 0 \pmod{mn_{T}}\}$$ where $n_{T}$ is the size of the co-kernel of $T$, i.e., $[{\mathbb{Z}}: {\rm T}(\L)].$

This is just a reformulation of Proposition \[A\]. By the hypothesis we have that $r$ and $m$ are non-zero. Hence from the proposition we see that $\Phi_{(r,s)}(\L)$ and $\L$ have the same ${\mathbb{Z}}$-rank. The claimed inclusion follows from $T \circ \Phi_{(r,s)} = [m] \circ {\rm T}$.

\[ConditionsOnr\] By definition $m$ is given in terms of $r,s$ and ${\rm T}(\v_{1})$. The conditions on $r$ stated in the corollary are there to guarantee the reverse; in this case the lattice $\Phi_{(r,s)}(\L)$ is determined by $m$ and ${\rm T}(\v_{1})$, since $r$ is congruent to $m$ modulo ${\rm T}(\v_{1})$.

\[CongruencemLattice\] Let $\L \subseteq {\mathbb{R}}^N$ be a full lattice, let ${\rm T} \in {\rm Hom}_{{\mathbb{Z}}}(\L, {\mathbb{Z}}) \setminus \{0\} $ and let $n_{T}$ be the order of the co-kernel of $T$. For a given positive integer $m$ let $\L^{(m)}_{{\rm T}}:= \{x \in \L : {\rm T}(x) \equiv 0 \pmod{mn_{T}}\}$. Then, $\L^{(m)}_{T}$ is a full sub-lattice of $\L$ of index $m$.

We claim that $\L/\L^{(m)}_{T} \cong {\mathbb{Z}}/m{\mathbb{Z}}$, from which the result follows. Composing $T$ with the reduction map ${\mathbb{Z}}\to {\mathbb{Z}}/mn_{T}{\mathbb{Z}}$ we get a linear map $\L \to {\mathbb{Z}}/mn_{T}{\mathbb{Z}}$ with image $n_{T}{\mathbb{Z}}/mn_{T}{\mathbb{Z}}\cong {\mathbb{Z}}/m{\mathbb{Z}}$ and with kernel equal to $\L^{(m)}_{{\rm T}}$, hence proving our claim.

Let $\L, {\rm T }$ and $\v_{1}$ be as above. Let $r, s$ be integers.
The lattice $\L_{{\rm T},\v_1}^{(r,s)}$ is defined as the sub-lattice of $\L$ given by the image of $\Phi_{(r,s)}$, i.e., $$\L_{{\rm T},\v_1}^{(r,s)} := \Phi_{(r,s)}(\L).$$ It follows from Corollary \[CoroA\] and Lemma \[CongruencemLattice\] that, for $m=r+s{\rm T}(\v_{1})$, if $r\neq 0, |r| < |{\rm T}(\v_{1})|$ then we have a sequence of rank $N$ lattices $$\L_{{\rm T},\v_1}^{(r,s)} \subseteq \L^{(m)}_{{\rm T}} \subseteq \L$$ such that $[\L:\L_{ {\rm T},\v_1}^{(r,s)}]$ is a multiple of $m$. If $\L$ has a [*rigid basis with respect to [T]{}*]{}, meaning that ${\rm T}$ takes the same value on all the elements of the basis, it turns out that the multiplying factor is $|r|^{N-1}.$

\[CubicLatticeFirstEx\] Suppose $N \ge 2$ and let $\L ={\mathbb{Z}}^{N}$ be the standard cubic lattice. Let ${\rm T}: \L \to {\mathbb{Z}}$ be the map that sends a vector to the sum of its coordinates, i.e., [*the trace map*]{}. Notice that the standard basis of ${\mathbb{Z}}^{N}$ is rigid with respect to ${\rm T}$; all of the canonical vectors have trace equal to $1$. In this case the lattice $\L_{{\rm T}}^{(2)}$ is the lattice $\mathbb{D}_{N}$.

1. Choosing as $\v_{1}$ any vector with ${\rm T}(\v_{1})=3$, for instance $\v_{1}=[1,1,1,0...,0]^{t}$ if $N>2$ and $\v_{1}=[1,2]^{t}$ for $N=2$, we obtain that $\L_{ {\rm T},\v_1}^{(-1,1)}=\L_{{\rm T}}^{(2)}=\mathbb{D}_{N}$. This can be seen by calculating the image of the standard basis of ${\mathbb{Z}}^{N}$ under $\Phi_{(r,s)}$. Such image is the set $$\{[0,1,1,0,...,0]^t, [1,0,1,0,...,0]^t, [1,1,0,0,...,0]^t , [1,1,1,-1,...,0]^t,...,[1,1,1,0,...0,-1]^t \}$$ for $N >2$ and $\{[0,2]^{t}, [1,1]^{t}\}$ for $N=2$. In either case such set is a basis of $\mathbb{D}_{N}$.

2. Suppose that $N>2$. Choosing as $\v_{1}$ the vector $[1,...,1]^{t}$, which has trace equal to $N$, we see that for $m=2=r+sN$ the tuple $(r,s)$ is either $(2,0)$ or $(2-N, 1)$, which define the lattices $\L_{ {\rm T},\v_1}^{(2,0)}$ and $\L_{ {\rm T},\v_1}^{(2-N,1)}$ respectively.
In the former case the lattice $\L_{ {\rm T},\v_1}^{(2,0)}$ is $(2{\mathbb{Z}})^{N}$. In the latter case the structure of the lattice $\L_{ {\rm T},\v_1}^{(2-N,1)}$ varies with $N$. Similarly to the previous item, a basis for $\L_{ {\rm T},\v_1}^{(2-N,1)}$ can be calculated as the image of the standard basis of ${\mathbb{Z}}^{N}$ under $\Phi_{(r,s)}$, obtaining the basis $$\{[3-N,1,1,1,...,1]^t, [1,3-N,1,1,...,1]^t, ...,[1,1,...,1,3-N]^t \}.$$ For instance:

- If $N=3$ the lattice $\L_{ {\rm T},\v_1}^{(-1,1)}$ is isomorphic to the root lattice $\mathbb{A}_{3}$,

- If $N=4$ the lattice $\L_{ {\rm T},\v_1}^{(-2,1)}$ is isomorphic to the lattice $(2{\mathbb{Z}})^{4}$,

- If $N=5$ the lattice $\L_{ {\rm T},\v_1}^{(-3,1)}$ is isomorphic to the lattice $L_{9,5}$ defined in [@BaNe Theorem 4.1].

3. For $N=2$ and $\v_{1}=[1,1]^t$ the lattices $\L_{ {\rm T},\v_1}^{(-1,2)} , \L_{ {\rm T},\v_1}^{(1,1)}$ and $\L_{{\rm T}}^{(3)}$ are all isomorphic to $\mathbb{A}_{2}$.

\[Index\] Let $\L, {\rm T }$ and $\v_{1}$ be as above. Let $r, s$ be integers with $r\neq 0$ and $|r| < |{\rm T}(\v_{1})|$ and let $m:= r+s {\rm T}(\v_{1})$. Suppose that there exists a basis of $\L$ that is rigid with respect to ${\rm T}$. Then, $$[\L:\L_{ {\rm T},\v_1}^{(r,s)}]=m|r|^{N-1}.$$ In particular, $\L_{{\rm T},\v_1}^{(r,s)} = \L^{(m)}_{{\rm T}}$ if and only if $r=\pm 1$.

Let $\mathcal{B}=\{e_1,\dots,e_N\}$ be a basis of $\L$ that is rigid with respect to ${\rm T}$, i.e., ${\rm T }(e_{i})={\rm T }(e_{j})$ for all $1\leq i,j \leq N$. Thanks to Proposition \[A\] and Corollary \[CoroA\] the set $\mathcal{B}_{\Phi_{(r,s)}}=\{\Phi_{(r,s)}(e_1),\dots,\Phi_{(r,s)}(e_N)\}$ is a basis for $L_{{\rm T },\v_{1}}^{(r,s)}$. Hence, the index $[L:L_{{\rm T },\v_1}^{(r,s)}]$ is equal to $|\det(A)|$ where $A$ is the matrix representation of the basis $\mathcal{B}_{\Phi_{(r,s)}}$ in terms of the basis $\mathcal{B}$.
If $\displaystyle \v_1=\sum_{j=1}^{N}a_j e_j$ then, for all $1 \leq i \leq N$, we have that $$\Phi_{(r,s)}(e_i) = re_i +sT(e_i)\sum_{j=1}^{N}a_j e_j$$ hence $$A = \begin{bmatrix} r+sa_1{\rm T }(e_1) & s a_1{\rm T }(e_2) & \dots & sa_1{\rm T }(e_N) \\ sa_2{\rm T }( e_1) & r+sa_2{\rm T }( e_2) & \dots & sa_2{\rm T }(e_N) \\ \vdots & & \ddots & \vdots \\ sa_N{\rm T }( e_1) & s a_N{\rm T }(e_2) & \dots & r+s a_N{\rm T }( e_N) \end{bmatrix}.$$ Since $\mathcal{B}$ is rigid with respect to ${\rm T}$ the matrix $A$ is equal to $$A = \begin{bmatrix} r+sa_1{\rm T }(e_1) & s a_1{\rm T }(e_1) & \dots & sa_1{\rm T }(e_1) \\ sa_2{\rm T }( e_2) & r+sa_2{\rm T }( e_2) & \dots & sa_2{\rm T }(e_2) \\ \vdots & & \ddots & \vdots \\ sa_N{\rm T }( e_N) & s a_N{\rm T }(e_N) & \dots & r+s a_N{\rm T }( e_N) \end{bmatrix}.$$ Notice that the sum of the elements of an arbitrary column of $A$ is equal to $$r+s\sum_{i=1}^{N}a_{i}{\rm T}(e_{i})=r+s{\rm T}(\v_{1})=m.$$ Therefore, after adding all the rows of $A$ to the first row, factoring out $m$, and then subtracting from the $i$-th row $sa_{i}{\rm T}(e_{i})$ times the first, we see that $\det(A)=mr^{N-1}.$

\[Cong1\] Let $N \ge 2$ be an integer and let $K$ be a degree $N$ number field. Let $m$ be an integer such that $m\equiv \pm 1 \pmod{N}$. By the Minkowski embedding we can view $\L:=O_{K}$ as a lattice in ${\mathbb{R}}^{N}$, so for this lattice let $T: \L \to {\mathbb{Z}}$ be the trace map. Suppose that $K$ is tame, i.e., that there is no prime that ramifies wildly in $K$. Then, $$\{x \in O_{K}: {\rm Tr}_{K/{\mathbb{Q}}}(x) \equiv 0\pmod{m}\}= \L_{T,1}^{(\pm 1, s)}$$ where $s=\frac{m \mp 1}{N}.$

Since $K$ is tame the trace map $T: \L \to {\mathbb{Z}}$ is surjective, see [@narkiewicz2004elementary Corollary 5 to Theorem 4.24]. Hence the result follows from Proposition \[Index\] and Lemma \[CongruencemLattice\].
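The index formula of Proposition \[Index\] can be sanity-checked numerically on the rigid example $\L={\mathbb{Z}}^{N}$ with ${\rm T}$ the trace map. This is a sketch; the specific values $N=5$ and $(r,s)=(-3,1)$ below are arbitrary illustrative choices.

```python
from fractions import Fraction

# Check [L : L^{(r,s)}] = m * |r|^(N-1) on the standard cubic lattice Z^N
# with T = sum of coordinates (a rigid basis: every standard basis vector
# has trace 1).
N = 5
v1 = [1] * N                         # T(v1) = N

def T(x):
    return sum(x)

def Phi(x, r, s):
    """The map Phi_{(r,s)}: x -> r*x + s*T(x)*v1."""
    return [r * xi + s * T(x) * v1i for xi, v1i in zip(x, v1)]

def det(M):
    """Exact determinant via fraction-based Gaussian elimination."""
    M = [[Fraction(a) for a in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for i in range(n):
        p = next(k for k in range(i, n) if M[k][i] != 0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            M[k] = [M[k][j] - f * M[i][j] for j in range(n)]
        d *= M[i][i]
    return sign * d

r, s = -3, 1
m = r + s * T(v1)                    # m = 2
basis = [[1 if i == j else 0 for j in range(N)] for i in range(N)]
image = [Phi(e, r, s) for e in basis]

index = abs(det(image))
print(index, m * abs(r) ** (N - 1))  # 162 162
```

Here the image basis is the matrix $rI + sJ$ ($J$ the all-ones matrix), whose determinant $r^{N-1}(r+sN)$ gives the index directly.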
Construction
-------------

In this section we specialize the above construction to lattices that have properties motivated by the structure of the ring of integers of certain tame abelian totally real number fields. Suppose that $\L$ is a rank $N$ lattice such that $\L \cap \L^{*} \neq 0$, where $\L^{*}$ denotes the dual lattice of $\L$. This can be achieved, for instance, if $\L$ is integral. By definition any non-zero $\v_{1} \in \L \cap \L^{*} $ defines an element ${\rm T}_{\v_{1}} \in {\rm Hom}_{{\mathbb{Z}}}(\L, {\mathbb{Z}}) \setminus \{0\} $; namely $${\rm T}_{\v_{1}} (x)=\langle x, \v_{1} \rangle.$$ Let $\L$ and $\v_{1}$ be as above. Let $r, s$ be integers with $0 \neq |r| < {\rm T}(\v_{1})= \| \v_{1} \|^2$. The lattice $\L_{\v_1}^{(r,s)}$ is defined as $$\L_{\v_1}^{(r,s)}:=\L_{ {\rm T}_{\v_{1}},\v_1}^{(r,s)}.$$ From now on $\v_{1}$ will be a fixed non-zero element in $\L \cap \L^{*}$ and, unless clarification is necessary, we will denote the map ${\rm T}_{\v_{1}}$ by ${\rm T}$.

\[prop1\] Let $\L$, $\v_{1}$, $r$ and $s$ be as above. Let $m:=r+s {\rm T}(\v_{1})=r+s \|\v_{1}\|^{2}.$ For all $\alpha \in \L$ we have $$||\Phi_{(r,s)}(\alpha)||^2 = A ||\alpha||^2 + B {\rm T}^2(\alpha),$$ where $\displaystyle A=r^2$ and $\displaystyle B=\frac{m^2-r^2}{\| \v_{1}\|^2}$.

Let $\alpha \in \L$ and recall that by definition $\Phi_{(r,s)}(\alpha) =r\alpha +s{\rm T}(\alpha) \v_{1}$.
Then, $$\begin{split} ||\Phi_{(r,s)}(\alpha)||^2 & = \left \langle \Phi_{(r,s)}(\alpha),\Phi_{(r,s)}(\alpha) \right\rangle \\ & = \left \langle r\alpha +s{\rm T}(\alpha) \v_{1} , r\alpha +s{\rm T}(\alpha) \v_{1} \right\rangle \\ & = r^2||\alpha||^2 + 2rs{\rm T}(\alpha) \langle \alpha, \v_{1} \rangle + s^2{\rm T}(\alpha)^{2} \|\v_{1}\|^{2} \\ & = r^2||\alpha||^2 + 2rs{\rm T}(\alpha)^{2} + s^2{\rm T}(\alpha)^{2} \|\v_{1}\|^{2} \\ & = r^2||\alpha||^2 + \left(2rs+s^2\|\v_{1}\|^{2}\right){\rm T}(\alpha)^2\\ & = r^2||\alpha||^2 + s(r+m){\rm T}(\alpha)^2\\ &= r^2||\alpha||^2 + \frac{(m^2-r^2)}{\|\v_{1}\|^{2}}{\rm T}(\alpha)^2. \end{split}$$

Lagrangian lattices
-------------------

In this section we define the notion of Lagrangian lattice and we see how from them we can obtain strongly well-rounded lattices. Lagrangian lattices appear naturally in several contexts, e.g., rings of integers of certain number fields (see [@bayer]). Suppose $N \ge 2$ and let $\L ={\mathbb{Z}}^{N}$ be the standard cubic lattice. Let $\v_{1}:=[1,...,1]^{t}$. For this choice of $\v_{1}$ the trace map ${\rm T}$ is equal to the sum of the entries of a vector in ${\mathbb{Z}}^{N}$. As we saw in Example \[CubicLatticeFirstEx\], for any pair of integers $(r,s)$ with $0< |r| <N$ the lattice $\L_{\v_1}^{(r,s)}$ can be very interesting and diverse. For instance, by just picking $r \equiv 2 \pmod N$ we obtained $\mathbb{D}_{N}$, $\mathbb{A}_{2}$, $(2{\mathbb{Z}})^{4}$, $L_{9,5}$. Another important feature of these examples is that all of them are strongly well-rounded. We isolated some characteristics of ${\mathbb{Z}}^{N}$ and $\v_{1}=[1,...,1]^{t}$ that we think lead to the nice behaviour of the above examples. \[Lagrangian\] Let $N$ be a positive integer and let $\L$ be a rank $N$ lattice. We say that $\L$ is Lagrangian if there is a basis $\{e_{1},...,e_{N}\}$ of $\L$ and a non-zero $\v_{1} \in \L \cap \L^{*}$ such that 1. $e_{1}+...+e_{N}=\v_{1}.$ 2.
${\rm T}_{\v_{1}}(e_{i})=\langle e_{i}, \v_{1} \rangle = 1$ for all $1 \leq i \leq N$. 3. $\langle e_{i}, e_{i} \rangle = \langle e_{j}, e_{j} \rangle$ for all $1 \leq i, j \leq N$. 4. $ \langle e_{i}, e_{j} \rangle = \langle e_{k}, e_{l} \rangle$ for all $1 \leq i, j, k,l \leq N$ with $i\neq j$ and $k \neq l$. The above definition is an attempt to axiomatize the following result: Let $N \geq 2$ and let $K$ be a totally real tame ${\mathbb{Z}}/N{\mathbb{Z}}$-number field. Let $n$ be the conductor of $K$. Suppose that either $n$ or $N$ is prime. Then, $O_{K}$ via the Minkowski embedding is a Lagrangian lattice in ${\mathbb{R}}^{N}$. The case $N$ prime is done in [@CoPe IV. 8]. The case $n$ prime is done in [@BoMa Lemma 3.3]; there, such a basis is constructed for $N$ a prime power; however, the same proof works for any $N$. We sketch the general idea of how the Lagrangian basis is found. By definition of conductor $K \subseteq {\mathbb{Q}}(\zeta_{n})$, and in both cases the hypotheses imply that $O_{K}$ is a free ${\mathbb{Z}}[{\rm Gal}(K/{\mathbb{Q}})]$-module of rank $1$. Moreover, a generator for $O_{K}$ as a ${\mathbb{Z}}[{\rm Gal}(K/{\mathbb{Q}})]$-module is given by $e_{1}:={\rm Tr}_{{\mathbb{Q}}(\zeta_{n})/K}(\zeta_{n})$. Hence if $\sigma$ is a generator of the Galois group of $K$, and if $e_{i}=\sigma^{i-1}(e_{1})$, then the set $\{e_{1},...,e_{N}\}$ is an integral basis of $O_{K}$. Since $K$ is tame its conductor $n$ is square free, thus ${\rm Tr}_{{\mathbb{Q}}(\zeta_{n})/{\mathbb{Q}}}(\zeta_{n})=\pm 1$. Therefore, replacing $e_{1}$ by $-e_{1}$ if necessary, we may assume ${\rm Tr}_{K/{\mathbb{Q}}}(e_{1})=1$; in other words $\v_{1}=e_{1}+e_{2}+...+e_{N}=1$. This shows (1) and (2) for the basis $\{e_{1},...,e_{N}\}$; in fact, since $1$ goes under the Minkowski embedding to the vector with 1’s in all its entries, we have also shown that $T_{\v_{1}}(x)={\rm Tr}_{K/{\mathbb{Q}}}(x)$ for all $x \in O_{K}$.
Conditions (3) and (4) require more work, but the Lagrangian basis constructed in the references mentioned above is the one we just described. Sets of vectors $v_1,\dots,v_k$ in ${\mathbb{R}}^N$ satisfying $\langle v_{i}, v_{i}\rangle=1$ and $\alpha=|\langle v_{i}, v_{j}\rangle|$ for $i\neq j$ and some $\alpha\in {\mathbb{R}}$ are sometimes called equiangular unit frames (see [@sustik2007existence]).

\[NormaV1\] Let $\L$ be a rank $N$ Lagrangian lattice and let $\v_{1}$ be the vector defining the map ${\rm T}$. Then $$\|\v_{1}\|^{2}={\rm T}(\v_{1})=N.$$

By definition of [T ]{} we have that $\|\v_{1}\|^{2} =\langle \v_{1}, \v_{1} \rangle= {\rm T}(\v_{1})$. On the other hand let $\{e_{1},...,e_{N}\}$ be a basis of $\L$ satisfying the conditions of Definition \[Lagrangian\]. Then ${\rm T}(\v_{1})={\rm T}(e_{1}+...+e_{N})={\rm T}(e_{1})+...+{\rm T}(e_{N})=1+...+1=N.$

\[prop2\] Let $\L$ be a rank $N$ Lagrangian lattice with $\v_{1} \in \L$ and $\{e_{1},...,e_{N}\}$ satisfying the conditions of the definition. Let $a:=\langle e_{1}, e_{1}\rangle$ and $h=-\langle e_{1}, e_{2}\rangle.$ Let $r, s$ be integers such that $0 \neq |r| <N$ and let $m:=r+sN.$

- $\displaystyle \L_{\v_1}^{(r,s)}$ is a sub-lattice of $\L$ of index $m|r|^{N-1}$.

- $\displaystyle ||\Phi_{(r,s)}(e_i)||^2 = ar^2 +\frac{m^2-r^2}{N}$ for all $1 \leq i \leq N$.

- $\displaystyle \langle \Phi_{(r,s)}(e_i),\Phi_{(r,s)}(e_j) \rangle= -r^2 h +\frac{m^2-r^2}{N}$ for all $1 \leq i \neq j\leq N$.

The first two conditions follow from Propositions \[Index\], \[prop1\] and Lemma \[NormaV1\]. To finish the proof take $i\neq j$. Then, $$\langle \Phi_{(r,s)}(e_i),\Phi_{(r,s)}(e_j) \rangle= \langle re_{i} +s \v_{1} , re_{j} +s\v_{1} \rangle= r^2\langle e_{i} , e_{j} \rangle +2rs+s^2N=-r^2 h +\frac{m^2-r^2}{N}.$$

Let $\L$ be a rank $N$ lattice and let ${\rm T}: \L \to {\mathbb{Z}}$ be a non-trivial linear map. We denote by $\L_{\rm T}^{0}$ the rank $N-1$ lattice given by the kernel of ${\rm T}$; $\L_{\rm T}^{0}:=\ker({\rm T})$.
We will denote this sub-lattice by $\L^{0}$ since in general ${\rm T}$ will be clear from the context. If we take $\L={\mathbb{Z}}^{n+1}$, the usual cubic lattice, and ${\rm T}$ the sum-of-coordinates function, then $\L^{0}$ is the root lattice $\mathbb{A}_{n}$. The above example is just a particular case of what happens in general Lagrangian lattices. \[An\] Let $\L$ be a rank $N$ Lagrangian lattice with $\v_{1} \in \L$ and $\{e_{1},...,e_{N}\}$ satisfying the conditions of the definition. Let $a:=\langle e_{1}, e_{1}\rangle$ and $h=-\langle e_{1}, e_{2}\rangle.$ Then, $$\L^{0} \cong (a+h)\mathbb{A}_{N-1}.$$ Let $w_{1}:=e_{1}-e_{2}, w_{2}:=e_{2}-e_{3},...,w_{N-1}:=e_{N-1}-e_{N}$. Since ${\rm T}$ is constant on the $e_{i}$’s, $w_{i} \in \L^{0}$ for all $i$. We claim that $\{w_{1},...,w_{N-1}\}$ is a basis of $\L^{0}$. They are clearly linearly independent, so it’s enough to show that ${\rm span}_{{\mathbb{Z}}} \{w_{1},...,w_{N-1}\}= \L^{0}$. Let $v =a_{1}e_{1}+...+a_{N}e_{N} \in \L^{0}.$ Then $a_{1}+...+a_{N}=0$. Showing that $v \in {\rm span}_{{\mathbb{Z}}} \{w_{1},...,w_{N-1}\}$ is equivalent to showing that the following linear system is solvable in ${\mathbb{Z}}$ $$\begin{split} b_{1} & = a_{1} \\ b_{2} -b_{1} & = a_{2}\\ b_{3} -b_{2} & = a_{3}\\ \vdots \ \ \ & = \ \ \vdots\\ b_{N-1} -b_{N-2} & = a_{N-1}\\ -b_{N-1} & = a_{N} \end{split}$$ Since $b_{1}+ (b_{2} -b_{1})+ (b_{3} -b_{2}) +...+(b_{N-1} -b_{N-2})=b_{N-1}$ and $a_{1}+...+a_{N-1}=-a_{N}$, the system above has a solution if and only if the system given by the first $N-1$ equations has a solution. That $(N-1) \times (N-1)$ triangular system clearly has a unique solution. Returning to the main proof, notice that for all $1 \leq i \leq N-1$ $$\langle w_{i}, w_{i} \rangle=\langle e_{i}, e_{i} \rangle-2\langle e_{i}, e_{i+1} \rangle+\langle e_{i+1}, e_{i+1} \rangle=2(a+h).$$ Suppose that $1 \leq i<j \leq N-1$. 
If $j\neq i+1$ then $$\langle w_{i}, w_{j} \rangle=\langle e_{i}, e_{j} \rangle-\langle e_{i}, e_{j+1} \rangle-\langle e_{j}, e_{i+1} \rangle+\langle e_{i+1}, e_{j+1} \rangle=-h+h+h-h=0.$$ If $j =i+1$ then $$\langle w_{i}, w_{i+1} \rangle=\langle e_{i}, e_{i+1} \rangle-\langle e_{i}, e_{i+2} \rangle-\langle e_{i+1}, e_{i+1} \rangle+\langle e_{i+1}, e_{i+2} \rangle=-h+h-a-h=-(a+h).$$ Therefore the Gram matrix of $\L^{0}$ in the basis $\{w_{1},...,w_{N-1}\}$ is $(a+h)M$ where $M$ is one of the known Gram matrices of $\mathbb{A}_{N-1}$ (See [@conway]). \[TraceZero\] Let $\L$ be a rank $N$ Lagrangian lattice with $\v_{1} \in \L$ and $\{e_{1},...,e_{N}\}$ satisfying the conditions of the definition. Let $a:=\langle e_{1}, e_{1}\rangle$ and $h=-\langle e_{1}, e_{2}\rangle.$ Then $$\min_{v \in \L^{0}\setminus \{0\} } \| v\|^{2}=2(a+h).$$ Moreover, such a minimum is attained at any of the vectors $e_{i}-e_{j}$ for $i\neq j$. This follows from Theorem \[An\] and from the fact that for any integer $n>1$ the root lattice $\mathbb{A}_{n}$ has minimal norm equal to $2$. Conditions ========== As we have seen in previous examples, by applying the construction $\L^{(r,s)}_{\v_{1}}$ to the Lagrangian lattice ${\mathbb{Z}}^{N}$ for suitable values of $(r,s)$ we obtained several well-rounded lattices; moreover, they are strongly well-rounded. Here we show how to do this for an arbitrary Lagrangian lattice $\L$ provided we have some restrictions on the values $(r,s)$ with respect to $\L$.\ Throughout this section $\L \subseteq {\mathbb{R}}^{N}$ will denote a Lagrangian lattice with $\v_{1}$ and $\{e_{1},...,e_{N}\}$ satisfying the conditions of Definition \[Lagrangian\]. Let $a:=\langle e_{1}, e_{1}\rangle$ and $h=-\langle e_{1}, e_{2}\rangle.$ Let $r, s$ be integers such that $0 \neq |r| <N$ and let $m:=r+sN.$ Also recall the definitions $A=r^2$ and $\displaystyle B=\frac{m^2-r^2}{N}$. Shortest vector problem for $\L_{\v_{1}}^{(r,s)}$. 
-------------------------------------------------- The shortest non-zero norm in $\L_{\v_{1}}^{(r,s)}$ is by definition $$\lambda_{1}(\L_{\v_{1}}^{(r,s)}):=\min_{v \in \L_{\v_{1}}^{(r,s)} \setminus \{0\} } \| v\|^{2}.$$ Thanks to Proposition \[prop1\] and Lemma \[NormaV1\] we have that $$\lambda_{1}(\L_{\v_{1}}^{(r,s)})=\min_{\alpha \in \L \setminus \{0\} } \left(A ||\alpha||^2 + B {\rm T}^2(\alpha)\right).$$ Therefore we are left with the task of minimising the function $$f(\alpha):=A ||\alpha||^2 + B {\rm T}^2(\alpha)$$ on $\L \setminus \{0\}$. To do this we use the natural partition of $\L$ given by taking the quotient with the sub-lattice $\L^{0}$. Since $\L/\L^{0} \cong {\mathbb{Z}}$ via the map ${\rm T}$, such a partition is $$\L = \bigcup_{d\in{\mathbb{Z}}}S_d$$ where $S_{d}:= \{x\in \L \ | \ {\rm T}(x)= d\}.$ Hence we have $$\lambda_{1}(\L_{\v_{1}}^{(r,s)})=\min_{d \ge 0 } (\min_{\alpha \in S_{d} \setminus \{0\}} f(\alpha)).$$ We included only non-negative values of $d$ above since $f$ is even and $S_{-d}=-S_{d}$. \[Restar\] Let $d$ be an integer. Let $\alpha=a_{1}e_{1}+...+a_{N}e_{N} $ be an element of $S_{d}$. Suppose there are $a_{i}, a_{j}$ such that $a_{i}-a_{j} > 1$. Then there is $\beta \in S_{d}$ such that $\| \beta\| < \| \alpha \|.$ Let $\beta= \alpha + e_{j} -e_{i}.$ Since $e_{j}-e_{i} \in \ker({\rm T})$ we have that $\beta \in S_{d}$. 
Taking square norms we get $$\|\beta\|^{2}=\|\alpha\|^{2}+ 2\left(\langle \alpha, e_{j}\rangle- \langle \alpha, e_{i}\rangle\right) + \|e_{i}\|^{2}- 2\langle e_{i}, e_{j}\rangle+\|e_{j}\|^{2}=\|\alpha\|^{2}+ 2\left(\langle \alpha, e_{j}\rangle- \langle \alpha, e_{i}\rangle +a+h \right).$$ On the other hand for all $1 \leq k \leq N$ $$\langle \alpha, e_{k}\rangle= \sum_{l \neq k} a_{l}\langle e_{l}, e_{k}\rangle+ a_{k}a=-h\sum_{l \neq k} a_{l}+a_{k}a=-h({\rm T}(\alpha) -a_{k} )+ a_{k}a=-h{\rm T}(\alpha) + a_{k}(a+h).$$ In particular, $\langle \alpha, e_{j}\rangle- \langle \alpha, e_{i}\rangle = (a_{j}-a_{i})(a+h)$ and thus $$\|\beta\|^{2}=\|\alpha\|^{2}+2(a+h)(a_{j}-a_{i}+1) =\|\alpha\|^{2}-2(a+h)(a_{i}-a_{j}-1) < \|\alpha\|^{2}.$$ For a subset $I \subset \{1,...,N\}$ we denote by $\displaystyle E_{I}:=\sum_{i \in I} e_{i}$; for instance $E_{\emptyset}=0$ and $E_{\{1,...,N\}}=\v_{1}$. Notice that ${\rm T}(E_{I})=\#I$ for every subset $I$. \[minlemma\] Let $d$ be a positive integer. Then $\displaystyle \min_{\substack{x\in S_d\\x\neq 0}}\{\|x\|^2\}$ is achieved at some $$\alpha=E_I +C\v_{1}$$ where $C$ is a non-negative integer and $\#I < N$. Moreover, $\#I$ is the residue class of $d$ modulo $N$ and $C=(d-\#I)/N.$ Let $\displaystyle \alpha=\sum_{i=1}^{N}a_i e_i\in S_d$ be a non-zero element having minimal norm. Let $$a_k=\max\{a_i~:~1\leq i\leq N\}.$$ Replacing $\alpha$ by $-\alpha$ if necessary we may assume that $a_k \ge 1$. Suppose that there is some $a_{j} \leq 0$. Then, since by Lemma \[Restar\] the $a_{j}$’s cannot be more than one integer apart, we must have that all non-positive coefficients are equal to $0$ and all positive ones are equal to $1$. Thus in such a case $\alpha$ is of the form $E_{I}$, where $\#I<N$ since there are coefficients equal to $0$. 
Now, if all $a_{j}$ are positive let $$c=\min\{a_i~:~1\leq i\leq N\}.$$ If $c=a_{k}$ then $\alpha =c\v_{1}$, otherwise $a_{k}=c+1$ and $\alpha=E_{I}+c\v_{1}$ where $I$ is the subset of elements $i$ such that $a_{i}=a_{k}$. \[MinPartd\] Let $d$ be a positive integer. Let $k$ be the residue class of $d$ modulo $N$ and let $c=(d-k)/N$. Then, $$\min_{\alpha \in S_{d} \setminus \{0\}} f(\alpha)=f(E_{I})+c^2f(\v_{1})+ 2ck(A+NB)$$ where $I$ is a subset of size $k$. Since on $S_{d}$ the function ${\rm T}$ is constant, the minimum value of $f(\alpha)=A\|\alpha\|^{2}+B {\rm T}(\alpha)^{2}$ is attained whenever $\|\alpha\|^{2}$ is minimal. Thanks to Corollary \[minlemma\] such a minimum is attained at $\alpha=E_I +c\v_{1}$, hence the minimum value of $f$ over $S_{d}$ is $f(E_I +c\v_{1})=f(E_{I})+c^2f(\v_{1})+ 2ck(A+NB).$ \[lemmaB\] Let $k$ be an integer $0\leq k\leq N$, and let $I\subset\{1,\dots,N\}$ such that $|I|=k$, then $$||E_I||^2=k(a-(k-1)h)=k(1+(N-k)h).$$ In particular, $$f(E_{I})= Ak(1+(N-k)h)+Bk^2.$$ We may assume that $k \neq 0$. $$\|E_I\|^2 =\langle E_{I}, E_{I} \rangle =\sum_{i\in I} \langle e_{i}, e_{i} \rangle + \sum_{\substack{ {i,j\in I} \\ i\neq j}} \langle e_{i}, e_{j}\rangle =\sum_{i\in I} a - \sum_{\substack{ {i,j\in I} \\ i\neq j}}h=ak-(k^2-k)h=k(a-(k-1)h).$$ Using the case $k=N$, i.e., $E_{I}=\v_{1}$, we see that $N=\|E_I\|^2= N(a-(N-1)h)$, hence $a=1+(N-1)h$. Substituting this in the first equality the second follows, and so does the claim about $f(E_{I})$. Conditions on minimal basis. ---------------------------- To see whether or not $\L^{(r,s)}_{\v_{1}}$ is strongly well-rounded, we should first find a basis in which all the vectors have the same norm. Since we have calculated the minimal values of $f$ over each $S_{d}$, we should then compare the norms of the proposed basis vectors with those minimal values. Thanks to Proposition \[prop2\] we have that all the elements of the basis $\{\Phi_{r,s}(e_{1}), ..., \Phi_{r,s}(e_{N})\}$ have norm $aA+B$. 
Hence, if $aA+B$ were to be equal to $\lambda_{1}(\L_{\v_{1}}^{(r,s)})$ we should have at least $aA+B \leq f(\v_{1}) =N(A+NB).$ As it turns out this is already a pretty strong condition, as the next theorem shows. \[AlmostMain\] Suppose that $aA+B \leq f(\v_{1}).$ Then, $$aA+B =\min_{d > 0 } (\min_{\alpha \in S_{d} \setminus \{0\}} f(\alpha)).$$ Since $e_{1} \in S_{1}$ and $aA+B=f(e_{1})$ we have that $\displaystyle \min_{d > 0 } (\min_{\alpha \in S_{d} \setminus \{0\}} f(\alpha)) \leq aA+B$. To show the opposite inequality let $d$ be a positive integer. Let $k$ be the residue class of $d$ modulo $N$ and $c=(d-k)/N$. Thanks to Theorem \[MinPartd\] we have that $$\min_{\alpha \in S_{d} \setminus \{0\}} f(\alpha)=f(E_{I})+c^2f(\v_{1})+ 2ck(A+NB)$$ where $I$ is a subset of $\{1,...,N\}$ of size $k$. If $c\neq 0$ we have that $f(E_{I})+c^2f(\v_{1})+ 2ck(A+NB) \ge f(\v_{1}) \ge aA+B$. If $c=0$ then $k \neq 0$ and thanks to the next proposition $f(E_{I}) \ge aA+B$. Thus, in either case $$\min_{\alpha \in S_{d} \setminus \{0\}} f(\alpha) \ge aA+B$$ from which the result follows. \[prop6\] Let $1 \leq k\leq N$ be an integer, and $I\subset\{1,\dots,N\}$ a subset of size $k$. Suppose that $aA+B \leq f(\v_{1})$. Then, $$aA+B \leq f(E_{I}).$$ Consider the parabola $$g(t):=At(1+(N-t)h)+Bt^2.$$ By Lemma \[lemmaB\] we have that $aA+B=g(1), f(\v_{1})=g(N)$ and $f(E_{I})=g(k)$. Also, notice that $g(0)=0$, hence $ g(0) < g(1) \leq g(N)$. Since $g$ is a quadratic, its minimum on the interval $[1,N]$ is attained either at an endpoint or at an interior vertex; an interior minimum at some $t>1$ would force $g(0)>g(1)$, so the minimum on $[1,N]$ is $\min\{g(1),g(N)\}=g(1)$. Hence $aA+B=g(1) \leq g(k)=f(E_{I})$. 
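Theorem \[AlmostMain\] can be checked by brute force on the cubic lattice ${\mathbb{Z}}^{N}$, where $a=1$, $h=0$ and all quantities are explicit. The following Python sketch uses illustrative parameters (not ones from the text) and scans a box of coefficient vectors, which suffices since $f$ grows with the norm:

```python
# Brute-force check of Theorem [AlmostMain] on the cubic lattice Z^N
# (a = 1, h = 0): whenever aA + B <= N(A + NB), the minimum of
# f(alpha) = A||alpha||^2 + B T(alpha)^2 over alpha with T(alpha) > 0
# equals aA + B. The parameters N, r, s are illustrative.
from itertools import product

N, r, s = 4, 3, 1
m = r + s * N
a, h = 1, 0
A, B = r * r, (m * m - r * r) // N  # B is an integer since m ≡ r (mod N)

assert a * A + B <= N * (A + N * B)  # hypothesis of the theorem

def f(alpha):
    return A * sum(x * x for x in alpha) + B * sum(alpha) ** 2

# A box of side 7 already contains all candidate minimizers here, since any
# coordinate of absolute value > 3 alone contributes more than aA + B.
box = range(-3, 4)
best = min(f(al) for al in product(box, repeat=N) if sum(al) > 0)
assert best == a * A + B
```

With these parameters $aA+B = 9 + 10 = 19$, attained at $\alpha = e_1$, in agreement with the first step of the proof.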
\[ElUltimoCoro\] Suppose that $aA+B \leq N(A+NB).$ Then, $$\lambda_{1}(\L_{\v_{1}}^{(r,s)})=\min\{ 2A(a+h), aA+B \}.$$ Recall that, using the partition of $\L$ induced by the quotient $\L/\L^{0}$, we have that $$\lambda_{1}(\L_{\v_{1}}^{(r,s)})=\min_{d \ge 0 } (\min_{\alpha \in S_{d} \setminus \{0\}} f(\alpha)).$$ Therefore, thanks to Theorem \[AlmostMain\], $\displaystyle \lambda_{1}(\L_{\v_{1}}^{(r,s)})=\min\{ \min_{\alpha \in S_{0} \setminus \{0\}} f(\alpha), aA+B \}$. On the other hand $f(\alpha)=A\| \alpha\|^{2}$ for $\alpha \in S_{0}=\L^{0}$. The result follows from Corollary \[TraceZero\]. We are ready to summarize our results in the main theorem of the paper: \[main\] Let $\L \subseteq {\mathbb{R}}^{N}$ be a Lagrangian lattice with $\v_{1}$ and $\{e_{1},...,e_{N}\}$ satisfying the conditions of Definition \[Lagrangian\]. Let $a:=\langle e_{1}, e_{1}\rangle$ and $h=-\langle e_{1}, e_{2}\rangle.$ Let $r, s$ be integers such that $0 \neq |r| <N$ and let $m:=r+sN.$ Suppose that $$\frac{Na-1}{N^2-1}\leq \left(\frac{m}{r}\right)^2\leq \frac{(aN-1)(N+1)}{N-1}.$$ Then the lattice $\L_{\v_1}^{(r,s)}$ is a sub-lattice of $\L$ of index $m|r|^{N-1}$, with minimum $$\lambda_1(\L_{\v_1}^{(r,s)})=ar^2+\frac{m^2-r^2}{N}$$ and a basis of minimal vectors $$\{re_1+s\v_1,re_2+s\v_1,\dots,re_N+s\v_1\}.$$ By Corollary \[ElUltimoCoro\], $\L_{\v_1}^{(r,s)}$ is strongly well-rounded with minimal norm $\displaystyle aA+B$ if and only if - $Aa+B \leq N(A+NB)$ - $Aa+ B \leq 2A(a+h).$ Recall that $A=r^2$ and $B=\frac{m^2-r^2}{N}$. Thus, using that $\frac{B}{A}= \frac{1}{N} \left(\left(\frac{m}{r}\right)^2 -1\right)$, the inequality $$Aa+B \leq N(A+NB) \ \mbox{turns into} \ \displaystyle \frac{Na-1}{N^2-1}\leq \left(\frac{m}{r}\right)^2$$ and, using that $a=1+(N-1)h$ (see the proof of Lemma \[lemmaB\]), $$Aa+ B \leq 2A(a+h) \ \mbox{turns into } \ \left(\frac{m}{r}\right)^2\leq \frac{(aN-1)(N+1)}{N-1}.$$ \[LagragianNumberFieldCoro\] Let $N \ge 2$ be an integer. 
Let $K$ be a tame totally real degree $N$ number field. Let $m$ be an integer such that $m\equiv \pm 1 \pmod{N}$. Suppose that $O_{K}$ has an integral Lagrangian basis $\{e_{1},...,e_{N}\}$ such that $1=e_{1}+e_{2}+...+e_{N}$. Let $a:=\langle e_{1}, e_{1}\rangle$ and $h=-\langle e_{1}, e_{2}\rangle$ and suppose that $$\frac{Na-1}{N^2-1}\leq m^2\leq \frac{(aN-1)(N+1)}{N-1}.$$ Then, the lattice $$\{x \in O_{K}: {\rm Tr}_{K/{\mathbb{Q}}}(x) \equiv 0\pmod{m}\}$$ is a sub-lattice of $O_{K}$ that has a minimal basis with minimum $\lambda_{1}=a+\frac{m^2-1}{N}.$ The result follows immediately from Lemma \[Cong1\] and Theorem \[main\]. Note that for simplicity we have only mentioned here the case $m\equiv \pm 1 \pmod{N}$; however, the more general values of $m=r+sN$ yield lattices with a minimal basis inside $\{x \in O_{K}: {\rm Tr}_{K/{\mathbb{Q}}}(x) \equiv 0\pmod{m}\}.$ There are several examples of real number fields containing an integral Lagrangian basis. One of the first families of such number fields was found in the mid 1980’s by Conner and Perlis while studying integral traces over tame Galois number fields of prime degree: \[ex1\] Let $p$ be a prime and let $K$ be a Galois number field of degree $p$. If $K$ is tame, which in this case is equivalent to saying that $p$ does not ramify, then $O_{K}$ has an integral Lagrangian basis $\{e_{1},...,e_{p}\}$ with $a=\frac{n(p-1)+1}{p}$ and $h=\frac{n-1}{p}$ where $n$ is the conductor of $K$ (see [@CoPe]). Note that for such values of $a$ and $h$, with $N=p$, we have $\frac{Na-1}{N^2-1}=\frac{n}{p+1}$ and $\frac{(aN-1)(N+1)}{N-1}=n(p+1)$. 
Therefore, if $m \equiv \pm 1\pmod{p}$ is such that $\frac{n}{p+1} \leq m^2 \leq n(p+1) $ then, thanks to Corollary \[LagragianNumberFieldCoro\], $$\{x \in O_{K}: {\rm Tr}_{K/{\mathbb{Q}}}(x) \equiv 0\pmod{m}\}$$ is a sub-lattice of $O_{K}$ with a minimal basis and minimum $\lambda_{1}=\frac{n(p-1)+m^2}{p}.$ When restricting this example to the case $m \equiv 1\pmod{p}$ the results of [@sueli Theorem 4.1] and [@oliviera Theorem 3.3] are recovered. Recently the results of Conner and Perlis about trace forms over cyclic number fields, see [@BoMa], have been generalized. Using these ideas the example above can be extended to number fields of not necessarily prime degree: \[ex2\] Let $K$ be a tame totally real abelian number field of degree $N$. If the conductor $n$ of $K$ is prime, then $O_{K}$ has an integral Lagrangian basis $\{e_{1},...,e_{N}\}$ with $a=\frac{n(N-1)+1}{N}$ and $h=\frac{n-1}{N}$. (The construction of such a basis can be done as in [@BoMa Lemma 3.3]; there the basis is constructed for $N$ a prime power, but the same proof works for any $N$.) Thus, as in the previous example, our general construction can also be applied to such fields. For instance, the field $K={\mathbb{Q}}(\zeta_{13}+\zeta_{13}^{-1})$ is a tame real Galois extension of ${\mathbb{Q}}$ with Galois group ${\mathbb{Z}}/2{\mathbb{Z}}\times {\mathbb{Z}}/3{\mathbb{Z}}$ and of conductor $13$. The lattices $\{x \in O_{K}: {\rm Tr}_{K/{\mathbb{Q}}}(x) \equiv 0\pmod{5}\}$ and $\{x \in O_{K}: {\rm Tr}_{K/{\mathbb{Q}}}(x) \equiv 0\pmod{7}\}$ are sub-lattices of $O_{K}$ with a minimal basis and with respective minima equal to $15=\frac{13(6-1)+5^2}{6}$ and $19=\frac{13(6-1)+7^2}{6}$. The two families in the above examples are not the only examples of number fields with a Lagrangian integral basis; the following example shows that there are abelian fields, of neither prime degree nor prime conductor, having a Lagrangian integral basis. 
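The arithmetic of the ${\mathbb{Q}}(\zeta_{13}+\zeta_{13}^{-1})$ example can be replayed directly: with $N=6$ and conductor $n=13$ the formulas of Example \[ex2\] give $a=11$, $h=2$, and the claimed minima $15$ and $19$ follow from $\lambda_1 = a + (m^2-1)/N$. A short Python check with exact rational arithmetic:

```python
# Check of the Q(zeta_13 + zeta_13^{-1}) example: N = 6, conductor n = 13,
# a = (n(N-1)+1)/N, h = (n-1)/N, and the minima for m = 5 and m = 7.
from fractions import Fraction

N, n = 6, 13
a = Fraction(n * (N - 1) + 1, N)   # = 11
h = Fraction(n - 1, N)             # = 2
assert a == 11 and h == 2
assert a - (N - 1) * h == 1        # the Lagrangian relation a - (N-1)h = 1

for m, expected in [(5, 15), (7, 19)]:
    assert m % N in (1, N - 1)     # m ≡ ±1 (mod N)
    # bounds of Corollary [LagragianNumberFieldCoro]
    assert Fraction(N * a - 1, N**2 - 1) <= m * m
    assert m * m <= Fraction((a * N - 1) * (N + 1), N - 1)
    # minimum lambda_1 = a + (m^2 - 1)/N
    assert a + Fraction(m * m - 1, N) == expected
```

All assertions pass, confirming $15=\frac{13\cdot 5+5^2}{6}$ and $19=\frac{13\cdot 5+7^2}{6}$.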
\[ex3\] Let $K$ be the number field defined by the polynomial $f:=x^4 - x^3 - 24x^2 + 4x + 16$. The field $K$ is a ${\mathbb{Z}}/4{\mathbb{Z}}$-extension of ${\mathbb{Q}}$, has discriminant $5^3\cdot 13^3$ and conductor $n=65$. If $\{a_{1},a_{2}, a_{3}, a_{4}\}$ is the set of roots of $f$ then they form an integral basis of $O_{K}$, $a_{1}+a_{2}+a_{3}+a_{4}=1$, and the Gram matrix of the trace in such a basis is $$\begin{bmatrix} \ \ 49 & -16 & -16 & -16 \\ -16 & \ \ 49 & -16 & -16 \\ -16 & -16 & \ \ 49 & -16 \\ -16 & -16 & -16 & \ \ 49 \end{bmatrix}.$$ Hence, $O_{K}$ is a Lagrangian lattice with $a=49, h=16$ and $N=4$. Applying the bounds of Corollary \[LagragianNumberFieldCoro\] here, we obtain $\frac{Na-1}{N^2-1}=13 \leq m^2 \leq \frac{(aN-1)(N+1)}{N-1}=325$ for $m \equiv \pm 1 \pmod{4}$. This is equivalent to $m \in \{5,7,9,11,13,15,17\}.$ Hence, by applying Corollary \[LagragianNumberFieldCoro\] to $K$, a number field of non-prime degree and non-prime conductor, we have constructed seven non-isometric sub-lattices of $O_{K}$ with a minimal basis. The above constructions are obtained from Galois number fields. In the next section, we will present a generic construction of Lagrangian integral lattices, and with it we will show how to construct Lagrangian lattices arising from ideal lattices over number fields that are non-Galois over ${\mathbb{Q}}$. Integral Lagrangian lattices ============================ Recall that $\L$ is a Lagrangian lattice if there is a basis $\{e_{1},...,e_{N}\}$ of $\L$ and a non-zero $\v_{1} \in \L \cap \L^{*}$ such that: 1. $e_{1}+...+e_{N}=\v_{1}.$ 2. ${\rm T}(e_{i})=\langle e_{i}, \v_{1} \rangle = 1$ for all $1 \leq i \leq N$. 3. $\langle e_{i}, e_{i} \rangle = \langle e_{j}, e_{j} \rangle=a$ for all $1 \leq i, j \leq N$. 4. $ \langle e_{i}, e_{j} \rangle = \langle e_{k}, e_{l} \rangle=-h$ for all $1 \leq i, j, k,l \leq N$ with $i\neq j$ and $k \neq l$. 
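As a quick numerical check, the Gram data of Example \[ex3\] satisfies the conditions just recalled, and the admissible values of $m$ are exactly the seven listed there. A short Python sketch:

```python
# Check of Example [ex3]: for a = 49, h = 16, N = 4 the Lagrangian relation
# a - (N-1)h = 1 holds, the bounds of the corollary evaluate to 13 and 325,
# and the admissible m ≡ ±1 (mod 4) are exactly {5,7,9,11,13,15,17}.
from fractions import Fraction

N, a, h = 4, 49, 16
assert a - (N - 1) * h == 1                       # condition (2) above

lower = Fraction(N * a - 1, N * N - 1)            # (Na-1)/(N^2-1) = 13
upper = Fraction((a * N - 1) * (N + 1), N - 1)    # (aN-1)(N+1)/(N-1) = 325
assert lower == 13 and upper == 325

admissible = [m for m in range(1, 30)
              if m % 4 in (1, 3) and lower <= m * m <= upper]
assert admissible == [5, 7, 9, 11, 13, 15, 17]
```

The scan range is large enough because $m^2 \leq 325$ already fails at $m=19$.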
Many of the lattices of interest in arithmetic are integral, hence it is natural to see when a Lagrangian lattice $\L$ is integral. Since conditions (1) and (2) are equivalent to $a-(N-1)h=1$, the integrality of $\L$ amounts to $a$ and $h$ being integers. More precisely: Let $N>1$ be an integer and let $\L$ be an integral lattice in ${\mathbb{R}}^N$. The lattice $\L$ is Lagrangian if and only if there is a basis $\B$ of $\L$ such that the Gram matrix of $\L$ in the basis $\B$ is of the form $$G_{\B} =\begin{bmatrix} a & -h & \dots & -h \\ -h & a & \dots & -h\\ \vdots & \vdots & \ddots & \vdots \\ -h & -h & \dots & a \end{bmatrix}$$ where $a,h$ are non-negative integers, $a\neq 0$, such that $a-(N-1)h=1$. Suppose that $\L$ is Lagrangian and let $\B:=\{e_{1},...,e_{N}\}$ be a basis satisfying conditions (1)-(4) of the definition. Since $\L$ is integral, $a=\langle e_{1}, e_{1} \rangle \in {\mathbb{Z}}^{+}$ and $h=-\langle e_{1}, e_{2} \rangle \in {\mathbb{Z}}$. In the final part of the proof of \[lemmaB\] we showed that $a-(N-1)h=1$, so $h$ cannot be negative since $a$ and $N-1$ are positive; thus the Gram matrix of $\L$ in the basis $\B$ satisfies the claim. Conversely suppose that $\B:=\{e_{1},...,e_{N}\}$ is a basis of $\L$ with Gram matrix as claimed. By definition of the Gram matrix the basis $\B$ satisfies conditions (3) and (4) above. Since $\L$ is integral, $\L$ is a subset of its dual; in particular, $\v_{1}=e_1+\dots+e_N\in\L\cap\L^*$ and (1) is satisfied. Finally we have condition (2) since $\langle \v_{1},e_i \rangle= \langle e_1+\dots+e_N , e_{i} \rangle= a-(N-1)h=1$. Constructing integral Lagrangian lattices ----------------------------------------- So far most of the examples of Lagrangian lattices we have encountered, see \[ex1\], \[ex2\] and \[ex3\], are of the form $\langle O_{K} , {\rm Tr(x^2)} \rangle $ where $K$ is a cyclic number field. It is natural to ask if there are other types of Lagrangian lattices. 
Since lattices of the form $\langle O_{K} , {\rm Tr(x^2)} \rangle$ are integral, this question is interesting only for integral Lagrangian lattices. As it turns out, thanks to results due to Taussky [@taussky2] and Kruskemper [@kruskemper], every integral lattice is isometric to an ideal lattice over some totally real number field. In particular, any integral Lagrangian lattice is an ideal lattice over a totally real number field. Here we will show examples of integral Lagrangian lattices that can be described as ideal lattices over non-Galois number fields. For the sake of completeness, we first sketch the Taussky-Kruskemper method. ### Taussky-Kruskemper method A key result linking matrices with ideal bases is Theorem 1 in [@taussky2]. Let $N$ be a positive integer. The mentioned theorem states that any matrix $M \in {\rm M}_{N \times N}({\mathbb{Z}})$ with an irreducible characteristic polynomial $f$ has an eigenvector $w_{\gamma}=(w_1,\dots,w_N)$ associated to a root $\gamma$ of $f$, such that $\{w_1,\dots,w_N\}$ forms a basis of an ideal in the order ${\mathbb{Z}}[\gamma]$ of ${\mathbb{Q}}(\gamma)$. In fact, an explicit expression for a choice of basis is $$\label{eigen} w_j=(-1)^{1+j}\Delta_{1j}(M-\gamma I_N),$$ where $\Delta_{1j}(M-\gamma I_N)$ is the $j^{th}$ minor of the first row of $M-\gamma I_N$, and $I_{N}$ is the identity matrix of dimension $N$. \[taussky11\] Let $N$ be a positive integer and let $G \in {\rm M}_{N \times N}({\mathbb{Z}})$ be a symmetric matrix with non-zero determinant. Let $M\in {\rm M}_{N \times N}({\mathbb{Z}})$ be a matrix with irreducible characteristic polynomial $f$ and such that $MG=GM^{T}$. Then, there exists a degree $N$ number field $K$, $\alpha \in K$, $O$ an order in $K$ and $\mathcal{I}$ an ideal in $O$ such that $G$ is the Gram matrix of the bilinear pairing on $\mathcal{I}$ given by the twisted trace $(x,y) \mapsto \operatorname{Tr}_{K/{\mathbb{Q}}}(\alpha xy)$. The idea behind the theorem is as follows. 
Let $\gamma$ be a root of $f$ and $K$ be the number field generated by $\gamma$. Let $w_{\gamma}=(w_1,\dots,w_N)$ be as in \eqref{eigen}, generating an ideal $\mathcal{I}$ in ${\mathbb{Z}}[\gamma]$. Taussky showed that there exists $\alpha\in{\mathbb{Q}}(\gamma)$ such that for $1 \leq i,j \leq N$ $$\label{coeff} G_{i,j}={\rm Tr}_{K/{\mathbb{Q}}}(\alpha w_iw_j).$$ Moreover, the element $\alpha$ satisfies the relation $$\label{alpha} Gw_{\gamma}'=\alpha w_{\gamma},$$ where $w'_{\gamma}=(w'_1,\dots,w'_N)$ is an eigenvector of $M^T$ associated to the eigenvalue $\gamma$. The ordered set $\{w'_1,\dots,w'_N\}$ is the dual basis of $\{w_1,\dots,w_N\}$ with respect to the trace pairing, see [@berhuy2000realisation], hence for all $1 \leq j \leq N$ $$\label{ultimatew} w_j'=\sum_{i=1}^{N}\mathcal{M}_{ij}\gamma^{i-1},$$ where $\mathcal{M}=A^{-1}(P^T)^{-1}$, $P$ is the matrix of $\{w_1,\dots,w_N\}$ in the basis $\{1,\gamma,\dots,\gamma^{N-1}\}$ and $A$ is the Gram matrix of the trace pairing in the basis $\{1,\gamma,\dots, \gamma^{N-1}\}$. Therefore, $G$ is the Gram matrix, in the basis $\{w_1,\dots,w_N\}$ of $\mathcal{I}$, of the twisted trace pairing $ \operatorname{Tr}_{K/{\mathbb{Q}}}(\alpha xy)$. If $G$ is positive definite, the polynomial $f$ can be chosen to have only real roots, so $K$ is totally real and hence $\alpha$ is totally positive. In particular, $G$ represents a Gram matrix of the ideal lattice $J_{K,\alpha}(\mathcal{I})$. Using Taussky’s theorem Kruskemper [@kruskemper] showed that every integral lattice is an ideal lattice. His idea was to show that given $G$ a Gram matrix of an integral lattice, there is a symmetric matrix $S$ such that $GS$ has an irreducible characteristic polynomial with only real roots. Taking $M=GS$, since clearly $MG=GM^{T}$ the result follows from Taussky’s theorem. 
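The eigenvector formula \eqref{eigen} can be tested numerically in floating point: the first-row minors of $M-\gamma I_N$, with alternating signs, always give a (right) null vector of $M-\gamma I_N$, i.e., an eigenvector of $M$ for $\gamma$. The $3\times 3$ companion matrix of $x^3-x-1$ below is an illustrative choice, not one from the text:

```python
# Floating-point sanity check of the eigenvector formula: for an integer
# matrix M with irreducible characteristic polynomial and a root gamma,
# the vector w_j = (-1)^{1+j} Delta_{1j}(M - gamma I) satisfies M w = gamma w.

M = [[0, 1, 0],
     [0, 0, 1],
     [1, 1, 0]]  # companion-type matrix of x^3 - x - 1 (irreducible over Q)

def charpoly(x):
    return x**3 - x - 1

# the unique real root of x^3 - x - 1, located in (1, 2), via bisection
lo, hi = 1.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if charpoly(mid) > 0:
        hi = mid
    else:
        lo = mid
gamma = (lo + hi) / 2

A = [[M[i][j] - (gamma if i == j else 0.0) for j in range(3)] for i in range(3)]

def minor_1j(A, col):
    """2x2 determinant of A with row 0 and the given column removed."""
    cols = [c for c in range(3) if c != col]
    return A[1][cols[0]] * A[2][cols[1]] - A[1][cols[1]] * A[2][cols[0]]

# w_j = (-1)^{1+j} * Delta_{1j}(M - gamma I), j = 1, 2, 3
w = [(-1) ** (1 + j) * minor_1j(A, j - 1) for j in range(1, 4)]

Mw = [sum(M[i][j] * w[j] for j in range(3)) for i in range(3)]
assert all(abs(Mw[i] - gamma * w[i]) < 1e-9 for i in range(3))  # M w = gamma w
assert any(abs(x) > 1e-9 for x in w)                            # w is non-zero
```

This reflects the standard fact that the columns of $\operatorname{adj}(A)$ span the kernel of a singular matrix $A$; the ideal-theoretic content of Taussky's theorem is of course not captured by a numeric check.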
[@kruskemper]\[Krus\] Any integral lattice is isometric to an ideal lattice $J_{K,\alpha}(\mathcal{I})$, where $\mathcal{I}$ is an ideal in ${\mathbb{Z}}[\gamma]\subset K$ for some algebraic integer $\gamma$. Furthermore, $K$ can be assumed to be totally real. The proof of Theorem \[Krus\] uses the fact that if $G$ is a symmetric integer matrix, then by seeing it as the Gram matrix of a rational quadratic form in some basis, we can find an integral matrix $E$, with non-zero determinant, and a diagonal integer matrix $D$ such that $$\label{transposetrans} E^T GE=D.$$ Now, for a positive definite diagonal matrix $D$, Kruskemper shows, using Hilbert’s irreducibility theorem, that there exists a symmetric matrix $S'$ such that $DS'$ has an irreducible characteristic polynomial $f$ with only real roots. Letting $S=ES'E^{T}$ and $M=GS$ we see that $M$ and $DS'$ are conjugate, hence have the same characteristic polynomial.\ We illustrate the Taussky-Kruskemper method in the cubic case, i.e., for integral Lagrangian lattices of dimension $N=3$. Let $a, h$ be non-negative integers with $a\neq 0$ and such that $a-2h=1$ and let $$G_{a,h} = \begin{bmatrix} \ a & -h & -h\\ -h& \ a & -h \\ -h& -h & \ a \end{bmatrix}$$ By doing row/column reduction we find that if $$E = \begin{bmatrix} \ 6 & -2 & -3 \\ \ 6 & \ 4 & \ 0 \\ \ 6 & -2 & \ 3 \end{bmatrix}$$ then, $$E^T G_{a,h} E=D,$$ where $$D= 6 \begin{bmatrix} 18(a-2h)& 0 & 0\\ 0& 4(a+h) & 0 \\ 0& 0 & 3(a+h) \end{bmatrix} = 6 \begin{bmatrix} 18& 0 & 0\\ 0& 4(a+h) & 0 \\ 0& 0 & 3(a+h) \end{bmatrix}.$$ Assume that $S'$ is a symmetric integral matrix such that $DS'$ has an irreducible characteristic polynomial $f$. We fix a root $\gamma$ of $f$, and we consider $w_\gamma$ and $w_\gamma'$ as in \eqref{eigen} and \eqref{ultimatew} respectively. 
Using equation \eqref{alpha}, we obtain $$\begin{bmatrix} a & -h & -h\\ -h& a & -h \\ -h& -h & a \end{bmatrix}w_\gamma'=\alpha w_\gamma.$$ Multiplying this on the left by the row vector $[1,1,1]$, and using that $a-2h=1$, yields $$\label{alphpha} \alpha=\frac{\sum_{i=1}^3 w'_i}{\sum_{i=1}^3 w_i}.$$ Let $N\geq 2$, $w_\gamma =(w_1, \dots,w_N)$ and $w_\gamma' =(w_1', \dots,w_N')$ be as in \eqref{eigen} and \eqref{ultimatew} respectively. Then, multiplying by $[1,\dots, 1]$ and using that $a-(N-1)h=1$, we see that $\alpha$ is $$\displaystyle \alpha=\frac{\sum_{i=1}^N w'_i}{\sum_{i=1}^N w_i}.$$ The above method is presented in [@oggier2003best], where the authors proposed this technique to construct full-diversity rotations of the orthogonal lattice ${\mathbb{Z}}^n$, i.e., the case $a=1$ and $h=0$. Using the same notation as above, we consider $a=5$ and $h=2$. Thus $$D=\begin{bmatrix} 108 & 0 & 0\\ 0 & 168 & 0\\ 0 & 0 & 126 \end{bmatrix}.$$ We take $$S'=\begin{bmatrix} \ 1 & -2 & -1\\ -2 & \ 2 & \ 1\\ -1 & \ 1 & \ 0 \end{bmatrix}.$$ The matrix $DS'$ has characteristic polynomial $f(x)=x^3 - 444x^2 - 71064x -2286144$, which is irreducible over ${\mathbb{Q}}$ and has discriminant $2^{10}\cdot3^{8}\cdot7^{2}\cdot580639$. Furthermore, fixing a real root $\gamma$ of $f$, the number field $K={\mathbb{Q}}(\gamma)$ is a cubic number field of discriminant $2^2\cdot 580639$. In particular, $K$ is totally real, is not Galois over ${\mathbb{Q}}$, and the order ${\mathbb{Z}}[\gamma]$ is a sub-ring of index $9072$ in the ring of integers of $K$. Now we proceed to calculate the ideal basis. We have $$M=G_{5,2}ES'E^T=\begin{bmatrix} \ 512 & \ 86 & \ 392\\ -454 & -124 & -322\\ \ 176 & \ 2 & \ 56 \end{bmatrix}.$$ Using \eqref{coeff}, we obtain that $G_{5,2}$ is a Gram matrix of an ideal lattice obtained from an ideal $\mathcal{I}$ with basis $\{w_1,w_2,w_3\}$, where $$\label{www} \begin{cases} w_1 = \gamma^2 +68\gamma - 6300.\\ w_2= -454\gamma - 31248.\\ w_3=176\gamma +20916. \end{cases}$$ In order to calculate $\alpha$, we first need to compute $w_\gamma'$ as in \eqref{ultimatew}. 
Using \eqref{www}, we get that the matrix of the basis $\{w_1,w_2,w_3\}$ in terms of the basis $\{1, \gamma, \gamma^{2}\}$ is $$P=\begin{bmatrix} - 6300 & - 31248 & \ 20916\\ \ 68 & -454 & \ 176\\ \ 1 & 0 & 0 \end{bmatrix}.$$ Furthermore, the Gram matrix of the trace pairing in the basis $\{1, \gamma, \gamma^{2}\}$ is $$A=\begin{bmatrix} 3& 444 & 339264\\ 444 & 339264 & 189044064\\ 339264 & 189044064 & 109060069248 \end{bmatrix}.$$ Thus $$\begin{bmatrix}w_1'\\w_2'\\w_3'\end{bmatrix} =A^{-1}\left(P^T\right)^{-1}\begin{bmatrix}1\\ \gamma\\\gamma^2\end{bmatrix}.$$ Hence, $\alpha$ is given by $$\alpha=\frac{\sum_{i=1}^3 w'_i}{\sum_{i=1}^3 w_i}= \frac{65191931\gamma^{2} - 39226301250\gamma + 1018601862840}{143063823242500989696}.$$ Summarizing: inside the order ${\mathbb{Z}}[\gamma]$, the ideal $\mathcal{I} \subseteq {\mathbb{Z}}[\gamma]$ with basis $\{w_{1}, w_{2}, w_{3}\}=\{ \gamma^2 +68\gamma - 6300, -454\gamma - 31248, 176\gamma +20916 \}$ together with $\alpha$ defines an ideal lattice $\left \langle \mathcal{I}_{\alpha}, {\rm Tr}_{K/{\mathbb{Q}}} \right \rangle$ whose Gram matrix in the basis $\{w_{1}, w_{2}, w_{3}\}$ is equal to $\displaystyle \begin{bmatrix} \ 5 & -2 & -2\\ -2& \ 5 & -2 \\ -2& -2 & \ 5 \end{bmatrix}$. We should observe that this particular Lagrangian lattice is also the ideal lattice of the full ring of integers of the maximal real sub-field of ${\mathbb{Q}}(\zeta_{7})$, with $\alpha=1$. Integral Lagrangian lattices and units in real quadratic fields =============================================================== Let $N>2$ be an integer such that $N-1$ is square-free. The field $K={\mathbb{Q}}(\sqrt{N-1})$ is a real quadratic field with ring of integers $\mathcal{O}_K={\mathbb{Z}}[\omega]$ where $$\omega ={\begin{cases}{\sqrt{N-1}}&{\mbox{if }}N\equiv 0,3{\pmod4}\\{\frac{1+{\sqrt{N-1}}} { 2}}&{\mbox{if }}N\equiv 2{\pmod4}\end{cases}}.$$ Let $\eta\in\mathcal{O}_K$. We denote by $\eta_x$ and $\eta_y$ the unique integers such that $\eta=\eta_x+\eta_y\omega$. 
We define the map $$\begin{aligned} \pi_K~:\mathcal{O}_K&\longrightarrow{\mathbb{Z}}^N\\ \eta&\mapsto (\eta_x^2,-\eta_y^2,\dots,-\eta_y^2).\end{aligned}$$ Clearly, the map $\pi_K$ is well-defined. Let $\tau $ be an $N$-cycle in the permutation group $S_{N}$. Then, $\tau(\pi_K(\eta))\neq \pi_K(\eta)$ for any $0\neq \eta\in\mathcal{O}_K$. Furthermore, if $\eta$ is a non-zero element in $\mathcal{O}_K$, the set $$\mathcal{B}_\eta=\{\tau^i(\pi_K(\eta))~:~0\leq i\leq N-1\}$$ consists of $N$ ${\mathbb{R}}$-linearly independent vectors $e_{i}=\tau^i(\pi_K(\eta))$. Thus, $\mathcal{B}_\eta$ generates a full-rank sub-lattice of ${\mathbb{Z}}^N$. We denote by $\L_\eta$ the lattice with basis $\mathcal{B}_\eta$. A straightforward calculation yields $$\label{volunit} \operatorname{vol}(\L_\eta)=(\eta_x^2+\eta_y^2)^{2(N-1)}.$$ Let $0\neq\eta\in\mathcal{O}_K$. Then the basis $\mathcal{B}_\eta=\{e_0,\dots,e_{N-1}\}$ satisfies the following properties: - $\v_1=\sum_{i=0}^{N-1}e_i=(\eta_x^2-(N-1)\eta_y^2)(1,\dots,1)^T.$ - $\langle\v_1,e_i\rangle=(\eta_x^2-(N-1)\eta_y^2)^2$ for all $0\leq i\leq N-1$. - $\langle e_i,e_i\rangle=\eta_x^4+(N-1)\eta_y^4$ for all $0\leq i\leq N-1$. - $\langle e_i,e_j\rangle=-2\eta_x^2\eta_y^2+(N-2)\eta_y^4$ for all $0\leq i,j\leq N-1$ with $i\neq j$. Observing that the basis $\mathcal{B}_{\eta}$ is Lagrangian if and only if $\eta_x^2-(N-1)\eta_y^2=1$, we get: Let $K={\mathbb{Q}}(\sqrt{N-1})$ be such that $N-1$ is a positive square-free integer and $N\equiv 0,3\pmod4$. Then the basis $\mathcal{B}_{\eta}$ is Lagrangian if and only if $\eta\in \mathcal{O}_K^*$. Using the same notation as in \[main\], we denote by $a=\eta_x^4+(N-1)\eta_y^4$ and $h=2\eta_x^2\eta_y^2-(N-2)\eta_y^4$. From here on, we consider $N\equiv 0,3\pmod4$ and recall that $N-1$ is square-free.\ Let $r, s$ be integers such that $0 \neq |r| <N$, let $\eta\in\mathcal{O}_K^*$ and let $m=r+sN$. 
Denoting by $$\L_{\eta}^{(r,s)}:=\Phi_{(r,s)}(\L_{\eta}),$$ we have by Proposition \[Index\] that $\L_{\eta}^{(r,s)}$ is a sub-lattice of $\L_{\eta}$ of index $m|r|^{N-1}$. Consequently, $$\operatorname{vol}(\L_{\eta}^{(r,s)})=m(|r|(\eta_x^2+\eta_y^2)^2)^{N-1}.$$ Let $N$ be a positive integer such that $N\equiv 0,3\pmod4$ and $N-1$ square-free. Then the $N$-dimensional lattice $\L_{\eta}^{(r,\eta_y^2)}$ has a minimal basis and a minimum $$\lambda_1(\L_\eta^{(r,\eta_y^2)})=\frac{r^2(N-1)(N\eta_y^2+1)^2+(N\eta_y^2+r)^2}{N},$$ for any norm $1$ unit $\eta$ in ${\mathbb{Q}}(\sqrt{N-1})$ and $1\leq r\leq \sqrt{N}$. Moreover, the lattice $\L_{\eta}^{(r,\eta_y^2)}$ is orthogonal if and only if $r=1$. Assume that $\N_{K/{\mathbb{Q}}}(\eta)=1$. Remarking that $$\eta_x^4+(N-1)^2\eta_y^4=1+2\eta_x^2\eta_y^2(N-1),$$ we get $$\begin{aligned} Na-1&= N\eta_x^4 +N(N-1)\eta_y^4-1\\ &=(N-1)(\eta_x^2+\eta_y^2)^2\\ &=(N-1)(N\eta_y^2+1)^2.\end{aligned}$$ Similarly, we obtain $Nh+1=(\eta_x^2+\eta_y^2)^2$. Finally, by Theorem \[main\], the lattice $\L_{\eta}^{(r,\eta_y^2)}$ has a minimal basis whenever $$\label{minimalunits} \frac{(N\eta_y^2+1)^2}{N+1}\leq \left(\frac{m}{r}\right)^2\leq (N\eta_y^2+1)^2(N+1).$$ Moreover, $$\lambda_1(\L_\eta^{(r,\eta_y^2)})=\frac{r^2(N-1)(N\eta_y^2+1)^2+(N\eta_y^2+r)^2}{N}.$$ Let $s=\eta_y^2$ and $1\leq r\leq\sqrt{N}$. Then the trivial bound $$\frac{(N\eta_y^2+1)^2}{N}\leq \Big(\frac{m}{r}\Big)^2\leq N(\sqrt{N}\eta_y^2+1)^2$$ holds. Thus, the lattice $\L_\eta^{(r,\eta_y^2)}$ has a minimal basis for all $1\leq r\leq \sqrt{N}$. Let $1\leq i,j\leq N$ with $i\neq j$. Then, Proposition \[prop2\] yields $$\displaystyle \langle \Phi_{(r,\eta_y^2)}(e_i),\Phi_{(r,\eta_y^2)}(e_j) \rangle= \frac{m^2-r^2(Nh+1)}{N}=\frac{(N\eta_y^2+r)^2-r^2(N\eta_y^2+1)^2}{N}.$$ Hence, the lattice $\L_\eta^{(r,\eta_y^2)}$ is the orthogonal lattice if and only if $r= 1$. Now we assume that $\N_{K/{\mathbb{Q}}}(\eta)=-1$. 
Again the lattice $\L_{\eta}^{(r,s)}$ has volume $$\operatorname{vol}(\L_{\eta}^{(r,s)}) =m|r|^{N-1}(N\eta_y^2 -1)^{2(N-1)}.$$ A similar manipulation of the condition $\N_{K/{\mathbb{Q}}}(\eta)^2=1$ yields $$Na-1=(N-1)(N\eta_y^2-1)^2.$$ Let $m=r+sN$. By virtue of Theorem \[main\], the lattice $\L_{\eta}^{(r,s)}$ has a minimal basis whenever $$\label{exbound} \frac{(N\eta_y^2-1)^2}{N+1}\leq \left(\frac{m}{r}\right)^2\leq (N\eta_y^2-1)^2(N+1).$$ Taking $r=-1$ and $s=\eta_y^2$, the lattice $\L_{\eta}^{(-1,\eta_y^2)}$ is generated by its minimal vectors and has a minimum $$\begin{aligned} \lambda_1(\L_{\eta}^{(-1,\eta_y^2)})&= \frac{Na-1+(N\eta_y^2-1)^2}{N}\\ &=(N\eta_y^2-1)^2.\end{aligned}$$ Moreover, $$\operatorname{vol}({\L_{\eta}^{(-1,\eta_y^2)}})=(N\eta_y^2-1)^{2N-1}.$$ Recalling that the center density of a lattice $L$ is given by $$\delta(L) =\frac{\lambda_1(L)^{N/2}}{2^N\operatorname{vol}(L)},$$ we conclude that $$\delta(\L_{\eta}^{(-1,\eta_y^2)}) =\frac{1}{2^N(N\eta_y^2-1)^{N-1}}.$$ These densities differ for different values of $\eta_y$. Hence, the lattices $\L_{\eta}^{(-1,\eta_y^2)}$ are pairwise non-similar as $\eta$ ranges over the units of norm $-1$ in $\mathcal{O}_K^*$. Note that if $K$ has a unit of norm $-1$, then the lattices $\L_{\eta}^{(-1,\eta_y^2)}$ are completely determined by the odd powers of the fundamental unit in $\mathcal{O}_K$. Let $s$ be a positive odd integer such that $s^2+1$ is square-free. Let $N=s^2+2$. Then $N\equiv 3\pmod4$ and $\varepsilon=s+\sqrt{s^2+1}$ is the fundamental unit in $\mathcal{O}_K$, where $K={\mathbb{Q}}(\sqrt{N-1})$. Moreover, we have $\N_{K/{\mathbb{Q}}}(\varepsilon)=-1$. It follows from the above analysis that the $N$-dimensional lattices $\L_{\eta}^{(-1,\eta_y^2)}$ are non-similar and generated by their minimal bases for every $\eta=\varepsilon^{2k+1}$ with $k\geq 1$. It is a classic result due to Estermann, see [@Estermann1931], that there are infinitely many square-free values of the form $s^2+1$; in fact, they form a positive proportion of the integers. 
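The norm $-1$ identity $Na-1=(N-1)(N\eta_y^2-1)^2$ can likewise be checked along the odd powers of the fundamental unit. A sketch for the smallest admissible case $s=1$ of the construction above (so $N=3$, $K={\mathbb{Q}}(\sqrt{2})$, $\varepsilon=1+\sqrt{2}$), chosen here purely for illustration:

```python
# Check of Na - 1 = (N-1)(N*eta_y^2 - 1)^2 for norm -1 units eta = eps^(2k+1),
# with s = 1 in the construction above: N = s^2 + 2 = 3, eps = 1 + sqrt(2).
s = 1
N = s * s + 2                               # N - 1 = s^2 + 1 = 2 is square-free
eta_x, eta_y = s, 1                         # eta = eps, of norm -1
for _ in range(4):
    assert eta_x**2 - (N - 1) * eta_y**2 == -1         # norm -1
    a = eta_x**4 + (N - 1) * eta_y**4
    assert N * a - 1 == (N - 1) * (N * eta_y**2 - 1)**2
    # multiply by eps^2 = (2s^2+1) + 2s*sqrt(s^2+1) to step through odd powers
    eta_x, eta_y = ((2*s*s + 1) * eta_x + 2*s*(s*s + 1) * eta_y,
                    2*s * eta_x + (2*s*s + 1) * eta_y)
```

Each pass produces a different $\eta_y$, and hence a different center density $1/(2^N(N\eta_y^2-1)^{N-1})$, in line with the non-similarity claim.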
(See also [@articleHB] for details.) Hence the following theorem, which summarizes the analysis of this section, gives us a recipe to construct infinitely many non-isomorphic lattices with minimal bases. Let $N=s^2+2$, where $s$ is an odd integer such that $s^2+1$ is square-free. Then the $N$-dimensional lattice $\L_{\eta}^{(-1,\eta_y^2)}$ has a minimal basis, whenever $\eta=(s+\sqrt{s^2+1})^{2k+1}$ and $k\geq 0$. Furthermore, the lattices $\L_{\eta}^{(-1,\eta_y^2)}$ are non-similar for different values of $k$. [Mohamed Taoufiq Damir. Department of Mathematics and Systems Analysis, Aalto University, Espoo, Finland. ([mohamed.damir@aalto.fi]{})]{} [Guillermo Mantilla-Soler, Department of Mathematics, Universidad Konrad Lorenz, Bogotá, Colombia. Department of Mathematics and Systems Analysis, Aalto University, Espoo, Finland. ([gmantelia@gmail.com]{})]{} [^1]: M. T. Damir’s work was supported in part by the Academy of Finland, under Grant No. 318937 (Aalto University PROFI funding to C. Hollanti). [^2]: G. Mantilla Soler’s work was supported in part by the Aalto Science Institute. [^3]: According to Conner and Perlis (see [@CoPe IV.8.1]), this name is due to Hilbert.
--- abstract: 'Geometrical frustration has led to rich insights into condensed matter physics, especially as a mechanism to produce exotic low energy states of matter. Here we show that frustration provides a natural vehicle to generate models exhibiting anomalous thermalization of various types within high energy states. We consider three classes of non-integrable translationally invariant frustrated spin models: (I) systems with local conserved quantities where the number of symmetry sectors grows exponentially with the system size but more slowly than the Hilbert space dimension, (II) systems with exact eigenstates that are singlet coverings, and (III) flat band systems hosting magnon crystals. We argue that several 1D and 2D models from class (I) exhibit disorder-free localization in high energy states so that information propagation is dynamically inhibited on length scales greater than a few lattice spacings. We further show that models of class (II) and (III) exhibit quantum many-body scars $-$ eigenstates of non-integrable Hamiltonians with finite energy density and anomalously low entanglement entropy. Our results demonstrate that magnetic frustration supplies a means to systematically construct classes of non-integrable models exhibiting anomalous thermalization in mid-spectrum states.' author: - 'Paul A. McClarty' - Masudul Haque - Arnab Sen - Johannes Richter bibliography: - 'references.bib' title: 'Disorder-Free Localization and Many-Body Quantum Scars from Magnetic Frustration' --- Introduction {#sec:Introduction} ============ There is strong evidence that most eigenstates of non-integrable many-body Hamiltonians, if sufficiently far from the spectral edges, are “thermal” in the sense that expectation values of local observables on such eigenstates match well to the predictions of statistical mechanics [@DAlessio_review2016]. 
This observation is formalized in the Eigenstate Thermalization Hypothesis (ETH) [@PhysRevA.43.2046; @PhysRevE.50.888; @Rigol_Nature2008; @Reimann_NJP2015; @DAlessio_review2016; @Deutsch_RepProgPhys2018], and is tied to the success of random matrix theory in describing some properties of the many-body spectrum, such as level repulsion. The complete breakdown of thermalization occurs only in extreme instances. One widely known example of anomalous thermalization is in integrable quantum systems where there is no level repulsion between eigenvalues and the long time averages of local observables approach a distribution that is tethered to the presence of an extensive number of conserved quantities [@vidmar2016generalized]. Another well-known example is the many-body localized (MBL) phase in interacting disordered systems in which high energy states have area law entanglement and in which an extensive number of local integrals of the motion are emergent [@nandkishore2015many]. In both the MBL phase and in integrable systems, the majority of eigenstates depart very much from random states compared to those of generic non-integrable models. In this paper, we discuss two other types of anomalous thermalization, in translationally invariant, non-integrable interacting models: [*disorder-free localization*]{} and [*many-body quantum scars*]{}. Exploiting insights from the field of frustrated quantum magnetism, we show how to design classes of many-body systems which display physics of one of these types. Disorder-free localization is a variant of many-body localization in translationally invariant systems [@PhysRevLett.118.266601; @PhysRevLett.120.030601; @PhysRevLett.121.040603; @vanNieuwenburg9269; @PhysRevX.10.011047; @PhysRevB.99.180302; @Kuno_2020; @2020arXiv200304901K; @danieli2020manybody]. In this phenomenon, information propagation is inhibited by the emergence of a localization length. For example, Ref.  
introduces a spin chain coupled to complex fermions with an extensive number of conserved quantities that maps to free fermions in a disorder potential generated by the different configurations of the symmetry sectors so that each sector is Anderson localized. In the case of many-body quantum scars, an otherwise apparently unexceptional spectrum of eigenstates is peppered with highly athermal states. Such states were found to occur in the PXP chain — a kinetically constrained model of spins one-half — that has been simulated experimentally on a chain of Rydberg atoms [@bernien2017probing; @PhysRevB.69.075106]. These athermal eigenstates are called many-body quantum scars after their non-ergodic counterparts in single particle semi-classical chaos that trace out periodic trajectories in phase space but are perturbatively connected to chaotic states [@PhysRevLett.53.1515]. Many-body quantum scars are characterized by their anomalously low entanglement and through local observables that strongly depart from random matrix predictions. The dynamics of states prepared with significant overlap with scar eigenstates is also anomalous. For a family of special initial states, unitary evolution leads to large amplitude oscillations in the entanglement entropy and in local correlation functions. This anomalous non-thermalizing dynamics is tied to the large overlap of the initial state with quantum scars [@2018NatPh..14..745T; @PhysRevLett.122.220603; @PhysRevLett.122.040603; @PhysRevB.98.155134]. Indeed, the evolution can be thought of as taking place predominantly within the subspace spanned by the scar states. It is therefore analogous to precessional dynamics of a single spin, albeit one emerging from degrees of freedom across the many-body spectrum. 
Other than the PXP chain, a number of models have been found that exhibit quantum many-body scar states, including the AKLT chain [@PhysRevB.98.235156; @PhysRevB.98.235155; @shiraishi2019connection], the 1D transverse field Ising model with a longitudinal field that causes excitations to confine, thus inhibiting their thermalization [@PhysRevLett.122.130603; @PhysRevB.99.195108], quantum Hall systems in the thin torus limit [@2019arXiv190605292M], the fermionic Hubbard model in higher than 1D [@2017ScPP....3...43V; @PhysRevLett.123.036403], periodically driven matter [@PhysRevB.101.245107; @sugiura2019manybody; @mukherjee2020dynamics; @mizuta2020exact], and topologically ordered systems including fracton models [@PhysRevResearch.1.033144; @PhysRevB.101.174204; @PhysRevLett.123.136401; @shiraishi2019connection], among other examples [@PhysRevB.100.184312; @Hudomal_2020; @2019PhRvL.123n7201S; @PhysRevB.101.241111; @PhysRevLett.123.030601; @PhysRevLett.119.030601; @PhysRevX.10.011047; @PhysRevLett.124.180604; @hart2020random; @dooley2020enhancing; @turner2020correspondence]. Geometrical frustration is well-known to lead to many interesting and exotic phenomena, including flat bands, quantum and classical spin liquids and fractionalization [@lacroix2011introduction; @Starykh_2015; @SavaryBalents; @Derzhko_2015]. In this paper, we describe how geometrical frustration supplies a mechanism to construct models with anomalous thermalization including both disorder-free localization and many-body scar states. The models, as explained in Section \[sec:models\], are all localized spin models with antiferromagnetic couplings on lattices of triangular units that are the basic units underlying frustrated magnetism. We introduce three classes of models: (I) non-integrable models with local conservation laws, (II) models with protected singlet coverings that can be tuned through the spectrum and (III) flat band models hosting localized magnon states and magnon crystals. 
Models from class (I) are intermediate between non-integrable models that typically have $O(1)$ conservation laws and integrable models in which the number of conserved quantities equals the number of local degrees of freedom so that all states are specified by quantum numbers associated to the conserved quantities. In Section \[sec:LIOM\], we discuss the thermalization properties of typical eigenstates in models from class (I). Then, in Section \[sec:scars\], we give various examples of models in 1D and 2D exhibiting many-body scar states from classes (II) and (III). Some of these models are realized to a good approximation in certain magnetic materials such as SrCu$_2($BO$_3)_2$ and are therefore rather well-known in a different context. Models and Mechanism {#sec:models} ==================== We now introduce the mechanism that we exploit to write down models exhibiting anomalous mid-spectrum states. This mechanism is based on the simplest frustrated unit: three spins coupled by antiferromagnetic Heisenberg exchange. We will then discuss three separate classes of magnetic systems combining such frustrated units. Consider the Heisenberg model with antiferromagnetic couplings on a triangle of spins one-half with one distinguished bond. The Hamiltonian is $$H_{\Delta}=J \boldsymbol{S}_1\cdot \boldsymbol{S}_2 + J' \boldsymbol{S}_1\cdot \boldsymbol{S}_3 + J' \boldsymbol{S}_2\cdot \boldsymbol{S}_3 \label{eqn:triang}$$ with $J,J'>0$. We refer to the $(\boldsymbol{S}_1,\boldsymbol{S}_2)$ bond as the distinguished bond, the $J$ bond, or the dimer. For this geometrically frustrated triangular unit, the total spin $(\boldsymbol{S}_1 + \boldsymbol{S}_2)^2$ is a conserved quantity. It follows that the singlet state on the distinguished bond, $\vert 0 \rangle \equiv \frac{1}{\sqrt{2}}(\lvert\uparrow\downarrow\rangle -\lvert\downarrow\uparrow\rangle)$, is protected — eigenstates will have well-defined total spin on the distinguished bond. 
In a loose sense, this feature arises from destructive interference on the two identical $J'$ bonds and so it is destroyed if those bonds are made inequivalent. To analyze further the spectrum of the triangular plaquette, we introduce projectors $P_{S=0}(\boldsymbol{S}_i, \boldsymbol{S}_j)$ and $P_{S=1}(\boldsymbol{S}_i, \boldsymbol{S}_j)$ — the total spin $0$ and $1$ projectors for spins $i$ and $j$, as well as $$P_{S=3/2}(\boldsymbol{S}_i, \boldsymbol{S}_j,\boldsymbol{S}_k) \equiv \frac{1}{3}\left( \boldsymbol{S}_i + \boldsymbol{S}_j + \boldsymbol{S}_k \right)^2 - \frac{1}{4}$$ — the projector onto the total spin $3/2$ sector of three spins. Then the Hamiltonian can be rewritten as $$\begin{aligned} H_{\Delta} & = \frac{3}{2}J' P_{S=3/2}(\boldsymbol{S}_1, \boldsymbol{S}_2,\boldsymbol{S}_3) - \frac{3J'}{4} \nonumber \\ & + (J-J') \left( -\frac{3}{4} P_{S=0}(\boldsymbol{S}_1, \boldsymbol{S}_2) + \frac{1}{4}P_{S=1}(\boldsymbol{S}_1, \boldsymbol{S}_2) \right) . \label{eq:proj}\end{aligned}$$ The projectors mutually commute. So, for typical couplings, the spectrum splits up into total spin $3/2$ and $1/2$ sectors, as well as singlet and triplet sectors on the distinguished ($J$) bond. This means that there is a four-fold degenerate spin $3/2$ level. Each of these four states has a triplet on the $J$ bond. There are also two doublets corresponding to total spin $1/2$. The $J$ bond is a singlet in one of these degenerate pairs and is a triplet in the other pair. At the fully frustrated point $J=J'$, the last term in Eq.  vanishes, so the two doublets merge — there is a level crossing at $J'/J=1$. The singlet is the ground state for $J'/J < 1$. While we focus in this work on spin-$1/2$ systems, the existence of a conserved total spin on the distinguished bond generalizes to any spin, $S$: the singlet state of two spins $S$ on the $J$ bond is an exact eigenstate with energy $-JS(S+1)$. 
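For spin one-half, the level structure read off from the projector form can be confirmed by exact diagonalization of the $8\times 8$ plaquette Hamiltonian. A minimal sketch (the coupling values are illustrative assumptions):

```python
import numpy as np

# Exact diagonalization of H_triangle for three spins-1/2: a fourfold S_tot = 3/2
# level at J/4 + J'/2, a doublet with a J-bond triplet at J/4 - J', and a doublet
# with a J-bond singlet at the J'-independent energy -3J/4.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def op(mat, site, n=3):
    """Embed a single-site operator at `site` in an n-spin Hilbert space."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, mat if k == site else np.eye(2))
    return out

def heis(i, j):
    return sum(op(m, i) @ op(m, j) for m in (sx, sy, sz))

J, Jp = 1.0, 0.7                      # illustrative couplings with J'/J < 1
H = J * heis(0, 1) + Jp * (heis(0, 2) + heis(1, 2))
E = np.linalg.eigvalsh(H)             # ascending eigenvalues

assert np.allclose(E[:2], -0.75 * J)          # singlet doublet, ground for J' < J
assert np.allclose(E[2:4], J / 4 - Jp)        # triplet doublet
assert np.allclose(E[4:], J / 4 + Jp / 2)     # spin-3/2 quartet
```

Rerunning with a different $J'$ moves the last two levels but leaves the singlet doublet pinned at $-3J/4$, which is the protection exploited below.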
Diagonalization of the spin $S$ Hamiltonian reveals that new protected states (with $J'$-independent energy) can arise for $S\geq 3/2$. The triangular plaquette with Heisenberg exchange and one distinguished $J$ bond provides the basic unit to create lattice models with disorder-free localization and many-body scars. We distinguish three different cases. [**Class (I):**]{} In general, $P_{S=3/2}$ operators on adjacent triangular units do not commute with one another. However, there are various ways to combine the triangles such that the total spin conservation on $J$ bonds is preserved. For example, this is achieved by connecting triangular units back-to-back and then connecting these four-spin structures via the dangling spins. Examples include the orthogonal dimer chain in Fig. \[fig:dimer\_lattices\](a) [@richter1998antiferromagnetic; @PhysRevB.62.5558; @PhysRevB.65.054420] and the diamond chain in Fig. \[fig:dimer\_lattices\](b). These models have spin conservation on the vertical bonds. So do the fully frustrated ladder in Fig. \[fig:dimer\_lattices\](c) and the bilayer in Fig. \[fig:dimer\_lattices\](f). Ref.  argues that the latter model with XXZ couplings is realized in a particular material to a good approximation. In this class of lattices, the frustration mechanism is responsible for an extensive number of conservation laws that is, however, smaller than the number of degrees of freedom. For example, in the orthogonal dimer chain there is one conserved quantity per unit cell of four spins. Such models are intermediate between integrable models — in which the number of local conserved quantities equals the number of degrees of freedom — and generic non-integrable systems — which have $O(1)$ conserved quantities. We will address the question of whether typical states in class (I) models thermalize. 
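The extensive conservation law of class (I) is simple to verify directly: for the fully frustrated ladder, the inter-rung coupling is $J'\,\boldsymbol{S}^{\rm tot}_i\cdot\boldsymbol{S}^{\rm tot}_{i+1}$, so every rung total spin commutes with $H$. A sketch for a small open ladder (three rungs, an illustrative size chosen here):

```python
import numpy as np

# Check that (S_{i,1} + S_{i,2})^2 on every rung commutes with the fully
# frustrated ladder Hamiltonian (all four inter-rung bonds carry J').
# A 3-rung (6-spin) open ladder is assumed for brevity.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
paulis = (sx, sy, sz)

def op(mat, site, n):
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, mat if k == site else np.eye(2))
    return out

def heis(i, j, n):
    return sum(op(m, i, n) @ op(m, j, n) for m in paulis)

n_rungs, n = 3, 6                    # sites 2i and 2i+1 form rung i
J, Jp = 1.0, 2.0
H = sum(J * heis(2*i, 2*i + 1, n) for i in range(n_rungs))
for i in range(n_rungs - 1):         # frustrated inter-rung block: all 4 bonds J'
    for a in (0, 1):
        for b in (0, 1):
            H = H + Jp * heis(2*i + a, 2*(i + 1) + b, n)

for i in range(n_rungs):
    S2 = sum((op(m, 2*i, n) + op(m, 2*i + 1, n))
             @ (op(m, 2*i, n) + op(m, 2*i + 1, n)) for m in paulis)
    assert np.allclose(H @ S2, S2 @ H)   # rung total spin is conserved
```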
![image](Dimer_Lattice_Figure.pdf){width="95.00000%"} [**Class (II):**]{} If we relax the constraint that the total spin on each $J$ bond be conserved, we can nevertheless devise lattices that retain the $J$ bond singlet covering as an exact eigenstate. To see how this can be done, we take the example of the sawtooth chain [@PhysRevB.53.6401; @PhysRevB.67.054412] (Fig. \[fig:dimer\_lattices\](e)) with Hamiltonian $$H_{\rm ST}= \sum_i \left( J \boldsymbol{S}_{i,1}\cdot \boldsymbol{S}_{i,2} + J' \boldsymbol{S}_{i,1}\cdot \boldsymbol{S}_{i+1,1} + J' \boldsymbol{S}_{i,2}\cdot \boldsymbol{S}_{i+1,1} \right). \label{eqn:sawtooth}$$ We write this in terms of projectors as indicated in Eq. \[eq:proj\] and note that a state that satisfies $P_{S=3/2}(\boldsymbol{S}_{i,1}, \boldsymbol{S}_{i,2},\boldsymbol{S}_{i+1,1})=0$ and $P_{S=1}(\boldsymbol{S}_{i,1}, \boldsymbol{S}_{i,2})=0$ is an exact eigenstate with energy $-3J N_{\Delta}/4$, where $N_{\Delta}$ is the number of triangles. These conditions constrain each triangle to have total spin $S=1/2$ while the $J$ bonds have $S=0$. The product state with singlets on the $J$ bonds has these properties and is the unique state that does. These are many-body quantum scars because dimer coverings are highly atypical states (often with area law entanglement entropy) that can be embedded within the many-body spectrum of a translationally invariant model with no local conservation laws. This reasoning is reminiscent of the embedding argument of Shiraishi and Mori [@PhysRevLett.119.030601] that gives a systematic way to place athermal states into the spectrum of a many-body Hamiltonian. We briefly review this result. We introduce a set of local projection operators $P_\alpha$ that need not commute. The scar states are those that are annihilated by all the projectors: $P_\alpha\vert \Psi\rangle=0$. 
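For the sawtooth chain, a state annihilated by the relevant projectors can be exhibited concretely: the $J'$ terms annihilate the singlet covering because $(\boldsymbol{S}_{i,1}+\boldsymbol{S}_{i,2})$ kills each singlet, so the energy comes entirely from the $J$ bonds. A minimal numerical check for a small periodic chain (three triangles, an illustrative size):

```python
import numpy as np

# Check that the product of singlets on the J bonds is an exact eigenstate of a
# periodic sawtooth chain (3 triangles, 6 spins assumed here; site 2i = S_{i,1},
# site 2i+1 = S_{i,2}). The J' terms annihilate the state, leaving -3J/4 per dimer.
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def op(mat, site, n):
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, mat if k == site else np.eye(2))
    return out

def heis(i, j, n):
    return sum(op(m, i, n) @ op(m, j, n) for m in (sx, sy, sz))

N_tri, n = 3, 6
J, Jp = 1.0, 1.7                     # illustrative couplings
H = sum(J * heis(2*i, 2*i + 1, n)
        + Jp * heis(2*i, 2*((i + 1) % N_tri), n)
        + Jp * heis(2*i + 1, 2*((i + 1) % N_tri), n)
        for i in range(N_tri))

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
psi = singlet
for _ in range(N_tri - 1):
    psi = np.kron(psi, singlet)

Hpsi = H @ psi
E = np.vdot(psi, Hpsi).real
assert np.allclose(Hpsi, E * psi)            # exact eigenstate
assert np.isclose(E, -0.75 * J * N_tri)      # energy independent of J'
```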
There is a class of Hamiltonians with such states as eigenstates: $$H = \sum_{\alpha} P_\alpha \hat{h}_\alpha P_\alpha + H' \label{eq:Hscar}$$ where $[H',P_\alpha] =0$ and $\hat{h}_{\alpha}$ is an arbitrary local operator. There are many-body scars in the concrete sense described above because $$P_\alpha H\vert \Psi\rangle = P_\alpha H' \vert \Psi\rangle = H' P_\alpha \vert \Psi\rangle = 0.$$ The example of the sawtooth chain is a special case of this kind of mechanism where the Hamiltonian is merely a sum of projectors with the remarkable feature that the conditions $P_\alpha\vert \Psi\rangle=0$ are solved by a dimer covering. It is evident from the foregoing that the dimer covering eigenstate appears in certain lattices composed of triangular units. There is a large class of such lattices. Apart from the sawtooth lattice, we show that the Shastry-Sutherland lattice (Fig. \[fig:dimer\_lattices\](g)) and the maple leaf lattice (Fig. \[fig:mllshsu\] (right)) exhibit similar physics. In addition to $J$-$J'$ Heisenberg models, we consider the counterpart XXZ models by including the perturbation $$H'_{\lambda} = \lambda \sum_i \left( J S^z_{i,1} S^z_{i,2} + J' S^z_{i,1}S^z_{i+1,1} + J' S^z_{i,2}S^z_{i+1,1} \right). \label{eqn:sawtoothxxz}$$ This perturbation commutes with the projection operators and is therefore equivalent to switching on $H'$ in Eq. \[eq:Hscar\]. The physics we have presented above is thus preserved. When using an XXZ anisotropy, the total spin is no longer a conserved quantum number. Hence, for nonzero $\lambda$, the spectrum is not separated into sectors corresponding to different values of the total spin. This is convenient, e.g., when calculating level statistics. It is also possible to generalize the frustration mechanism that generates local conservation laws (class (I)) from dimers to trimers, quadrumers and so on. For example, a Heisenberg-coupled triangle with all exchange couplings equal to $J$ has a singlet eigenstate when each spin is integer-valued. 
If we couple this triangle to one other spin through $J'$ exchange, the singlet remains an exact eigenstate and one can build chains of such units, such as the pyrochlore chain in Fig. \[fig:dimer\_lattices\](d), which belongs to class (I). To generalize this to more spatially extended singlets, we simply require that the polygonal unit admits a singlet state $-$ if the polygon has an odd number of vertices, the individual spins have to be integer-valued, while no such constraint holds for an even number of vertices. [**Class (III):**]{} A third class of interesting frustrated models deriving from the $H_{\Delta}$ model on a triangular plaquette is the famous class of models with a flat band of one-magnon states leading to localized multi-magnon states. For a recent review see Ref. [@Derzhko_2015]. An example is the Heisenberg $J$-$J'$ model on a sawtooth chain with the distinguished bonds along the spine of the sawtooth, in contrast to the case discussed above with $J$ bonds on the left or right jagged edges of the chain [@PhysRevLett.88.167207; @PhysRevB.70.100403]. Suppose $J=1$ and $J'=2$, and start from the product state $\vert \uparrow\ldots \uparrow\rangle$. Now apply the operator $\Sigma^-_i \equiv (S_{i-1}^- - 2S_{i}^- + S_{i+1}^-)$ to the state $\vert \uparrow_{i-1}\uparrow_{i}\uparrow_{i+1}\rangle$, where $i$ is a site at the base of one of the “valleys" on the sawtooth chain. This produces a single localized magnon state. It turns out that this model has a pair of exact many-body eigenstates formed by applying $\Sigma^-_i$ to every even, or odd, valley along the chain. These states live within the sector at half of the saturation magnetization. This construction is highly fine-tuned: a small change of the $J'/J$ coupling destroys the magnon localization, whereas in classes (I) and (II) the protected states are robust to changes in the ratio $J'/J$. 
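The localized magnon is easy to verify in the one-magnon sector: relative to the fully polarized state, $(H-E_{\rm FM})\vert j\rangle = \sum_{k} (J_{jk}/2)(\vert k\rangle - \vert j\rangle)$ over bonds $(j,k)$, and the valley state $\vert i-1\rangle - 2\vert i\rangle + \vert i+1\rangle$ is an exact eigenvector. A sketch (the chain length is an illustrative assumption; even sites are spine/valley sites, odd sites are apexes):

```python
import numpy as np

# One-magnon check of the localized magnon on a periodic sawtooth chain with
# J = 1 on the spine and J' = 2 on the zigzag bonds.
L = 6                                   # number of spine (valley) sites, assumed
n = 2 * L                               # even sites: spine; odd sites: apex
J, Jp = 1.0, 2.0
bonds = []
for m in range(L):
    bonds.append((2*m, (2*m + 2) % n, J))        # spine-spine
    bonds.append((2*m, 2*m + 1, Jp))             # spine-apex
    bonds.append((2*m + 1, (2*m + 2) % n, Jp))   # apex-next spine

H1 = np.zeros((n, n))                   # one-magnon Hamiltonian, H - E_FM
for j, k, Jb in bonds:
    H1[j, j] -= Jb / 2                  # diagonal cost of the flipped spin
    H1[k, k] -= Jb / 2
    H1[j, k] += Jb / 2                  # magnon hopping
    H1[k, j] += Jb / 2

i = 2                                   # a valley site; its apex neighbors are i-1, i+1
psi = np.zeros(n)
psi[i - 1], psi[i], psi[i + 1] = 1.0, -2.0, 1.0
assert np.allclose(H1 @ psi, -2 * Jp * psi)   # exact localized eigenstate
```

Changing `Jp` away from $2J$ makes the final assertion fail, which is the fine-tuning noted above.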
However, later in this paper (Section \[subsec:square\_kagome\]), we report on a crystal of localized magnon states and its excitations in another member of class (III), which can form many-body scar states that are robust to changes of all the couplings. Thermalization Dynamics in Frustrated Models with Local Conservation Laws {#sec:LIOM} ========================================================================= In this section, we consider models from class (I), focusing on two examples: the orthogonal dimer chain (Fig. \[fig:dimer\_lattices\](a)) and the fully frustrated ladder (Fig. \[fig:dimer\_lattices\](c)). We show that the distribution of the entanglement of mid-spectrum eigenstates has a large variance in contrast to usual non-integrable models. Also, both models violate the usual ETH scaling of eigenstate matrix element distributions. These results show that mid-spectrum states of the models are highly unusual although both are non-integrable. We go further and argue that, in fact, these models exhibit a variant of many-body localization albeit in the absence of quenched disorder. We explain that the local conserved quantities fragment the eigenstates in real space leading to a localization length of the order of a few lattice spacings and demonstrate that the dynamics of the fully frustrated ladder is consistent with the picture of disorder-free localization. At the end of the section, we provide concrete examples of two-dimensional translationally invariant spin models that, through a mapping to a percolation picture, can be argued to exhibit similar phenomena. As a concrete example of a model from class (I), we consider the orthogonal dimer chain [@richter1998antiferromagnetic; @PhysRevB.62.5558; @PhysRevB.65.054420] shown in Fig. \[fig:dimer\_lattices\](a). In common with other models in this class, this model has total spin conserved on each bond with $J$ exchange. 
The chain has four sites per unit cell and hence, for spin one-half, $2^{4C}=16^C$ states where $C$ is the number of unit cells. The number of symmetry sectors also grows exponentially in the system size but with a smaller exponent: as $2^C$. This model is distinct from integrable models in which the number of symmetry sectors equals the number of states. For example, in free fermion models each state belongs to a unique quasiparticle number sector while, for the Heisenberg chain, each state has a unique set of Bethe quantum numbers. The orthogonal dimer chain model is also distinct from typical non-integrable models in which the number of conservation laws is constant and of order one. A second example of this type of model is the fully frustrated ladder (Fig. \[fig:dimer\_lattices\](c)) [@PhysRevB.43.8644; @honecker2000magnetization; @PhysRevB.82.214412] which has two sites per unit cell and hence $4^C$ states and $2^C$ symmetry sectors. In all such examples, the size of the subspace within a typical sector grows exponentially with the system size. An obvious question is the extent to which the thermalization properties of this class of models emulate those of well-known integrable and non-integrable models. To start addressing this question, we consider the entanglement of the eigenstates, measured using the von Neumann entropy $S_{\rm vN}=-{\rm Tr}\left(\rho_{\rm A}\log \rho_{\rm A}\right)$, where $\rho_{\rm A}$ is the reduced density matrix for subsystem $A$. The entanglement entropy for a cut through $J'$ bonds on a $16$-site fully frustrated ladder with blocks of $8$ spins on each subsystem is shown in Fig. \[fig:liom\](a). Of the $12870$ states, there is a single one with zero entanglement on this cut. 
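The entanglement entropies used throughout this section follow from the Schmidt spectrum of the bipartition. A minimal sketch of the computation, checked on two states with known entropy (the subsystem dimensions are illustrative assumptions):

```python
import numpy as np

# S_vN = -Tr(rho_A log rho_A) from the Schmidt (singular value) spectrum of a
# pure state reshaped across a bipartition A|B.
def svn(psi, dim_A, dim_B):
    schmidt = np.linalg.svd(psi.reshape(dim_A, dim_B), compute_uv=False)
    p = schmidt**2                       # Schmidt weights = eigenvalues of rho_A
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

dA = dB = 16                             # e.g. 4 spins-1/2 in each half (assumed)

prod = np.zeros(dA * dB)                 # product state: zero entanglement
prod[0] = 1.0
assert np.isclose(svn(prod, dA, dB), 0.0)

bell = np.eye(dA).reshape(-1) / np.sqrt(dA)   # maximally entangled across the cut
assert np.isclose(svn(bell, dA, dB), np.log(dA))
```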
However, the more striking observation is that there is a very large number of low entanglement states similar to analogous results for integrable models [@Alba_2009; @Beugeling_2015; @PhysRevLett.119.020601; @LeBlond_Mallaya_Vidmar_Rigol_PRE2019] and the entanglement generally falls significantly below that expected for completely random states (Fig. \[fig:liom\](c)). The entanglement for the orthogonal dimer chain is similar and shown in Fig. \[fig:liom\](b). ![Entanglement within total $S^z=0$ sector for the fully frustrated ladder (a) ($J=1$, $J'=2$, $\lambda=0.5$, $N=16$ and an $8,8$ site bipartition) and the orthogonal dimer chain (b) ($J=J'=1$, $N=16$ and a $9,7$ bipartition) with periodic boundary conditions. Both plots show an anomalously broad distribution originating from the conserved total spin on each distinguished bond. For comparison, the average entanglement of a random state with the same Hilbert space dimension is $4.949$ for the $8,8$ bipartition and $4.527$ for the $9,7$ bipartition as indicated by horizontal lines in the figures. Panel (c) shows the ratio of the average entanglement in the middle of the spectrum to the maximal entanglement as a function of the subsystem size $L_A$. The lines in blue show the envelope of maximal entanglement. Panel (d) shows the scaling of the width $\sigma$ of the distribution of off-diagonal matrix elements $\langle E_A \vert (1/2)(S_{i}^{+}S_{j}^{-}+S_{i}^{-}S_{j}^{+})\vert E_B \rangle$ consistent with a power law $\sigma \sim 1/L^\alpha$ and $\alpha=3.7$. \[fig:liom\] ](LIOM_Figure_Plus_Scaling.pdf){width="\columnwidth"} We now investigate whether the eigenstate thermalization hypothesis is obeyed by the eigenstates of models within class (I). A natural expectation would be that thermalization takes place as for non-integrable models within each exponentially large symmetry sector. 
To add weight to this hypothesis, let us consider the fully frustrated ladder within the sector in which every dimer bond has total $S=1$. In this sector, the dimer bonds map to composite spins one and the coupling between them is simply a Heisenberg coupling because $(\boldsymbol{S}_{i,1}+\boldsymbol{S}_{i,2})\cdot(\boldsymbol{S}_{i+1,1}+\boldsymbol{S}_{i+1,2})$ is just the set of $J'$ couplings between rungs of the fully frustrated ladder. Thus, the “all triplets” sector is effectively a Heisenberg-coupled spin one chain (spin one Haldane chain), which is not integrable and hence is expected to obey ETH. Thus we have one example of an exponentially large sector that obeys ETH, in a model with an exponentially large number of symmetry sectors. We can imagine preparing the system in a random state within the sector with all rungs having $S=1$ and with some energy density — we should find that observables at long times can be described by a statistical ensemble average of eigenstates within this sector at some fixed temperature set by the initial energy density. The sector with $S=1$ on all rungs has exponentially small weight in the whole Hilbert space. We must consider all other sectors if we are to understand the gross thermalization properties of the model. The composite spin picture described above sheds light on all the remaining sectors. Each sector has well-defined $S=0$ or $S=1$ on each rung of the ladder. We know that consecutive $S=1$ rungs map to Haldane chains. The presence of $S=0$ rungs has the effect of completely decoupling neighboring Haldane chain fragments. It follows that a state prepared in a given sector cannot completely thermalize because entanglement cannot spread beyond $S=0$ rungs. In other words, there is dynamical localization in each sector. Since we are interested in the thermalization of typical states, it is necessary to address how the amplitude in such a state is distributed among the configurations with different chain lengths. 
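Weighting each rung by its Hilbert space dimension (three triplet states versus one singlet) gives a triplet probability of $3/4$ per rung, and the resulting fragment statistics can be checked by a quick Monte Carlo sketch:

```python
import random

# Monte Carlo estimate of the mean length of maximal runs of S=1 rungs when each
# rung is independently a triplet with probability p = 3/4 (3 of 4 rung states)
# and a singlet with probability q = 1/4. Expected mean run length: 1/q = 4.
random.seed(0)
p, trials = 3 / 4, 200_000
runs, current = [], 0
for _ in range(trials):
    if random.random() < p:
        current += 1
    else:
        if current > 0:
            runs.append(current)
        current = 0
mean_len = sum(runs) / len(runs)
assert abs(mean_len - 4.0) < 0.1      # typical Haldane fragments are short
```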
The distribution of chain lengths must be calculated by weighting the configurations by the dimension of their Hilbert space. This distribution $P(\ell)$ is equivalent to the distribution of success run lengths, $\ell$, in Bernoulli trials with a weighted coin producing heads with probability $p=3/4$ and tails with probability $q=1/4$. The distribution of $\ell$ consecutive $S=1$ rungs is evidently $p^\ell = \exp(-a \ell)$ with $a\approx 0.28$, so short fragments are overwhelmingly important among the set of all symmetry sectors. Indeed, the mean $S=1$ chain length is about $4$. We conclude that typical states $-$ those that can be decomposed into a linear combination of symmetry sectors of roughly equal weight $-$ must be localized apart from the exponentially small tail that lives in the sector with all rungs $S=1$. This is an example of so-called disorder-free localization as the model is translationally invariant. Similarly to the case of MBL, high energy states in the spectrum are dynamically localized. However, unlike the MBL phase, the fully frustrated ladder is fine-tuned and the anomalous thermalization properties we have described cannot survive sufficiently large generic perturbations. We now turn to the question of whether signs of this physics can be observed numerically. We focus now on the fully frustrated ladder, because it has only $2$ sites per unit cell and so we are able to study a wider range of system sizes than in the other models described above. We first address whether eigenstates of the model obey ETH, meaning that we consider some local operator $\hat{O}$ and compute its eigenstate matrix elements. 
If ETH is satisfied, as is generally the case in non-integrable models, then [@srednicki1996thermal; @srednicki1999approach] $$\langle E_A \vert \hat{O} \vert E_B \rangle = \delta_{AB} f_{O}^{(1)}(\bar{E}) + e^{-S(\bar{E})/2}f_{O}^{(2)}(\bar{E},\omega) R_{AB}$$ where $S\sim \log \mathcal{D}$ is the entropy and $\mathcal{D}$ is the Hilbert space dimension, $\vert E_A \rangle$ is an energy eigenstate with eigen-energy $E_A$, $\bar{E}=(E_A + E_B)/2$, and $\omega=E_B - E_A$. The $f_{O}^{(1/2)}$ are smooth functions, and $R_{AB}$ is a (pseudo) random variable with zero mean and unit variance. A crucial aspect of ETH is the scaling of the width of the distribution of either diagonal or off-diagonal matrix elements: the width falls off as $e^{-S(\bar{E})/2}\sim \mathcal{D}^{-1/2}$, i.e., exponentially with system size. This scaling is based on the similarity between typical many-body eigenstates and random states [@Marquardt_PRE12; @Beugeling_scaling_PRE14; @Beugeling_offdiag_PRE2015]. This behavior contrasts sharply with integrable systems, which do not obey ETH scaling — the width of the distribution of diagonal matrix elements generally decays as a power law with system size [@ziraldo2013relaxation; @Beugeling_scaling_PRE14; @Alba_PRB15; @ArnabSenArnabDas_PRB16; @HaqueMcClarty_SYKETH], and the off-diagonal matrix elements generally have a non-Gaussian distribution [@Beugeling_offdiag_PRE2015; @HaqueMcClarty_SYKETH; @LeBlond_Mallaya_Vidmar_Rigol_PRE2019]. The local operator we consider is $(1/2)(S_{i}^{+}S_{j}^{-}+S_{i}^{-}S_{j}^{+})$ where $i$ and $j$ are taken to be sites on neighboring rungs of the ladder. The exchange is taken to be $J=1$, $J'=2$ and we break translational invariance by setting the exchange on bonds between two rungs to have $J=J'$. We have computed the distribution of off-diagonal matrix elements for different system sizes. The distribution is highly peaked at zero with long tails. 
The width of the distribution narrows for larger system sizes consistent with power law scaling (Fig. \[fig:liom\](d)) in contrast to the exponential scaling expected for typical non-integrable models. The violation of ETH scaling by the fully frustrated ladder is consistent with the expectation of localization within mid-spectrum eigenstates. ![image](FFL_Fidelity_Correlator.pdf){width="90.00000%"} We now present features of the exact quantum dynamics for a periodic chain of $N=20$, or ten rungs, prepared in different initial states. The results are shown in Fig. \[fig:ffl\_dynamics\]. The different columns correspond to different initial states. The four panels in the top row show the return probability or fidelity, $F(t)\equiv \vert\langle \psi (0)\vert\psi (t)\rangle\vert^2$, where $\psi(t)$ is the state of the system at time $t$. In order to study the spreading of correlations, we also present the absolute value of the connected correlation function $\vert \langle S_{1}^z(t)S_{1+k}^z(t)\rangle - \langle S^z_1(t)\rangle \langle S^z_{1+k}(t)\rangle \vert$ (bottom row). The subscript here is the rung index: this quantity measures correlations between a site of the rung labelled $1$ and a site on the rung $1+k$. In each case, the site on the same leg of the rung is used. We use $J=1$ (so that time is measured in units of $J^{-1}$) and $J'=1.1$. For these parameters, the all-singlet state is not the ground state. As discussed above, when all the rungs are in the local $S=1$ sector, there is a mapping to the Haldane chain which is non-integrable and should behave like a generic random matrix model in the middle of the spectrum. Column (a) of Fig. \[fig:ffl\_dynamics\] shows results for the initial state with all rungs in the total $S^z =0$ triplet state $\vert T_0\rangle = (1/\sqrt{2})(\vert\uparrow\downarrow\rangle + \vert \downarrow\uparrow\rangle)$. 
In this case, the fidelity drops rapidly from $F(0)=1$ on a timescale set by the exchange and fluctuates close to zero, as expected for a thermalizing system that should retain little memory of its initial state. The correlator in the bottom row shows rapid spreading of correlations on a well-defined light cone centered on site $1$ and emanating in both directions on the periodic chain, so that both paths of the light cone meet at site $k=5$ on a timescale of the order of the exchange. Some oscillations, presumably a finite-size effect, are visible in the correlation function at later times. Similar behavior is observed for a second initial state in the sector with all rungs having $S=1$. In panel (b) we take the state with alternating rungs in the $S^z=1$ and $S^z=-1$ states, denoted as $\vert T_+\rangle =\vert\uparrow\uparrow\rangle$ and $\vert T_-\rangle =\vert\downarrow\downarrow\rangle$. Once again, the fidelity falls rapidly and correlations spread on a light cone — in this case to a largely featureless time-independent state at longer times. We have confirmed that the amplitude of the fidelity fluctuations decays with increasing system size for the cases presented in (a) and (b) (as can also be seen by comparing with the fidelity in column (c) as explained below). Column (c) is for the initial state T$_0$T$_0$T$_0$T$_0$ST$_0$T$_0$T$_0$T$_0$S where S denotes a rung in the singlet state. According to our proposed scenario, the presence of the two rungs in singlet states effectively splits the chain into a pair of fragments each composed of four triplet rungs that themselves have nontrivial dynamics. The blocking of the spread of information by the singlet rungs is clearly shown in the lower panel — correlations with rung $1$ are nonvanishing only with rungs at $k=1$, $2$ and $3$. The fidelity drop and large amplitude fluctuations are as expected for a chain of an effective length of four rungs.
As a final example, we consider a state that lives in a linear combination of different singlet and triplet sectors, as one expects for a generic state. Specifically we take the initial state to be a product state, with each rung in the state $\vert\uparrow\downarrow\rangle$. In other words, the initial state has $S^z=0$ on each rung with equal weight in the singlet and triplet sectors, so that the many-body state has contributions in all possible singlet and triplet sectors. Our expectation based on the foregoing is that the small weight in the sector with all rungs in triplet states will be subject to thermalizing dynamics (as in cases (a) and (b)). The rest of the state will undergo some degree of dynamical localization because of the presence of amplitude in mixed triplet-singlet sectors. The results are shown in column (d). Evidently the system does not straightforwardly thermalize. Instead, the fidelity shows clear periodic recurrences up to about $0.7$, suggesting a lack of complete thermalization. The correlation function does exhibit a feature similar to the light cone of columns (a) and (b). However, in this case, the feature is much less pronounced, and the longer-time correlations are much weaker than in those other cases. The expectation is that generic states for larger systems will have exponentially small weight in the all-triplet sector. As this is the only truly thermalizing part of the wavefunction, correlations will tend to be trapped within small regions. The numerical results thus confirm our proposed picture, up to the usual finite-size limitations. We now address the question of whether there are two-dimensional analogues of the physics described above. Fig. \[fig:dimer\_lattices\](f) shows a two-dimensional lattice that has $J'$ bonds with the connectivity of the fully frustrated ladder. The construction, exemplified here by the square lattice, can be generalized to a bilayer of any lattice in two dimensions.
In such cases, there is local total spin conservation on each $J$ bond connecting the two layers. As we did for the chain, we now consider the different sectors on each $J$ bond. Within each sector there is generally a set of conserved singlet and triplet clusters on the lattice. If we imagine preparing a quantum state within a given sector, one would find information propagation within each connected triplet cluster, with further propagation blocked by singlet $J$ bonds at the boundaries of each cluster. The question of whether dynamical localization takes place maps to a percolation problem. Taking the whole Hilbert space of states, the probability that a $J$ bond is “occupied” by a triplet is $3/4$, while singlet $J$ bonds are effectively absent. If the site percolation threshold $p^*$ is less than $3/4$, typical clusters percolate, information can propagate to infinity, and the system will thermalize. If instead $p^*>3/4$, dynamically connected clusters have a characteristic length scale that is an effective localization length. Having argued that the problem of frustration-induced localization in 2D is reduced to a search for lattices with $p^* >3/4$, we first note that this condition is not satisfied by most lattices. For example, the square lattice bilayer of Fig. \[fig:dimer\_lattices\](f) has $p^* =0.5927$ [@RevModPhys.64.961]. However, there are lattices with low connectivity and large loops that do satisfy the condition. Examples include the so-called star lattice [@richter2004starlattice; @PhysRevB.98.155108] with $p^*=0.807904$ [@PhysRevB.53.6401] and the martini lattice [@PhysRevE.85.062101] with $p^* = 0.764826$ [@PhysRevE.73.016107]. The dynamics of typical states on such lattices is, strictly speaking, athermal, though the associated length scale may be large and, for practical purposes, local observables may be close to their thermal values.
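The percolation criterion is easy to explore numerically. The following stdlib sketch is a generic site-percolation test on the square lattice, not a simulation of the bilayer models themselves: it estimates the probability of a top-to-bottom spanning cluster below and above the quoted threshold $p^*\approx 0.5927$.

```python
import random
from collections import deque

# Site percolation on an L x L square lattice: occupy each site with
# probability p and test for a top-to-bottom spanning cluster via BFS.
def spans(L, p, rng):
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    seen = [[False] * L for _ in range(L)]
    queue = deque((0, c) for c in range(L) if occ[0][c])
    for _, c in queue:
        seen[0][c] = True
    while queue:
        r, c = queue.popleft()
        if r == L - 1:
            return True          # reached the bottom row: cluster spans
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < L and 0 <= nc < L and occ[nr][nc] and not seen[nr][nc]:
                seen[nr][nc] = True
                queue.append((nr, nc))
    return False

rng = random.Random(1)
frac_at = {}
for p in (0.50, 0.70):
    frac_at[p] = sum(spans(48, p, rng) for _ in range(40)) / 40
print(frac_at)  # spanning is rare below p* ~ 0.5927 and common above
```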
The unit cell of the fully frustrated bilayer martini lattice has $8$ sites while the fully frustrated bilayer star lattice unit cell has $24$. So, it would be challenging to numerically access the physics we have argued to exist in these models. Examples of Many-Body Quantum Scars {#sec:scars} =================================== Sawtooth Chain -------------- ![Aspects of the sawtooth chain. (a) Level spacing statistics for $N=20$, $J=3$, $J'=1$, $\lambda=-0.5$, compared with predictions for GOE (red) and Poisson (green). (b) Entanglement entropies in all eigenstates, for $J=1$, $J'=1.5$, $\lambda=0.5$, and $N=16$. Scar state is highlighted with a circle. The horizontal line at $4.949$ is the random state entanglement for the $(8,8)$ bipartition. (c) Effect of perturbations on the entanglement of the scar state: $J=1.5$, $J'=1.0$, $\lambda=0.5$ and $J_p$ (squares), $\eta$ (triangles). Dynamical signatures of scar states for an $N=12$ chain: (d-f) $J'/J=0.05$ (blue), $J'/J=0.5$ (black), $J'/J=1.5$ (red), $J'/J=4.5$ (green). (d) square of overlaps between the initial state (“state A”) and different eigenstates, plotted against eigen-energies. (e) The fidelity and (f) the block entanglement as functions of time. \[fig:sawtooth\] ](Sawtooth_Figure_Statics_Dynamics.pdf){width="\columnwidth"} Our first example of a model with a many-body scar is the sawtooth chain [@PhysRevB.53.6401; @PhysRevB.67.054412], which we have introduced in detail in Section \[sec:models\]. We demonstrated that the singlet dimer covering on the $J$ bonds is an exact eigenstate of fixed energy that can be tuned so that it is arbitrarily located in the spectrum relative to the ground state, e.g., its neighbouring states can be made to have high effective temperature. Fig. \[fig:sawtooth\](a) shows the distribution of consecutive level spacings normalized to the mean of the distribution, $P(s)$. 
The approximate prediction for this quantity for random matrices of the gaussian orthogonal ensemble (GOE) is $$P(s) = \frac{\pi}{2}s\exp\left( -\frac{\pi}{4}s^2 \right) .$$ The figure shows that this prediction is compatible with the $N=16$ total $S^z = 0$ sector for the XXZ model $J=3$, $J'=1$ and $\lambda=-0.5$. The spectra were computed using periodic boundary conditions and were separated into momentum sectors. Using XXZ couplings ($\lambda\neq0$) ensures that the total spin is not a good quantum number, so that there are no total spin symmetry sectors needing to be separated. The result of Fig. \[fig:sawtooth\](a) is expected in this model which has no local conserved quantities, in contrast to integrable models which show Poissonian level spacing statistics. The entanglement entropy between two half-systems ($8$ connected sites each) for the XXZ model as a function of energy is shown in Fig. \[fig:sawtooth\](b). The couplings used are such that the eigenstate with singlet coverings lies in the middle of the spectrum. We have highlighted this scar state in the Figure by circling the data point. The scar state has zero entanglement because the entanglement partitioning border cuts through $J'$ bonds. Other than the single scar state, the entanglement is typical of non-integrable systems with no conserved quantities: the points form an “arch” with the entanglements in the middle of the spectrum being close to that of a random state of the same size, and with low (area law) entanglement at the spectral edges. We have examined the effect of perturbations away from the sawtooth XXZ model on a periodic $N=16$ chain. Fig. \[fig:sawtooth\](c) shows the scar entanglement for two types of perturbation: one where the two $J'$ bonds become $J'+\eta$ and $J'-\eta$ and the other where a new Heisenberg exchange with coupling $J_p$ is included between corner vertices on neighboring triangles. 
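As a cross-check of the GOE benchmarks used in this work, the surmise above and the level-spacing ratio statistic used later for the Shastry-Sutherland model, the sketch below verifies by direct quadrature that $P(s)$ is normalized with unit mean spacing, and that the GOE ratio surmise of Atas et al., $P(r)=\frac{27}{8}(r+r^2)/(1+r+r^2)^{5/2}$, yields $\langle r\rangle = 4-2\sqrt{3}\approx 0.536$; this is a stdlib numerical check, not part of the diagonalization code.

```python
import math

# Midpoint-rule quadrature, accurate to ~h^2 for these smooth integrands.
def midpoint(f, lo, hi, n=200_000):
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

# Wigner surmise for GOE level spacings: normalized, unit mean spacing.
P_s = lambda s: (math.pi / 2) * s * math.exp(-math.pi * s * s / 4)
norm = midpoint(P_s, 0.0, 12.0)
mean_s = midpoint(lambda s: s * P_s(s), 0.0, 12.0)

# Surmise for the spacing ratio; the density of r = min/max on [0,1]
# is 2*P_r, so <r> = 2 * Int_0^1 r P_r(r) dr = 4 - 2*sqrt(3).
P_r = lambda r: (27 / 8) * (r + r * r) / (1 + r + r * r) ** 2.5
mean_r = 2 * midpoint(lambda r: r * P_r(r), 0.0, 1.0)

print(norm, mean_s, mean_r)  # ~1, ~1, ~0.536
```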
The effect of the $J_p$ coupling is more dramatic, with the entanglement rising to about $40\%$ of the random-state value for $J_p/J =0.07$. The PXP model has several scar states [@2018NatPh..14..745T] and it is possible to observe their presence through dynamical observables by preparing an initial state with significant overlap with the scar states. One finds that the dynamics exhibits Rabi oscillations reflecting unitary evolution within the scar subspace even though this subspace is distributed in energy across the many-body spectrum. In contrast, the sawtooth chain with periodic boundary conditions has a single scar state. Even so, we ask whether this state might have observable dynamical features. To this end, we prepare a state $\vert \Psi_0 \rangle$ in the dimer state except for a single bond that is excited into a product state $\vert 01\rangle$ with weight in the singlet and triplet sectors. This is referred to as “state A” in Fig. \[fig:sawtooth\]. The overlap of this state with all eigenstates is shown in Fig. \[fig:sawtooth\](d) for different couplings $J'/J$ (in the Heisenberg model $\lambda=0$). For small coupling, $J'/J=0.05$, close to the decoupled dimer limit, the overlap is concentrated in the dimer state and its low-lying excited states. As the coupling increases, the overlap distribution broadens across the whole spectrum, with the most dramatic broadening taking place at the threshold $(J'/J)_c \approx 0.5$ where the exact singlet covering ceases to be the ground state. Turning now to the dynamics, we find that the fidelity $\left|\langle\Psi_0\vert\Psi(t)\rangle\right|^2$ exhibits strong oscillations for $J'/J=0.05$, close to the decoupled dimer limit (Fig. \[fig:sawtooth\](e)), that persist out to times at least of order one thousand times the period of oscillation. This degree of coherence is to be expected as the initial state is predominantly a mixture of the ground state and low-lying excited states.
As the coupling increases beyond the critical coupling $(J'/J)_c$ the oscillations are progressively damped reaching a plateau in times of order $1$ when $J'/J\gtrsim 1$. The fidelity in the plateau is the (non-vanishing) weight of the scar state admixed into the initial state. This result is therefore the analogue of the coherent oscillations seen in the PXP model but in the limit where the number of scar states goes to one. These results are mirrored by the time dependence of the entanglement which remains small for long times for $J'/J=0.05$ and $J'/J=0.5$ because the dimer covering has low energies and the overlap distribution is narrow in energy. For larger couplings, the entanglement quickly reaches a plateau close to the random-state value as most of the weight of the initial state approaches a random state. In summary, we have selected a simple and natural initial state $\vert \Psi(0)\rangle = \alpha \vert {\rm Singlet \hspace{1mm} covering}\rangle + \ldots$ that has significant weight $\vert \alpha \vert^2$ with the scar state. The dynamics of this state is consistent with the thermalization of the state apart from the residual part coming from the scar $\vert {\rm Singlet \hspace{1mm} covering}\rangle$. #### Open boundary: {#open-boundary .unnumbered} A look at the chain with open boundary conditions is illuminating. As before, we consider a sawtooth chain with $N$ sites with entanglement computed on a bipartition that cuts $J'$ bonds, but now the chain has open boundaries and the bipartition divides the chain into two equal (identical) blocks. We know that one dimerized state exists in the spectrum of the periodic chain with zero entanglement on cuts through $J'$ bonds. On the open chain, many zero entanglement states are present in the spectrum $-$ their number depending on the location of the cut along the chain. These states can be rationalized as follows. 
In order for the state to have zero entanglement, the state must be separable at the location of the single cut on the open chain, say dividing the chain into $n$ sites on the left and $N-n$ on the right. Closer investigation reveals that the right-hand side of the chain in these states has a simple dimer covering imposed by the dangling $J$ bond at the right-hand edge. The left-hand side is in an eigenstate of the $n$-site open chain. It follows (i) that the number of zero entanglement states on the open chain equals the total number of states on the open $n$ site chain and (ii) that the energy of each zero entanglement state on the open chain is $E^{\rm Scar}_{n\vert N-n}=E^{\rm OBC}_{n}+E^{\rm Dimer}_{N-n}$ where $E^{\rm Scar}_{n\vert N-n}$ is the energy of the zero entanglement state on the $n\vert N-n$ bipartition, $E^{\rm OBC}_{n}$ is the energy of an eigenstate on the $n$ site open chain and $E^{\rm Dimer}_{N-n}$ is the energy of the singlet covering on the $N-n$ open chain which is a constant. This point is illustrated in Fig. \[fig:sawtooth\_obc\] which shows the entanglement on a subsystem of six sites from an open chain of length $N=12$. The energies of the zero entanglement states on this cut are in one-to-one correspondence with the full spectrum on the $N=6$ open chain (also shown). ![Entanglement of the $N=12$ sawtooth chain with open boundary conditions and entanglement cut in the middle of the chain (black points). The couplings are $J=1.0$ and $J'=1.2$. The zero entanglement states for this cut have energies equal to those of the full spectrum on a chain of length $N=6$ (red circles) up to a constant shift. \[fig:sawtooth\_obc\] ](Figure_Sawtooth_Open){width="0.85\columnwidth"} Maple Leaf Lattice ------------------ ![Left column: Maple leaf lattice with $18$ sites. Right column: Shastry-Sutherland lattice with either $20$ sites (b) or $16$ sites (a,c). (a) $J$ ($J'$) bonds are dashed (full). 
One of the entanglement bipartitions is distinguished by blue shaded sites. The trapezoids demarcate the finite-size systems used for numerical diagonalization, with periodic boundary conditions. (b) Level spacing distributions, compared with Poissonian (green) and GOE (red) predictions. Left: maple leaf XXZ, $J=3$, $J'=1$, $\lambda=-0.5$. Statistics from the middle $1/6$th of the spectrum in the total $S^z=0$ sector. Right: Shastry-Sutherland, $N=20$, $J=3$, $J'=1$, $\lambda=0$. Right inset shows distribution of ratios of level spacings. (c) Entanglement entropy of each eigenstate, against eigen-energies. Scar states are highlighted. The random state entanglement is indicated as a horizontal line in both panels. Left: maple leaf lattice, $J=0.2$, $J'=1$. $\lambda=0$. Right: Shastry-Sutherland lattice, $N=16$, $J=1$, $J'=1.25$, $\lambda=0$. \[fig:mllshsu\] ](ShSu_MLL_Composite.pdf){width="1.\columnwidth"} The maple leaf lattice is a five-coordinated two dimensional edge-shared triangular lattice obtained by periodically depleting $1/7$th of the sites from the regular triangular lattice (Fig. \[fig:mllshsu\](a,left)) [@richter2004quantum; @PhysRevB.60.1064; @Fennell_2011]. The lattice has six sublattices and three symmetry-distinct nearest neighbor Heisenberg couplings. For our purposes, we set two of these couplings to be equal. Thus we have a $J$-$J'$ Heisenberg model (Fig. \[fig:mllshsu\](a,left)). This model is known to have the singlet covering on the $J$ bonds as an exact eigenstate, which is the ground state for $J'/J \lesssim 0.69$ [@PhysRevB.84.104406]. The level spacing statistics computed from the eigen-energies in the middle of the spectrum for an $18$ site lattice are compatible with random matrix predictions and the non-integrability of the model (Fig.  \[fig:mllshsu\](b,left)). 
For the level spacing results, we have broken SU$(2)$ symmetry by using an anisotropic coupling, i.e., using the XXZ model, in order to avoid having multiple sectors corresponding to different total spin values. In Fig. \[fig:mllshsu\](c,left) we present the entanglement for the Heisenberg model ($\lambda=0$) in the total $S^z=0$ sector. For the Heisenberg model, the total spin is a good quantum number. This results in separate “arches” corresponding to different total spin sectors. Fig. \[fig:mllshsu\](c,left) features the protected singlet state at intermediate energies in the spectrum. This scar state is highlighted in the figure. The partition (separating the two blocks between which the entanglement is calculated) cuts one singlet bond, so the scar state entanglement is $\log 2$. This is well below the random-matrix value, which is close to $5$. Shastry-Sutherland Model ------------------------ The Shastry-Sutherland model [@shastry1981exact; @PhysRevLett.82.3168; @albrecht1996first; @PhysRevB.72.104425; @PhysRevResearch.1.033038; @mcclarty2017topological] is a $J$-$J'$ model defined on the four-sublattice 2D lattice shown in Fig. \[fig:dimer\_lattices\](g). This model is realized to a good approximation in SrCu$_2$(BO$_3$)$_2$ with $J'/J \sim 0.6$ [@PhysRevLett.82.3168; @albrecht1996first; @PhysRevB.72.104425; @mcclarty2017topological], and its ground state and thermodynamic properties have been of intense interest in the field of frustrated magnetism. When $J'=0$, the lattice decouples into isolated pairs of $J$-coupled spins, and the ground state has singlets on each $J$ bond. The argument outlined in Section \[sec:models\] tells us that the singlet covering remains an exact eigenstate for any value of $J'$, and extensive numerical studies have shown that this state is the ground state for $J'/J \lesssim 0.7$ [@PhysRevB.87.115144]. For larger values of $J'/J$, this eigenstate is no longer the ground state and is instead a scar state.
The level spacing distribution for a $20$ site lattice is shown in Fig. \[fig:mllshsu\](b,right). The spectrum has been split into symmetry sectors and the level spacing computed for each separately for the middle one-sixth of the spectrum and then combined into the full distribution. The distribution is well described by the GOE result, consistent with the fact that the model is non-integrable. The inset to Fig. \[fig:mllshsu\](b) shows the distribution of $r$ values defined by [@Oganesyan_Huse_PRB2007; @Atas_Bogomolny_Roux_PRL2013] $$r_n \equiv \frac{{\rm Min}(s_n, s_{n+1})}{{\rm Max}(s_n, s_{n+1})}$$ where $s_n = E_{n+1}-E_n$ and the eigenvalues $E_n$ are ordered. Again the distribution is compatible with the GOE result, with mean $\langle r\rangle = 0.537$. Fig. \[fig:mllshsu\](c,right) shows the entanglement computed for the $16$ spin lattice shown in Fig. \[fig:mllshsu\](a,right) on a connected partition encompassing $8$ spins and in the total $S^z=0$ sector. The entanglement in each eigenstate is plotted against the eigen-energies. Other than a single scar state, the entanglements are arranged in several arches, corresponding to different sectors of total spin, as in the maple leaf case. The boundary between partitions is such that it avoids cutting singlet bonds, so that the entanglement of the scar state is zero. For an arbitrary cut, the entanglement of the scar state would scale as the area of the boundary, in contrast to the volume law for neighboring states in the middle of the spectrum. The isolated zero entanglement state has fixed ($J'$-independent) energy and its location relative to the middle of the spectrum can be tuned so that it lies among the mid-spectrum states in the lower (upper) half of the spectrum for antiferromagnetic (ferromagnetic) couplings. Square Kagome Lattice {#subsec:square_kagome} --------------------- ![(Left) Finite square kagome lattice of $30$ sites used for the exact diagonalization study. 
The entanglement bipartition is indicated by the shading of the lattice sites. The exchange coupling $J_n$ on the square plaquettes is shown for each square, while $J'=1.8$, $\lambda=0.5$ throughout. (Right) Entanglement entropy within the total $S^z=22s$ sector on the $30$ site system showing three scar states with zero entanglement entropy. \[fig:square\_kagome\] ](Square_Kagome_Figure){width="1.\columnwidth"} The square kagome lattice [@schnack2001independent; @richter2009heisenberg; @PhysRevB.88.195109; @PhysRevResearch.1.033147; @SquareKagome1] is a two-dimensional lattice of corner-sharing triangles with a six-site primitive cell. Like the kagome lattice, it has coordination number four but, whereas the kagome lattice has an underlying triangular Bravais lattice and triangular and hexagonal polygonal units, the square kagome has a square Bravais lattice and triangular and square units. The latter are crucial to the existence of scar states in the $J$-$J'$ XXZ model on this lattice, where $J$ is on the square edges and $J'$ on all other bonds. Evidently this has the same kind of frustrated triangular unit that we have seen throughout this paper. It is known that this model has a two-thirds magnetization plateau that can be reached from the fully polarized high-field limit through the condensation of a flat band of magnons. These localized magnon states live on the square plaquettes, and exact eigenstates of this kind are obtained from the fully magnetized state $\vert 11\ldots 1\rangle$ as $\sum_m (-)^m S^-_m \vert 11\ldots 1\rangle$, where the sum over $m$ runs anti-clockwise around a square plaquette. The ground state in the $2/3$rd magnetization sector is obtained by tiling every square plaquette with such localized magnons. In order to construct quantum many-body scar states we may simply place localized magnons on a subset of the square plaquettes.
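The mechanism that localizes the alternating-sign magnon can be seen already in the one-magnon sector, where a Heisenberg model reduces to a hopping matrix $t=J/2$ on each bond plus a site-diagonal Ising part. The sketch below uses a toy geometry of our own (a four-site square with two apex sites standing in for the triangles of the square-kagome plaquette, with arbitrary illustrative diagonal shifts), not the full $30$-site lattice:

```python
import numpy as np

# Toy one-magnon check: alternating-sign magnon on a square plaquette
# with apex sites attached pairwise to its edges (illustration only).
t, tp = 0.5, 0.9           # square (J/2) and apex (J'/2) hoppings
bonds = [(0, 1), (1, 2), (2, 3), (3, 0),   # square plaquette
         (0, 4), (1, 4),                   # apex A on edge (0,1)
         (2, 5), (3, 5)]                   # apex B on edge (2,3)
H = np.zeros((6, 6))
for i, j in bonds:
    hop = t if max(i, j) < 4 else tp
    H[i, j] = H[j, i] = hop
# Equal diagonal (Ising) shift on the four square sites, each of which
# touches three bonds; the apex diagonal (0.7) is arbitrary since the
# magnon has no amplitude there.
H += np.diag([1.0, 1.0, 1.0, 1.0, 0.7, 0.7])

v = np.array([1, -1, 1, -1, 0, 0], dtype=float) / 2  # alternating magnon
Hv = H @ v
lam = v @ Hv
# Residual ~0: the amplitudes on each apex cancel pairwise, so the
# magnon cannot leak off the plaquette and is an exact eigenstate.
print(np.linalg.norm(Hv - lam * v))
```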
For example, if we tile all but one of the plaquettes with localized magnons we shall have a many-body scar state with degeneracy equal to the number of unit cells. In order to obtain multiple scar states as in the PXP model we may enlarge the unit cell by taking the $J$ exchange to be different on different square plaquettes. As a concrete example, we consider the $30$ site system shown in Fig. \[fig:square\_kagome\] with the crystallographic unit cell enlarged by choosing the exchange on square plaquettes to be $J_1=J_5=1.0$, $J_2 = 1.1$, $J_3= 1.2$ and $J_4 = 1.3$ as shown. For this system size the saturation magnetization is $S^z = 30s$ with $s=1/2$. We carry out diagonalization in the sector with all but one of the square plaquettes in a localized magnon state — the $S^z = 22s$ sector. If the $J_n$ were equal there would be a five-fold degeneracy of the localized magnon states corresponding to the five ways of choosing the position of the fully polarized plaquette. By enlarging the unit cell in the way indicated this degeneracy is broken down to $1+1+1+2$ and the two-fold degenerate states can mix leading to a finite entanglement. The remaining three states are manifestly scar states appearing at distinct energies with zero entanglement. For our choice of $J'=1.8$, these appear roughly in the middle of the spectrum (Fig. \[fig:square\_kagome\] right panel), thus forming scars. Summary and Conclusions {#sec:conclusions} ======================= Geometrical frustration has long been one of the central ideas in condensed matter physics with important connections to various low energy exotic classical and quantum states of matter. Here, we have described how geometrical frustration can also lead to unusual high energy states. We have divided the presentation into three classes of phenomena each giving a large class of models exhibiting anomalous thermalization in at least some mid-spectrum states. 
In the first class, geometrical frustration leads to an extensive number of local conservation laws that is, however, smaller than the number of degrees of freedom. This class therefore consists of non-integrable models with highly structured Hilbert spaces. We have shown that standard ETH scaling is violated for one example from this class — the fully frustrated ladder — which, instead, most closely resembles the behavior seen in integrable models. A more detailed examination of the fully frustrated ladder reveals that it is an example of disorder-free localization — in which correlations and entanglement spreading are dynamically inhibited on the scale of a few lattice spacings. This example generalizes straightforwardly to any one-dimensional model with frustrated units carrying locally conserved spins that protect localized singlets, with the frustrated units separated from one another by arbitrarily coupled spins. We have also argued that aspects of this physics carry over to certain 2D models, including the fully frustrated bilayer models on the star and martini lattices. We leave a detailed numerical analysis of these 2D models as a problem for the future. The second class of models we have considered has many-body quantum scar states that are product states of singlets; examples include the sawtooth chain, the famous Shastry-Sutherland model and the maple leaf lattice. Each of these examples has a single many-body scar state and we have studied the dynamical signatures of such states. We have shown through several examples that this physics is insensitive to the choice of $J'/J$ and the anisotropic coupling. The final class is composed of flat band models exhibiting localized magnon states and we saw that, in such cases, one can tune models so that arbitrarily many scar states appear in the middle of the spectrum. Acknowledgements ================ The work of A.S. 
is partly supported through the Max Planck Partner Group program between the Indian Association for the Cultivation of Science (Kolkata) and the Max Planck Institute for the Physics of Complex Systems (Dresden).
--- abstract: 'We perform a combined analysis of two stringent constraints on the 2 Higgs doublet model, one coming from the recently announced CLEO II bound on $\brbsg$ and the other from the recent LEP data on $\epsb$. We have included the one-loop vertex corrections to $\Zbb$ through $\epsb$ in the model. We find that the new $\epsb$ constraint excludes most of the less appealing window $\tan\beta\lsim 1$ at $90\%$ C.L. for $m_t=150\GeV$. We also find that although the $\bsg$ constraint is stronger for $\tan\beta>1$, the $\epsb$ constraint is stronger for $\tan\beta\lsim 1$, and therefore these two are the strongest, and complementary, constraints at present on the charged Higgs sector of the model.' --- [**$b\rightarrow s\gamma$ and $\epsilon_b$ Constraints on the Two Higgs Doublet Model**]{} [GYE T. PARK\ ]{} [*Center for Theoretical Physics, Department of Physics, Texas A&M University\ *]{} [*College Station, TX 77843–4242, USA\ *]{} [*and\ *]{} [*Astroparticle Physics Group, Houston Advanced Research Center (HARC)\ *]{} [*The Woodlands, TX 77381, USA\ *]{} Despite the remarkable successes of the Standard Model (SM) in its complete agreement with all current experimental data, there is still no experimental information on the nature of its Higgs sector. The 2 Higgs doublet model (2HDM) is one of the mildest extensions of the SM that has remained consistent with experimental data. In the 2HDM considered here, the Higgs sector consists of 2 doublets, $\phi_1$ and $\phi_2$, coupled to the charge $-1/3$ and $+2/3$ quarks, respectively, which ensures the absence of flavor-changing Yukawa couplings at the tree level [@NOFC]. 
The physical Higgs spectrum of the model includes two CP-even neutral Higgs bosons ($H^0$, $h^0$), one CP-odd neutral Higgs boson ($A^0$), and a pair of charged Higgs bosons ($H^\pm$). In addition to the masses of these Higgs bosons, there is another free parameter in the model, $\tan\beta\equiv v_2/v_1$, the ratio of the vacuum expectation values of the two doublets. With renewed interest in the flavor-changing-neutral-current (FCNC) decay $\bsg$, spurred by the CLEO bound $\brbsg<8.4\times10^{-4}$ at $90\%$ C.L. [@CLEO], it was pointed out recently that the CLEO bound can be violated by the charged Higgs contribution in the 2HDM and the Minimal Supersymmetric Standard Model (MSSM), essentially if $m_{H^\pm}$ is too light, excluding a large portion of the charged Higgs parameter space [@BargerH]. The recently announced CLEO II bound $\brbsg<5.4\times10^{-4}$ at $95\%$ C.L. [@Thorndike] excludes an even larger portion of the parameter space [@VernonHARC]. This has certainly proven that this particular decay mode can provide a more stringent constraint on new physics beyond the SM than any other experiment [@bsgamma]. In our previous work [@Rbbsg2HD], we pointed out that, in addition to the constraint from $\bsg$, the recent LEP data on $R_b (\equiv {\Gamma(Z\rightarrow b\ov b)\over{\Gamma(Z\rightarrow hadrons)}})$ [@LP93] provide a mild additional constraint on the 2HDM. In this work, we show that the recent LEP data on a new observable, $\epsilon_b$, provide a much stronger constraint, excluding at $90\%$ C.L. most of the parameter space $\tan\beta\lsim 1$, which is a less appealing window simply due to the apparent mass hierarchy $m_t\gg m_b$. $\epsb$ has been introduced recently by Altarelli et al. [@ABC; @Altlecture], who have proposed a new scheme for analyzing precision electroweak tests in which four variables, $\epsilon_{1,2,3}$ and $\epsilon_b$, are defined in a model-independent way. These four variables correspond to a set of observables $\Gamma_{l}, \Gamma_{b}, A^{l}_{FB}$ and $M_W/M_Z$. 
The advantage of using these variables is that one need not specify $m_t$ and $m_H$. Among these variables, $\epsb$ is the most interesting one to consider in the 2HDM, although $\epsilon_1$ can also provide an important constraint in the MSSM [@BFC; @ABC] and in a class of supergravity models [@bsgamma; @ewcorr], due to a significant negative shift coming from the light chargino loop in the $Z$ wave function renormalization when the chargino mass is $\sim {1\over2} M_Z$. In fact, Altarelli et al. have applied the new $\epsilon$-analysis to the MSSM, and their conclusion is that the model is in at least as good an agreement with the data as the SM [@ABCMSSM]. Here we intend to do a similar analysis in the framework of the 2HDM. In the 2HDM, the $\bsg$ decay receives contributions from penguin diagrams with $W^\pm-t$ and $H^\pm-t$ loops. The expression used for $\brbsg$ is given by [@BG] $$\brbsg={6\alpha\over\pi}\, {\left[\eta^{16/23}A_\gamma+{8\over3}\left(\eta^{14/23}- \eta^{16/23}\right)A_g+C\right]^2\over I(m_c/m_b)\left[1-{2\alpha_s(m_b)\over3\pi}f(m_c/m_b)\right]}\, B(b\rightarrow ce\bar\nu) \;,$$ where $\eta=\alpha_s(M_Z)/\alpha_s(m_b)$, $C$ denotes the leading-order QCD operator-mixing contribution, $I$ is the phase-space factor $I(x)=1-8x^2+8x^6-x^8-24x^4\ln x$, and $f(m_c/m_b)=2.41$ is the QCD correction factor for the semileptonic decay. We use the 3-loop expressions for $\alpha_s$ and choose $\Lambda_{QCD}$ to obtain $\alpha_s(M_Z)$ consistent with the recent measurements at LEP. In our computations we have used: $\alpha_s(M_Z)=0.118$, $ B(b\to ce\bar\nu)=10.7\%$, $m_b=4.8\GeV$, and $m_c/m_b=0.3$. Here $A_\gamma, A_g$ are the coefficients of the effective $bs\gamma$ and $bsg$ penguin operators evaluated at the scale $M_Z$. The contributions to $A_{\gamma ,g}$ from the $W^\pm-t$ loop and the $H^\pm-t$ loop are given in Ref. [@BG]. As mentioned above, the CLEO II bound excludes a large portion of the parameter space. In Fig. 1 we present the excluded regions in the ($m_{H^\pm}$, $\tan\beta$)-plane for $m_t=130$ and $150\GeV$, which lie to the left of each solid curve.
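For concreteness, the branching-ratio expression above can be evaluated numerically with the inputs quoted in the text. The following Python sketch is illustrative only: the function names are ours, the coefficient values passed to `br_bsg` are placeholders rather than the paper's computed $A_\gamma$, $A_g$, and the mixing constant $C$ is simply taken as an input.

```python
import math

def phase_space_I(x):
    # I(x) = 1 - 8x^2 + 8x^6 - x^8 - 24 x^4 ln x (semileptonic phase space)
    return 1.0 - 8*x**2 + 8*x**6 - x**8 - 24*x**4*math.log(x)

def br_bsg(A_gamma, A_g, C, eta, a_s_mb=0.21, B_sl=0.107,
           mc_mb=0.3, f_qcd=2.41, alpha=1/137.036):
    # leading-log running of the penguin coefficients from M_Z down to m_b
    A_eff = eta**(16/23)*A_gamma + (8/3)*(eta**(14/23) - eta**(16/23))*A_g + C
    denom = phase_space_I(mc_mb)*(1 - 2*a_s_mb/(3*math.pi)*f_qcd)
    return (6*alpha/math.pi)*A_eff**2/denom*B_sl

print(round(phase_space_I(0.3), 3))  # ~0.52 for m_c/m_b = 0.3
```

With the actual loop coefficients from Ref. [@BG] inserted, the same expression can be scanned over ($m_{H^\pm}$, $\tan\beta$) to reproduce an exclusion curve of the kind shown in Fig. 1.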
We have also imposed in the figure the lower bound on $\tan\beta$ from ${m_t\over{600}}\lsim\tan\beta\lsim{600\over{m_b}}$, obtained by demanding that the theory remain perturbative [@BargerLE]. We see from the figure that at large $\tan\beta$ one can obtain a lower bound on $m_{H^\pm}$ for each value of $m_t$. We obtain the bounds $m_{H^\pm}\gsim 186, 244\GeV$ for $m_t=130, 150\GeV$, respectively. Following Altarelli et al. [@ABC], $\epsb$ is defined from $\Gamma_b$, the inclusive partial width for $\Zbb$, $$\Gamma_b=3 R_{QCD} {G_FM^3_Z\over 6\pi\sqrt 2}\left( 1+{\alpha\over 12\pi}\right)\left[ \beta _b{\left( 3-\beta ^2_b\right)\over 2}{g^b_V}^2+\beta^3_b {g^b_A}^2\right] \;,$$ with $$\begin{aligned} R_{QCD} &\cong&\left[1+1.2{\alpha_S\left( M_Z\right)\over\pi}-1.1{\left(\alpha_S\left( M_Z\right)\over\pi\right)}^2-12.8{\left(\alpha_S\left( M_Z\right)\over\pi\right)}^3\right] \;,\\ \beta_b&=&\sqrt {1-{4m_b^2\over M_Z^2}} \;, \\ g^b_A&=&-{1\over2}\left(1+{\epsilon_1\over2}\right)\left( 1+{\epsb}\right)\;,\\ {g^b_V\over g^b_A}&=&{1-{4\over3}{\ov s}^2_W+\epsb\over 1+\epsb} \;,\end{aligned}$$ where ${\ov s}^2_W$ is an effective $\sin^2\theta_W$ for an on-shell $Z$, and the explicit expression for $\epsilon_1$ is given in Refs. [@BFC; @ewcorr]. $\epsb$ is closely related to the real part of the vertex correction to $\Zbb$, $\nabla_b$, defined in Ref. [@BF]. In the SM, the diagrams for $\nabla_b$ involve top quarks and $W^\pm$ bosons [@RbSM]. However, in the 2HDM there are additional diagrams involving $H^\pm$ bosons instead of $W^\pm$ bosons. These additional diagrams have been calculated in Refs. [@Rbbsg2HD; @Rb2HD; @BF; @Denner].
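As a numerical check of the width formula above, one can plug in standard electroweak inputs. This sketch assumes illustrative values for $G_F$, $M_Z$, $\alpha$, and ${\ov s}^2_W$ (they are not quoted in the text), and sets $\epsilon_1=\epsb=0$ so the couplings take their reference values:

```python
import math

# Electroweak inputs (illustrative values, not a fit)
G_F   = 1.16637e-5   # GeV^-2
M_Z   = 91.19        # GeV
alpha = 1/128.9      # running alpha at M_Z (assumed)
a_s   = 0.118        # alpha_s(M_Z), as used in the text
m_b   = 4.8          # GeV
sw2   = 0.2315       # effective sin^2(theta_W), assumed
eps1, eps_b = 0.0, 0.0   # switch off the epsilon shifts

x = a_s/math.pi
R_QCD  = 1 + 1.2*x - 1.1*x**2 - 12.8*x**3
beta_b = math.sqrt(1 - 4*m_b**2/M_Z**2)
g_A = -0.5*(1 + eps1/2)*(1 + eps_b)
g_V = g_A*(1 - (4/3)*sw2 + eps_b)/(1 + eps_b)

Gamma_b = (3*R_QCD*G_F*M_Z**3/(6*math.pi*math.sqrt(2))
           *(1 + alpha/(12*math.pi))
           *(beta_b*(3 - beta_b**2)/2*g_V**2 + beta_b**3*g_A**2))
print(Gamma_b)   # ~0.38 GeV
```

A negative shift in $\epsb$ of the size discussed below lowers $\Gamma_b$ accordingly, which is what the LEP data constrain.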
The charged Higgs contribution to $\nabla_b$ is given as [@BF] $$\nabla_b^{H^\pm}={\alpha\over 4\pi \sin^2\theta_W}\left[ {2 v_L F_L+2 v_R F_R}\over {v_L^2+v_R^2} \right] \;,$$ where $F_{L,R}=F_{L,R}^{(a)}+F_{L,R}^{(b)}+F_{L,R}^{(c)}$ and $$\begin{aligned} F_{L,R}^{(a)} &=& b_1\left(M_{H^+}, m_t, m_b\right) v_{L,R} \lambda^2_{L,R}\;,\\ F_{L,R}^{(b)} &=&\left[\left({M_Z^2\over{\mu^2}} c_6\left(M_{H^+}, m_t, m_t\right)-{1\over 2}-c_0\left(M_{H^+}, m_t, m_t\right)\right)v_{R,L}^t\right. \nonumber \\ && \hspace*{1.05in} \left. +{m_t^2\over{\mu^2}} c_2\left(M_{H^+}, m_t, m_t\right)v_{L,R}^t\right]\lambda^2_{L,R}\;,\\ F_{L,R}^{(c)} &=& c_0\left(m_t, M_{H^+}, M_{H^+}\right)\left({1\over 2}- \sin^2\theta_W\right)\lambda^2_{L,R}\;,\end{aligned}$$ where $\mu$ is the renormalization scale and $$\begin{aligned} v_L &=& -{1\over 2}+{1\over 3}\sin^2\theta_W\,, \quad v_R={1\over 3}\sin^2\theta_W \;, \\ v_L^t &=& {1\over 2}-{2\over 3}\sin^2\theta_W\,, \quad v_R^t=-{2\over 3}\sin^2\theta_W \;, \\ \lambda_L &=& {m_t\over{{\sqrt 2} M_W \tan\beta}}\,, \quad\lambda_R = {m_b \tan\beta\over{{\sqrt 2} M_W }}\;.\end{aligned}$$ The $b_1$ and $c_{0,2,6}$ above are the reduced Passarino-Veltman functions [@BF; @Ahn]. The charged Higgs contribution to $\epsb$, which is negative, grows as $m^2_t/\tan^{2}\beta$ for $\tan\beta\ll{m_t\over{m_b}}$, as can be seen from Eq. (13). In our calculation, we neglect the neutral Higgs contributions to $\nabla_b$, which are all proportional to $m_b^2\tan^2\beta$ and become sizable only for $\tan\beta>{m_t\over{m_b}}$ and very light neutral Higgs masses $\lsim50\GeV$, but which decrease rapidly and become negligibly small as the Higgs masses become $\gsim100\GeV$ [@Denner]. We also neglect oblique corrections from the Higgs bosons, just to avoid introducing more parameters. However, this correction can become sizable when there are large mass splittings between the charged and neutral Higgs bosons; for example, it can grow as $m^2_{H^\pm}$ if $m_{H^\pm}\gg m_{H^0,h^0,A^0}$.
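The $\tan\beta$ scaling just described is easy to make explicit: at small $\tan\beta$ the vertex correction is dominated by terms proportional to $\lambda_L^2 = m_t^2/(2M_W^2\tan^2\beta)$. A short sketch of the couplings (numerical values illustrative):

```python
import math

M_W, m_t, m_b = 80.2, 150.0, 4.8   # GeV; m_t as in the paper's scan

def lambdas(tan_beta):
    # Yukawa-like couplings entering the H+- vertex correction (Eq. above)
    lam_L = m_t/(math.sqrt(2)*M_W*tan_beta)
    lam_R = m_b*tan_beta/(math.sqrt(2)*M_W)
    return lam_L, lam_R

# The correction enters through lam_L^2 ~ m_t^2/tan^2(beta),
# so halving tan(beta) quadruples the (negative) shift in eps_b:
l1, _ = lambdas(1.0)
l2, _ = lambdas(0.5)
print((l2/l1)**2)   # 4.0
```

This quadratic growth at small $\tan\beta$ is what drives the exclusions discussed next.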
Although $\tan\beta\gg1$ seems more appealing because of the apparent hierarchy $m_t\gg m_b$, there are still no convincing arguments against $\tan\beta<1$. Our goal here is to see if one can put a severe constraint on this region. In Fig. 1 we also show the contours (dotted) of the predicted value $\epsb=-0.00592$, which is the LEP lower limit at $90\%$ C.L. [@ABC; @Altlecture]. The excluded regions lie below each dotted curve for a given $m_t$. We do not consider higher values of $m_t$ here because the SM prediction for $\epsb$ already exceeds the LEP value for $m_t\gsim 163\GeV$ [@ABC]. For $m_t=150(130)\GeV$, $\tan\beta\lsim 1.03(0.51)$ is ruled out at $90\%$ C.L. for $m_{H^\pm}\lsim 400\GeV$, and $\tan\beta\lsim 0.69(0.34)$ for $m_{H^\pm}\lsim 800\GeV$. We note that these strong constraints for $\tan\beta\lsim 1$ stem from large deviations of $\epsb$ from the SM prediction, which grow as $m^2_t/\tan^{2}\beta$ as explained above. We have also considered other constraints from low-energy data, primarily from $B-\ov{B}$, $D-\ov{D}$, and $K-\ov{K}$ mixing, that exclude low values of $\tan\beta$ [@BargerLE; @LowEdata]. It turns out, however, that none of them can compete with the present $\epsb$ constraint [@assume1]. Nevertheless, the CLEO II bound is still by far the strongest constraint on the charged Higgs sector of the model for $\tan\beta> 1$. Therefore, we find that $\bsg$ and $\epsb$ serve as the presently strongest, and complementary, constraints on the 2HDM. In conclusion, we have performed a combined analysis of two stringent constraints on the 2 Higgs doublet model, one coming from the recently announced CLEO II bound on $\brbsg$ and the other from the recent LEP data on $\epsb$. We have included one-loop vertex corrections to $\Zbb$ through $\epsb$ in the model. We find that the new $\epsb$ constraint excludes most of the less appealing window $\tan\beta\lsim 1$ at $90\%$ C.L. for $m_t=150\GeV$.
We also find that although the $\bsg$ constraint is stronger for $\tan\beta>1$, the $\epsb$ constraint is stronger for $\tan\beta\lsim 1$; these two are therefore the strongest, and complementary, constraints on the charged Higgs sector of the model. ACKNOWLEDGEMENTS The author thanks Professor T. K. Kuo for very helpful suggestions and for reading the manuscript. This work has been supported by the World Laboratory. [99]{} S. Glashow and S. Weinberg, . S. Bertolini, F. Borzumati, A. Masiero, and G. Ridolfi, . M. Battle (CLEO Collab.) in Proceedings of the joint Lepton-Photon and Europhysics Conference on High-Energy Physics, Geneva 1991. J. Hewett, ; V. Barger, M. Berger, and R. J. N. Phillips, . E. Thorndike, talk given at the 1993 Meeting of the American Physical Society, Washington D. C., April 1993; R. Ammar (CLEO Collab.), . V. Barger and R. J. N. Phillips, to appear in the Proceedings of the HARC workshop “Recent Advances in the Superworld", The Woodlands, Texas, April 1993. , , and G. T. Park, ; G. T. Park, to appear in the Proceedings of the HARC workshop “Recent Advances in the Superworld", The Woodlands, Texas, April 1993; , , G. T. Park, and A. Zichichi, (to appear in Phys. Rev. D). G. T. Park, (September 1993). W. Venus, talk given at the Lepton-Photon Conference, Cornell University, Ithaca, New York, August 1993. R. Barbieri, M. Frigeni, and F. Caravaglios, . G.
Altarelli, R. Barbieri, and F. Caravaglios, . G. Altarelli, CERN-TH.6867/93 (April 1993). G. Altarelli, R. Barbieri, and F. Caravaglios, . , , G. T. Park, H. Pois, and K. Yuan, . R. Barbieri and G. Giudice, . V. Barger, J. Hewett, and R. J. N. Phillips, . J. Bernabeu, A. Pich, and A. Santamaria, ; W. Beenakker and W. Hollik, Z. Phys. C [**40**]{}, 141 (1988); A. Akhundov, D. Bardin, and T. Riemann, ; F. Boudjema, A. Djouadi, and C. Verzegnassi, . A. Djouadi, G. Girardi, C. Verzegnassi, W. Hollik, and F. Renard, . M. Boulware and D. Finnell, . A. Denner, R. Guth, W. Hollik, and J. Kühn, Z. Phys. C [**51**]{}, 695 (1991). C. Ahn, B. Lynn, M. Peskin, and S. Selipsky, . A. Buras et al., . We have assumed that the updated analysis would not change the conclusions of Ref. [@BargerLE] drastically. [**Figure Captions**]{} - Figure 1: The regions in the $(m_{H^\pm},\tan\beta)$ plane excluded by the CLEO II bound $\brbsg<5.4\times10^{-4}$ at $95\%$ C.L., for $m_t=130, 150 \GeV$ in the 2HDM. The excluded regions lie to the left of each solid curve. The regions excluded by the LEP value $\epsb=-0.00592$ at $90\%$ C.L. lie below each dotted curve. The values of $m_t$ used are as indicated.
--- abstract: 'We study the spectrum of two dimensional coupled arrays of continuum one-dimensional systems by wedding a density matrix renormalization group procedure to a renormalization group improved truncated spectrum approach. To illustrate the approach we study the spectrum of large arrays of coupled quantum Ising chains. We demonstrate explicitly that the method can treat the various regimes of chains, in particular the three dimensional Ising ordering transition the chains undergo as a function of interchain coupling.' author: - 'Robert M. Konik' - Yury Adamov date: 'June 23, 2007' title: A Renormalization Group For Treating 2D Coupled Arrays of Continuum 1D Systems --- The density matrix renormalization group (DMRG) [@white] is one of the primary theoretical tools for the quantitative description of low dimensional lattice models. For a wide range of one dimensional (1D) lattice models, DMRG can characterize the model’s spectrum and correlation functions [@kuhn]. While there have been notable recent advancements [@moukouri; @white2], its use on 2D lattice models is more circumscribed [@hallberg]. There exist several strategies to apply DMRG to 2D models. In the first, the 2D lattice is reduced to a 1D lattice with long range interactions [@noack]. A second approach sees short chains treated as individual lattice sites, allowing the 1D DMRG algorithm to be applied to a model with short ranged interactions directly [@jongh]. In a more sophisticated variant of this methodology, the DMRG is applied in a two stage process [@moukouri]. The 2D system is first divided into a set of coupled 1D chains and the DMRG is used to determine a low energy reduction thereof. For the second stage, the reduced chains, coupled together and treated as individual lattice sites in a 1D lattice, are analyzed again using DMRG. 
In a final approach, the 1D matrix product states underlying the DMRG algorithm [@ostlund] are replaced by a higher dimensional generalization, projected entangled pair states [@ver]. In this letter we present a distinct approach to applying DMRG to 2D models. This approach trades upon a description of a 2D system as a mixture of continuum and discrete degrees of freedom. In particular, we approach 2D systems as coupled arrays of continuum 1D chains with truncated Hilbert spaces. This methodology offers several distinct advantages. It allows us to treat any 2D strongly correlated model provided it can be conceived as composed of continuum 1D subunits. Furthermore, the approach affords superior finite size scaling. As a function of the length, $R$, of the composite 1D systems, finite size corrections behave exponentially. This implies that we can access, at the very least, the infinite volume limit in the dimension parallel to the chains. Finally, the truncation of the underlying 1D Hilbert space dramatically lessens the numerical burden of the DMRG algorithm, while providing a natural means to perform a Wilsonian renormalization group (RG) improvement of any resulting answer [@first]. The specific type of system that we propose to study takes the form, $$\label{ei} H = \sum_i H^{1D}_i + J \sum_{<ij>}{\cal O}_i{\cal O}_j.$$ The 1D continuum subunits of the array are governed by $H^{1D}_i$ which we insist must be either gapless (and so governed by a conformal field theory) or gapped but integrable [@note]. Thus this method can study, for example, arrays of Luttinger liquids and a wide range of coupled 1D Mott insulators. The subunits are coupled together via the nearest neighbour coupling $J{\cal O}_i{\cal O}_j$, where ${\cal O}_i$ is an operator defined along the i-th continuum chain. This coupling should be relevant but can be of arbitrary strength. The analysis of the arrays proceeds in two conceptual steps. 
In the first step we follow Zamolodchikov’s pioneering numerical analysis of perturbed gapless 1D continuum theories [@zamo], an approach termed the truncated spectrum approach (TSA). We thus place the 1D chains on a ring of circumference, $R$. Unlike DMRG applied to pure lattice models, periodic boundary conditions along the chains can be employed without issue. By working at finite $R$, we discretize the 1D spectrum. This permits the states in the chains’ spectrum to be ordered in energy, i.e. $|1\rangle, |2\rangle,\ldots$, and then truncated at some finite cutoff, $E_c$, leaving us with a finite number of states in the theory. With these alterations we nonetheless remain in an excellent position to obtain information regarding the full theory in infinite volume. In Ref. [@zamo], a critical Ising chain in a magnetic field was studied. Choosing $E_c$ so that a mere $39$ states were kept, infinite volume results were reproduced within an error of $2\%$ (via diagonalization of a $39 \times 39$ matrix). It was this finding of excellent results at little numerical cost that motivated us to apply the TSA to more complicated situations [@first], including, as here, arrays of 1D chains. Part of the TSA’s success is predicated on embedding non-perturbative information into the initial computation. In the context at hand this means that given any two states, $|k\rangle, |l\rangle$, we can [*exactly*]{} compute matrix elements of the form $\langle k |{\cal O}_i | l\rangle$ by virtue of $H^{1D}_i$ being either conformal or integrable. We stress that the computation of such matrix elements is always a practical possibility, either by exploiting the algebraic structures inherent in a conformal field theory or through the form factor bootstrap approach in the case of an integrable field theory [@ffprog]. Having prepared the chains by truncating their finite volume spectrum, we proceed to the actual analysis of coupled arrays. We perform the analysis using the finite volume DMRG algorithm.
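The truncation step of the TSA can be illustrated on a toy problem. The sketch below applies the same recipe — diagonalize a perturbed Hamiltonian in a truncated basis of unperturbed eigenstates — to a quartic anharmonic oscillator rather than the Ising field theory; the toy model and all numbers are illustrative, not taken from the paper.

```python
import numpy as np

def truncated_spectrum_E0(n_states, lam=0.1):
    """Ground state energy of H = H0 + lam*x^4 in a basis of the lowest
    n_states harmonic-oscillator levels (H0 = n + 1/2, hbar = omega = 1)."""
    n = np.arange(n_states)
    H0 = np.diag(n + 0.5)
    # position operator x = (a + a^dagger)/sqrt(2) in the truncated basis
    x = np.zeros((n_states, n_states))
    idx = np.arange(n_states - 1)
    x[idx, idx + 1] = x[idx + 1, idx] = np.sqrt((idx + 1)/2.0)
    H = H0 + lam*np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(H)[0]

print(truncated_spectrum_E0(40))  # converges to ~0.5591 for lam = 0.1
```

As in the Ising example of Ref. [@zamo], a few dozen states already reproduce the untruncated answer to high accuracy; the DMRG below plays the analogous role once many such truncated chains are coupled.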
Here each chain with truncated spectrum is treated as a single site. We, however, adapt the DMRG algorithm to take into account that we are working with a truncated spectrum. This algorithm is outlined in Table 1.

1. Form the initial Hamiltonian, $H_{m-1}$ (and any other needed operators, ${\cal O}_{m-1}$), of the system block, $B_{m-1}$, of $m-1$ chains.
2. Form the Hamiltonian of the superblock, $B_{m-1}\bullet\bullet B_{m-1}$, of $2m$ chains, keeping only states whose energy, governed by $H_{m-1} \otimes H^{1D} \otimes H^{1D} \otimes H_{m-1}$ (i.e. with coupling between the blocks absent), is less than $E_c$.
3. Find the low-energy target state(s) of the superblock. Form the reduced density matrix of the $m$-chain block, $\rho_m$.
4. Diagonalize $\rho_m$, keeping sufficient eigenstates to obtain a truncation error of less than $\epsilon_c$.
5. Recast $H_m$ and ${\cal O}_m$ in the basis of kept eigenstates of $\rho_m$, obtaining $\bar{H}_m$ and $\bar{{\cal O}}_m$.
6. Diagonalize $\bar{H}_m$, obtaining $\bar{H}^*_m$. Rewrite $\bar{{\cal O}}_m$ in terms of the eigenstates of $\bar{H}^*_m$.
7. Form a new superblock, $B_{m}\bullet\bullet B_{m}$, of length $2m+2$. As determined by ${\bar H}^*_m \otimes H^{1D} \otimes H^{1D} \otimes {\bar H}^*_m$, keep only states whose energy is less than $E_c$.
8. Repeat steps 3-7 with $m\rightarrow m+1$ until the desired length of system is obtained.
9. Perform finite volume sweeps until convergence.

Table 1: Finite system DMRG algorithm adapted to the presence of a truncated spectrum.

Most steps of our finite system DMRG algorithm are unchanged from the standard algorithm [@white1]. One primary difference lies in the formation of the superblock Hilbert space and Hamiltonian (steps 2 and 7 in Table 1).
Instead of keeping all states in the Hilbert space, we insist that the state have an energy less than the truncation energy, $E_c$, as governed by the Hamiltonian of the uncoupled blocks. Concomitantly in order to meaningfully associate an energy with an arbitrary state of a superblock, we must be able to assign an energy to the states of each block. ![a) A sketch of an array of Ising chains, each of length R; b) a zero temperature phase diagram of the coupled chains showing both the ordered and disordered regions.](chains.eps){height="1.5in" width="2.in"} This mandates that at each iteration of the DMRG we must not only recast the block Hamiltonian, $H_m$, in terms of the kept eigenstates of the reduced density matrix, $\rho_m$ (step 5), obtaining $\bar{H}_m$, but we must diagonalize $\bar{H}_m$ (step 6). Working then with a basis of eigenstates of $\bar{H}_m$ allows us to associate a definite energy to the superblock states formed in the next step (step 7). A notable operational feature of this DMRG is the relatively small number of states we need to keep from the diagonalization of the reduced density matrix in order to achieve a given truncation error. In the example of arrays of coupled quantum Ising chains that we consider below, the number of kept states needed to achieve truncation errors on the order of $5\times 10^{-5}$ ranges from $10-40$ deep in the ordered phase of the chains to $50-90$ near the critical value of the interchain coupling where the chains order. The relatively small number of states needed to achieve a given accuracy mimics Zamolodchikov’s original finding that a relatively small number of states was needed to describe a critical Ising chain in a magnetic field. But it also reflects the presence of a truncated spectrum in our approach. As was shown in Ref. [@callan], the entanglement entropy that arises from dividing a 2D system into two behaves as the cutoff, $E_c$. 
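The superblock construction of steps 2 and 7 can be sketched concretely: given block and single-chain eigenenergies, the superblock basis keeps only product states whose *uncoupled* energy lies below $E_c$. A minimal illustration with made-up toy spectra:

```python
from itertools import product

def truncated_superblock(block_E, chain_E, E_c):
    """Keep only product states |k1> x |a> x |b> x |k2> whose energy under
    the uncoupled Hamiltonian H_block + H_chain + H_chain + H_block is
    below the cutoff E_c (steps 2 and 7 of Table 1)."""
    return [(k1, a, b, k2)
            for k1, a, b, k2 in product(range(len(block_E)),
                                        range(len(chain_E)),
                                        range(len(chain_E)),
                                        range(len(block_E)))
            if block_E[k1] + chain_E[a] + chain_E[b] + block_E[k2] < E_c]

# toy spectra: block eigenenergies and single-chain levels (illustrative)
block_E = [0.0, 0.9, 1.7]
chain_E = [0.0, 1.0]
states = truncated_superblock(block_E, chain_E, E_c=2.0)
print(len(states))  # only the low-energy combinations survive
```

It is precisely to make the block energies `block_E` well defined that step 6 diagonalizes $\bar{H}_m$ at each iteration.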
As the entanglement entropy is one measure of whether a DMRG-like algorithm will be successful [@ver; @hallberg], we believe that our introduction of a cutoff into the problem is a key feature of our approach. ![The ground state and first excited state energy of a 100 chain array as a function of interchain coupling obtained with both a random phase approximation (blue solid) and our DMRG methodology (red dashed). Here $\Delta = 1$.](gapped.eps){height="1.8in" width="2.9in"} A second key but related feature that marks our use of the DMRG algorithm is the use of RG improvement. For sufficiently large truncation energies, $E_c$, the quantities of interest (whether it be the spectrum or some observable) satisfy a one-loop RG flow [@first]: $$\label{eii} \frac{d\Delta Q}{d\ln E_c} = -g\Delta Q.$$ Here $\Delta Q = Q(E_c)-Q(E_c=\infty)$ is the deviation of some quantity as a function of the truncation energy from its value at $E_c=+\infty$. The quantity $g$ is a constant, determinable from high energy perturbation theory, and is related to the anomalous dimension of $Q$. When $Q$ is the energy of some state, $g=1$. Knowledge of this flow allows us to run the DMRG algorithm at several different truncation energies and then extrapolate the resulting flow to $E_c = +\infty$, so removing any residual effect of our use of a finite value for $E_c$. [**Coupled Ising Chains**]{}: To demonstrate this methodology we consider arrays of Ising chains (Fig. 1a) coupled via a nearest neighbour spin-spin perturbation: $$\label{eiii} H = \sum_i H^{1D~Ising}_i - J_\perp \sum_{<ij>}\sigma^z_i\sigma^z_j,$$ where the summations are over the chains in the array. The lattice form of the 1D Ising model is $$\label{eiv} H^{1D~Ising} = -J\sum_i(\sigma^z_i\sigma^z_{i+1}+ (1+g)\sigma^x_i).$$ Its continuum version is a Majorana fermion with gap, $\Delta = gJ$. We place the chain on a ring of circumference, $R$.
The corresponding Hilbert space of the chain divides into two sectors, termed Neveu-Schwarz (NS) and Ramond (R). At $T=0$, the chain can be either ordered or disordered. In its disordered phase, $\Delta < 0$, the NS/R sectors of the chain permit only even/odd numbers of free fermionic excitations. In the ordered phase, $\Delta > 0$, the two sectors both permit only states with even numbers of fermions. The matrix elements of the spin operator, $\langle i|\sigma^z|j\rangle$, needed to carry out the DMRG procedure can be found in Ref. [@fonseca]. These matrix elements are only non-vanishing if the states $|i\rangle$ and $|j\rangle$ lie in different sectors of the theory. Because we couple the chains together with nearest neighbour spin bilinears, i.e. $\sigma^z_i\sigma^z_{i+1}$, the Hilbert space of the full theory possesses two sectors. Any given state of the full theory has the tensor form $\otimes_i |k_i\rangle$ where $|k_i\rangle$ is a state on the i-th chain. These two sectors are distinguished by whether an even or odd number of the $|k_i\rangle$ lie in the NS sector (or equivalently, the R sector) of the individual chains. That the Hilbert space possesses different sectors is a generic feature of continuum models when placed in finite volume and is not particular to the Ising chains at hand. ![The behavior of the ground state energy as a function of chain length, $R$, for 100 chains. $J_\perp = 0.275$. In blue we see the analytic prediction while in red we see the results of the DMRG.](dmrg_fs.eps "fig:"){height="1.8in" width="2.6in"} We now submit the DMRG algorithm to a number of tests. The first test that we apply relates to the behaviour of the spectrum of the chains deep in their ordered phase ($\Delta > 0$ – see the phase diagram in Figure 1b). In this region of the phase diagram, we expect a chain RPA analysis to be accurate [@carr] and so to provide a baseline of comparison for our DMRG results.
The chain RPA analysis amounts to treating the model $$\label{ev} H^{I.C.Array}_{RPA} = -J\sum_i(\sigma^z_i\sigma^z_{i+1}+ (1+g)\sigma^x_i + h_{RPA}\sigma^z_i),$$ where $h_{RPA}$ is chosen in the standard self-consistent fashion [@carr]. We analyze this model numerically using the truncated spectrum approach along the same lines as Ref. [@fonseca]. However, akin to the discussion surrounding Eqn. (2) and in Ref. [@first], we perform an RG improvement of our numerical results. In Fig. 2 we plot the ground and first excited state energy of a 100-chain array as a function of the interchain coupling as computed by both the DMRG and the chain RPA analysis. In order to optimize numerical performance, we employ different chain lengths for different values of $J_\perp$. To obtain $R=\infty$ values for the gaps, $\Delta_{exc}$, of single excitations, we can use any finite value of $R$ provided that we satisfy $R\Delta_{exc} \gg 1$. Because finite $R$ corrections behave exponentially, $\delta_R \Delta_{exc} \sim e^{-R\Delta_{exc}}$, this constraint need only be satisfied loosely. In order to obtain a truncation error, $\epsilon_c$, of $1\times 10^{-4}$, we needed to keep at most 18 states. Decreasing the truncation error to $2.5\times 10^{-6}$, we needed to keep at most 34 states. Moreover, by comparing our results with an analysis of 50 coupled chains, we know that any finite size error related to studying only a finite number of chains is extremely small. The only significant uncertainty comes from the RG improvement – applying Eqn. (2) – and is the source of the error bars in Fig. 2. ![The behavior of the gap in the disordered phase as a function of $J_\perp$ for 60 coupled chains at $R = 10\Delta$. In red we plot the gap as determined for the truncation energy $E_c = 7.8\Delta$, while in blue we plot the RG-improved curve. For both, the values of $J_{\perp c}$ and $\nu$ are indicated.](dmrg_crit.eps "fig:"){height="1.8in" width="2.7in"} We see from Fig.
2 that the ground state energy as determined by the DMRG agrees exceptionally well with the chain RPA analysis even for values of the interchain coupling on the same order as the gap. The DMRG values of the gap for the first excited state however see increasing deviations from the RPA values as the interchain coupling is increased. These results indicate that the truncated DMRG algorithm is operating as expected. A more significant test of the algorithm is whether it predicts correctly the finite $R$ corrections. These corrections can be computed analytically, at least at leading order. According to Ref. [@zamo], they take the form $$\label{evi} \delta_R E_{gs} = -\frac{1}{2\pi}\sum_b \sum_{k_y} \int dk_x e^{-R\epsilon_b(k_y,k_x)},$$ and can be thought of as arising from spontaneous emission of excitations from the vacuum. Here $\epsilon_b(k_y,k_x)$ is the dispersion of some band $b$ of excitations in the coupled chains. $\delta_R E_{gs}$ sees contributions from excitations with different (discrete) values of momenta, $k_y$, transverse to the chains as well as excitations with different (continuum) values of the momenta, $k_x$, parallel to the chains. While we cannot compute $\epsilon_b$ exactly, we can estimate it from the values of the excited states as measured by the DMRG together with an RPA analysis [@carr]: $$\label{evii} \epsilon_b(k_y,k_x) = (v_{Fx}^2k_x^2+\Delta_{b~exc}^2+2J_\perp Z_o(1-\cos(k_y)))^{1/2}.$$ Here $\Delta_{b~exc}$ marks the lowest lying excitation in the band, while $Z_o$ is the square of a matrix element of the spin operator on a single chain, $Z_o = 2\langle 0|\sigma_i^z(0)|\epsilon_b(k_x=0,k_y=0)\rangle^2$. The latter quantity is estimated from the same analysis used in computing the RPA energies of Fig. 1. In Fig. 3 we plot $E_{gs}(R)$ for a particular value of interchain coupling ($J_\perp = 0.275$) using the correction in Eqn. 6 by taking into account the two lowest energy bands in the theory (blue dashed line) vs. 
the direct DMRG computation (red line). We see that for all but the smallest values of $R$ – where our leading order analytic approximation breaks down – the DMRG and expected analytic values agree well. For a point of comparison, we also plot the extrapolation of the energy from large values of $R$, where $E_{gs}$ scales linearly with the system volume, i.e. $E_{gs}(R) = \alpha(J_\perp) R N_{chain}$ (straight dashed orange curve). For values of $R \sim \Delta_{b~exc}^{-1}$ the finite size corrections are exponentially suppressed, consistent with $\Delta_{b~exc} = 5.1\Delta$ for $J_\perp = 0.275$ (from Fig. 1). The final test we put to the truncated DMRG algorithm is the chains’ ordering transition. Beginning with chains in their disordered state ($\Delta < 0$) and coupling them together with increasing $J_\perp$, they eventually order. This transition is in the same universality class as the 3D Ising model. This implies that the gap, $\Delta_{exc}$, in the disordered phase should vanish as $\Delta_{exc} \sim (J_{\perp c}-J_\perp)^\nu$ with $\nu = 0.630$ [@camp]. From our DMRG analysis (see Fig. 4), we find after RG improvement $J_{\perp c} = 0.184\pm 0.0025$, together with good agreement for the critical exponent, $\nu = 0.622\pm 0.019$. We see that RG improvement notably improves the results: computing instead $J_{\perp c}$ and $\nu$ from the results obtained at the largest of the truncation energies employed ($E_c = 7.8\Delta$), we obtain $J_{\perp c} = 0.1880$ and $\nu = 0.650$. This DMRG approach to 2D arrays of continuum systems has a number of potentially valuable variations. We first note that we have managed to treat 2D arrays using essentially a 1D DMRG algorithm. Ref. [@moukouri] has demonstrated that the DMRG algorithm, if used in a two stage process, can treat systems of one higher dimension. This implies that it may well be possible to study [*3D arrays*]{} using our approach.
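The RG improvement underlying these numbers amounts to solving Eqn. (2): for constant $g$, the flow gives $\Delta Q \propto E_c^{-g}$, so runs at several cutoffs can be extrapolated to $E_c=\infty$ by a simple fit. A schematic sketch with synthetic data (the routine and numbers are illustrative, not the paper's actual fit):

```python
import numpy as np

def rg_extrapolate(E_cs, Qs, g=1.0):
    """Fit Q(E_c) = Q_inf + A * E_c**(-g), the solution of the one-loop
    flow dQ/dlnE_c = -g (Q - Q_inf), and return the E_c -> infinity value."""
    X = np.vstack([np.ones(len(E_cs)), np.asarray(E_cs, float)**(-g)]).T
    Q_inf, A = np.linalg.lstsq(X, np.asarray(Qs, float), rcond=None)[0]
    return Q_inf

# synthetic data generated with Q_inf = -1.25, A = 0.8 (g = 1, an energy)
E_cs = [4.0, 6.0, 8.0, 12.0]
Qs = [-1.25 + 0.8/E for E in E_cs]
print(rg_extrapolate(E_cs, Qs))  # recovers -1.25
```

The spread among fits using different subsets of cutoffs is one way to estimate the extrapolation uncertainty quoted above.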
We also note that the algorithm we have outlined in this letter may see substantial improvement if wedded to a numerical renormalization group (NRG) à la Wilson [@wilson]. At each DMRG iteration, one could perform an NRG akin to that described in Ref. [@first] to dramatically increase the truncation energy being employed and so (hopefully) dramatically improve the results of the procedure. To summarize, we have demonstrated that the 1D DMRG algorithm can be directly applied to 2D arrays of 1D continuum systems. In particular we have shown this algorithm can describe the behavior of large arrays of quantum Ising chains both in their ordered phase and in the vicinity of their order-disorder transition. We expect that this procedure will produce quantitatively accurate results for a wide variety of 2D systems in their infinite volume limit. RMK and YA acknowledge support from the US DOE (DE-AC02-98 CH 10886) together with useful discussions with A. Tsvelik. S. R. White, Phys. Rev. Lett. [**69**]{}, 2863 (1992). T. D. Kühner and S. R. White, Phys. Rev. B [**60**]{}, 335 (1999). S. Moukouri, Phys. Rev. B [**70**]{}, 014403 (2004). S. White and A. Chernyshev, arXiv:0705.2746. K. Hallberg, Adv. Phys. [**55**]{}, 447 (2006); cond-mat/0609039. R. Noack and S. Manmana, AIP Conf. Proc. [**789**]{}, 93 (2005); cond-mat/0510321. M. de Jongh et al., Phys. Rev. B [**57**]{}, 8494 (1998). S. Ostlund and S. Rommer, Phys. Rev. Lett. [**75**]{}, 3537 (1995). F. Verstraete and J. Cirac, cond-mat/0407066. R. M. Konik and Y. Adamov, Phys. Rev. Lett. [**98**]{}, 147205 (2007). The requirement of integrability for gapped systems can be relaxed but at additional numerical cost. V. P. Yurov and Al. B. Zamolodchikov, Int. J. Mod. Phys. A [**6**]{}, 4557 (1991). F. Smirnov, “Form Factors in Completely Integrable Models of Quantum Field Theory”, World Scientific, Singapore (1992). S. R. White, Phys. Rev. B [**48**]{}, 10345 (1993). C. Callan and F. Wilczek, Phys. Lett. B [**333**]{}, 55 (1994).
P. Fonseca and A. Zamolodchikov, J. Stat. Phys. [**110**]{}, 527 (2003). K. Wilson, Rev. Mod. Phys. [**47**]{}, 773 (1975). S. T. Carr and A. M. Tsvelik, Phys. Rev. Lett. [**90**]{}, 177206 (2003). M. Campostrini et al., Phys. Rev. E [**60**]{}, 3526 (1999).
---
abstract: 'View based strategies for 3D object recognition have proven to be very successful. The state-of-the-art methods now achieve over 90% correct category level recognition performance on appearance images. We improve upon these methods by introducing a view clustering and pooling layer based on [*dominant sets*]{}. The key idea is to pool information from views which are similar and thus belong to the same cluster. The pooled feature vectors are then fed as inputs to the same layer, in a recurrent fashion. This recurrent clustering and pooling module, when inserted in an off-the-shelf pretrained CNN, boosts performance for multi-view 3D object recognition, achieving a new state of the art test set recognition accuracy of 93.8% on the ModelNet 40 database. We also explore a fast approximate learning strategy for our cluster-pooling CNN, which, while sacrificing end-to-end learning, greatly improves its training efficiency with only a slight reduction of recognition accuracy to 93.3%. Our implementation is available at <https://github.com/fate3439/dscnn>.'
bibliography:
- 'egbib.bib'
title: 'Dominant Set Clustering and Pooling for Multi-View 3D Object Recognition.'
---

Introduction
============

Appearance based object recognition remains a fundamental challenge to computer vision systems. Recent strategies have focused largely on learning category level object labels from 2D features obtained from projection. With the parallel advance in 3D sensing technologies, such as the Kinect, we also have the real possibility to seamlessly include features derived from 3D shape in recognition pipelines. There is also a growing interest in 3D shape recognition from databases of 3D mesh models, acquired from computer graphics databases [@modelnet] or reconstructed from point cloud data [@qi2016volumetric]. 3D object recognition from shape features may be categorized into *view-based* versus *volumetric* approaches.
View based approaches including those in [@chen2003visual; @lowe2001local; @macrini2002view; @murase1995visual] create hand-designed feature descriptors from 2D renderings of a 3D object by combining information across different views. 3D object recognition is then reduced to a classification problem on the designed feature descriptors. More recent methods in this class combine CNN features from the multiple views to boost object recognition accuracy [@johns2016pairwise; @qi2016volumetric; @su2015multi; @bai2016gift]. Volumetric approaches rely on 3D features computed directly from native 3D representations, including meshes, voxelized 3D grids and point clouds [@horn1984extended; @kazhdan2003rotation; @knopp2010hough; @sinha2016deep]. The state-of-the-art here is the use of 3D convolutional neural networks on discretized occupancy grids for feature extraction for further processing or for direct classification [@maturana2015voxnet; @qi2016volumetric; @wu20153d; @wu2016learning]. At the present time, at least when evaluated on popular 3D object model databases, the view based approaches [@johns2016pairwise; @su2015multi] outperform the volumetric ones [@maturana2015voxnet; @wu20153d], as reported in the extensive comparison in [@qi2016volumetric].

![image](pics/system/System_cluster_pooling_tight)

The state-of-the-art view based methods differ in the manner in which multi-view information is fused. In the MVCNN approach [@su2015multi], a view pooling layer is inserted in a VGG-m style network to perform multi-view feature fusion of CNN relu vectors. This view pooling layer performs a full stride channel-wise max pooling to acquire a unified feature vector, following which fully connected layers are used to predict category labels. In the pairwise decomposition method in [@johns2016pairwise], two CNNs are used, one for view pair selection and one for pairwise label prediction.
The two CNNs each use a VGG-m structure [@chatfield2014return] but they have to be trained separately, which is costly. At the expense of increased training cost, the pairwise formulation outperforms the MVCNN approach. In the present article we propose a revised architecture which aims to overcome potential limitations of the above two strategies, namely: 1) the winner-take-all pooling strategy in [@su2015multi], which could discard information from possibly informative views, and 2) the pairwise formulation of [@johns2016pairwise]. The key contribution is the introduction of a recurrent clustering and pooling module based on the concept of dominant sets, as illustrated in Figure (\[fig:SystemArchit\]). The 2D views of an object are abstracted into relu vectors, which serve as inputs to this new layer. Within this layer we construct a view similarity graph, whose nodes are feature vectors corresponding to views and whose edge weights are pairwise view similarities. We then find dominant sets within this graph, which exhibit high within cluster similarity and between cluster dissimilarity. Finally, we carry out channel wise pooling but only from [*within*]{} each dominant set. If the dominant sets have changed from the previous iteration, they are fed back as new feature vectors to the same layer, and the clustering and pooling process is repeated. But if not, they are fed forward to the next full stride pooling layer. In contrast to the MVCNN approach of [@su2015multi] we only pool information across similar views. The recurrent nature allows for the feature vectors themselves to be iteratively refined.
Our recurrent cluster-pooling layer, followed by a full stride view pooling layer, can be inserted after a pretrained network’s relu layers to yield a unified multi-view network which can be trained in an end-to-end manner.[^1] Our experimental results in Section (\[sec:EXP\]) show that when inserted before the view pooling layer in the MVCNN architecture of [@su2015multi], our recurrent clustering and pooling unit greatly boosts performance, achieving new state-of-the-art results in multi-view 3D object recognition on the ModelNet40 dataset.

Recurrent Clustering and Pooling Layer {#sec:RClusterPool_layer}
======================================

The recurrent clustering and pooling layer requires the construction of a view similarity graph, the clustering of nodes (views) within this graph based on dominant sets, and the pooling of information from within each cluster. We now discuss these steps in greater depth and provide implementation details.

A View Similarity Graph {#sec:similarity}
-----------------------

First, a pairwise view similarity measure is defined for any two views in the set of rendered views of a 3D object. We then construct a view similarity graph $G = (V , E , w)$ where views $i, j \in V$ are distinct nodes and each edge $E(i,j)$ has a weight $w(i,j)$ corresponding to the similarity between the views $i$ and $j$. This results in a complete weighted undirected graph $G = K_n$, where $n$ is the number of views. Different rendered views have different appearances, which are in turn captured with low dimensional signatures by the relu features from a CNN (we typically use relu6 or relu7 vectors).
A very convenient notion of pairwise view *similarity* between the appearance images of views $i$ and $j$ is therefore given by the inner product of the corresponding CNN relu feature vectors $r_i$ and $r_j$: $$w(i,j) = r_i \cdot r_j.$$ We exploit the property that the components of relu feature vectors from a CNN, where relu stands for “rectified linear units”, are non-negative and have finite values that tend to lie within a certain range. The larger $w(i,j)$ is, the more similar the two views $i$ and $j$ are.

Dominant Set Clustering {#sec:DS}
-----------------------

We now cluster views within the view similarity graph, based on the concept of dominant sets [@PavPelCVPR2003; @PavPel07]. The views to be clustered are represented as an undirected edge-weighted graph with no self-loops $G = (V, E,w)$, where $V = \{1, . . . , n\}$ is the vertex set, $E \subseteq V \times V$ is the edge set, and $w : E \rightarrow \mathbb{R}_+^*$ is the (positive) weight function. As explained in Section (\[sec:similarity\]), the vertices in $G$ correspond to relu vectors abstracted from different rendered views of a given object, the edges represent view relationships between all possible view pairs, and the edge-weights encode similarity between pairs of views. We compute the affinity matrix of $G$, which is the $n \times n$ nonnegative, symmetric matrix $A = (a_{ij})$ with entries $a_{ij} = w(i, j)$. Since in $G$ there are no self-loops, all entries on the main diagonal of $A$ are zero. For a non-empty subset $S \subseteq V$, $i \in S$, and $j \notin S$, let us define $$\label{eq1} \phi_S(i,j)=a_{ij}-\frac{1}{|S|} \sum_{k \in S} a_{ik}$$ which measures the (relative) similarity between vertices $j$ and $i$, with respect to the average similarity between $i$ and its neighbors in $S$.
Next, to each vertex $i \in S$ we assign a weight defined (recursively) as follows: $$w_S(i)= \begin{cases} 1,&\text{if\quad $|S|=1$},\\ \sum_{j \in S \setminus \{i\}} \phi_{S \setminus \{i\}}(j,i)w_{S \setminus \{i\}}(j),&\text{otherwise}. \end{cases}$$ As explained in [@PavPelCVPR2003; @PavPel07], a positive $w_S(i)$ indicates that adding $i$ to the elements in $S$ will increase its internal coherence, whereas a negative weight indicates that adding $i$ will cause the overall coherence of $S$ to be decreased. Finally, we define the total weight of $S$ $$W(S)=\sum_{i \in S}w_S(i)~.$$ A non-empty subset of vertices $S \subseteq V$ such that $W(T) > 0$ for any non-empty $T \subseteq S$, is said to be a [*dominant set*]{} if:

1. $w_S(i)>0$, for all $i \in S$,

2. $w_{S \cup \{i\}}(i)<0$, for all $i \notin S$.

It is evident from its definition that a dominant set satisfies the two properties of a cluster that we desire, namely, it yields a partition of $G$ with a high degree of intra-cluster similarity and inter-cluster dissimilarity. Condition 1 indicates that a dominant set is internally coherent, while condition 2 implies that this coherence will be decreased by the addition of any other vertex. A simple and effective optimization algorithm to extract a dominant set from a graph based on the use of [*replicator dynamics*]{} can be found in [@PavPelCVPR2003; @PavPel07; @bulo2017dominant], with a run time complexity of $\mathcal{O}(n^2)$, where $n$ is the number of vertices in the graph. We adopt this algorithm in our implementation.

![image](pics/RecurClusterPool/ClusterHierarchy_tight)

Clustering, Pooling and Recurrence
----------------------------------

After obtaining a partition into dominant sets, we propose to perform pooling operations only [*within*]{} each cluster. This allows the network to take advantage of informative features from possibly disparate views.
The resultant relu vectors are then fed back to the beginning of the clustering and pooling layer, to serve as inputs for the next recurrence. This process is repeated, in a cyclical fashion, until the clusters (dominant sets) do not change. During the recurrences, we alternate max and average pooling, which allows for further abstraction of informative features. A full stride max pooling is applied to the relu vectors corresponding to the final, stable, dominant sets. We carry out experiments to demonstrate the effect of different recurrent structures and pooling combinations in Section (\[sec:ResClusterPool\]). Figure (\[fig:ClusterHierarchy\]) depicts the cluster-pooling recurrences for a mug, with 12 initial input views. After the first stage there are 3 clusters, one corresponding to side views (blue), a second corresponding to front or rear views (green) and a third corresponding to oblique views (maroon). At this stage the three relu feature vectors represent pooled information from within these distinct view types. After the second recurrence, information from the green and blue views is combined, following which no additional clusters are formed. The proposed recurrent clustering and pooling strategy aims to improve discrimination between different input objects. When input relu features are changed, our method increases the likelihood of variations in the aggregated multi-view relu output. In contrast, a single max pooling operation, as in the MVCNN approach of [@su2015multi], will result in the same output, unless the change in the inputs causes the present max value to be surpassed. The ability to capture subtle changes in relu space helps in discrimination between similar object categories. Dominant set clustering seeks to prevent outliers in a given cluster, and the within cluster pooling decreases the likelihood that the aggregated result is determined by the single relu vector that gives the maximum response.
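The affinity construction, dominant-set extraction, and within-cluster pooling above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the released implementation (the function names are ours): replicator dynamics converges to a vector whose support is a dominant set, and a full partition is obtained by the standard strategy of peeling clusters off one at a time.

```python
import numpy as np

def extract_dominant_set(A, tol=1e-6, max_iter=1000):
    """Replicator dynamics on affinity matrix A; returns indices of one dominant set."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)          # start at the barycenter of the simplex
    for _ in range(max_iter):
        x_new = x * (A @ x)
        s = x_new.sum()
        if s == 0:                   # fully disconnected: keep the uniform vector
            break
        x_new /= s
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return np.where(x > tol)[0]      # support of the converged vector

def dominant_set_clusters(R):
    """Cluster relu vectors (rows of R) by peeling off dominant sets."""
    A = R @ R.T                      # pairwise similarities w(i,j) = r_i . r_j
    np.fill_diagonal(A, 0.0)         # no self-loops
    remaining = np.arange(R.shape[0])
    clusters = []
    while remaining.size > 0:
        ds = extract_dominant_set(A[np.ix_(remaining, remaining)])
        clusters.append(remaining[ds])
        remaining = np.delete(remaining, ds)
    return clusters

def within_cluster_pool(R, clusters, mode="max"):
    """Pool relu vectors channel-wise within each cluster; one vector per cluster."""
    pool = np.max if mode == "max" else np.mean
    return np.vstack([pool(R[c], axis=0) for c in clusters])
```

On a tiny example with two identical views and one orthogonal view, the first two land in one dominant set and the third forms its own cluster; the pooled output then has one row per cluster, ready to be fed back for the next recurrence.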
Back propagation
----------------

![image](pics/RecurClusterPool/forward_backward)

![image](pics/RecurClusterPool/ToyExample)

\[fig:fp\_bp\] \[fig:toyexample\]

There are no parameters to be learned in our recurrent clustering and pooling layer; we therefore derive the gradients w.r.t this layer’s input given the gradients w.r.t its output, to facilitate learning at preceding layers. Details of the forward pass and back propagation steps are illustrated in Figure (\[fig:fp\_bp\]). Here $f_c$ represents the dominant set clustering unit which takes as input the (t-1)-th recurrence $X_{t-1} \in \mathbb{R}^{n_{t-1} \times d}$ and outputs cluster assignments, where $n_{t-1}$ is the number of input nodes and $d$ is the relu vector’s dimension. $f_p$ stands for the within-cluster pooling unit, which takes as input the relu vectors belonging to the k-th resultant cluster $m_k \in \mathbb{R}^{c_k \times d}$ and outputs a pooled relu vector, where $c_k$ is the number of nodes in the k-th cluster. During the *forward* pass, we first acquire cluster assignments for the (t-1)-th recurrence using the dominant set algorithm $C_{k,t-1} = f_c \left( A_{t-1} \right)$, where $k$ stands for an arbitrary resultant cluster and $A_{t-1}$ stands for the affinity matrix of the constructed similarity graph. A one-hot cluster representation matrix $C_{k,t-1} \in \mathbb{R}^{c_k \times n_{t-1}}$ is constructed in the following manner. For an arbitrary cluster $k$ containing input nodes $\{k_1,k_2,k_3,..., k_{c_k}\}$, the i-th row in its cluster representation matrix is a one-hot vector encoding value $k_i$. Now, inputs belonging to the k-th cluster can be represented as $$\label{eqn:m_k} m_k = C_{\left(k,t-1\right)} X_{t-1}.$$ The within-cluster pooling unit will then give $X_t^{\left(k\right)} = f_p \left( m_k \right)$, where $X_t^{\left(k\right)} \in \mathbb{R}^{1 \times d}$ is the pooled relu vector of cluster k. The above process applies to all resultant clusters.
To establish the formulas for *back propagation* through the recurrent clustering and pooling layer, we denote the cross entropy loss function by $L$. We note that the backward pass requires the same number of recurrences as the forward pass, but the direction of data flow is opposite, as shown in Figure (\[fig:fp\_bp\]). During the backward pass of the (t-1)-th recurrence, we iteratively loop through clusters to accumulate the gradients of the loss function w.r.t the (t-1)-th recurrence’s input, defined as $\frac{\partial L}{\partial X_{t-1}}$. For a given resultant cluster $k$, gradients will be mapped back only to those input nodes that belong to this very cluster. If we define the gradients w.r.t the recurrent layer’s inputs of cluster k as $\frac{\partial L}{\partial X_{t-1}^{\left(k\right)}}$, gradients w.r.t output as $\frac{\partial L}{\partial X_{t}^{\left(k\right)}}$, and w.r.t pre-pooling as $\frac{\partial L}{\partial m_k}$, we have the following equations: $$\frac{\partial L}{\partial m_k} = f_p^{-1} \left( \frac{\partial L}{\partial X_{t}^{\left(k\right)}} \right)$$ $$\frac{\partial L}{\partial X_{t-1}^{\left(k\right)}} = C_{k,t-1}^{T} \frac{\partial L}{\partial m_k}$$ which are derived by reversing the operations of pooling and then clustering. Gradients with respect to the recurrent layer’s input are then given by $$\label{eqn:gradientFinal} \frac{\partial L}{\partial X_{t-1}} = \sum_{k} \frac{\partial L}{\partial X_{t-1}^{\left(k\right)}} = \sum_{k} C_{k,t-1}^{T} f_p^{-1} \left( \frac{\partial L}{\partial X_{t}^{\left(k\right)}} \right).$$ The toy example in Figure (\[fig:toyexample\]) illustrates a scenario with 6 input views using within-cluster max pooling. When back propagating the gradients, the orange cells represent the relu positions where gradients have passed through. We note that the grey cells’ gradients will always remain zero and that gradients w.r.t the input $X_{t-1}$ will only have non-zero values at nodes belonging to the k-th cluster.
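For within-cluster max pooling, the forward and backward rules above amount to recording, per channel, which row of $m_k$ produced the max, and routing the incoming gradient back to that row's global input position ($C_{k,t-1}^{T}$ applied after $f_p^{-1}$). A toy NumPy sketch of one recurrence, with hypothetical function names:

```python
import numpy as np

def cluster_max_pool_forward(X, clusters):
    """Within-cluster max pooling; records per-channel argmax for the backward pass."""
    out, argmax = [], []
    for c in clusters:
        m = X[c]                       # m_k = C_k X  (rows of cluster k)
        idx = np.argmax(m, axis=0)     # winning row per channel
        out.append(m[idx, np.arange(X.shape[1])])
        argmax.append(idx)
    return np.vstack(out), argmax

def cluster_max_pool_backward(dL_dout, X_shape, clusters, argmax):
    """Scatter gradients back: only the winning relu positions receive gradient."""
    dL_dX = np.zeros(X_shape)
    cols = np.arange(X_shape[1])
    for k, c in enumerate(clusters):
        # f_p^{-1}: route each channel's gradient to the row that won the max;
        # C_k^T: map cluster-local rows back to their global input positions.
        dL_dX[np.asarray(c)[argmax[k]], cols] += dL_dout[k]
    return dL_dX
```

All other input positions keep zero gradient, matching the grey cells in the toy example of Figure (\[fig:toyexample\]).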
\[fig:featTypes\]

Experiments {#sec:EXP}
===========

**Network Setup.** We use the same baseline CNN structure as in [@johns2016pairwise; @su2015multi], which is a VGG-M network [@chatfield2014return] containing five convolutional layers followed by three fully connected layers. We follow the same process of network pretraining and task specific network fine tuning as in [@johns2016pairwise; @qi2016volumetric; @su2015multi]. Specifically, we use the Imagenet 1k pretrained VGG-m network provided in [@chatfield2014return] and fine tune it on the ModelNet 40 training set after our recurrent clustering and pooling layer is inserted. In our experiments, we insert customized layers after the relu6 layer.

**Dataset.** We evaluate our method on the Princeton ModelNet40 dataset [@modelnet] which contains 12,311 3D CAD models from 40 categories. This dataset is well annotated and many state-of-the-art approaches have reported their results on it [@johns2016pairwise; @qi2016volumetric; @su2015multi]. The dataset also provides a training and testing split, in which there are 9,843 training and 2,468 test models [^2]. We use the entire training and testing set for experiments in Section (\[sec:ResClusterPool\]) and we provide results for both the full set and the subset when comparing against related works in Section (\[sec:ExpArt\]).

**Rendering and additional feature modalities.** We render the 3D mesh models by placing 12 centroid pointing virtual cameras around the mesh every 30 degrees with an elevation of 30 degrees from the ground plane, which is the first camera setup in [@su2015multi]. In our experiments we also include additional feature types beyond the grey scale (appearance) images, to encode surface geometry. Surface normals are computed at the vertices of each 3D mesh model and then bilinearly interpolated to infer values on their faces. For depth, we directly apply the normalized depth values.
We then render these 3D mesh features using the multi-view representations in [@su2015multi] but with these new features. We linearly map the surface normal vector field $(n_x,n_y,n_z)$, where $n_i \in [-1,1]$ and $ i = x,y,z$, to a color coding space $(C_x, C_y, C_z)$, where $C_i \in [0,255]$ for all $i = x,y,z$, to ensure that these features are similar in magnitude to the intensity values in the grey scale appearance images. Examples of the computed rendered feature types are shown in Figure (\[fig:featTypes\]).

Training and Testing Procedure {#sec:Training}
------------------------------

We explore two training approaches for our system in Figure (\[fig:SystemArchit\]). In [*Fast training*]{} we use the Imagenet pretrained VGG-m network in [@chatfield2014return] to compute the relu7 feature vectors of each view of each training 3D mesh, by forward passing the 2D views into it. We then forward pass the relu7 vectors to our recurrent clustering and pooling layer. In our experiments we use a universal clustering scheme at each recurrence for all training and testing objects, by averaging over the affinity matrices of all training objects. Ideally category-specific clustering is preferred for better category level recognition accuracy, but we do not have access to labels at test time. The universal clustering scheme is computationally efficient and the consistency it grants helps improve recognition accuracy. We record each recurrence’s universal clustering scheme w.r.t training objects to formulate a recurrent clustering hierarchy for end-to-end training. After the full stride pooling layer, we have a single fused relu7 feature vector on which an SVM classifier is trained. At test time, we follow the same routine for a test object and apply the clustering scheme we used during training. The trained SVM classifier is then applied to predict its object category label.
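The universal clustering scheme reduces to averaging the per-object affinity matrices before clustering; a minimal NumPy sketch (our own naming, assuming every object has the same number of rendered views, as in the 12-view setup):

```python
import numpy as np

def universal_affinity(relu_feats):
    """Average per-object affinity matrices over all training objects,
    yielding one shared affinity matrix on which clustering is run once
    and then reused for every training and testing object."""
    n_views = relu_feats[0].shape[0]
    A_bar = np.zeros((n_views, n_views))
    for R in relu_feats:               # R: (n_views, d) relu7 matrix of one object
        A = R @ R.T                    # w(i,j) = r_i . r_j
        np.fill_diagonal(A, 0.0)       # no self-loops
        A_bar += A
    return A_bar / len(relu_feats)
```

The averaged matrix is symmetric with a zero diagonal, so the same dominant-set extraction used per object applies to it unchanged.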
We note that there is no CNN training at all since we applied an Imagenet pretrained VGG-m network with no fine-tuning. In [*End-to-end training*]{} we directly feed the rendered 2D maps of training objects into the unified network in Figure (\[fig:SystemArchit\]) to perform the forward and backward passes in an end-to-end manner, using the recorded recurrent clustering hierarchy. The weights for the VGG-m network’s layers before and after the recurrent clustering and pooling layer are jointly learned during training time. At test time, we send a test object’s rendered 2D maps to the network to acquire its predicted object category label.

Model Ablation Study
--------------------

#### Recurrent Clustering and Pooling Structure {#sec:ResClusterPool}

In our evaluation “f-max” stands for a full stride channel-wise max pooling, “ds-avg” stands for one recurrence of the clustering and pooling layer with within-cluster average pooling and “(ds-x)” stands for recurrent clustering and pooling until the clusters are stable. We examine the benefits of both recurrence and pooling with 3 variations: 1) ds-avg-f-max and 2) ds-max-f-avg, which use only one phase of clustering and pooling but with different pooling operations, and 3) (ds-alt)-f-max, which uses recurrent clustering while alternating max and average pooling. Table (\[tab:ResClusterPooling\]) shows the results of these variations, together with the baseline “f-max” used by the MVCNN method [@su2015multi]. Recurrent clustering and pooling is indeed better than a non-recurrent version, and alternating max-avg within-cluster pooling followed by a full stride max pooling performs better than the other variants. We further note that end-to-end training performs better than non-end-to-end fast training.

#### Additional Feature Types {#sec:ExpFeat}

We now explore the benefit of additional feature modalities using “(ds-alt)-f-max” clustering and pooling.
We run experiments using the fast training scheme introduced in Section (\[sec:Training\]). The results in Table (\[tab:ResFeatures\]) show that even without fine tuning, the network pretrained on pure appearance images can be generalized to handle different feature modalities. The additional feature types significantly boost the recognition accuracy of our recurrent cluster and pooling structure, with a combination of appearance, depth and surface normals giving the best test set recognition accuracy of 93.3% with no CNN training.

#### Effect of the Number of Views {#sec:ExpNViews}

![image](pics/exp/nViewPlot_eps)

\[fig:nViewsExp\]

We evaluate the effect of the number of views on our recurrent clustering and pooling CNN and on MVCNN [@su2015multi] in Table (\[tab:nViewsExp\]) and Figure (\[fig:nViewsExp\]). Our method is consistently better than the MVCNN approach, with a steady improvement of recognition accuracy as the number of views increases. We further note that for MVCNN there is an evident drop in performance when increasing from 6 views to 12 views, illustrating a potential drawback of its single full stride max pooling strategy.

Comparison with the present State-of-the-art {#sec:ExpArt}
--------------------------------------------

We now compare our method against the state-of-the-art view based 3D object recognition approaches [@johns2016pairwise; @qi2016volumetric; @su2015multi] in Table (\[tab:CompStateArt\]). Two accuracy groups are reported: one for the subset used in MVCNN [@su2015multi] and one for the full set used in [@qi2016volumetric; @johns2016pairwise].[^3] For Johns [@johns2016pairwise] and Qi [@qi2016volumetric], we quote the recognition accuracies reported in their papers. For MVCNN [@su2015multi], we quote their reported subset results but reproduce experimental results on the full set.
In [@johns2016pairwise; @qi2016volumetric; @su2015multi], two types of view sampling/selection strategies are applied, as mentioned in Table (\[tab:CompStateArt\]).[^4] In addition to recognition accuracy, we also consider training time consumption. We measure the training time cost per epoch by $\mathcal{C} = \Phi \mathcal{N}_v \mathcal{N}_c$ where $\mathcal{N}_v$ denotes the number of rendered views, $\mathcal{N}_c$ denotes the number of VGG-m like CNNs trained, and $\Phi$ stands for the *unit training cost* of a single CNN: the computational complexity of one epoch over a given 3D dataset when only one view is rendered per object. We denote the computational complexity of our fast training scheme, which consists of 1 epoch of forward passing and SVM training, as $\epsilon$, since this is less costly than $\Phi$. We estimate that $\epsilon < 0.5\Phi$. The results in Table (\[tab:CompStateArt\]) show that our recurrent clustering and pooling CNN outperforms the state-of-the-art methods in terms of recognition accuracy, achieving a 93.8% full test set accuracy on ModelNet40 and 92.8% on the subset. When fast (non end-to-end) training is applied, the method still achieves state-of-the-art recognition accuracies of 93.3% and 92.1% with a greatly reduced training cost. Our results presently rank second on the ModelNet40 Benchmark [@modelnet].[^5] The top performer is the method of Brock [@brock2016generative], which is a voxel-based approach with 3D representation training, focused on designing network models. In contrast, our method is a 2D view-based approach which exploits strategies for view feature aggregation. As rightly observed by a reviewer of this article, the 95.54% accuracy on ModelNet40 achieved in [@brock2016generative] is accomplished with an ensemble of 5 Voxception-ResNet [@he2016deep] (VRN) architectures (45 layers each) and 1 Inception-like [@szegedy2017inception] architecture.
Training each model from the ensemble takes 6 days on a Titan X. In our approach, fine-tuning a pretrained VGG-M model after inserting our recurrent layer takes 20 hours on a Tesla K40 (rendering 12 views for ModelNet40). When only one VRN is used instead of the ensemble, Brock achieves 91.33% on ModelNet40, while we achieve 92.2% using only RGB features.

Conclusion {#sec:CON}
==========

The recurrent clustering and pooling layer introduced in this paper aims to aggregate multi-view features in a way that provides more discriminative power for 3D object recognition. Experiments on the ModelNet40 benchmark demonstrate that the use of this layer in a standard pretrained network achieves state of the art object category level recognition results. Further, at the cost of sacrificing end-to-end training, it is possible to greatly speed up computation with a negligible loss in multi-view recognition accuracy. We therefore anticipate that the application of a recurrent clustering and pooling layer will find value in 3D computer vision systems in real world environments, where both performance and computational cost have to be considered.

#### Acknowledgments

We are grateful to the Natural Sciences and Engineering Research Council of Canada for funding, to Stavros Tsogkas and Sven Dickinson for their helpful comments, and to the reviewers whose constructive feedback greatly improved this article.

[^1]: Note that the within-cluster pooling operation is fixed: it is either max or average pooling throughout the recurrence.

[^2]: Qi [@qi2016volumetric] used this entire train/test split and reported average class accuracy on the 2,468 test objects. Su [@su2015multi] used a subset of the train/test split comprising the first 80 objects in each category in the train folder (or all objects if there are fewer than 80) and the first 20 objects of each category in the test folder, respectively.
[^3]: Judging by the description in [@johns2016pairwise] “ModelNet10, containing 10 object categories with 4,905 unique objects, and ModelNet40, containing 40 object categories and 12,311 unique objects, both with a testing-training split.” we assume they used the entire train/test split. Note that 9,843 training and 2,468 test models result in a total of 12,311 objects in the dataset.

[^4]: The term “$30^{\circ}$ elevation” means only views at an elevation of $30^{\circ}$ above the ground plane, constrained to rotations about a gravity vector, are selected. The term “Uniform” means all uniformly sampled view locations on the view sphere. More specifically, in Johns [@johns2016pairwise], where a CNN based next-best-view prediction is applied for view point selection, the authors select the best 12 of 144 views to perform a combined-voting style classification. Therefore their CNNs are trained on a view base of 144 rendered views per object.

[^5]: As of July 17th, 2017.
---
abstract: 'Equiangular tight frames (ETFs) are configurations of vectors which are optimally geometrically spread apart and provide resolutions of the identity. Many known constructions of ETFs are group covariant, meaning they result from the action of a group on a vector, like all known constructions of symmetric, informationally complete, positive operator-valued measures. In this short article, some results characterizing the transitivity of the symmetry groups of ETFs will be presented as well as a proof that an infinite class of so-called Gabor-Steiner ETFs are roux lines, where roux lines are a generalization of doubly transitive lines.'
author:
-
title: '$2$- and $3$-Covariant Equiangular Tight Frames'
---

Introduction
============

Frames are generalizations of orthonormal bases which have applications in signal processing, quantization, coding theory, and more [@frame_book; @Waldron18]. Equiangular tight frames are the closest analog to orthonormal bases in a redundant setting and are known to give representations of data that are optimally robust to erasures and noise [@StH03]. Many equiangular tight frames of interest are generated by group actions. Understanding the higher order symmetries of an equiangular tight frame yields information about the structure of the frame and when such equiangular tight frames may exist. In Section \[sec:gpcov\], double and triple covariance of equiangular tight frames are characterized (completely so in the latter case), generalizing results in the quantum information literature (in particular [@Zhu15]) about symmetric, informationally complete, positive operator-valued measures, which are a specific class of equiangular tight frames. In Section \[sec:drackn\], the covariance properties of so-called Gabor-Steiner equiangular tight frames [@BoKi18] are explored.
In particular, a class of Gabor-Steiner equiangular tight frames are shown to be roux lines, a generalization of both abelian distance-regular antipodal covers of the complete graph and doubly transitive equiangular tight frames [@IvMi18].

Equiangular Tight Frames and Group Covariance {#sec:gpcov}
=============================================

Equiangular tight frames emulate the algebraic and geometric properties of orthonormal bases but may be redundant. Let $\Phi = \{\varphi_j\}_{j=1}^n \subset {\mathbb{C}}^d$. Then $\Phi$ is an *equiangular tight frame (ETF)* if the following hold:

1. for all $x \in {\mathbb{C}}^d$, $x= \frac{d}{n} \sum_{j=1}^n \langle x, \varphi_j \rangle \varphi_j,$

2. ${\left\Vert\varphi_j\right\Vert} = 1$ for all $j \in \{1, \hdots, n\}$, and

3. there exists $\alpha \geq 0$ such that ${\left\vert\langle\varphi_j,\varphi_k\rangle\right\vert}=\alpha$ for all $j \neq k$.

It turns out that properties 1) and 3) imply that the absolute values of the inner products are optimally small; that is, the vectors are as geometrically spread out as possible.

[@StH03; @LS1973] \[thm:Welch\] Let $\Phi = \{\varphi_j\}_{j=1}^n \subset {\mathbb{C}}^d$ be a set of unit vectors. Then $$\label{eqn:Welch} \max_{j \neq k} {\left\vert\langle\varphi_j,\varphi_k\rangle\right\vert} \geq \sqrt{\frac{n-d}{d(n-1)}}.$$ The bound in (\[eqn:Welch\]) is saturated if and only if $\Phi$ is an equiangular tight frame. Further, the bound in (\[eqn:Welch\]) may only be saturated if $n \leq d^2$.

The bound in (\[eqn:Welch\]) is called the *Welch bound*, and the bound $n \leq d^2$ is *Gerzon’s bound*. It is conjectured that there is always an ETF of $d^2$ vectors in ${\mathbb{C}}^d$ [@Zauner1999]; this is called *Zauner’s conjecture*. This conjecture originally arose in quantum information theory, where such maximal ETFs are called *symmetric, informationally complete, positive operator-valued measures (SICs)*.
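As a numerical sanity check (an illustration added here, not from the original text), the three-vector "Mercedes-Benz" frame in $\mathbb{R}^2$, viewed inside ${\mathbb{C}}^2$, saturates the Welch bound with $n=3$, $d=2$: the bound equals $\sqrt{1/4} = 1/2$, and every pair of the three unit vectors, spaced $120^{\circ}$ apart, has inner product $-1/2$.

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2, 120 degrees apart.
n, d = 3, 2
angles = 2 * np.pi * np.arange(n) / n
Phi = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # rows are unit vectors

G = Phi @ Phi.T                                            # Gram matrix
off = np.abs(G[~np.eye(n, dtype=bool)])                    # |<phi_j, phi_k>| for j != k
welch = np.sqrt((n - d) / (d * (n - 1)))                   # Welch bound = 1/2 here

assert np.allclose(off, welch)                             # equiangular at the bound
assert np.allclose((d / n) * Phi.T @ Phi, np.eye(d))       # tightness: (d/n) sum of
                                                           # rank-one projections = I
```

Both Welch-bound saturation and the resolution-of-the-identity condition 1) hold simultaneously, as Theorem \[thm:Welch\] requires of an ETF.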
A stronger variant of Zauner’s conjecture is that such SICs may always be generated as the orbit of a single vector under a projective unitary representation of ${\mathbb{Z}}_p \times {\mathbb{Z}}_p$ related to a finite Weyl-Heisenberg group. In general, we call any ETF which is formed as the orbit of a single vector under a (projective) unitary representation *group covariant*. By definition, the action of the group on a group covariant ETF is *transitive*; that is, given any two vectors in the ETF, there is a unitary mapping parameterized by a group element that maps one to the other. Unitary transformations leave the quantum state space invariant, so it is of interest to ask when permutations of the associated rank-one projections of a SIC [@Zhu15] or other group covariant ETF can be realized by such operators. For an ETF $\Phi$ parameterized by a group we denote by $G$ the group of unitary operators for which the ETF is invariant. That is, for all $U \in G$ and $\varphi_j \in \Phi$, $U \varphi_j \varphi_j^\ast U^\ast = \varphi_{\sigma(j)} \varphi_{\sigma(j)}^\ast$ for some permutation $\sigma$ of the group parameterization. The *symmetry group* of $\Phi$ is $\overline{G}=G/S^1$, that is, $G$ up to multiplication by a universal phase factor. If the symmetry group maps every ordered $k$-tuple of distinct elements to every ordered $k$-tuple of distinct elements (i.e., is $k$-transitive), then we call the ETF *$k$-covariant*. The symmetry group yields important structural information about the ETF (see, e.g., [@AFF11]). Doubly and triply covariant SICs were completely classified in [@Zhu15]. [@Zhu15] \[thm:3SIC\] There are no triply covariant SICs.
Up to equivalence, the doubly covariant SICs are - SICs in ${\mathbb{C}}^2$, - the Hesse SIC (a certain type of SIC in ${\mathbb{C}}^3$ with many linear dependencies, also the Gabor-Steiner-ETF over ${\mathbb{Z}}_3$ [@Hugh07; @DBBA2013; @BoKi18]), and - Hoggar’s lines (a sporadic SIC formed by the Weyl-Heisenberg group over ${\mathbb{Z}}_2 \times {\mathbb{Z}}_2 \times {\mathbb{Z}}_2$ rather than a cyclic group [@Hog98]). In order to characterize triply covariant ETFs, we will make use of so-called triple products. [@AFF11; @ChWa16; @FJKM17] Let $\Phi = \{\varphi_j\}_{j=1}^n$ be an ETF for ${\mathbb{C}}^d$. For $j, k, \ell \in \{1, \hdots, n\}$ we define the *triple product* to be $\operatorname*{TP}(j,k,\ell) = {\left\langle\varphi_j,\varphi_k\right\rangle}{\left\langle\varphi_k,\varphi_\ell\right\rangle}{\left\langle\varphi_\ell,\varphi_j\right\rangle}.$ If all of the triple products of distinct $j,k,\ell$ are real and negative, then $\Phi$ is a *simplex*. If an ETF $\Phi$ of $n$ vectors in ${\mathbb{C}}^d$ is triply covariant, then $d=1$, $n=d$, or $n=d+1$. That is, the only non-trivial triply covariant ETFs are orthonormal bases and simplices. We assume that $n \geq 3$ and generalize the proof of Theorem \[thm:3SIC\] found in [@Zhu15 Lemma 5]. Let $\Phi=\{ \varphi_j\}_{j=1}^n$ be a triply covariant ETF for ${\mathbb{C}}^d$. Then all of the triple products (of distinct vectors) must be equal. Since for all $j \neq k$, $\operatorname*{TP}(j,k,\ell) = \overline{\operatorname*{TP}(k,j,\ell)}$, all of the triple products (of distinct vectors) must be real.
We fix $j \neq k$ and note that by the Welch bound (Theorem \[thm:Welch\]) $$\begin{aligned} \lefteqn{\sum_{\ell=1}^n \operatorname*{TP}(j,k,\ell)= \left( \sum_{\ell \notin\{j,k\}} + \sum_{\ell\in\{j,k\}} \right) \operatorname*{TP}(j,k,\ell)}\nonumber\\ &= \pm(n-2) \left(\sqrt{\frac{n-d}{d(n-1)}}\right)^3 +2 \left(\sqrt{\frac{n-d}{d(n-1)}}\right)^2, \label{eq:tripcov1}\end{aligned}$$ and $$\begin{aligned} \lefteqn{\sum_{\ell=1}^n \operatorname*{TP}(j,k,\ell) = \sum_{\ell=1}^n\overline{ \operatorname*{TP}(k,j,\ell)}}\nonumber\\ &=\sum_{\ell=1}^n \overline{{\operatorname{tr}}(\varphi_j^\ast\varphi_k \varphi_k^\ast\varphi_\ell\varphi_\ell^\ast\varphi_j ) }=\overline{{\operatorname{tr}}(\varphi_j\varphi_j^\ast\varphi_k \varphi_k^\ast \sum_{\ell=1}^n \varphi_\ell\varphi_\ell^\ast ) }\nonumber\\ &= \frac{n}{d} {\left\vert\langle\varphi_j,\varphi_k\rangle\right\vert}^2 = \frac{n}{d} \left(\sqrt{\frac{n-d}{d(n-1)}}\right)^2.\label{eq:tripcov2}\end{aligned}$$ We set  and  to be equal. Then either the ETF is an orthonormal basis or one may divide each side by $(n-d)/(d(n-1))$. In the latter case, one obtains the equation $0 = (1-d)n^2(n-d-1)$, yielding the nonsense solution $n=0$, the trivial solution $d=1$, and the solution $n=d+1$, where all ETFs of $d+1$ vectors in ${\mathbb{C}}^d$ are simplices [@FJKM17]. For doubly transitive ETFs, we have the following result, generalizing Lemma 8 in [@Zhu15]. Let $\Phi = \{\varphi_j \}_{j=1}^n$ be a doubly transitive ETF for ${\mathbb{C}}^d$ with $n>d$. Then for all $j \neq k \neq \ell$, there exists a $2n$th root of unity $\zeta_{j,k,\ell}$ such that $$\operatorname*{TP}(j,k,\ell) = \zeta_{j,k,\ell} \left(\frac{n-d}{d(n-1)}\right)^{3/2}.$$ For $j, k, \ell$, we set $$\widetilde{\operatorname*{TP}}(j,k,\ell) =\operatorname*{TP}(j,k,\ell) / {\left\vert\operatorname*{TP}(j,k,\ell)\right\vert}.$$ Fix $j,k \in \{1,\hdots, n\}$ with $j \neq k$.
The double transitivity yields that the multisets $$\begin{aligned} &\left\{\widetilde{\operatorname*{TP}}(m,j,k) : m \in \{1, \hdots, n\} \right\}\enskip \textrm{and}\\ &\left\{\widetilde{\operatorname*{TP}}(m,k,j) : m \in \{1, \hdots, n\} \right\}\end{aligned}$$ are identical. However, the sesquilinearity of the inner product yields that the elements of the two multisets are conjugates of each other. Since the multiset is thus invariant under conjugation, $$\prod_{m=1}^n \widetilde{\operatorname*{TP}}(m,j,k) = \pm 1,$$ where the sign is independent of the choice of (distinct) $j$ and $k$. We further note that for any $j,k,\ell,m$, $$\label{eqn:lemmTP} \widetilde{\operatorname*{TP}}(j,k,\ell) = \widetilde{\operatorname*{TP}}(m,j,k) \widetilde{\operatorname*{TP}}(m,k,\ell) \widetilde{\operatorname*{TP}}(m,\ell,j).$$ By taking the product of  over all $m \in \{1, \hdots, n\}$, we obtain $\widetilde{\operatorname*{TP}}(j,k,\ell)^n = \pm 1$. Since for distinct $j,k,\ell$, $$\widetilde{\operatorname*{TP}}(j,k,\ell) = \left(\frac{d(n-1)}{n-d}\right)^{3/2}\operatorname*{TP}(j,k,\ell),$$ the lemma follows. Roux Lines {#sec:drackn} ========== Equivalence classes of real equiangular tight frames are known to be in one-to-one correspondence with combinatorial objects known as regular two-graphs [@Sei76; @HoPa04]. The correspondence is related to the fact that the inner products of equiangular vectors in real Euclidean space take one of two values based on their sign, and these values can be thought of as determining adjacency. Since the inner products of equiangular vectors in complex space could have infinitely many phases, the situation in complex space is more complicated. In [@IvMi18], a complex analogue of regular two-graphs, which the authors call *roux*, is developed by requiring that the Gram matrix of the vectors satisfy certain axioms concerning association schemes. Unlike in the real case, not all complex equiangular tight frames yield roux lines. All doubly transitive ETFs are roux lines.
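Before turning to roux lines, the triple-product facts above admit quick numerical sanity checks. The sketch below (illustrative only, not from the paper) verifies on the regular simplex of three unit vectors in $\mathbb{R}^2$ that all triple products of distinct vectors are real and negative, and verifies the phase identity  on random complex unit vectors:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def tp(Phi, j, k, l):
    # TP(j,k,l) = <phi_j,phi_k><phi_k,phi_l><phi_l,phi_j>
    g = lambda a, b: np.vdot(Phi[:, b], Phi[:, a])
    return g(j, k) * g(k, l) * g(l, j)

# 1) Regular simplex of 3 unit vectors in R^2: pairwise inner products are
#    -1/2, so every triple product of distinct vectors equals -1/8 < 0.
angles = 2 * np.pi * np.arange(3) / 3
Simplex = np.stack([np.cos(angles), np.sin(angles)])
assert all(np.isclose(tp(Simplex, *t), -0.125) for t in permutations(range(3), 3))

# 2) Phase identity TP~(j,k,l) = TP~(m,j,k) TP~(m,k,l) TP~(m,l,j),
#    valid for arbitrary complex unit vectors.
Phi = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
Phi /= np.linalg.norm(Phi, axis=0)
tpt = lambda j, k, l: tp(Phi, j, k, l) / abs(tp(Phi, j, k, l))
j, k, l, m = 0, 1, 2, 3
assert np.isclose(tpt(j, k, l), tpt(m, j, k) * tpt(m, k, l) * tpt(m, l, j))
```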
We will prove that certain Gabor-Steiner equiangular tight frames correspond to roux lines. We begin by defining the class of ETFs, Gabor-Steiner ETFs [@BoKi18], which we would like to analyze. Like SICs, these are generated by the orbit of a single vector under a projective unitary representation of a Weyl-Heisenberg-like group; however, except for the case $m=3$, Gabor-Steiner ETFs are not SICs. Let $m \geq 2$ be an integer and $\zeta_m \in {\mathbb{C}}$ a primitive $m$th root of unity. We denote the $m \times m$ identity matrix by $I_m$. Furthermore, we define the *(cyclic) translation* $T_m$ and *modulation* $M_m$ operators as $$T_m = \operatorname{circ}(0,1, 0, \hdots, 0),\quad M_m =\operatorname*{diag}( 1, \zeta_m, \hdots , \zeta_m^{m-1}).$$ Further, if $m=(m_0, m_1, \hdots, m_s)$ is a vector of integers $\geq 3$, the group of translations over $\bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell}$ is $$\left\{T_m^{(k)} := \bigotimes_{\ell=0}^s T_{m_\ell}^{k_\ell}: k=(k_0, \hdots, k_s) \in \bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell}\right\},$$ where $\otimes$ is the Kronecker product. Similarly, the group of modulations is $$\left\{M_m^{(\kappa)}:= \bigotimes_{\ell=0}^s M_{m_\ell}^{\kappa_\ell}: \kappa=(\kappa_0, \hdots, \kappa_s) \in \bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell}\right\}.$$ If further each $m_\ell$ is odd, we define the projective unitary representation $\pi$ on $\bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell} \times \bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell}$ as $$\pi(k, \kappa) = I_{({\left\vertm\right\vert}-1)/2} \otimes \left( M_{m}^{(\kappa)}T^{(k)}_{m}\right).$$ [@BoKi18] Let $m=(m_0, \hdots, m_s)$ be a vector of odd integers $\geq 3$ and set ${\left\vertm\right\vert} = \prod_{\ell=0}^s m_\ell$. Let $$\mathcal{I}=\left\{ (0, \hdots, 0) , \hdots, ((m_0-1)/2, \hdots, (m_s-3)/2)\right\},$$ which is the set of the first $({\left\vertm\right\vert}-1)/2$ elements of $\bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell}$, ordered lexicographically.
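As a sanity check on the operators just defined (a sketch under the stated conventions, not part of the paper), one can verify the Weyl-type commutation relation $M_m T_m = \zeta_m T_m M_m$ along with $T_m^m = M_m^m = I_m$:

```python
import numpy as np

# Cyclic translation T_m and modulation M_m on C^m, as in the text.
m = 5
zeta = np.exp(2j * np.pi / m)           # primitive m-th root of unity
T = np.roll(np.eye(m), 1, axis=0)       # circ(0,1,0,...,0): (T x)_j = x_{j-1}
M = np.diag(zeta ** np.arange(m))       # diag(1, zeta, ..., zeta^{m-1})

# Weyl commutation relation and finite order of the generators
assert np.allclose(M @ T, zeta * T @ M)
assert np.allclose(np.linalg.matrix_power(T, m), np.eye(m))
assert np.allclose(np.linalg.matrix_power(M, m), np.eye(m))
```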
For $i \in {\mathcal{I}}$ define $\phi_i \in {\mathbb{C}}^{{\left\vertm\right\vert}}$ entrywise by $$\left( \phi_{i}\right)_{j}=\left\{ \begin{array}{lr} 1; & j=i\\ -1; &j= m-i-\mathbbm{1}\\ 0; & \textrm{o.w.}\end{array}\right.$$ where $\mathbbm{1}$ is the all-ones vector of length ${\left\vertm\right\vert}$, and define $\psi$ to be the block vector in ${\mathbb{C}}^{{\left\vertm\right\vert}({\left\vertm\right\vert}-1)/2}$ consisting of the $\phi_i$ stacked vertically. We finally define ${\mathcal{G}}(m)$ to be the orbit of $\psi$ under $\pi(\bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell} \times \bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell})$. Then ${\mathcal{G}}(m)$ is an ETF called a [*Gabor-Steiner ETF*]{}; Gabor-Steiner ETFs span the same set of lines as the ETFs in [@BoEl10a; @IJM17]. We will make use of signature matrices and their characterization of ETFs (see, e.g., [@LS1973; @HoPa04]). Let $\Phi$ be an ETF of vectors of norm $\nu$ and absolute inner product value $\alpha>0$. The *signature matrix* $S$ (also called *Seidel matrix*) of $\Phi$ is defined to be $S = (\Phi^\ast \Phi- \nu^2 I)/\alpha$. If $\overline{\Phi}$ is switching equivalent to $\Phi$ (i.e., spans the same set of lines) and has signature matrix $\overline{S}$, where the entries in the first row and column with the exception of the diagonal element are equal to one, then $\overline{S}$ is a *normalized signature matrix* of $\Phi$. \[prop:sign\] Let $\Phi$ be an equiangular tight frame of $n$ vectors in ${\mathbb{C}}^d$ with signature matrix $S$. Then the following hold true. 1. $S \in {\mathrm{Sym}}_n({\mathbb{C}})$; 2. The diagonal entries of $S$ are all zero; 3. The off-diagonal entries of $S$ are unimodular; 4. $S$ has exactly two distinct eigenvalues; and 5. The larger eigenvalue of $S$ has multiplicity $d$.
Further, if a matrix $S$ satisfies (i)–(v), then there exists an equiangular tight frame $\Phi$ of $n$ vectors of norm $\nu$ and absolute inner product value $\alpha$ in ${\mathbb{C}}^d$ such that $S = (\Phi^\ast \Phi- \nu^2 I)/\alpha$. \[prop:sign\] Let $m=(m_0, m_1, \hdots, m_s)$ be a vector of odd integers $\geq 3$. Define $$\overline{S} = \left( s_{(k,\kappa),(\tilde{k},\tilde{\kappa})}\right)_{(\tilde{k},\tilde{\kappa}),(k,\kappa) \in \left(\bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell} \times \bigoplus_{\ell=0}^s {\mathbb{Z}}_{m_\ell}\right)},$$ where $$s_{(k,\kappa),(\tilde{k},\tilde{\kappa})} = -\prod_{\ell=0}^s \zeta_{m_\ell}^{(\kappa_\ell \tilde{k}_\ell - \tilde{\kappa}_\ell k_\ell)/2}$$ when $(\tilde{k},\tilde{\kappa})$, $(k,\kappa)$, and $(0,0)$ are distinct and $s_{(k,\kappa),(\tilde{k},\tilde{\kappa})} = 1-\delta_{(k,\kappa),(\tilde{k},\tilde{\kappa})}$ otherwise. Then $\overline{S}$ is a normalized signature matrix of ${\mathcal{G}}(m)$. Let $\Phi = {\mathcal{G}}(m)$. It follows from [@BoKi18 Lemma 5.1] that $$\begin{aligned} \lefteqn{S = \Phi^\ast \Phi - ({\left\vertm\right\vert}-1)I} \\ &=\left(\!\!\begin{array}{lr}-\prod_{\ell=0}^s \zeta_{m_\ell}^{(\kappa_\ell-\tilde{\kappa}_\ell)(\tilde{k}_\ell+k_\ell-1)/2}, \!& \!(\tilde{k},\tilde{\kappa}) \neq (k,\kappa) \\ 0, \!&\!(\tilde{k},\tilde{\kappa})=(k,\kappa) \end{array} \!\!\right).\end{aligned}$$ We form a related equiangular tight frame $\overline{\Phi}$ by multiplying each $\varphi_{k,\kappa}$ by $\prod_{\ell=0}^s \zeta_{m_\ell}^{-\kappa_\ell(k_\ell-1)/2}$ and additionally $\varphi_{0,0}$ by $-1$. Since each vector is multiplied by a unimodular scalar, $\overline{\Phi}$ is switching equivalent to $\Phi$. The signature matrix $\overline{S}$ of $\overline{\Phi}$ is then as desired. We need one last definition to prove that certain Gabor-Steiner ETFs correspond to roux lines. Let $A$ be a matrix. The *$N$th Hadamard power* $A^{\circ N}$ of $A$ is the $N$th entrywise power. Namely, $(A^{\circ N})_{j,k} = (A_{j,k})^N$.
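The proposition can be checked numerically. The sketch below (illustrative only; it hard-codes $m=(3)$, for which ${\mathcal{G}}(3)$ is the Hesse SIC of $9$ vectors in ${\mathbb{C}}^3$, and interprets the division by $2$ in the exponent as multiplication by $2^{-1} \bmod p$) builds $\overline{S}$ entrywise and confirms that it is Hermitian with exactly two distinct eigenvalues, and that its Hadamard square also has exactly two:

```python
import numpy as np
from itertools import product

p = 3
zeta = np.exp(2j * np.pi / p)
half = pow(2, -1, p)                     # "1/2" interpreted mod p (p odd)
idx = list(product(range(p), repeat=2))  # group elements (k, kappa)

def entry(a, b):
    # normalized-signature-matrix entries from the proposition's formula
    (k, kap), (kt, kapt) = a, b
    if a == b:
        return 0.0
    if a == (0, 0) or b == (0, 0):
        return 1.0
    return -zeta ** ((half * (kap * kt - kapt * k)) % p)

S = np.array([[entry(a, b) for b in idx] for a in idx])

def num_eigs(A):
    return len(np.unique(np.round(np.linalg.eigvalsh(A), 6)))

assert np.allclose(S, S.conj().T)  # Hermitian with unimodular off-diagonal
assert num_eigs(S) == 2            # exactly two distinct eigenvalues
assert num_eigs(S ** 2) == 2       # Hadamard square: still two eigenvalues
```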
We may now present the so-called roux lines detector [@IvMi18 Corollary 4.6]. \[prop:roux\] Given a normalized signature matrix $\overline{S}$, $\overline{S}$ corresponds to equal-norm representatives of roux lines if and only if the following occur simultaneously: 1. The entries of $\overline{S}$ are all roots of unity. 2. Every Hadamard power of $\overline{S}$ has exactly two eigenvalues. Let $p\geq 3$ be prime. For all $m=(p,p,\hdots,p)$, ${\mathcal{G}}(m)$ is a set of roux lines. We let $\overline{S}$ be the normalized signature matrix of ${\mathcal{G}}(m)$ presented in Proposition \[prop:sign\] and $s+1$ be the length of $m$. That (1) from Proposition \[prop:roux\] holds for $\overline{S}$ is clear. Further, since $\zeta_p^N$ is a primitive $p$th root of unity for all $N$ such that $p\not\vert N$, such $N$th Hadamard powers of $\overline{S}$ simply yield normalized signature matrices of Gabor-Steiner equiangular tight frames generated by possibly different primitive $p$th roots of unity. These $\overline{S}^{\circ N}$ all have two eigenvalues. If $p \vert N$, then the entries of $\overline{S}^{\circ N}$ that are neither on the diagonal nor in the first row or column all equal $(-1)^N$. For odd $N$, such a $\overline{S}^{\circ N}$ is a normalized signature matrix for a simplex of $p^{2s+2}$ vectors spanning a $(p^{2s+2}-1)$-dimensional space; for even $N$, it is the all-ones matrix minus the identity. In either case $\overline{S}^{\circ N}$ has exactly two eigenvalues. Thus the Gabor-Steiner ETF generated from any elementary abelian $p$-group, $p$ an odd prime, is roux. We note that an immediate porism of this result is that the so-called Naimark complement [@frame_book; @Waldron18] of any ${\mathcal{G}}(p, \hdots, p)$ with $p$ odd prime yields a cyclic DRACKN (distance-regular antipodal cover of the complete graph for which the group of automorphisms fixing each fibre setwise is cyclic) [@CGSZ16]. We thank Joey Iverson for pointing out this result concerning DRACKNs. P. G. Casazza and G. Kutyniok, Eds., *Finite frames*, ser. Appl. Numer. Harmon.
Anal. Birkhäuser/Springer, New York, 2013. S. Waldron, *An introduction to finite tight frames*. Springer, 2018. T. Strohmer and R. W. Heath, Jr., “Grassmannian frames with applications to coding and communication,” *Appl. Comput. Harmon. Anal.*, vol. 14, no. 3, pp. 257–275, 2003. H. Zhu, “Super-symmetric informationally complete measurements,” *Ann. Physics*, vol. 362, pp. 311–326, 2015. \[Online\]. Available: <https://doi.org/10.1016/j.aop.2015.08.005> B. Bodmann and E. J. King, “Optimal arrangements of classical and quantum states with limited purity,” 2018, preprint. \[Online\]. Available: <https://arxiv.org/abs/1811.11513> J. W. Iverson and D. G. Mixon, “Doubly transitive lines I: Higman pairs and roux,” 2018, preprint. \[Online\]. Available: <https://arxiv.org/pdf/1806.09037.pdf> P. W. Lemmens and J. J. Seidel, “Equiangular lines,” *Journal of Algebra*, vol. 24, no. 3, pp. 494–512, 1973. G. Zauner, “Quantendesigns - Grundzüge einer nichtkommutativen Designtheorie,” Ph.D. dissertation, University of Vienna (Austria), 1999; English translation in International Journal of Quantum Information (IJQI) 9 (1), 445–507, 2011. D. M. Appleby, S. T. Flammia, and C. A. Fuchs, “The Lie algebraic significance of symmetric informationally complete measurements,” *J. Math. Phys.*, vol. 52, no. 2, pp. 022202, 34, 2011. L. Hughston, “$d=3$ SIC-POVMs and elliptic curves,” Perimeter Institute, Seminar Talk, <http://pirsa.org/07100040/>, October 2007. H. B. Dang, K. Blanchfield, I. Bengtsson, and D. M. Appleby, “Linear dependencies in Weyl–Heisenberg orbits,” *Quantum Information Processing*, vol. 12, no. 11, pp. 3449–3475, Nov 2013. \[Online\]. Available: <https://doi.org/10.1007/s11128-013-0609-6> S. G. Hoggar, “64 lines from a quaternionic polytope,” *Geometriae Dedicata*, vol. 69, no. 3, pp. 287–289, Mar 1998. \[Online\]. Available: <https://doi.org/10.1023/A:1005009727232> T.-Y. Chien and S.
Waldron, “A characterization of projective unitary equivalence of finite frames and applications,” *SIAM J. Discrete Math.*, vol. 30, no. 2, pp. 976–994, 2016. \[Online\]. Available: <https://doi.org/10.1137/15M1042140> M. Fickus, J. Jasper, E. J. King, and D. G. Mixon, “Equiangular tight frames that contain regular simplices,” *Linear Algebra Appl.*, vol. 555, pp. 98–138, October 2018. J. J. Seidel, “A survey of two-graphs,” in *Colloquio Internazionale sulle Teorie Combinatorie (Rome, 1973), Tomo I*. Accad. Naz. Lincei, Rome, 1976, pp. 481–511. Atti dei Convegni Lincei, No. 17. R. B. Holmes and V. I. Paulsen, “Optimal frames for erasures,” *Linear Algebra Appl.*, vol. 377, pp. 31–51, 2004. \[Online\]. Available: <https://doi.org/10.1016/j.laa.2003.07.012> B. G. Bodmann and H. J. Elwood, “Complex equiangular Parseval frames and Seidel matrices containing $p$th roots of unity,” *Proc. Amer. Math. Soc.*, vol. 138, no. 12, pp. 4387–4404, 2010. \[Online\]. Available: <https://doi.org/10.1090/S0002-9939-2010-10435-5> J. W. Iverson, J. Jasper, and D. G. Mixon, “Optimal line packings from finite group actions,” 2017, preprint. \[Online\]. Available: <https://arxiv.org/pdf/1709.03558.pdf> G. Coutinho, C. Godsil, H. Shirazi, and H. Zhan, “Equiangular lines and covers of the complete graph,” *Linear Algebra Appl.*, vol. 488, pp. 264–283, 2016.
--- abstract: 'This paper presents a class of boundary integral equation methods for the numerical solution of acoustic and electromagnetic time-domain scattering problems in the presence of unbounded penetrable interfaces in two spatial dimensions. The proposed methodology relies on Convolution Quadrature (CQ) methods in conjunction with the recently introduced Windowed Green Function (WGF) method. As in standard time-domain scattering from bounded obstacles, a CQ method of the user’s choice is utilized to transform the problem into a finite number of (complex) frequency-domain problems posed on the domains involving penetrable unbounded interfaces. Each one of the frequency-domain transmission problems is then formulated as a second-kind integral equation that is effectively reduced to a bounded interface by means of the WGF method—which introduces errors that decrease super-algebraically fast as the window size increases. The resulting windowed integral equations can then be solved by means of any (accelerated or unaccelerated) off-the-shelf Helmholtz boundary integral equation solver capable of handling complex wavenumbers with a large imaginary part. A high-order Nyström method based on Alpert quadrature rules is utilized here. A variety of numerical examples, including wave propagation in open waveguides as well as scattering from multiply layered media, demonstrates the capabilities of the proposed approach.' author: - Ignacio Labarca - Luiz Faria - 'Carlos Pérez-Arancibia[^1]' bibliography: - 'References.bib' date: - - title: 'Convolution quadrature methods for time-domain scattering from unbounded penetrable interfaces' --- Introduction ============ Wave propagation problems involving unbounded material interfaces play a fundamental role in numerous relevant electromagnetic and acoustic engineering applications such as waveguides, solar cells, on-chip antennas, and more recently, inverse metasurface design, to mention a few.
Typically, frequency- and time-domain simulations in this context are performed by means of volume discretization techniques such as finite difference (FDTD [@oskooi2010meep] or TDFD [@taflove2005computational] methods) and finite element [@jin2015finite] methods where perfectly matched layers (PMLs) [@berenger1994perfectly] or other kinds of absorbing/transparent boundary conditions are used to reformulate the problem in a *bounded* domain free of *unbounded* material interfaces. Time-domain boundary integral equations for obstacle scattering problems, on the other hand, have been extensively and intensively studied over the last two decades [@dominguez2017recent]. Convolution quadrature (CQ) methods [@lubich1988convolution; @lubich1988convolution_II], in particular, have effectively enabled the use of (complex) frequency-domain boundary integral equation (BIE) solvers to tackle a variety of wave propagation problems, by providing a stable procedure to discretize the associated convolution equations for the unknown time evolution of the relevant surface densities; see [@sayas2016retarded] for the mathematical foundations of the method, and [@banjai2012wave; @hassell2016convolution] for details on the algorithmic implementation. In the case of the scalar wave equation with piecewise constant wavespeed, to which this paper is devoted, approximate traces at discrete times are produced all at once from a finite sequence of independent Helmholtz problems that can be solved in parallel by means of BIE methods. Although this CQ-BIE approach has proven to be competitive with volume discretization methods in the context of obstacle scattering problems [@banjai2014fast; @Banjai:2009in; @schadle2006fast], its extension to problems involving unbounded material interfaces is severely hindered by the fact that standard BIE formulations require the knowledge of problem-specific Green functions to deal with the unboundedness of the material interfaces.
These Green functions, however, are often unavailable (in terms of tractable mathematical expressions) or are given in terms of computationally expensive Sommerfeld integrals[^2] [@michalski2016efficient; @perez2014high; @perez2017windowed]. Recent advances in BIE methods for time-harmonic problems of scattering from unbounded material interfaces have led to the development of highly efficient solvers that completely bypass the use of problem-specific Green functions [@bruno2017guide; @bruno2016windowed; @bruno2017windowed; @lu2018perfectly; @perez2017windowed; @zhang2011novel]. The windowed Green function (WGF) method, in particular, has successfully been used in layered media [@bruno2016windowed; @bruno2017windowed; @perez2017windowed], dielectric waveguides [@bruno2017guide] and all-dielectric metasurfaces [@pestourie2018inverse] simulations in the frequency domain. The method relies on a certain “second-kind" BIE—given in terms of free-space Helmholtz Green functions—posed on all the (bounded and unbounded) interfaces. A highly accurate approximate solution to this BIE is then obtained by solving a modified *windowed BIE* posed on the relevant *bounded* portions of material interfaces. The windowed BIE is directly obtained from the original BIE by simply multiplying the integral kernels by a smooth *window function*. The resulting windowed BIE is (provably) of the second kind, it is given in terms of the four standard BIE operators of Calderón calculus, and it can thus be solved by means of any (accelerated or unaccelerated) off-the-shelf Nyström [@bruno2009electromagnetic; @bruno2001fast; @colton2012inverse] or boundary element method (BEM) [@sauter2010boundary] solver. This paper presents a combined CQ-WGF procedure for problems of time-domain scattering from unbounded material interfaces ruled by the scalar wave equation in two spatial dimensions.
Our goal is to show that the straightforward combination of the two methods suffices to extend the reach of efficient BIE solvers to the large class of relevant engineering problems involving unbounded penetrable interfaces. The proposed procedure is simple. At first, a CQ method is utilized to turn the scalar wave equation into a finite sequence of frequency-domain transmission problems in the domains containing the unbounded penetrable interfaces. Each one of the required frequency-domain problems is then formulated as a “second-kind" indirect BIE which is approximated—with errors that decay super-algebraically fast as the window size increases—by means of the WGF method, which is applicable for the complex wavenumbers produced by the CQ method [@bruno2016windowed; @perez2017windowed]. For the sake of definiteness we consider here the FFT-accelerated CQ method put forth in [@Banjai:2009in] and the high-order BIE Nyström method described in [@Hao:2013do], which is based on Alpert quadrature rules [@alpert1999hybrid]. The capabilities of the proposed procedure are demonstrated by a variety of numerical examples including wave propagation in open waveguides and waveguide branches, as well as scattering from multiply layered media. The structure of this paper is as follows: Section \[sec:set-up\] sets forth the model problem used throughout this paper to describe the proposed combined CQ-WGF methodology. Section \[sec:CQ\] details the original CQ method, based on linear multi-step methods, applied to our model problem. Next, in Section \[sec:WGFM\] the BIE formulation of the frequency-domain problems and the WGF method are presented. Section \[sec:num\], finally, contains the solver validation and the numerical results corresponding to the examples considered. ![Geometry of the model problem: A time-dependent incident field ($U^{{\mathrm{inc}}}$) impinges on a locally perturbed half-plane producing a reflected ($U^{(1)}$) and a transmitted ($U^{(2)}$) field.
[]{data-label="fig:dom"}](fig_1.pdf) Model problem\[sec:set-up\] =========================== This section sets up the model problem used for the presentation of the proposed technique. Without loss of generality we focus here on the electromagnetic scattering problem; an analogous acoustic problem can also be formulated. Consider then the locally perturbed dielectric half-plane depicted in Figure \[fig:dom\]. The upper and lower media are denoted by $\Omega_1$ and $\Omega_2$, within which the wavespeed equals $c_1=(\mu_1\epsilon_1)^{-1/2}>0$ and $c_2=(\mu_2\epsilon_2)^{-1/2}>0$, respectively, with $\mu_j$ and $\epsilon_j$ denoting the magnetic permeability and the electric permittivity of the dielectric medium $\Omega_j$, $j=1,2$. The common unbounded interface between the two media is denoted by $\Gamma$ and is assumed to be a piecewise smooth curve. We then consider a TE- or TM-polarized incident electromagnetic field $U^{\mathrm{inc}}$ that impinges on the interface $\Gamma$ producing a reflected and a transmitted field, as is depicted in Figure \[fig:dom\]. The scalar field $U^{\mathrm{inc}}$—which satisfies the wave equation ${\partial}_t^2U^{\mathrm{inc}}({\boldsymbol{x}},t)-c_1^2\Delta U^{\mathrm{inc}}({\boldsymbol{x}},t)=0$ in all of ${\mathbb{R}}^2\times{\mathbb{R}}_+$—denotes the $z$-component of either the incident electric field in TE-polarization or the incident magnetic field in TM-polarization. It is assumed that the incident field arrives at $\Gamma$ at a time $t_0>0$ so that both the reflected field in $\Omega_1$ and the transmitted field in $\Omega_2$ equal zero at $t=0$.
Expressing the $z$-component of the resulting total electromagnetic field as $$U ={\left}\{\begin{array}{ccc}U^{(1)}+U^{\mathrm{inc}}&\mbox{in}&\Omega_1,\\ U^{(2)}&\mbox{in}&\Omega_2,\end{array}\right.$$ we then obtain that the reflected and transmitted fields—which are denoted by $U^{(1)}$ and $U^{(2)}$, respectively—satisfy $$\begin{aligned} {\partial}^2_tU^{(j)}({\boldsymbol{x}},t)-c_j^2\Delta U^{(j)}({\boldsymbol{x}},t)&=&0,\quad ({\boldsymbol{x}},t)\in\Omega_j\times {\mathbb{R}}_+,\quad j=1,2,\label{eq:wave}\\ U^{(2)}({\boldsymbol{x}},t) -U^{(1)}({\boldsymbol{x}},t)&=&U^{\mathrm{inc}}({\boldsymbol{x}},t),\quad({\boldsymbol{x}},t)\in\Gamma\times{\mathbb{R}}_+,\label{eq:dir}\\ \nu_2 {\partial}_n U^{(2)}({\boldsymbol{x}},t)-\nu_1{\partial}_n U^{(1)}({\boldsymbol{x}},t) &=& \nu_1{\partial}_nU^{\mathrm{inc}}({\boldsymbol{x}},t),\quad({\boldsymbol{x}},t)\in\Gamma\times{\mathbb{R}}_+,\label{eq:neu}\\ U^{(j)}({\boldsymbol{x}},0) ={\partial}_tU^{(j)}({\boldsymbol{x}},0)&=&0,\quad{\boldsymbol{x}}\in\Omega_j,\quad j=1,2,\label{eq:init}\end{aligned}$$ \[eq:wave-eqn\] where $\nu_j=\mu^{-1}_j$ in TE-polarization and $\nu_j = \epsilon^{-1}_j$ in TM-polarization. In the following section we present the multi-step time semi-discretization of the transmission problem  using the classical CQ method introduced by Lubich in [@lubich1988convolution]. Convolution quadrature methods\[sec:CQ\] ======================================== Following the presentation of the convolution quadrature in [@betcke2017overresolving], we begin by turning the second-order transmission problem  into a first-order system. 
We thus introduce the vector valued functions $\bold V^{(j)}({\boldsymbol{x}},t ) = \left[U^{(j)}({\boldsymbol{x}},t),c_j^{-1}{\partial}_tU^{(j)}({\boldsymbol{x}},t)\right]^T$, $j=1,2,$ which allow  to be expressed as $$\begin{aligned} c_j^{-1}{\partial}_t \bold V^{(j)}({\boldsymbol{x}},t)&=&\mathcal L\bold V^{(j)}({\boldsymbol{x}},t),\quad ({\boldsymbol{x}},t)\in\Omega_j\times {\mathbb{R}}_+,\quad j=1,2,\label{eq:1_ODE}\\ \mathcal B_2\bold V^{(2)}({\boldsymbol{x}},t)&=& \mathcal B_1\bold V^{(1)}({\boldsymbol{x}},t)+\bold F({\boldsymbol{x}},t),\quad ({\boldsymbol{x}},t)\in\Gamma\times {\mathbb{R}}_+, \\ \bold V^{(j)}({\boldsymbol{x}},0) &=&\bold 0, \quad{\boldsymbol{x}}\in\Omega_j,\quad j=1,2,\end{aligned}$$ \[eq:1st\_ODE\_mod\] where $\mathcal L=\begin{bmatrix}0&I\\\Delta&0\end{bmatrix},$ $\mathcal B_j=\begin{bmatrix}\gamma_D&0\\0&\nu_j\gamma_N\end{bmatrix}$ and $\bold F({\boldsymbol{x}},t)=\begin{bmatrix}U^{\mathrm{inc}}({\boldsymbol{x}},t)\\\nu_1{\partial}_nU^{\mathrm{inc}}({\boldsymbol{x}},t)\end{bmatrix}$. (Note that $\bold F({\boldsymbol{x}},\cdot)$ is a causal function.) The symbols $\gamma_D$ and $\gamma_N$ in the definition of the operators $\mathcal B_j$, $j=1,2$, denote the Dirichlet and Neumann traces on $\Gamma$, respectively. The system  is subsequently semi-discretized in time using a general linear multi-step method. 
Letting $\Delta t>0$ denote the prescribed time step and $\alpha_\ell,\beta_\ell\in{\mathbb{R}}$, $\ell=0,\ldots,k,$ denote the coefficients of the multi-step method, we obtain that equations  for $j=1,2$, become the following difference equations $$\begin{split} \frac{1}{c_j\Delta t}\sum_{\ell=0}^k\alpha_{\ell}\bold V^{(j)}_{n+\ell-k}({\boldsymbol{x}}) = \sum_{\ell=0}^{k}\beta_{\ell}\mathcal L\bold V^{(j)}_{n+\ell-k}({\boldsymbol{x}}), \end{split}\label{eq:LMS}$$ where we have introduced the sequences of vector valued functions ${\left}\{\bold V_n^{(j)}(\cdot){\right}\}_{n=-\infty}^\infty$, $j=1,2$, which correspond to the approximation $\bold V^{(j)}(\cdot,t_n)\approx \bold V^{(j)}_{n}(\cdot)$ at the discrete times $t_n=n\Delta t$ for $n\geq 0$, and are defined as $\bold V^{(j)}_{n}(\cdot)=\bold 0$ for $n<0$. As it turns out, the difference equations  can be solved by means of the $\zeta$-transform [@hassell2016convolution]. Indeed, applying the $\zeta$-transform to both sides of  we get $$\frac{1}{c_j\Delta t}\sum_{n=0}^{\infty}{\left}(\sum_{\ell=0}^k\alpha_{\ell}\bold V^{(j)}_{n+\ell-k}({\boldsymbol{x}}){\right})\zeta^n = \sum_{n=0}^\infty {\left}(\sum_{\ell=0}^{k}\beta_{\ell}\mathcal L\bold V^{(j)}_{n+\ell-k}({\boldsymbol{x}}){\right})\zeta^n,$$ for $\zeta\in B\subset{\mathbb{C}}$ with $B$ denoting the region of convergence of the power series.
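The passage from the difference equations above to the $\zeta$-domain can be illustrated with the simplest convolution symbol, the time derivative $K(s)=s$. The following sketch (illustrative only, not the scattering solver; it uses the BDF2 quotient $\gamma(\zeta)=\tfrac{3}{2}-2\zeta+\tfrac{1}{2}\zeta^2$ and the standard scaled-FFT evaluation of $K(\gamma(\zeta)/\Delta t)$) computes the CQ weights and checks that the resulting discrete convolution differentiates a causal quadratic signal exactly:

```python
import numpy as np

# BDF2-based CQ weights for the symbol K(s) = s (time differentiation),
# computed by sampling K(gamma(zeta)/dt) on the contour |zeta| = lam.
N, dt = 64, 0.05
lam = 1e-6 ** (1 / (2 * N))                  # contour radius
zeta = lam * np.exp(2j * np.pi * np.arange(N) / N)
gamma = 1.5 - 2.0 * zeta + 0.5 * zeta ** 2   # BDF2 quotient gamma(zeta)
K = gamma / dt                               # K(gamma(zeta)/dt) with K(s) = s

# weights w_n of the power series K = sum_n w_n zeta^n (scaled FFT)
w = np.real(np.fft.fft(K)) / N / lam ** np.arange(N)

# CQ convolution applied to the causal signal f(t) = t^2
t = dt * np.arange(N)
f = t ** 2
df = np.array([np.dot(w[: n + 1], f[n::-1]) for n in range(N)])

# BDF2 differentiates quadratics exactly: df_n = 2 t_n for n >= 2
assert np.max(np.abs(df[2:] - 2.0 * t[2:])) < 1e-6
```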
From the convolution property of the $\zeta$-transform it follows that the functions $\bold v^{(j)}(\cdot,\zeta)$, $j=1,2$—which correspond to the $\zeta$-transform of the sequences ${\left}\{\bold V^{(j)}_n(\cdot){\right}\}_{n=-\infty}^{\infty}$, $j=1,2$—satisfy $$\begin{aligned} {\left}(\frac{\gamma(\zeta)}{c_j\Delta t}{\right})\bold v^{(j)}({\boldsymbol{x}},\zeta) &=&\mathcal L\bold v^{(j)}({\boldsymbol{x}},\zeta),\qquad({\boldsymbol{x}},\zeta)\in\Omega_j\times B,\ j=1,2,\\ \mathcal B_2\bold v^{(2)}({\boldsymbol{x}},\zeta)&=&\mathcal B_1\bold v^{(1)} ({\boldsymbol{x}},\zeta)+\bold f({\boldsymbol{x}},\zeta),\quad ({\boldsymbol{x}},\zeta)\in\Gamma\times B, \end{aligned}$$ \[eq:discrete\_equation\] where $\gamma(\zeta)={\left}(\sum_{\ell=0}^k \alpha_\ell \zeta^{k-\ell}{\right})/{\left}(\sum_{\ell=0}^k \beta_\ell \zeta^{k-\ell}{\right})$, $$\bold v^{(j)}({\boldsymbol{x}},\zeta) = \sum_{n=0}^{\infty}\bold V^{(j)}_n({\boldsymbol{x}})\zeta^{n} {\quad\mbox{and}\quad}\bold f({\boldsymbol{x}},\zeta) = \sum_{n=0}^{\infty}\bold F({\boldsymbol{x}},n\Delta t)\zeta^{n}.\label{eq:z_trans}$$ Letting $\bold v^{(j)}=[u^{(j)},v^{(j)}]^T$ and $\bold f=[f,g]^T$ it readily follows that the scalar fields $u^{(j)}:\Omega_j\times B\to{\mathbb{C}}$, $j=1,2$, satisfy the Helmholtz transmission problem: $$\begin{aligned} \Delta u^{(j)}({\boldsymbol{x}},\zeta) + k_j^2(\zeta) u^{(j)}({\boldsymbol{x}},\zeta) &=&0, \ \quad\qquad ({\boldsymbol{x}},\zeta)\in \Omega_j\times B,\quad j=1,2,\label{eq:helmholtz}\\ u^{(2)}({\boldsymbol{x}},\zeta) -u^{(1)}({\boldsymbol{x}},\zeta) &=& f({\boldsymbol{x}},\zeta), \quad ({\boldsymbol{x}},\zeta)\in \Gamma\times B,\label{eq:trans_cond_1}\\ \nu_2{\partial}_nu^{(2)}({\boldsymbol{x}},\zeta)-\nu_1{\partial}_nu^{(1)}({\boldsymbol{x}},\zeta) &=& g({\boldsymbol{x}},\zeta), \quad ({\boldsymbol{x}},\zeta)\in \Gamma\times B,\label{eq:trans_cond_2}\end{aligned}$$ \[eq:transmission\] where the (complex) wavenumbers $k_j(\zeta)$, $j=1,2,$ are given by $$k_j(\zeta) := 
\frac{i\gamma(\zeta)}{c_j\Delta t}.\label{eq:comple_wn}$$ The transmission problem  needs to be complemented with suitable radiation conditions for both fields $u^{(1)}$ and $u^{(2)}$ that ensure that they correspond to waves that propagate away from the interface $\Gamma$. We refer the reader to [@KRISTENSSON:1980vs] for a rigorous discussion of suitable radiation conditions that lead to results on the existence and uniqueness of solutions of  for physically meaningful wavenumbers, i.e., $k_j(\zeta)\in{\mathbb{C}}$ such that ${\mathrm{Re}\,}k_j(\zeta)>0$ and ${\mathrm{Im}\,}k_j(\zeta)\geq 0$. As it turns out, only transmission problems  with wavenumbers satisfying the latter conditions need to be solved for the implementation of the proposed convolution quadrature method (see Remark \[rem:pos\_wn\] below). In Section \[sec:WGFM\] below, we present an efficient and high-order boundary integral method for the fast and accurate solution of the transmission problems . Assuming that the (complex) frequency-domain solutions $u^{(j)}$, $j=1,2,$ have been obtained by solving , the approximations $U^{(j)}_n({\boldsymbol{x}})\approx U^{(j)}({\boldsymbol{x}}, t_n)$ for the reflected ($j=1$) and transmitted ($j=2$) fields at the discrete times $t_n=n\Delta t$, $n=0,\ldots N,$ are retrieved by taking the inverse $\zeta$-transform of $u^{(j)}({\boldsymbol{x}},\cdot)$, $j=1,2$. It thus follows directly from  and the Cauchy residue theorem that $U^{(j)}_n({\boldsymbol{x}})$ can be expressed as the complex contour integrals $$U^{(j)}_n({\boldsymbol{x}}) :=\frac{1}{2\pi i}\oint_C\frac{u^{(j)}({\boldsymbol{x}},\zeta)}{\zeta^{n+1}}{\,\mathrm{d}}\zeta,\quad n=0,\ldots, N,\quad {\boldsymbol{x}}\in\Omega_j\quad j=1,2,\label{eq:inv_Z}$$ where the contour $C$ could be any simple closed curve contained in $B$. Note that the validity of formula  relies on the analyticity of $u^{(j)}({\boldsymbol{x}},\cdot)$ within the region enclosed by $C$ in the complex $\zeta$-plane. 
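To make the wavenumbers involved concrete, the following Python sketch (all parameter values $c$, $\Delta t$, $N$ and $\lambda$ are illustrative assumptions) evaluates $k(\zeta_m)=i\gamma(\zeta_m)/(c\Delta t)$ at the contour points $\zeta_m=\lambda{\operatorname{e}}^{2i\pi m/(N+1)}$ for the BDF2 quotient $\gamma(\zeta)=\frac{1}{2}(\zeta^2-4\zeta+3)$ adopted later in the text. For $\lambda<1$ one observes ${\mathrm{Im}\,}k(\zeta_m)>0$ (a consequence of the A-stability of BDF2), together with the conjugation symmetry $k(\bar\zeta)=-\overline{k(\zeta)}$ that underlies Remark \[rem:pos\_wn\].

```python
import numpy as np

def gamma_bdf2(zeta):
    # BDF2 quotient gamma(zeta) = (zeta^2 - 4*zeta + 3)/2, the concrete
    # (A-stable) multistep choice adopted later in the text.
    return 0.5 * (zeta**2 - 4.0 * zeta + 3.0)

def cq_wavenumbers(c, dt, N, lam):
    # CQ wavenumbers k(zeta_m) = i*gamma(zeta_m)/(c*dt) at the points
    # zeta_m = lam * exp(2i*pi*m/(N+1)) on the circle of radius lam < 1.
    m = np.arange(N + 1)
    zeta = lam * np.exp(2j * np.pi * m / (N + 1))
    return 1j * gamma_bdf2(zeta) / (c * dt)

# Illustrative parameters (not taken from the text).
k = cq_wavenumbers(c=1.0, dt=0.05, N=128, lam=0.99)
```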
Results in this regard for the transmission problem  can be derived following the arguments presented in [@cutzach1998existence]. We point out here that no scattering poles associated with  lie inside the contour for $C\subset\{z\in{\mathbb{C}}:{\mathrm{Im}\,}z\geq 0\}$. (An interesting discussion on analytic and numerical issues arising due to the existence of scattering poles near the contour $C$ utilized in the practical implementation of the convolution quadrature method can be found in reference [@betcke2017overresolving].) In practice, the contour integrals  have to be computed numerically and any quadrature rule could be utilized in principle. As pointed out in reference [@Banjai:2009in], however, the use of the classical trapezoidal rule leads to a significant reduction in the overall computational cost of the CQ method. To see this we proceed to select the contour $C$ as a circle of radius $\lambda>0$ which is discretized using the quadrature points $\zeta_m = \lambda {\operatorname{e}}^{2i\pi m/(N+1)}$, $m=0,\ldots, N$, that produce the following approximation of the integrals in : $$\widetilde U_n^{(j)}({\boldsymbol{x}}) := \frac{\lambda^{-n}}{N+1}\sum_{m=0}^Nu^{(j)}({\boldsymbol{x}},\zeta_m){\operatorname{e}}^{-2i\pi mn/(N+1)}, \quad n=0,\ldots, N,\quad j=1,2,\quad{\boldsymbol{x}}\in\Omega_j.\label{eq:inv_Z_FFT}$$ The advantages of the trapezoidal rule are twofold. On one hand the numerical errors in the approximations $U^{(j)}_n({\boldsymbol{x}})\approx\widetilde U_n^{(j)}({\boldsymbol{x}})$, $n=0,\ldots, N$, decay exponentially fast as $N$ increases (due to the analyticity and periodicity of the integrands in ), and, on the other hand, the sums in  for $n=0,\ldots,N,$ can be efficiently computed by means of the Fast Fourier Transform (FFT). 
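The FFT-accelerated inversion can be checked on a scalar toy example with a known $\zeta$-transform; the sequence $a_n=0.6^n$ (whose transform is $1/(1-0.6\zeta)$, analytic for $|\zeta|<1/0.6$) and the values of $N$ and $\lambda$ below are illustrative assumptions.

```python
import numpy as np

# Known causal sequence a_n = 0.6**n with zeta-transform u(zeta) = 1/(1 - 0.6*zeta).
N, lam = 127, 0.95
n = np.arange(N + 1)
zeta = lam * np.exp(2j * np.pi * n / (N + 1))
u = 1.0 / (1.0 - 0.6 * zeta)      # samples on the contour |zeta| = lam

# Trapezoidal-rule inverse zeta-transform evaluated with a single FFT:
# U_n ~ lam**(-n)/(N+1) * sum_m u(zeta_m) * exp(-2i*pi*m*n/(N+1)).
U = lam ** (-n) / (N + 1) * np.fft.fft(u)

a = 0.6 ** n                      # exact coefficients
err = np.max(np.abs(U - a))       # exponentially small aliasing + roundoff
```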
\[rem:pos\_wn\]The computational cost associated with the evaluation of  can be further reduced by noting that, by complex conjugation, only half of the fields $u^{(j)}$, corresponding to solutions of  for wavenumbers $k_j(\zeta)$ in the first quadrant (i.e., ${\mathrm{Re}\,}k_j(\zeta)>0$ and ${\mathrm{Im}\,}k_j(\zeta)\geq 0$), need to be computed [@Banjai:2009in]. Windowed Green function method {#sec:WGFM} ============================== In this section we present the WGF method for the solution of the two-layer transmission problem . As is shown in [@bruno2017guide; @bruno2017windowed; @perez2017windowed] and in the numerical examples presented in Section \[sec:num\], the proposed WGF approach can be easily extended to tackle more general configurations involving unbounded material interfaces, such as multiply layered media and waveguide branches. We first introduce the single- and double-layer potentials which are defined as $${\left}(\mathcal S_{\zeta,j}\varphi{\right}) ({\boldsymbol{r}}) := \int_{\Gamma} G_{\zeta,j}({\boldsymbol{r}},{\boldsymbol{y}})\varphi({\boldsymbol{y}}){\,\mathrm{d}}s({\boldsymbol{y}})\mbox{ and } {\left}(\mathcal D_{\zeta,j}\varphi{\right}) ({\boldsymbol{r}}) := \int_{\Gamma} \frac{{\partial}G_{\zeta,j}({\boldsymbol{r}},{\boldsymbol{y}})}{{\partial}n({\boldsymbol{y}})}\varphi({\boldsymbol{y}}){\,\mathrm{d}}s({\boldsymbol{y}}),\ {\boldsymbol{r}}\in{\mathbb{R}}^2\setminus\Gamma,\label{eq:lay_pot}$$ respectively, where $ G_{\zeta,j}({\boldsymbol{x}},{\boldsymbol{y}}):=\frac{i}{4}H_0^{(1)}(k_j(\zeta)|{\boldsymbol{x}}-{\boldsymbol{y}}|)\label{eq:GF}$ is the free-space Green function for the Helmholtz equation with wavenumbers $k_j=k_j(\zeta)$ defined in , which in view of Remark \[rem:pos\_wn\] are assumed to satisfy ${\mathrm{Re}\,}k_j>0$ and ${\mathrm{Im}\,}k_j\geq 0$. 
The Helmholtz single-layer ($S_{\zeta,j}$), double-layer ($K_{\zeta,j}$), adjoint double-layer ($K'_{\zeta,j}$) and hypersingular ($N_{\zeta,j}$) operators are in turn defined as $$\begin{split} {\left}(S_{\zeta,j}\varphi{\right})({\boldsymbol{x}}) := \int_{\Gamma} G_{\zeta,j}({\boldsymbol{x}},{\boldsymbol{y}})\varphi({\boldsymbol{y}}){\,\mathrm{d}}s({\boldsymbol{y}}), &\qquad {\left}({K'}_{\zeta,j}\varphi{\right})({\boldsymbol{x}}) := \int_{\Gamma} \frac{{\partial}G_{\zeta,j}({\boldsymbol{x}},{\boldsymbol{y}})}{{\partial}n({\boldsymbol{x}})}\varphi({\boldsymbol{y}}){\,\mathrm{d}}s({\boldsymbol{y}}),\\ {\left}(K_{\zeta,j}\varphi{\right})({\boldsymbol{x}}) := \int_{\Gamma} \frac{{\partial}G_{\zeta,j}({\boldsymbol{x}},{\boldsymbol{y}})}{{\partial}n({\boldsymbol{y}})}\varphi({\boldsymbol{y}}){\,\mathrm{d}}s({\boldsymbol{y}}),&\qquad {\left}(N_{\zeta,j}\varphi{\right})({\boldsymbol{x}}) := \mathrm{f.p.}\int_{\Gamma} \frac{{\partial}^2 G_{\zeta,j}({\boldsymbol{x}},{\boldsymbol{y}})}{{\partial}n({\boldsymbol{x}}){\partial}n({\boldsymbol{y}})}\varphi({\boldsymbol{y}}){\,\mathrm{d}}s({\boldsymbol{y}}),\end{split}\label{eq:int_op}$$ for ${\boldsymbol{x}}\in \Gamma$. As usual, the initials f.p. in the definition of the hypersingular operator $N$ stand for Hadamard finite-part integral. Throughout this section we assume that the layer potentials and boundary integral operators are defined for density functions $\varphi:\Gamma\to{\mathbb{C}}$ that make the integrals in  and  conditionally convergent—after the regularizations needed due to the kernel singularities. Unlike the seminal reference [@bruno2016windowed] on the WGF method, where a direct integral formulation approach is followed, we use here an indirect formulation whose derivation is conceptually simpler. 
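As a quick sanity check on these kernels (assuming SciPy is available), the sketch below verifies by finite differences that the free-space Green function $G_{\zeta,j}$ satisfies the Helmholtz equation away from its singularity for a complex wavenumber in the first quadrant; the wavenumber, evaluation point and stencil spacing below are illustrative choices.

```python
import numpy as np
from scipy.special import hankel1

def G(k, x, y):
    # Free-space Helmholtz Green function (i/4) * H_0^{(1)}(k |x - y|).
    r = np.hypot(x[0] - y[0], x[1] - y[1])
    return 0.25j * hankel1(0, k * r)

# Check (Delta + k^2) G = 0 away from the singularity via a 5-point stencil,
# for a complex wavenumber with Re k > 0 and Im k >= 0 (illustrative values).
k = 2.0 + 0.5j
y0 = np.array([0.0, 0.0])
x0 = np.array([0.8, 0.6])   # |x0 - y0| = 1, safely away from the source
h = 1e-3
lap = (G(k, x0 + [h, 0.0], y0) + G(k, x0 - [h, 0.0], y0)
       + G(k, x0 + [0.0, h], y0) + G(k, x0 - [0.0, h], y0)
       - 4.0 * G(k, x0, y0)) / h**2
residual = abs(lap + k**2 * G(k, x0, y0)) / abs(k**2 * G(k, x0, y0))
```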
Introducing two unknown density functions $\varphi_\zeta,\psi_\zeta:\Gamma\to{\mathbb{C}},$ we seek reflected and transmitted fields of the form $$u^{(j)}({\boldsymbol{x}},\zeta) = \nu_j^{-1}(\mathcal D_{\zeta,j}\varphi_\zeta)({\boldsymbol{x}})-(\mathcal S_{\zeta,j}\psi_\zeta)({\boldsymbol{x}}),\quad{\boldsymbol{x}}\in\Omega_j,\quad j=1,2.\label{eq:layer_rep}$$ Clearly, the potentials  satisfy the Helmholtz equation  in $\Omega_j$. Enforcing the transmission conditions - we readily arrive at the integral equation system $$\label{eq:int_eq_full} -E{\boldsymbol}\phi_\zeta + T_\zeta[{\boldsymbol}\phi_\zeta] ={\boldsymbol}\phi^{\mathrm{inc}}_\zeta \quad \mbox{on}\quad \Gamma$$ for the vector density ${\boldsymbol}\phi_\zeta=[\varphi_\zeta,\psi_\zeta]^T$, where $ E = \frac{1}{2}\begin{bmatrix} \nu^{-1}_1+\nu^{-1}_2& 0\\ 0& \nu_1+\nu_2 \end{bmatrix},$ ${\boldsymbol}\phi^{\mathrm{inc}}_\zeta = \begin{bmatrix} f(\cdot,\zeta)\\ g(\cdot,\zeta) \end{bmatrix}, $ and $$T_\zeta = \begin{bmatrix} \nu_2^{-1}K_{\zeta,2}-\nu_1^{-1}K_{\zeta,1}& -S_{\zeta,2}+ S_{\zeta,1}\smallskip\\ N_{\zeta,2}-N_{\zeta,1}& - \nu_2{K'}_{\zeta,2}+ \nu_1{K'}_{\zeta,1} \end{bmatrix}.\label{eq:transmission_operator}$$ Instead of solving  on the entire unbounded material interface $\Gamma$, a locally windowed problem is used to obtain the surface density functions $\varphi_\zeta$ and $\psi_\zeta$ over relevant portions of $\Gamma$. In order to do so we introduce a slow-rise smooth window function $w_A$ which is non-zero in an interval of length $2A$. In detail, our window function is given by $w_A(x) = \eta(x,cA,A)$ where $$\label{eq:window_function} \eta(t,t_0,t_1):= {\left}\{\begin{array}{cll}1&\mbox{if}&|t|<t_0,\\ \displaystyle\exp{\left}(\frac{2{\operatorname{e}}^{-1/u}}{u-1}{\right}), u=\frac{|t|-t_0}{t_1-t_0},&\mbox{if}&t_0<|t|<t_1,\\ 0&\mbox{if}&|t|\geq t_1,\end{array}{\right}.$$ where $0<c<1$. 
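A minimal implementation of the window function above, with an illustrative choice $c=0.7$ of the parameter $0<c<1$:

```python
import numpy as np

def eta(t, t0, t1):
    # Slow-rise cutoff from the text: 1 for |t| <= t0, 0 for |t| >= t1,
    # and the smooth transition exp(2*e^{-1/u}/(u-1)), u = (|t|-t0)/(t1-t0),
    # in between. Expects array-like input.
    t = np.atleast_1d(np.abs(np.asarray(t, dtype=float)))
    out = np.zeros_like(t)
    out[t <= t0] = 1.0
    mid = (t0 < t) & (t < t1)
    u = (t[mid] - t0) / (t1 - t0)
    out[mid] = np.exp(2.0 * np.exp(-1.0 / u) / (u - 1.0))
    return out

def w_A(x, A, c=0.7):
    # Window w_A(x) = eta(x, c*A, A); c = 0.7 is an illustrative value.
    return eta(x, c * A, A)

x = np.linspace(-10.0, 10.0, 401)
w = w_A(x, A=8.0)   # equals 1 for |x| < 5.6 and 0 for |x| >= 8
```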
The width $2A>0$ of the support of $w_A$ is selected in such a way that $1- w_A(x)$ vanishes on any corrugations that exist on the surface $\Gamma$. Letting $W_{\!\!A}=w_A\cdot I$, where $I$ is the $2\times 2$ identity matrix, we then consider the windowed integral equation $$-E {\boldsymbol}\phi_{\zeta,A}+ T_\zeta[W_{\!\!A} {\boldsymbol}\phi_{\zeta,A}] = {\boldsymbol}\phi^{\mathrm{inc}}_\zeta \quad\mbox{on}\quad \widetilde\Gamma_A=\{{\boldsymbol{x}}\in\Gamma:w_A(x)\neq0\}.\label{eq:windowed_version}$$ As is shown in [@bruno2016windowed; @perez2017windowed], the errors in the approximation ${\boldsymbol}\phi_\zeta\approx {\boldsymbol}\phi_{\zeta,A}$ on $\Gamma_A=\{{\boldsymbol{x}}\in\Gamma:w_A(x)=1\}$, for a fixed $\zeta$, decay super-algebraically fast as the window size $A\to\infty$. (Note that a single window size $A>0$ is here used for all the (ultimately discrete) values of the variable $\zeta$.) For a sufficiently smooth curve $\Gamma,$ it can be easily shown that the windowed BIE  is of the second kind for all $A>0$ [@perez2017windowed Appendix D]—as each one of the integral operators in the blocks of $T_\zeta$ is given in terms of weakly singular kernels. Moreover, the windowed BIE  can be solved by means of any standard BIE solver. With the approximate density functions $\varphi_{\zeta,A}$ and $\psi_{\zeta,A}$ at hand, the approximate reflected and transmitted fields (in the frequency domain) can be easily obtained by substituting $\varphi_\zeta$ and $\psi_\zeta$ by $\varphi_{\zeta,A}$ and $\psi_{\zeta,A}$, respectively, in the representation formula . 
These substitutions produce the approximate fields $$\label{eq:window_RF} u^{(j)}_{A}({\boldsymbol{x}},\zeta)= \nu_j^{-1}\mathcal D_{\zeta,j}\left[w_A\varphi_{\zeta,A}\right]({\boldsymbol{x}})-\mathcal S_{\zeta,j}\left[w_A\psi_{\zeta,A}\right]({\boldsymbol{x}}),\quad{\boldsymbol{x}}\in\Omega_j,\quad j=1,2,$$ which, for each fixed $\zeta$, exhibit errors that decay super-algebraically as $A\to\infty$ in the regions $\Omega_{j,A}=\{{\boldsymbol{x}}=(x,y)\in\Omega_j:w_A(x)=1\}$. Note that the use of a single window size $A$ allows for the numerous windowed BIEs  (which, in view of Remark \[rem:pos\_wn\], are $\sim\!\!N/2$ in total) to be solved using a single discretization of the curve $\widetilde \Gamma_A$, which can also be used in the numerical evaluation of the windowed representation formulae . Examples and applications {#sec:num} ========================= This section presents a variety of numerical examples that demonstrate the accuracy of the proposed convolution quadrature method for problems of scattering in the presence of unbounded material interfaces. Nyström method and frequency-domain problems -------------------------------------------- Throughout this paper we utilize a high-order Nyström method for the spatial discretization of the windowed frequency-domain BIEs . Nyström methods enjoy well-known advantages over other boundary integral equation methods. Unlike BEMs, for instance, Nyström methods require numerical evaluation of only one boundary integral per grid point. Furthermore, they can easily yield high-order convergence without compromising the computational cost. These advantages become even more apparent in CQ calculations, where large numbers of frequency-domain problems typically need to be solved all at once. Among the many two-dimensional Nyström methods available in the literature, we use here the one based on the Alpert quadrature rule [@alpert1999hybrid] of order sixteen. 
This quadrature rule is designed to deal with weak singularities such as the logarithmic singularities present in the integral kernels defining the operator $T_\zeta$ in . This BIE method enjoys two immediate advantages over, say, the classical spectrally-accurate Martensen-Kussmaul (MK) Nyström method [@colton2012inverse section 3.5] (for the kinds of problems considered in this work), which is arguably the best discretization method available for frequency-domain problems. On one hand, the Alpert-based Nyström method can easily handle large complex wavenumbers, such as those produced by CQ methods, and, on the other hand, it is compatible with the Fast Multipole Method [@Hao:2013do]. (As is well-known, for complex wavenumbers, the MK method suffers from numerical instabilities arising from round-off errors [@lu2014efficient; @wang1998modal].) In order to validate our frequency-domain Nyström solver for the solution of the windowed BIE , we consider a two-layer medium with a smooth cosine-shaped defect. In detail, the penetrable interface considered is $$\label{eq:cos_interface} \Gamma={\left}\{{\left}(t,\frac12\cos(2t)\eta(t,2.5,5){\right})\in{\mathbb{R}}^2,t\in{\mathbb{R}}{\right}\},$$ where $\eta$ is defined in  (the curve $\Gamma$ is depicted in the inset in Figure \[fig:conv\_N\_a\]). The wavenumbers considered are $k_1=k$ and $k_2=k/2$ in $\Omega_1$ and $\Omega_2$, respectively, for ten different values of the parameter $k$ selected uniformly at random from the set $\{z\in{\mathbb{C}}:0<{\mathrm{Re}\,}z,{\mathrm{Im}\,}z<20\}$. These wavenumbers are meant to be representative of those generated by CQ methods. The windowed BIE  is then numerically solved for each one of the randomly selected $k$ values for various numbers of discretization points and for a fixed window size $A=8$. The numerical errors in the fields are displayed in Figure \[fig:conv\_N\_a\], where it can be clearly seen that our solver yields the expected order of convergence. 
The numerical error is here defined as $\max\{e^{(1)},e^{(2)}\}$ where $$\label{eq:error_formula} e^{(j)}(\cdot)=\max_{i=1,2,3}{\left}|u^{(j)}_{A}({\boldsymbol{x}}^{(j)}_i,\cdot)-\tilde u^{(j)}_{A}({\boldsymbol{x}}^{(j)}_i,\cdot){\right}|/\max_{i=1,2,3}{\left}|\tilde u^{(j)}_{A}({\boldsymbol{x}}^{(j)}_i,\cdot){\right}|,\quad j=1,2,$$ and where the evaluation points are ${\boldsymbol{x}}^{(1)}_1=(-1,1)$, ${\boldsymbol{x}}^{(1)}_2=(0,1)$ and ${\boldsymbol{x}}^{(1)}_3=(1,1)$ in the upper domain $\Omega_1$, and ${\boldsymbol{x}}^{(2)}_1=(-1,-1)$, ${\boldsymbol{x}}^{(2)}_2=(0,-1)$ and ${\boldsymbol{x}}^{(2)}_3=(1,-1)$ in the lower domain $\Omega_2$. Both the sample fields $u^{(j)}_A$ and the reference fields $\tilde u^{(j)}_A$ in  were obtained by numerically solving the windowed BIE  and then evaluating the representation formula  using the approximate surface densities. The reference fields $\tilde u^{(j)}_A$ were produced using a fine grid consisting of $512$ discretization points. In order to demonstrate the high-order convergence of the WGF method in the context of CQ methods, we next consider the frequency-domain problems of the previous example, now solved for various window sizes $A>0$. The number of discretization points used in this example corresponds to $\sim\!\!15$ points per unit length, which turns out to be enough to guarantee that the dominant error in all the calculations stems from the use of a finite window size $A>0$ and not from the Nyström discretization of the windowed BIE . Figure \[fig:conv\_A\_b\] displays the numerical errors obtained for the various window sizes and complex wavenumbers considered. The error is measured as in the previous example but with the reference fields produced using a large window size $A = 32$. 
As expected, super-algebraic convergence is observed for all the complex wavenumbers considered, with error curves exhibiting a strong dependence on the wavenumber; faster convergence is observed for wavenumbers with larger imaginary part (this is partly explained by the fast (exponential) decay of the integral kernels). On the other hand, for a fixed imaginary part, faster convergence is expected for wavenumbers with larger real part [@bruno2016windowed; @perez2017windowed]. The fact that the convergence of the WGF method depends on the wavenumber raises the issue of selecting a single appropriate window size to be used in the solution of all the frequency-domain problems. This issue can be easily resolved in the case of BDF-based CQ methods by noticing that the wavenumber with the smallest imaginary part is also the wavenumber with the smallest real part. This is due to the fact that the CQ-produced wavenumbers  lie on the boundary of a bounded convex set contained in the upper complex half-plane. Therefore, in order to achieve acceptable WGF errors in the solution of all the frequency-domain problems, it suffices to select $A>0$ large enough so that the WGF errors in the solution of the problem with the smallest wavenumber are acceptable. This procedure is utilized in the selection of the window-size parameter $A$ in all the time-domain problems considered in this paper. An alternative approach to deal with the frequency-dependent convergence of the WGF method can be devised for simple problems for which discretizations of the interfaces can be inexpensively produced. Since the actual CQ-WGF approximation to the wave-equation solution at a point ${\boldsymbol{x}}$ is a linear combination of the fields  resulting from a discrete set of $\zeta$ values, $\zeta$-dependent window sizes $A_\zeta$ can in principle be used in the numerical solution of the windowed BIE and in the evaluation of the windowed representation formulae . 
This procedure would make it possible to eliminate inefficiencies stemming from both the use of unnecessarily large values of $A$ for large wavenumbers, and from the use of over-discretized spatial grids for small wavenumbers. Time-domain scattering problems {#sec:CQ_results} ------------------------------- This section encompasses several challenging examples that validate our CQ-WGF method for the solution of time-domain scattering problems. For the sake of definiteness, in what follows we consider the CQ method associated with the backward differentiation formula of order two (BDF2), with corresponding polynomial $\gamma(\zeta)= \frac{1}{2}(\zeta^2-4\zeta+3)$ (higher-order CQ methods can be easily incorporated). Following [@Banjai:2009in], the radius of the circular contour in  is selected as $\lambda=\epsilon^{\frac{1}{2N}}$ in all convolution quadrature computations, where $\epsilon>0$ denotes machine precision and $N$ is the total number of time-steps. #### Planar two-layer medium. In the first example of this section we consider the scattering of a planewave off a planar two-layer medium consisting of the subdomains $\Omega_1={\mathbb{R}}^2_+$ and $\Omega_2={\mathbb{R}}^2_-$ with wavespeeds $c_1$ and $c_2$, respectively, for which the exact solution can be analytically constructed from Snell’s law and the Fresnel equations [@brekhovskikh2013acoustics]. 
In detail, for a general incident planewave of the form $U^{\mathrm{inc}}({\boldsymbol{x}},t)= f(c_1 (t - t_{\text{lag}}) - {\boldsymbol{x}}\cdot {\boldsymbol}{d}(\theta^{\mathrm{inc}})),$ with ${\boldsymbol}d(\theta)=(\cos\theta,-\sin\theta)$ and $\theta\in [0,\pi]$, the exact total-field solution of the scattering problem is given by $$U({\boldsymbol{x}},t) = {\left}\{\begin{array}{ccc}U^{\mathrm{inc}}({\boldsymbol{x}},t)+R(\theta^{\mathrm{inc}})f{\left}(c_1 (t - t_{\text{lag}}) - {\boldsymbol{x}}\cdot {\boldsymbol}{d}(-\theta^{\mathrm{inc}}){\right}),& {\boldsymbol{x}}\in\Omega_1={\mathbb{R}}^2_+,\smallskip\\ T(\theta^{\mathrm{inc}})f{\left}(c_2 (t - t_{\text{lag}}) - {\boldsymbol{x}}\cdot {\boldsymbol}{d}(\theta^{\rm ref}){\right}),&{\boldsymbol{x}}\in\Omega_2={\mathbb{R}}^2_{-},\end{array}{\right}.$$ where the refraction angle $\theta^{\rm ref}\in[0,\pi]$ (measured with respect to the horizontal) is determined by the relation $n:=c_1/c_2= \cos(\theta^{\mathrm{inc}})/\cos(\theta^{\rm ref})$ and where the reflection ($R$) and transmission $(T)$ coefficients are given by $$R(\theta^{\mathrm{inc}}) = \dfrac{\sin \theta^{\mathrm{inc}}- \sqrt{n^2 - \cos^2 \theta^{\mathrm{inc}}}}{\sin\theta^{\mathrm{inc}}+ \sqrt{n^2 - \cos^2 \theta^{\mathrm{inc}}}} {\quad\mbox{and}\quad}T(\theta^{\mathrm{inc}})=1 + R(\theta^{\mathrm{inc}}).$$ In this particular example we consider $f(t) = \sin(t) \exp(-\sigma t^2) $, $\theta^{\mathrm{inc}}=\pi/2$, $\sigma = 1.5$, $t_{\text{lag}} = 5$, and the wavespeeds $c_1 = 1$ and $c_2 = 2$. The numerical errors produced by the proposed CQ-WGF procedure are displayed in Figure \[fig:ex2\_a\], where the expected second-order convergence in time of the fields in each of the layers can be observed. A window size of $A=40$ and a total number of $400$ discretization points were used in the numerical solution of each of the windowed BIEs . 
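The reflection and transmission coefficients above are simple to evaluate; the sketch below checks the identity $T=1+R$ and the normal-incidence value $R=(1-n)/(1+n)$ for the wavespeeds $c_1=1$, $c_2=2$ of this example (the complex square root is an assumption made so that the formula also degrades gracefully under total internal reflection).

```python
import numpy as np

def R(theta_inc, n):
    # Reflection coefficient from the text, with n = c1/c2.
    s, c = np.sin(theta_inc), np.cos(theta_inc)
    root = np.sqrt(n**2 - c**2 + 0j)   # complex sqrt (assumption, see lead-in)
    return (s - root) / (s + root)

def T(theta_inc, n):
    # Transmission coefficient T = 1 + R from the text.
    return 1.0 + R(theta_inc, n)

# Wavespeeds c1 = 1, c2 = 2 from the example, so n = 1/2; at normal
# incidence theta_inc = pi/2 one gets R = (1 - n)/(1 + n) = 1/3, T = 4/3.
n = 0.5
R0 = R(np.pi / 2, n)
T0 = T(np.pi / 2, n)
```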
These parameters were selected so as to guarantee spatial errors below $10^{-6}$ at the observation points considered, in all the frequency-domain solutions for the complex wavenumbers produced by the CQ-BDF2 method. The approximate and exact solutions, together with the base-ten logarithm of the absolute value of their difference, are displayed in Figure \[fig:ex2\_b\]. #### Multi-layer medium. In the second example of this section we consider a three-layer medium with penetrable interfaces $\Gamma_1$ and $\Gamma_2$ defined as $\Gamma_1=\Gamma$ and $\Gamma_2=\Gamma+\{(0,-2)\}$, where $\Gamma$ is the curve defined in  and depicted in the inset in Figure \[fig:conv\_N\_a\]. The wavespeeds are $c_1 = 2$, $c_2 = 1$ and $c_3 = 2$ in $\Omega_1$, $\Omega_2$ and $\Omega_3$, respectively (the various domains and interfaces involved in this problem are displayed in the inset of Figure \[fig:3layer\_a\]). As in the previous example, the incident field is the planewave $U^{\mathrm{inc}}({\boldsymbol{x}},t)=f(c_1 (t - t_{\text{lag}}) - {\boldsymbol{x}}\cdot {\boldsymbol}{d}(\theta^{\mathrm{inc}}))$ with parameters $ \theta^{\mathrm{inc}}= \frac\pi4$, $\sigma = 1.5$ and $t_{\rm{lag}} = 5$. Convergence results are shown in Figure \[fig:3layer\_a\] and snapshots of the solution are displayed in Figure \[fig:3layer\_b\]. The derivation of the corresponding BIE in this case is completely analogous to the one presented in Section \[sec:WGFM\] above for the two-layer problem. The window size $A=15$ and a total of $240$ discretization points on each interface were used in the WGF solution of the frequency-domain problems. The reference fields were obtained using $N=6400$ timesteps. #### Waveguide and waveguide branches. Finally, we consider two different waveguide problems. The incident field is selected as a causal periodic pulse placed at a point ${\boldsymbol{x}}_0$ within the waveguide structure—which in both cases is denoted by $\Omega_2$. 
In detail, the incident field is given by the time convolution $$\label{eq:sourcepulse} U^{\mathrm{inc}}({\boldsymbol{x}}, t) := \int_{0}^{t} G({\boldsymbol{x}},{\boldsymbol{x}}_0, t-\tau) f(\tau) \ \text{d}\tau,\qquad t\geq0,\quad ({\boldsymbol{x}}_0\in\Omega_2)$$ of the fundamental solution of the wave equation [@sayas2016retarded] $$G({\boldsymbol{x}}, {\boldsymbol{y}}, t) = \dfrac{H{\left}(t-c_2^{-1}|{\boldsymbol{x}}- {\boldsymbol{y}}|{\right})}{2\pi \sqrt{t^2 - c_2^{-2}|{\boldsymbol{x}}- {\boldsymbol{y}}|^2}},$$ where $H$ denotes the Heaviside step function, and the periodic signal $f(t) = \sin(2 t)$. The convolution integral  is evaluated by means of the BDF2-based CQ method [@hassell2016convolution]. The geometry of our first waveguide problem—which is the same as the one considered in the previous example—is depicted in the inset of Figure \[fig:WG\_a\]. The wavespeeds are once again $c_1 = 2, c_2 = 1$ and $c_3 = 2 $ in $\Omega_1$, $\Omega_2$ and $\Omega_3$, respectively. The second-order convergence of the proposed methodology is demonstrated in Figure \[fig:WG\_a\], where relative errors at three different points—one in each subdomain—are shown for various time discretizations. A fixed window size $A=15$ and a fixed number of discretization points (equal to 240) were used on each interface in the numerical solution of each of the corresponding windowed BIEs. The reference fields at the final time $T=20$ were obtained using $N=6400$ timesteps. Snapshots of the solution are displayed in Figure \[fig:WG\_b\]. As expected, the time-harmonic incident field considered eventually excites the first propagative mode of the waveguide, which can be clearly seen in the last snapshot. In our final example we consider a more complicated configuration consisting of a waveguide branch and a circular resonator. Note that some of the interfaces are not smooth. 
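A direct implementation of the fundamental solution makes its causality explicit: the field vanishes before the wavefront arrives at time $|{\boldsymbol x}-{\boldsymbol y}|/c$ (the source location, observation point and wavespeed below are illustrative assumptions, not the values used in the experiments).

```python
import numpy as np

def G_wave(x, y, t, c=1.0):
    # Fundamental solution of the 2D wave equation:
    # G = H(t - |x-y|/c) / (2*pi*sqrt(t^2 - |x-y|^2/c^2)),
    # with H the Heaviside step function. Expects t as a 1-d array.
    r = np.hypot(x[0] - y[0], x[1] - y[1])
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    on = t > r / c                     # zero before the wavefront arrives
    out[on] = 1.0 / (2.0 * np.pi * np.sqrt(t[on]**2 - (r / c)**2))
    return out

# Illustrative setup: source at the origin, observer at distance 2,
# unit wavespeed, so the wavefront arrives at t = 2.
x0, y0 = np.array([2.0, 0.0]), np.array([0.0, 0.0])
t = np.linspace(0.0, 5.0, 501)
g = G_wave(x0, y0, t, c=1.0)
```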
In order to properly resolve the BIE densities and the fields near the corners, a sigmoid transformation is used to produce grids that accumulate discretization points near the corners, thus ensuring the overall high-order convergence of our frequency-domain BIE solver [@anand2012well; @dominguez2016well]. The wavespeed in the waveguide and the resonator is $c =1$. Outside the waveguide and the resonator the wavespeed is $c=2$. Snapshots of the solution are presented in Figure \[fig:plotwaveguide2\], which shows that a propagative mode is excited within the waveguide and splits as it propagates into the two waveguide branches. ![Wave propagation within a (non-smooth) waveguide branch.\[fig:plotwaveguide2\]](fig_6.pdf){height="14cm"} [^1]: cperez@mat.uc.cl, cperezar@mit.edu [^2]: The numerical evaluation of Sommerfeld integrals has been referred to in the literature as “a standard nightmare for many electromagnetic engineers" [@jimenez1996sommerfeld]
--- abstract: | Cheeger inequalities bound the spectral gap $\gamma$ of a space by isoperimetric properties of that space and vice versa. In this paper, I derive Cheeger-type inequalities for nonpositive matrices (aka stoquastic Hamiltonians), real matrices, and Hermitian matrices. For matrices written $H = L+W$, where $L$ is either a combinatorial or normalized graph Laplacian, I show that, 1. when $W$ is diagonal and $L$ has maximum degree $d_{\max}$, $2h \geq \gamma \geq \sqrt{h^2 + d_{\max}^2}-d_\max$; 2. when $W$ is real, we can often route negative-weighted edges along positive-weighted edges such that the Cheeger constant of the resulting graph obeys an inequality similar to that above; and 3. when $W$ is Hermitian, the weighted Cheeger constant obeys $2h \geq \gamma$ where $h$ is the weighted Cheeger constant of $H$. This constant reduces bounds on $\gamma$ to information contained in the underlying graph and the Hamiltonian’s ground-state. If efficiently computable, the constant opens up a very clear path towards adaptive quantum adiabatic algorithms, those that adjust the adiabatic path based on spectral structure. I sketch a bashful adiabatic algorithm that aborts the adiabatic process early, uses the resulting state to approximate the weighted Cheeger constant, and restarts the process using the updated information. Should this approach work, it would provide more rigorous foundations for adiabatic quantum computing without *a priori* knowledge of the spectral gap. author: - 'Michael Jarret [^1]' title: 'Hamiltonian surgery: Cheeger-type inequalities for nonpositive (stoquastic), real, and Hermitian matrices' --- Introduction ============ Motivation ---------- An $n \times n$ Hermitian matrix $H$ has eigenvalues $\lambda_0 \leq \lambda_1 \leq \dots \leq \lambda_{n-1}$. We call the difference in the two lowest eigenvalues of $H$, $\gamma = \lambda_1 - \lambda_0$, its spectral gap. 
Bounding the spectral gap is a problem that could be motivated in any number of ways. In quantum theory, the spectral gap determines the runtime of adiabatic algorithms and processes [@Jansen2006; @Albash2018; @Crosson2016] and relates to quantum phase transitions [@sachdev2007quantum]. The spectral gap is also intimately related to the rate at which heat diffuses on a manifold [@yau2009estimate; @andrews2011proof] and the rate at which substochastic processes approach their quasistationary distributions [@collet2012quasi; @collet2013markov]. At the computational level, it determines the runtime of various well-known randomized algorithms [@sinclair2012algorithms] as well as Fleming-Viot type algorithms for approximating marginals [@Jarret2016; @jarret2017substochastic; @cloez2016quantitative; @cloez2016fleming]. Each of these is an independently interesting topic that would motivate its own study of the spectral gap. Here, I abstract away the context and seek to understand the spectral structure of $H$ by decomposing it as $H=L+W$, the sum of a graph Laplacian $L$ and some other Hermitian matrix $W$. All Hermitian matrices can be decomposed this way and, as we will see, the decomposition proves fruitful. If $W$ is diagonal, $H$ is frequently called a “stoquastic” Hamiltonian or “stoquastic” matrix. A diagonal $W$ also implies that $H$ is an infinitesimal generator of a substochastic process and that the resulting matrix $I-\epsilon H$ is a substochastic matrix. When $W$ is not diagonal but instead real, the matrix $H$ may have a *sign problem*; that is, the off-diagonal terms may not all have the same sign. The “problem” is that such Hamiltonians can be difficult to study with Monte Carlo methods [@troyer2005computational]. Finally, when $W$ is a general Hermitian matrix, then $H$ has no special name; Hamiltonian is special enough. 
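For a stoquastic matrix, the decomposition $H=L+W$ can be made completely explicit: the nonpositive off-diagonal entries define nonnegative edge weights $w_{ij}=-H_{ij}$, $L$ is the corresponding weighted combinatorial Laplacian, and $W=H-L$ is diagonal. The sketch below (the example matrix is an illustrative assumption) carries this out and verifies the defining properties.

```python
import numpy as np

def laplacian_decomposition(H):
    # Split a real symmetric H with nonpositive off-diagonal entries
    # (a "stoquastic" matrix) as H = L + W, with L a weighted combinatorial
    # graph Laplacian and W diagonal.
    A = -np.array(H, dtype=float)
    np.fill_diagonal(A, 0.0)            # edge weights w_ij = -H_ij >= 0
    L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian of (V, E, w)
    W = H - L                           # diagonal remainder ("potential")
    return L, W

# Illustrative 3x3 stoquastic matrix (not taken from the text).
H = np.array([[ 1.0, -2.0,  0.0],
              [-2.0,  3.0, -1.0],
              [ 0.0, -1.0,  0.5]])
L, W = laplacian_decomposition(H)
```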
In this paper, I look to formalize the relationship between $\gamma$ and some geometrical properties of the ground-state $\phi_0$ of $H$, or its lowest eigenvector. I always assume that $H$ is represented in such a way that $H:\mathbb{C}^{\abs{V}} \longrightarrow \mathbb{C}^{\abs{V}}$ for some graph $G=(V,E)$. In our representation, $L$ is the graph Laplacian of $G$. Correspondingly, we consider functions $\phi:V \longrightarrow \mathbb{C}$. I often assume that $H$ has been rotated by a diagonal unitary transformation such that $\phi_0 \geq 0$ and will define a weighted Cheeger constant $h$ [@Chung2000], capturing the relevant geometric properties of $\phi_0$. It remains unclear how difficult approximating $h$ is; however, in the event that $W=0$, it reduces to the Cheeger constant of $G$ and can be efficiently approximated. Whether and when one can approximate $h$ remains a very important open question beyond the scope of this paper, though I discuss some related ideas in \[sec:discussion\]. The conceptual lesson of this paper is quite concrete. For any Hermitian matrix $H$, if $H$ has a large spectral gap, then $\phi_0$ *has no bottlenecks*. That is to say, $\phi_0$ is a somewhat smooth distribution over $G$. Prior results, discussed below, suggest that we should already believe this, but leave open the possibility that there exist cases that betray our intuition. Provided that $H$ is not diagonal, I show that our intuition is always correct. (In the case that $H$ is diagonal, our intuition is trivially correct.) I do not, however, show the converse. That is, I leave open the question of whether a small spectral gap implies a bottlenecked $\phi_0$. I show that this is indeed implied in the stoquastic and some real cases, but whether it also holds in the general Hermitian case is left open. Adapting these techniques to more general cases appears possible, and I will discuss some potential approaches as we progress through the proof. 
Furthermore, in \[sec:discussion\], we will see that understanding the precise relationship might have far-reaching implications for quantum adiabatic algorithms. Previous Work {#sec:previous} ------------- In this paper, we study isoperimetric inequalities of discrete systems. Such inequalities enjoy a rich history. Within the context of randomized algorithms, the Cheeger constant often provides a means of determining the mixing time of a Markov chain and, thus, the efficiency of certain approximation algorithms [@sinclair2012algorithms]. Standard Cheeger inequalities relate the spectral gap $\gamma$ of the Laplacian $L$ corresponding to a graph $G$ and the Cheeger constant $h$ of that graph. They usually appear in a form similar to $$\label{eqn:standard} 2h \geq \gamma \geq \frac{h^2}{2}$$ and provide a very useful, intuitive significance to the spectral gap. Although the Cheeger constant is a useful quantity, computing it exactly for an arbitrary graph is NP-hard [@GAREY1976237; @leighton1988approximate; @kaibel2004expansion]. Despite this hardness, the Cheeger constant can indeed be efficiently approximated [@sinclair2012algorithms; @kannan2004clusterings]. The spectral gap, and hence the Cheeger constant, is also of primary interest in spectral graph theory, where it is often explored in connection with graph Laplacians [@Chung]. In [@Chung2000], the authors adapted Cheeger inequalities to apply to the gap in the *Dirichlet eigenvalues* of a graph. The distinguishing characteristic of the Dirichlet eigenvalues is that they arise by imposing a Dirichlet boundary constraint. This constraint requires that, for some subset of vertices $\delta V \subseteq V$, all eigenfunctions must satisfy $f\vert_{\delta V} = 0$. These eigenvalues are also studied quite a bit and numerous bounds appear in the literature. Unfortunately for us, these studies typically focus on the easier problem of bounding eigenvalues, not their differences. 
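For intuition, both sides of \[eqn:standard\] can be computed by brute force on a small graph. The following sketch (an unweighted $6$-cycle, a toy choice of my own) checks the inequality for the normalized Laplacian:

```python
import itertools
import numpy as np

# Unweighted 6-cycle and its normalized Laplacian.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
d = A.sum(axis=1)
Lnorm = np.eye(n) - A / np.sqrt(np.outer(d, d))
vals = np.linalg.eigvalsh(Lnorm)          # ascending; lambda_0 = 0 here
gamma = vals[1] - vals[0]

# Brute-force Cheeger constant: h = min_S |boundary(S)| / min(vol S, vol S-bar),
# with vol(S) = sum of degrees in S (the normalized-Laplacian convention).
h = np.inf
for r in range(1, n):
    for S in itertools.combinations(range(n), r):
        S = list(S)
        T = [v for v in range(n) if v not in S]
        cut = A[np.ix_(S, T)].sum()
        h = min(h, cut / min(d[S].sum(), d[T].sum()))

assert 2 * h >= gamma >= h ** 2 / 2       # the standard Cheeger inequality
```

For the $6$-cycle this gives $\gamma = 1/2$ and $h = 1/3$, so both bounds hold comfortably.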
Additionally, the few gap inequalities that exist, like those in [@Chung2000], are not easily applied to most situations we are presently interested in. Thus, we require a new inequality. To this end, various authors (including me) have pursued Cheeger-type inequalities in the stoquastic case [@al2010energy; @Jarret2014a] and more general Hermitian matrices [@crosson2017quantum]. In either case, this problem is actually equivalent to that of determining the differences in the Dirichlet eigenvalues of an appropriate host graph. These inequalities all assume an unfortunate form that looks something like $$\label{eqn:old} 2 \norm{H} h \geq \gamma \geq \frac{h^2}{2\norm{H}}$$ where $h$ is an appropriately defined Cheeger constant. We can easily see the weakness of this expression: unlike in the case of graph Laplacians, it is entirely possible that $h^2 \sim \norm{H} \sim e^{n}$. Thus, the lower bound from \[eqn:old\] scales like a constant, whereas we would expect from \[eqn:standard\] that $\gamma \gtrsim e^{n}$. A similar argument illuminates the weakness of the upper bound. Consider the very common situation in which $\norm{H} \sim e^{n}$ and $h\sim e^{-n}$. Then, the upper bound on $\gamma$ scales as a constant, whereas we expect that $\gamma \lesssim e^{-n}$. This latter issue leaves open the possibility that one might have a large spectral gap in the presence of a bottleneck. In this work, I will correct these defects. Results ------- Consider a graph $G=(V,E)$ with edge weights assigned by $w:V\times V \longrightarrow \mathbb{R}$. Then, for the corresponding graph Laplacian $L$ and any real diagonal matrix $W$, $H=L+W$ admits a weighted Cheeger constant $h$, defined in [@Chung2000] and again in \[sec:cheeger\]. In particular, I prove that for any stoquastic matrix with spectral gap $\gamma$, $$\label{eqn:result} {2 h \geq \gamma \geq \sqrt{h^2 + Q^2} - Q}$$ where, if $L$ is a combinatorial Laplacian, $Q$ is the maximum degree of a vertex of $G$. 
If $L$ is a normalized Laplacian, $Q=1$. For any real matrix, we can identify positive off-diagonal terms with negative edge weights ($E^- = \{\{u,v\} \in E \vert \allowbreak w(u,v) \allowbreak < 0 \allowbreak \}$) and show that $${2 h \geq \gamma \geq \sqrt{k^2 + Q^2} - Q}$$ if $\phi_0$ is uniform up to phase, and $${2h \geq \gamma \geq (Q+\rho) - \sqrt{(Q+\rho)^2 -k^2}}$$ where $\rho = \lambda_{\abs{V}-1}-\lambda_0$, otherwise. Above, $h$ is the weighted Cheeger constant corresponding to the graph $G^+ = (V,E\setminus E^-)$ under the original weight function and $k$ the weighted Cheeger constant of $G^+$ with a redistributed weight function $w^+$ to be defined in \[sec:neg\_edges\]. In \[sec:applications\], we will see that these equations can often be relaxed to $${2 h \geq \gamma \geq \epsilon \left(\sqrt{h^2+Q^2}-Q\right)}$$ for a constant $\epsilon$, which may be easier to apply and retains appropriate scaling behavior. In other words, at least asymptotically, I reduce the problem of bounding the gap of a signed graph $G$ to that of determining the appropriate Cheeger constant of $G^+$. Finally, I provide the upper bound $$\label{eqn:result2} 2h \geq \gamma$$ for any Hermitian matrix. Not only does this expression correct the problems mentioned in \[sec:previous\], but the improvement over these statements can be quite drastic and firmly establishes some conceptual points. Note that in cases where $h$ is large compared to the maximum degree $Q$, which often happens when $\norm{W}$ is sufficiently large, the lower bound in \[eqn:old\] becomes weak whereas \[eqn:result\] remains tight. Furthermore, the form of the expression guarantees that the inequality scales appropriately for all relative sizes of $L$ and $W$ and, hence, all Hermitian matrices. Although establishing the lower bound in \[eqn:old\] is unlikely in general, expanding around $h\approx 0$ does yield a similar expression. 
Furthermore, when $h$ is large relative to $Q$, \[eqn:result\] guarantees that $\gamma \sim h$. The efficiency with which one can classically approximate $h$ remains unclear, but the quantity only depends upon information about the ground-state distribution of $H$ and the corresponding graph $G$. This opens up the possibility that an adiabatic algorithm may be able to efficiently approximate $h$, even if a classical method remains elusive. This ability would be a great advantage to the field of adiabatic optimization, as it could be used to determine the appropriate time dependence of an adiabatic evolution without *a priori* knowledge of the spectral gap. Such an evolution can be necessary to produce quantum speedups, like those achieved in adiabatic Grover search [@roland2002quantum]. This idea will be discussed in detail in \[sec:discussion\], but conclusive results, should they exist, are left for future work. Preliminaries ============= The Rayleigh Quotient {#sec:rayleigh} --------------------- For an $n\times n$ Hermitian operator $H$ acting on the space $\mathcal{S} = \{f: \intrange{1}{n} \longrightarrow \mathbb{C}\}$, one defines the Rayleigh quotient corresponding to a function $f \in \mathcal{S}$ as $$\label{eqn:Rayleigh} R(H,f) = \frac{\langle f, H f\rangle}{\langle f, f\rangle}. $$ Thus, the eigenvalues $\lambda_0(H) \leq \lambda_1(H) \leq \dots \leq \lambda_{n-1}(H)$ of $H$ can be written as $$\label{eqn:eigenvalues} \lambda_i(H) = \inf_{f \perp T_{i-1}}\frac{\langle f, H f\rangle}{\langle f, f\rangle}$$ where $T_{i}$ is the space spanned by the functions $f_j$ achieving $\lambda_j(H)$ for each $0 \leq j \leq i$. We call $f$ achieving $\lambda_0(H)$ the *ground-state*. Of particular interest in this paper is the spectral gap $\gamma(H)$ of $H$, or the difference in its two lowest eigenvalues, $\gamma(H) = \lambda_1(H) - \lambda_0(H)$. 
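These definitions are straightforward to check numerically. A small sketch (random symmetric matrix, illustrative only):

```python
import numpy as np

def rayleigh(H, f):
    """Rayleigh quotient R(H, f) = <f, Hf> / <f, f>."""
    return (f.conj() @ H @ f).real / (f.conj() @ f).real

rng = np.random.default_rng(7)
A = rng.normal(size=(5, 5))
H = (A + A.T) / 2                       # a random real symmetric matrix
vals, vecs = np.linalg.eigh(H)          # eigenvalues in ascending order

# Each eigenvalue is the Rayleigh quotient of its eigenvector, and the
# ground-state achieves the unconstrained infimum lambda_0.
assert np.allclose([rayleigh(H, vecs[:, i]) for i in range(5)], vals)
gamma = vals[1] - vals[0]               # the spectral gap
```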
Usually, we will just write $\gamma = \gamma(H)$ and $\lambda_i = \lambda_i(H)$ and reserve the argument for when it is necessary to distinguish the eigenvalues of two matrices. Our first goal is to rewrite $H$ in a form useful for the current work. Presently, we only seek lower bounds for real matrices, so we can prove a quick comparison theorem between $\gamma(H)$ and $\gamma(\Re(H))$ where $\Re(H)$ is the real part of $H$. One can immediately obtain a useful upper bound on the spectral gap of $H$ by considering the function $\phi_0$ obtaining $\lambda_0(H)$ in \[eqn:eigenvalues\]. \[prop:Hermitian\] For a Hermitian matrix $H$ with spectral gap $\gamma(H)$, ground-state $\phi_0$, and $U= \mathrm{diag}(\phi_0/\abs{\phi_0})$ where the ratio and absolute value are taken pointwise, $$\gamma(H) \leq \gamma(\Re(U^\dagger H U)).$$ This proof is very straightforward. First, suppose $H$ has ground-state $\phi_0$. Then, let $U = \mathrm{diag}(\phi_0/\abs{\phi_0})$ where the ratio and absolute value are taken pointwise. Obviously, $U$ is unitary and $U^\dagger \phi_0 \geq 0$. Now, write $\Im(U^\dagger H U) = i S$, where $S \in \mathbb{R}^{n \times n}$ is skew-symmetric. Thus, $\lambda_0$ satisfies $$\begin{aligned} \lambda_0 &=\inf_{f \in \mathbb{C}^n}\frac{\langle f,Hf\rangle}{\langle f,f\rangle}\\ &=\inf_{U^\dagger f \in \mathbb{C}^n}\frac{\langle U f,H U f\rangle}{\langle U f,U f\rangle}\\ &=\inf_{f \geq 0}\frac{\langle f,U^\dagger H U f\rangle}{\langle f,f\rangle}\\ &=\inf_{f \geq 0}\frac{\langle f,\Re(U^\dagger H U)f\rangle+\langle f,iS f\rangle}{\langle f,f\rangle}\\ &=\inf_{f \geq 0}\frac{\langle f,\Re(U^\dagger H U)f\rangle}{\langle f,f\rangle}\end{aligned}$$ where the second equality follows from our choice of $U$ and the final equality from the skew-symmetry of $S$. 
Now, the Rayleigh quotient for $\lambda_1$ becomes $$\begin{aligned} \lambda_1 &= \inf_{\substack{f \perp \phi_0 \\ f \in \mathbb{C}^n}}\frac{\langle f , H f\rangle}{\langle f , f\rangle}\\ &= \inf_{\substack{U f \perp \phi_0 \\ f \in \mathbb{C}^n}}\frac{\langle U f , H U f\rangle}{\langle U f , U f\rangle}\\ &= \inf_{\substack{f \perp U^\dagger \phi_0 \\ f \in \mathbb{C}^n}}\frac{\langle f , U^\dagger H U f\rangle}{\langle f , f\rangle}\\ &= \inf_{\substack{f \perp U^\dagger \phi_0 \\ f \in \mathbb{C}^n}}\frac{\langle f , \Re(U^\dagger H U) f\rangle + \langle f, iS f \rangle}{\langle f , f\rangle}\\ &\leq \inf_{\substack{f \perp U^\dagger \phi_0 \\ f \in \mathbb{R}^n}}\frac{\langle f , \Re(U^\dagger H U) f\rangle + \langle f, iS f \rangle}{\langle f , f\rangle}\\ &=\inf_{\substack{f \perp U^\dagger\phi_0 \\ f \in \mathbb{R}^n}}\frac{\langle f , \Re(U^\dagger H U) f\rangle}{\langle f , f\rangle}.\end{aligned}$$ Above, the inequality follows from introducing the additional constraint on the infimum. Thus, $\gamma(H) \leq \gamma(\Re(U^\dagger H U))$. Stoquastic Hamiltonians ----------------------- \[prop:Hermitian\] guarantees us that, at least in the case of upper bounds, we hereafter need only consider $\Re(U^\dagger H U)$. Hence, we no longer address the issue of upper bounding the gap of a Hermitian matrix, since the bound is implied by any bounds on real matrices. Although determining an appropriate $U$ to actually perform the rotation in \[prop:Hermitian\] might be a hard problem in general,[^2] there exist certain cases where this becomes relatively easy. One convenient way to describe these situations is through the *frustration index* of the matrix $\Theta = H/\abs{H}$ where, again, the ratio and absolute value are taken pointwise. If we view $\Theta$ as an adjacency matrix, as will be made precise in the following section, we can consider a cycle cover of $\Theta$ given by the successor function $\sigma:\intrange{1}{n} \longrightarrow \intrange{1}{n}$. 
Here, $\sigma$ is just a permutation of $\intrange{1}{n}$. Then, the sequence $i \rightarrow \sigma(i) \rightarrow \sigma \cdot \sigma(i)\rightarrow \dots \rightarrow i$ is a cycle through $\Theta$, which we refer to as $c_\sigma(i)$. We collect the cycles arising from all successor functions into the set $C = \{c_\sigma\}$. For any $1\leq i\leq n$, we define the signature of the cycle $c_\sigma(i)$ as $${\mathrm{sig}(c_\sigma(i)) = \prod_{k \in c_\sigma(i)} \left[-\Theta_{k,\sigma(k)}\right] = (-1)^{\abs{c_\sigma(i)}} \prod_{k \in c_\sigma(i)} \Theta_{k,\sigma(k)}.}$$ In analogy to the standard definition, we somewhat carelessly define the *frustration index* of $\Theta$ as the minimum number of elements of $\Theta$ that need to be removed such that $\mathrm{sig}(c_\sigma(i)) \in \{0,1\}$ for all $c_\sigma \in C$ and $i \in \intrange{1}{n}$ [@Atay2014; @Martin2017; @Lange2015]. This particular definition is clearly far from ideal, since complex phases imply that this is not a strict question of combinatorics, and we should prefer a functional definition similar to that of [@Lange2015] in the future. Despite its failings, we can use this definition to define *stoquastic* matrices. We call a matrix *stoquastic* if it has frustration index $0$. This definition of stoquastic diverges from much of the literature on the subject. (See, e.g. [@bravyi2008complexity].) Nonetheless, it is a bit more descriptive and (potentially) avoids redefining well-known mathematical concepts.[^3] We introduce this definition for two reasons: (1) because frustration index has been used to obtain better isoperimetric inequalities [@Martin2017; @Lange2015], setting the stage for future work, and (2) because it extends our results to a broader class of matrices. Importantly, this property can be efficiently checked (at least in time polynomial in the dimension of the matrix), so that one can determine whether or not stoquastic spectral bounds apply even if one is unsure that a matrix is stoquastic. 
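For real matrices, the check reduces to searching for a consistent sign gauge over the nonzero off-diagonals; any inconsistency witnesses a frustrated cycle. The sketch below (my own illustration; `sign_gauge` is a name I introduce, not notation from the text) performs the check via breadth-first traversal and, on success, produces the diagonal rotation discussed below:

```python
from collections import deque
import numpy as np

def sign_gauge(H, tol=1e-12):
    """Try to find s in {-1,+1}^n with s_u s_v H_uv <= 0 for all u != v.

    Returns s if H is switching-equivalent to a matrix with nonpositive
    off-diagonals (frustration index 0 in the real case), else None.
    """
    n = H.shape[0]
    s = np.zeros(n, dtype=int)
    for root in range(n):
        if s[root]:
            continue
        s[root] = 1
        queue = deque([root])
        while queue:                      # BFS over the nonzero off-diagonals
            u = queue.popleft()
            for v in range(n):
                if v == u or abs(H[u, v]) <= tol:
                    continue
                want = -int(np.sign(H[u, v])) * s[u]
                if s[v] == 0:
                    s[v] = want
                    queue.append(v)
                elif s[v] != want:        # an inconsistent (frustrated) cycle
                    return None
    return s

# A matrix with mixed off-diagonal signs that is nevertheless stoquastic
# after conjugation by the diagonal unitary U = diag(s).
H = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 3.0],
              [0.0, 3.0, 1.0]])
s = sign_gauge(H)
Hs = np.diag(s) @ H @ np.diag(s)
assert np.all(Hs - np.diag(np.diag(Hs)) <= 0)   # nonpositive off-diagonals
assert np.allclose(np.linalg.eigvalsh(Hs), np.linalg.eigvalsh(H))
```

Consistent with the bipartiteness remark below, a triangle with all-positive off-diagonals is frustrated and `sign_gauge` returns `None` for it.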
Thus, this definition makes the methods presented below easier to apply in many cases. The unitary $U$ that transforms $H$ such that all off-diagonal terms of $U^\dagger H U$ are nonpositive is immediate. First, for any cycle, we can decompose $\sigma$ into paths $\sigma_1$ and $\sigma_2$. $$\begin{aligned} 1 &=\prod_{k\in c_\sigma(i)} \left[-\Theta_{k ,\sigma(k)}\right]\\ &= \left(\prod_{k\in c_{\sigma_1}(i)} \left[-\Theta_{k ,\sigma(k)}\right]\right)\left( \prod_{k \in c_{\sigma_2}(i)}\left[-\Theta_{k,\sigma(k)}\right]\right) \\ &=\left(\prod_{k\in c_{\sigma_1}(i)} \left[-\Theta_{k ,\sigma_1(k)}\right]\right)\left( \prod_{k\in c_{ \sigma_2}^{-1}(i)}\left[-\Theta^\dagger_{k,\sigma_2(k)}\right]\right)\end{aligned}$$ where the final line follows because, since $\Theta$ is Hermitian, every point in a cycle forms its own cycle. In other words, beginning at $i$, the product $\prod_{k \in c_\sigma(i)}^{\sigma^j(i)}(-\Theta_{k,\sigma(k)})$ is entirely independent of the particular path chosen. This immediately implies the well-known fact that the frustration index of a real, nonnegative $\Theta$ is $0$ if and only if $\Theta$ describes a bipartite graph. The path-independence above also implies that one can explicitly construct the appropriate unitary $U$ by choosing a vertex, say $i$, and then, for all $j$ in some cycle with $i$, $U_{jj} = \prod_{k=i}^{\sigma^{-1}(j)}\left(-\Theta_{k,\sigma(k)}\right)U_{ii}$. Since every pair of vertices forms a simple cycle, this reduces to the constraint that, provided $\Theta_{ij}\neq 0$, $U_{jj}=-\Theta_{ij}U_{ii}$. Thus, we know that this definition of $U$ is consistent and unique up to a global phase. Furthermore, it clearly performs the appropriate transformation. Thus, if we satisfy stoquasticity, we know *a priori* that $U^\dagger H U$ has all nonpositive off-diagonal elements. 
More importantly, because $U$ is diagonal, we do not need to perform the unitary transformation; we can simply replace each off-diagonal term $w_{uv}$ with $-\abs{w_{uv}}$ and obtain the resulting matrix. Despite the utility of this condition in producing bounds for a larger class of matrices, in what follows we assume the problem has been reduced such that $H \mapsto U^\dagger H U$, guaranteeing that all off-diagonal terms are nonpositive and the ground-state $\phi_0 \geq 0$. This allows for a simpler presentation. Graph Laplacians ---------------- We wish to characterize $\Re(U^\dagger H U)$ in terms of graph Laplacians. Although the standard combinatorial and normalized graph Laplacian are defined such that all diagonal elements are nonnegative and all off-diagonal elements are nonpositive, we can relax the latter constraint and consider *signed* Laplacians. For our purposes, the only difference between a signed and standard Laplacian is that signed Laplacians have no constraint on the non-positivity of their off-diagonal terms; however, our definitions are somewhat atypical [@atay2014spectrum].[^4] We begin by considering a connected weighted graph $G=(V,E)$ with weight function $w:V\times V \longrightarrow \mathbb{R}$ where we require $w(u,v)=0$ whenever $(u,v) \notin E$. Additionally, we require that $w(u,v)= w(v,u)$, i.e., that $G$ is undirected.[^5] For ease of presentation, we will also lower arguments to $w$ such that $w_{uv} = w(u,v)$. Since we are allowing the possibility of negative edge weights, we introduce the notation $E^+ = \left\{\{u,v\} \in E \vert w_{uv} > 0 \right\}$ for the set of all positive-weighted edges and $E^-$ for the set of all negative-weighted edges. We also define $G^\pm = (V,E^\pm)$ and note that $G^+\subseteq G$. Now we can include some standard definitions for the combinatorial and normalized Laplacians, keeping in mind that edge weights may be negative. 
### The combinatorial Laplacian To define the combinatorial Laplacian for a graph $G$, we first let the degree of a vertex $u \in V$ be $d_u = \sum_{v}w_{uv}$. Then, the combinatorial graph Laplacian $L$ is $$L(u,v) = \begin{cases} d_u & u=v \\ -w_{uv} & u\neq v. \end{cases}$$ For any function $f:V \longrightarrow \mathbb{R}$ (or $f:V \longrightarrow \mathbb{C}$), one can easily see that $$Lf(u) = \sum_{v}w_{uv}[f(u)-f(v)]$$ where we have adopted the standard convention that $Lf(u) = [Lf](u)$. (This is just to say that $Lf \neq L \circ f$, since $L: \mathbb{R}^{\lvert V \rvert} \longrightarrow \mathbb{R}^{\lvert V \rvert}$.) One can easily argue that if $f$ is an eigenfunction of $L$, then $f$ satisfies $$\lambda f(u) = \sum_v w_{uv}[f(u)-f(v)].$$ Now, let $W:V \longrightarrow \mathbb{R}$. We can represent $W$ as an $n \times n$ diagonal matrix and write $W_u \equiv W_{uu}$. Then, if $f$ is an eigenfunction of $L + W$, $f$ satisfies $$\label{eqn:combinatorial_operator} (\lambda-W_u) f(u) = \sum_v w_{uv}[f(u)-f(v)].$$ Recalling the definition of the Rayleigh quotient, $R(L+W,f)$, we have that the eigenvalues of $L+W$ satisfy $$\label{eqn:rc-pert} \lambda_i = \inf_{\substack{\tiny{f \perp T_{i-1}}}} \frac{\sum_{\{u,v\} \in E(G)}w_{uv} [f(u)-f(v)]^2 + \sum_u W_u f^2(u)}{\sum_u f^2(u)}$$ where $T_{i}$ is the subspace spanned by the functions $f_j$ achieving $\lambda_j$ for $0 \leq j \leq i$. This equation actually defines the *Dirichlet eigenvalues* of the graph $G$ embedded in an appropriate host graph. In the following subsection, I will make this mapping precise. ### Dirichlet eigenvalues {#sec:Dirichlet} For a given subgraph $S \subseteq G$, we can consider eigenfunctions of $S$ under boundary constraints and their corresponding eigenvalues. To proceed, we define the edge and vertex boundary sets 1. $\partial S = \left\{\{u,v\} \in E(G) \;\vert\; u \in V(S) , v \notin V(S) \right\}$ and 2. 
$\delta S = \left\{u \in V(G\setminus S) \;\vert\; \{u,v\} \in \partial S \; \text{for some} \; v \in V\right\}$. Any function $f:S\longrightarrow \mathbb{R}$ can be extended to a function $f:S\cup \delta S \longrightarrow \mathbb{R}$ with the Dirichlet boundary condition $f(u \in \delta S) = 0$ or $\restr{f}{\delta S} = 0$. *Dirichlet eigenvalues* are the eigenvalues of $S$ under this boundary constraint. To be precise, $$\label{eqn:Dirichlet_def} \lambda_i^{(D)} = \inf_{\tiny{ \substack{f \perp T^{(D)}_{i-1} \\\restr{f}{\delta S} = 0}}} \frac{\sum_{\{u,v\} \in E(S) \cup \partial S}w_{uv} [f(u)-f(v)]^2}{\sum_{u \in V(S)} f^2(u)}$$ where $T^{(D)}_{i}$ is the subspace spanned by the functions $f_j$ achieving $\lambda^{(D)}_j$ for $0 \leq j \leq i$. Now, recall that in the previous section we had a graph $G$ with weight function $w:E(G)\longrightarrow \mathbb{R}$. We embed this graph in a host graph $G'\supseteq G$ and extend the function $w:E(G)\cup \partial G \longrightarrow \mathbb{R}$ by requiring that $W_u = \sum_{v \in \delta G}w_{uv}$. That is, if the degree of vertex $u$ in $G$ is $d_u$, then the degree of vertex $u$ in $G'$ is $d_u + W_u$. (See .) Now, one can explicitly impose the Dirichlet constraint on \[eqn:Dirichlet\_def\] and recover \[eqn:rc-pert\]: $$\begin{aligned} \lambda_i^{(D)} &= \inf_{\tiny{ \substack{f \perp T^{(D)}_{i-1} \\ \restr{f}{\delta G} = 0}}} \frac{\sum_{\{u,v\} \in E(G) \cup \partial G}w_{uv} [f(u)-f(v)]^2}{\sum_{u \in V(G)} f^2(u)}\\ &= \inf_{\tiny{f \perp T_{i-1}}} \frac{\sum_{\{u,v\} \in E(G)}w_{uv} [f(u)-f(v)]^2 + \sum_{u \in V(G)} W_u f^2(u)}{\sum_{u \in V(G)} f^2(u)}.\end{aligned}$$ This embedding identity is often a useful way to geometrize a physical potential and both descriptions can be useful depending upon one’s goals. 
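The embedding is easy to verify mechanically: attach one boundary vertex to each interior vertex $u$ with edge weight $W_u$, impose the Dirichlet condition, and compare spectra. A sketch (the path graph and potential are toy choices of my own):

```python
import numpy as np

# Interior graph G: a path on 3 vertices, plus a diagonal potential W.
w = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L = np.diag(w.sum(axis=1)) - w
W = np.diag([0.5, 1.0, 2.0])

# Host graph G': add a boundary vertex attached to u with edge weight W_u.
# The Dirichlet condition f|_{boundary} = 0 removes the boundary rows and
# columns, leaving the principal submatrix of the host Laplacian on V(G).
n = 3
whost = np.zeros((2 * n, 2 * n))
whost[:n, :n] = w
for u in range(n):
    whost[u, n + u] = whost[n + u, u] = W[u, u]
Lhost = np.diag(whost.sum(axis=1)) - whost

# Dirichlet eigenvalues of G inside G' coincide with the spectrum of L + W.
assert np.allclose(np.linalg.eigvalsh(Lhost[:n, :n]), np.linalg.eigvalsh(L + W))
```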
### The normalized Laplacian Although the expressions above are sufficient to completely characterize all real matrices, we can derive a more elegant bound by perturbing the normalized Laplacian rather than the combinatorial Laplacian. We let $D = \diag{(d_u)_u}$ and define the symmetric normalized Laplacian as $\mathcal{L} = D^{-1/2}LD^{-1/2}$. Explicitly, this can be written $$\mathcal{L}(u,v) = \begin{cases} 1 & u=v \\ -\frac{w_{uv}}{\sqrt{d_u d_v}} & u \neq v. \end{cases}$$ Similar to the combinatorial Laplacian, for any function $f:V\longrightarrow \mathbb{R}$, the operator $\mathcal{L}$ satisfies $$\mathcal{L}f(u) = \frac{1}{\sqrt{d_u}}\sum_{v}w_{uv}\left[\frac{f(u)}{\sqrt{d_u}}-\frac{f(v)}{\sqrt{d_v}}\right]$$ and eigenfunctions $f$ of $\mathcal{L} + W$ satisfy $$\begin{aligned} (\lambda-W_u)f(u) &= \frac{1}{\sqrt{d_u}} \sum_v w_{uv}\left[\frac{f(u)}{\sqrt{d_u}} - \frac{f(v)}{\sqrt{d_v}} \right].\end{aligned}$$ Letting $\phi = f/\sqrt{d}$, $$\begin{aligned} \label{eqn:normalized_operator} (\lambda-W_u)\phi(u)d_u &= \sum_v w_{uv}\left[\phi(u) -\phi(v) \right].\end{aligned}$$ Our treatment of \[eqn:combinatorial\_operator,eqn:normalized\_operator\] can be unified by considering equations of the form $$\label{eqn:qweighted} L_q \phi(u) = (\lambda-W_u)q_u \phi(u) = \sum_v w_{uv}\left[\phi(u) -\phi(v) \right]$$ where taking $q_u = d_u$ reproduces \[eqn:normalized\_operator\] and $q_u = 1$ reproduces \[eqn:combinatorial\_operator\]. Hence, eigenvalues of either Laplacian are given by their respective Rayleigh quotients, $$\lambda_i = \inf_{\tiny{f \perp qT_{i-1}}} \frac{\sum_{\{u,v\}}w_{uv} [f(u)-f(v)]^2}{\sum_u q_u f^2(u)}$$ where $T_i$ is the subspace spanned by the functions $f_j$ achieving $\lambda_j$ for $0 \leq j \leq i$. 
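In matrix form, \[eqn:qweighted\] with $q_u = d_u$ says that the eigenvalues of $\mathcal{L}+W$ solve the generalized problem $(L + DW)\phi = \lambda D\phi$ with $D = \mathrm{diag}(d_u)$. A quick numerical check (a weighted triangle, a toy choice of my own):

```python
import numpy as np

# Weighted triangle graph and a diagonal potential.
w = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
d = w.sum(axis=1)
D = np.diag(d)
L = D - w
Lnorm = L / np.sqrt(np.outer(d, d))   # normalized Laplacian D^{-1/2} L D^{-1/2}
W = np.diag([0.3, 0.0, 1.5])

# Spectrum of the perturbed normalized Laplacian ...
direct = np.linalg.eigvalsh(Lnorm + W)

# ... agrees with the generalized problem (L + D W) phi = lambda D phi,
# i.e. the q_u = d_u case of L_q above (solved here via D^{-1}).
gen = np.sort(np.linalg.eigvals(np.linalg.inv(D) @ (L + D @ W)).real)
assert np.allclose(direct, gen)
```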
Similarly, for either Laplacian perturbed by a diagonal matrix $W$, the eigenvalues are given by $$\label{eqn:rc-pert2} \lambda_i = \inf_{\tiny{f \perp q T_{i-1}}} \frac{\sum_{\{u,v\}}w_{uv} [f(u)-f(v)]^2 + \sum_u q_u W_u f^2(u)}{\sum_u q_u f^2(u)}.$$ This can once again be seen as Dirichlet eigenvalues as in \[sec:Dirichlet\]; however, one must be careful, as the expression arising from \[eqn:rc-pert2\] for normalized Laplacians diverges from the correct expression for Dirichlet eigenvalues of the host graph. The spectral gap ---------------- Now that we have a characterization of the Dirichlet eigenvalues, we are prepared to handle the spectral gap of the operator $L_q+W$. Suppose that $\lambda_0$ has eigenfunction $\phi \geq 0$. Then, we can characterize the spectral gap of $L_q + W$ as follows. \[prop:gap\] $$\gamma = \inf_{\tiny{g \perp q\phi^2}} \frac{\sum_{\{u,v\}}w_{uv}\phi(u)\phi(v)[g(u)-g(v)]^2}{\sum_u q_u g^2(u)\phi^2(u)}$$ Before proceeding, we need the standard fact that for any $g:V\longrightarrow \mathbb{R}$, $$\label{eqn:fact1} \sum_{\{u,v\} } w_{uv}\left[g(u)\phi(u)-g(v)\phi(v)\right]^2 = \sum_{u} (\lambda_0-W_u) q_u g^2(u)\phi^2(u) + \sum_{\{u,v\} }w_{uv}\left[g(u)-g(v)\right]^2\phi(u)\phi(v).$$ To see this, begin with \[eqn:qweighted\] and write $$\sum_{u} (\lambda_0-W_u) q_u g^2(u)\phi^2(u) = \sum_{u} g^2(u)\sum_{v} w_{uv}\phi(u)[\phi(u)-\phi(v)]$$ $$= \sum_{u} \left[d_u g^2(u)\phi^2(u) - \sum_{v} w_{uv} g^2(u)\phi(u)\phi(v)\right]$$ $$= \sum_{\{u,v\}}w_{uv}\left(g^2(u)\phi^2(u)+g^2(v)\phi^2(v) - \left[g^2(u)+g^2(v)\right]\phi(u)\phi(v)\right)$$ $$= \sum_{\{u,v\}}w_{uv}\left(\left[g(u)\phi(u)-g(v)\phi(v)\right]^2 - \left[g^2(u)+g^2(v)-2g(u)g(v)\right]\phi(u)\phi(v)\right)$$ $$=\sum_{\{u,v\}}w_{uv}\left(\left[g(u)\phi(u)-g(v)\phi(v)\right]^2 - \left[g(u)-g(v)\right]^2\phi(u)\phi(v)\right).$$ With this in hand, we turn to $\lambda_1$. 
$$\begin{aligned} \lambda_1 &= \inf_{\tiny{f \perp q\phi}} \frac{\sum_{\{u,v\}}w_{uv} [f(u)-f(v)]^2 + \sum_{u} q_u W_u f^2(u)}{\sum_{u } q_u f^2(u)} \\ &= \inf_{\tiny{g \perp q\phi^2}} \frac{\sum_{\{u,v\} }w_{uv} [g(u)\phi(u)-g(v)\phi(v)]^2 + \sum_{u } q_u W_u g^2(u)\phi^2(u)}{\sum_{u } q_u g^2(u)\phi^2(u)}\\ &= \inf_{\tiny{g \perp q\phi^2}} \frac{\sum_{\{u,v\} }w_{uv}\phi(u)\phi(v)[g(u)-g(v)]^2 + \lambda_0 \sum_{u}q_u \phi^2(u) g^2(u)}{\sum_{u} q_u g^2(u)\phi^2(u)}\\ &= \inf_{\tiny{g \perp q\phi^2}} \frac{\sum_{\{u,v\}}w_{uv}\phi(u)\phi(v)[g(u)-g(v)]^2}{\sum_{u } q_u g^2(u)\phi^2(u)} + \lambda_0.\end{aligned}$$ Thus, we have that $$\begin{aligned} \gamma = \inf_{\tiny{g \perp q\phi^2}} \frac{\sum_{\{u,v\}}w_{uv}\phi(u)\phi(v)[g(u)-g(v)]^2}{\sum_u q_u g^2(u)\phi^2(u)}.\end{aligned}$$ Warm-up: Cheeger upper bounds {#sec:cheeger} ============================= The Cheeger constant -------------------- The Cheeger constant of a graph describes the graph’s isoperimetric ratio, or the surface area to volume ratio of any subgraph. Noting that \[prop:gap\] gives an expression for the gap that is equivalent to the Rayleigh quotient of a weighted graph with weights $\omega_{uv} = w_{uv}\phi(u)\phi(v)$, we use $\omega$ as a modified weight function for defining both area and volume. That is, for a subgraph $S\subseteq G$ we let 1. $\overline S = G\setminus S$, 2. the boundary vertices $\delta S = \{u \in \overline S \;\vert\; u\sim v \in S\},$ 3. the surface area $\abs{\partial S} = \sum_{u \in V(S), v \in \delta S} w_{uv}\phi(u)\phi(v)$, and 4. the volume $\vol(S) = \sum_{u \in S} q_u \phi^2(u)$. 
Then, we reproduce the weighted Cheeger constant of [@Chung2000] $$\label{eqn:Cheeger_constant} h = \min_{S \subset G}\frac{\abs{\partial S}}{\min_{S' \in \{S,\overline S\}}\vol(S')}.$$ Note that in the event that both $w_{uv}=1$ for all $\{u,v\} \in E(G)$ and $\phi \neq 0$ is constant, this reproduces the ratio $$\frac{\text{\# edges in $\partial S$}}{\text{\# vertices in S}}$$ which is the standard Cheeger constant for an unweighted graph. The upper bound --------------- \[prop:gap\] instructs us that we can use any function $g\perp q\phi^2$ to upper bound the gap, and \[prop:Hermitian\] allows us to ignore the case that $H$ is not real. Thus, the upper bound derives from simply choosing an appropriate trial function in \[prop:gap\]. \[thm:upper\] For any $H=L+W$ with ground-state $\phi$ corresponding to weighted Cheeger constant $h$ $$\gamma \leq 2h.$$ For $S$ achieving the infimum in \[eqn:Cheeger\_constant\], we put the function $$g(u) = \begin{cases} \vol(\overline S) & u \in S\\ -\vol(S) & u \notin S. \end{cases}$$ into \[prop:gap\]. Without loss of generality, we assume that $\vol(S)\leq \vol(\overline{S})$ and find that $$\begin{aligned} \gamma &\leq \frac{\sum_{\{u,v\} \in E(G)}w_{uv}\phi(u)\phi(v)[g(u)-g(v)]^2}{\sum_u q_u g^2(u)\phi^2(u)}\\ &=\frac{\left(\sum_{\{u,v\} \in \partial S}w_{uv}\phi(u)\phi(v)\right)[\vol(S) + \vol(\overline S)]^2}{\vol(\overline S)^2\sum_{u \in V(S)} q_u \phi^2(u) + \vol(S)^2\sum_{u \in V(\overline S)} q_u \phi^2(u)}\\ &\leq \frac{(h \vol(S)) [\vol(S) + \vol(\overline S)]^2}{\vol(S)[\vol(S)^2 + \vol(\overline S)^2]}\\ &= h \frac{[\vol(S) + \vol(\overline S)]^2}{\vol(S)^2 + \vol(\overline S)^2}\\ &\leq 2 h.\end{aligned}$$ \[thm:upper\] also holds for all Hermitian matrices by \[prop:Hermitian\]. Removing negative edge weights {#sec:neg_edges} ============================== In this section, I provide a theorem relating the spectrum of the graph $G=(V,E)$ with edge weights $w:E \longrightarrow \mathbb{R}$ to the graph $G^+ = (V,E\setminus E^-)$. 
For edges with negative edge weights and endpoints $(x,y)$, we consider the set of paths from $x$ to $y$ through $G^+$, denoted $P(x,y)$. Thus, a path from $x$ to $y$ is a member of the set $P(x,y)$. The strategy behind this theorem is to consider an edge $\{u,v\}$ with weight $w_{uv}<0$ as in \[fig:graph1\]. Then, we find some path connecting $u$ and $v$ that traverses $G^+$ and route the negative weights along this path. Routing is not an uncommon approach (see, e.g. [@diaconis1991geometric]) and has a lot in common with the method of proving Poincaré inequalities [@Chung]. *(\[fig:graph1\]: three panels of a grid graph — a negative-weighted edge $\{u,v\}$; a path through $G^+$ connecting $u$ and $v$; and the resulting graph once the negative edge is routed along that path.)* \[thm:positive\] Suppose that for a graph $S \subseteq G$, $S^+$ is connected and there exists an $\displaystyle \alpha : \bigcup_{(x,y)\in E(S)}P(x,y) \longrightarrow [0,1]$ such that for any $\{u,v\} \in E(S)$, 1. $\displaystyle \sum_{\tiny{p \in P(u,v)}} \alpha_{p} = 1$; 2. and $0 < \omega_{uv} = \displaystyle w_{uv} - \sum_{\tiny{w_{xy}<0}}\sum_{\tiny{\substack{p \in P(x,y) \\ (u,v) \in p}}} \abs{w_{xy}}\ell_p \alpha_p$, then, for each $i$, there exists an $\widetilde\omega \geq \omega$ such that $$\begin{aligned} \lambda_i^{(D)} &= \inf_{\tiny{ \substack{f \perp qT^{(D)}_{i-1} \\\restr{f}{\delta S} = 0}}} \frac{\sum_{\{u,v\} \in E(S^+)\cup \partial S}\widetilde\omega_{uv} [f(u)-f(v)]^2}{\sum_{u \in V(S)} q_u f^2(u)} \end{aligned}$$ and $\lambda^{(D)}_0$ is unique. 
Consider $$\begin{aligned} \lambda_i^{(D)} &= \inf_{\tiny{ \substack{f \perp qT^{(D)}_{i-1} \\ \restr{f}{\delta S} = 0}}} \frac{\sum_{\{u,v\} \in E(S)\cup \partial S}w_{uv} [f(u)-f(v)]^2}{\sum_{u \in V(S)} q_u f^2(u)}\\ &\geq \inf_{\tiny{ \substack{f \perp qT^{(D)}_{i-1} \\ \restr{f}{\delta S} = 0}}} \frac{\sum_{\{u,v\} \in E(S^+)\cup \partial S}\left(w_{uv}-\sum_{\tiny{w_{xy}<0}}\sum_{\tiny{\substack{p \in P(x,y) \\ (u,v) \in p}}} \abs{w_{xy}}\ell_p \alpha_p\right) [f(u)-f(v)]^2}{\sum_{u \in V(S)} q_u f^2(u)}\\ &=\inf_{\tiny{ \substack{f \perp qT^{(D)}_{i-1} \\ \restr{f}{\delta S} = 0}}} \frac{\sum_{\{u,v\} \in E(S^+)\cup \partial S}\omega_{uv} [f(u)-f(v)]^2}{\sum_{u \in V(S)} q_u f^2(u)}\end{aligned}$$ where the inequality follows by applying, for each edge $\{x,y\}$ with $w_{xy}<0$ and each $p \in P(x,y)$ (weighted by $\alpha_p$), Jensen’s inequality $[f(x)-f(y)]^2 \leq \ell_p \sum_{\{u,v\} \in p} [f(u)-f(v)]^2$. Thus, there exists some $ \widetilde\omega \geq \omega$ such that $$\begin{aligned} \lambda_i^{(D)} &= \inf_{\tiny{ \substack{f \perp qT^{(D)}_{i-1} \\ \restr{f}{\delta S} = 0}}} \frac{\sum_{\{u,v\} \in E(S^+)\cup \partial S}\widetilde\omega_{uv} [f(u)-f(v)]^2}{\sum_u q_u f^2(u)}.\end{aligned}$$ Furthermore, since this is just the Rayleigh quotient corresponding to the Dirichlet eigenvalues of a connected graph, the Perron-Frobenius theorem applies and we also have that $\lambda_0^{(D)}$ is unique. The same argument also applies to the characterization of $\gamma$ in \[prop:gap\]: \[cor:positive\] Suppose that for a graph $G=(V,E)$, $G^+$ is connected and there exists an $\displaystyle \alpha : \bigcup_{(x,y)\in E}P(x,y) \longrightarrow [0,1]$ such that for any $\{u,v\} \in E$, 1. $\displaystyle \sum_{\tiny{p \in P(u,v)}} \alpha_{p} = 1$ and 2.
$\displaystyle \widetilde\omega_{uv} > w_{uv}\phi(u)\phi(v) - \sum_{\tiny{w_{xy}<0}}\sum_{\tiny{\substack{p \in P(x,y) \\ \{u,v\} \in p}}} \abs{w_{xy}}\phi(x)\phi(y)\ell_p \alpha_p$, where $\ell_{p} = \abs{p}$ is the length of path $p$, $$\label{eqn:rgap} \gamma = \inf_{\tiny{g \perp q\phi^2}} \frac{\sum_{\{u,v\} \in E(G^+)}\widetilde\omega_{uv}[g(u)-g(v)]^2}{\sum_u q_u g^2(u)\phi^2(u)}.$$ The unsightliness of $\widetilde\omega$ is not lost on me. Nonetheless, the expression is quite intuitive. Basically, a potentially useful Cheeger-type bound exists whenever we can redistribute negatively weighted edges along paths through $G^+$ connecting them. I present this form, however, because it is unlikely that in practical situations we will be faced with something that can be easily routed along a single path. However, such a statement is easy to derive by choosing unique paths satisfying the constraints of \[thm:positive\]; \[cor:simpler\] provides one such simplification. Although there exist cases where one can create cuts such that condition 2 above is always unachievable, in many cases, this is handled by the unitary rotation considered in \[sec:rayleigh\]. Two Dirichlet Cheeger inequalities ================================== In this section, I present Cheeger inequalities using a technique similar to [@Chung2000]. Unlike [@Chung2000], we wish to construct an inequality for as broad a class of matrices as possible. A theorem similar to Theorem 1 was originally derived and presented by me in [@Jarret2014]; however, at that time, I did not realize that it could be significantly strengthened to the more useful one below. First, we need to bound the contribution of the term $W$ to the eigenvalues $\lambda_i$. Because of \[thm:positive,cor:positive\], we only need to consider the case of nonnegative edge weights.
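The routing idea can be checked concretely in its simplest instance: $q \equiv 1$, a single negative edge, and a single path, so $\alpha_p = 1$. The quadratic form of the original graph then pointwise dominates that of the routed, all-positive graph, so the routed spectral gap lower-bounds the original one. A minimal sketch, assuming `numpy` is available; the weights are illustrative:

```python
# Route one negative edge along a single path (alpha = 1) and compare gaps.
# Weights are illustrative.
import numpy as np

def laplacian(n, edges):
    """Combinatorial Laplacian for weighted edges {(u, v): w}."""
    L = np.zeros((n, n))
    for (u, v), w in edges.items():
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

n = 4
w_neg = -0.1                                   # the single negative edge {0, 3}
G = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): w_neg}

# Route {0,3} along the path 0-1-2-3 in G^+: each path edge loses |w|*len(p).
path, ell = [(0, 1), (1, 2), (2, 3)], 3
routed = {e: G[e] - abs(w_neg) * ell for e in path}
assert all(w > 0 for w in routed.values())     # condition 2 of the theorem

gap_G = np.linalg.eigvalsh(laplacian(n, G))[1]
gap_routed = np.linalg.eigvalsh(laplacian(n, routed))[1]
assert gap_G >= gap_routed - 1e-12             # routed gap lower-bounds the true gap
```

Here both Laplacians annihilate the constant vector, so the gap is simply the second-smallest eigenvalue in each case.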
\[lem:potential\] For a graph $G=(V,E)$, suppose $\phi : V \longrightarrow \mathbb{R}$ satisfies $$\label{eqn:potential_a1} \left(\lambda -W_u\right)q_u \phi(u) = \sum_{v\sim u} w_{uv} \left[\phi(u)-\phi(v) \right].$$ for $w > 0$. Then, $$\begin{aligned} \lambda &\geq \max_{S' \in \{S,V\setminus S\} }\left( \frac{\sum_{u \in S'} \left(W_u +\lambda_0^D(S')\right) q_u \phi(u)^2}{\sum_{u \in S'} q_u\phi(u)^2}\right) \end{aligned}$$ for $S = \left\{u \in V \;\vert\; \phi(u) \geq 0 \right\}$ and $\lambda_0^D(S')$ the lowest Dirichlet eigenvalue of $S' \subseteq G$. Without loss of generality, assume that $S' \in \{S, V \setminus S\}$ achieves the maximum above. Now, $$\sum_{u \in S'} (\lambda - W_u)q_u \phi(u)^2 = \sum_{u \in S'}\sum_{v \sim u} w_{uv} (\phi(u)-\phi(v))\phi(u)$$ $$= \sum_{\{u,v\} \in E(S')}w_{uv} (\phi(u)-\phi(v))^2 + \sum_{\substack{\{u,v\} \in \partial S' \\ u \in S'}}w_{uv}(\phi(u)-\phi(v))\phi(u)$$ $$\geq \lambda_0^{D}(S')\sum_{u \in S'} q_u\phi^2(u) - \sum_{\substack{\{u,v\} \in \partial S' \\ u \in S'}}w_{uv}\phi(v)\phi(u)$$ $$\geq \lambda_0^{D}(S')\sum_{u \in S'} q_u\phi^2(u).$$ Above, the first inequality follows from the definition of the Dirichlet eigenvalues and the second because ${\phi(S') \phi(\overline{S'}) \leq 0}$. \[cor:potential\] For a graph $G=(V,E)$, suppose $\phi : V \longrightarrow \mathbb{R}$ satisfies $$\left(\lambda -W_u\right)q_u \phi(u) = \sum_{v\sim u} w_{uv} \left[\phi(u)-\phi(v) \right].$$ for $w > 0$. Then, $$\begin{aligned} \lambda &\geq \max_{S' \in \{S,V\setminus S\} }\left( \frac{\sum_{u \in S'} W_u q_u \phi(u)^2}{\sum_{u \in S'} q_u\phi(u)^2}\right) \end{aligned}$$ for $S = \left\{u \in V \;\vert\; \phi(u) \geq 0 \right\}$. 
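The corollary above can likewise be checked numerically: taking $\phi = \phi_1$ (the first excited state, which necessarily changes sign), $\lambda_1$ must dominate the $\phi_1^2$-weighted average of $W$ over each sign region. A minimal sketch with $q \equiv 1$, assuming `numpy` is available; the graph and potential are illustrative:

```python
# Check: lambda_1 dominates the phi_1^2-weighted average of W over each
# sign region of phi_1 (q = 1; values illustrative).
import numpy as np

n = 5
W = np.array([0.0, 0.5, 2.0, 0.1, 0.4])   # illustrative potential
L = np.zeros((n, n))                      # path graph, unit weights
for u in range(n - 1):
    L[u, u] += 1.0; L[u + 1, u + 1] += 1.0
    L[u, u + 1] -= 1.0; L[u + 1, u] -= 1.0

lam, vecs = np.linalg.eigh(L + np.diag(W))
phi1 = vecs[:, 1]                          # first excited state; changes sign
S = phi1 >= 0                              # S = {u : phi_1(u) >= 0}

# Weighted averages of W over S and its complement.
avgs = [np.sum(W[R] * phi1[R] ** 2) / np.sum(phi1[R] ** 2) for R in (S, ~S)]
assert lam[1] >= max(avgs) - 1e-12         # guaranteed by the corollary
```

Because $\phi_1 \perp \phi_0$ and $\phi_0 > 0$, both sign regions are nonempty, so both averages are well defined.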
\[cor:potential\] allows us to derive our primary Cheeger inequality, which generalizes from [@Chung2000]: \[thm:cheeger\] Suppose $\phi_i : V \longrightarrow \mathbb{R}$ satisfy $$\label{eqn:assumption} \left[\lambda_i -W_u\right]q_u \phi_i(u) = \sum_{v \sim u} w_{uv} \left[\phi_i(u)-\phi_i(v) \right]$$ and let $\gamma = \lambda_1 - \lambda_0$. Then, $$\gamma \geq \sqrt{h^2 + Q^2} - Q$$ where $$Q = \frac{\sum_{u \in S} d_u\phi_1^2(u)}{\sum_{u \in S}q_u\phi_1^2(u)}$$ for $S$ the sign region of $\phi_1$ chosen in the proof below. For a particular vertex $u_0$, we begin by considering the one-parameter family $$f_\epsilon(u) = \begin{cases} f(u_0) + \epsilon \vol\left(G\setminus\{u_0\}\right) & u = u_0 \\ f(u) - \epsilon q_{u_0} \phi_0^2(u_0) & \text{otherwise} \end{cases}$$ where $f$ achieves the infimum in \[eqn:rgap\]. Clearly, $f_\epsilon$ satisfies $f_\epsilon \perp q\phi_0^2$. Writing $\omega_{uv} = w_{uv}\phi_0(u)\phi_0(v)$, we introduce this into the Rayleigh quotient $R(f_\epsilon)$ and note that $\frac{d}{d\epsilon}R(f_\epsilon) \vert_{\epsilon = 0} = 0$ [^6] $$\begin{aligned} 0 = \restr{\frac{d R(f_\epsilon)}{d \epsilon}}{\epsilon = 0} &= \frac{d}{d\epsilon} \left[ \frac{\displaystyle\sum_{\{u,v\}}\omega_{uv}[f_\epsilon(u)-f_\epsilon(v)]^2}{\displaystyle\sum_u q_u f_\epsilon^2(u)\phi_0^2(u)} \right]_{\epsilon = 0}\\ &= \frac{d}{d\epsilon} \left[ \frac{\displaystyle\sum_{\substack{\{u,v\} \\ u,v \neq u_0}}\omega_{uv}[f(u)-f(v)]^2 + \sum_{u\neq u_0}\omega_{u_0 u}\left(f(u_0)-f(u) + \epsilon \vol(G) \right)^2 }{\displaystyle\sum_{u\neq u_0} q_u \left(f(u)-\epsilon q_{u_0}\phi_0^2(u_0) \right)^2\phi_0^2(u) + q_{u_0}\left(f(u_0)+ \epsilon \vol(G\setminus \{u_0\}) \right)^2\phi_0^2(u_0)} \right]_{\epsilon = 0}\\ &= \frac{2\sum_{u\neq u_0}\omega_{u_0 u}\left(f(u_0)-f(u) \right)\vol(G)}{\sum_u q_u f^2(u)\phi_0^2(u)} - 2R(f)q_{u_0}\phi_0^2(u_0) \left( \frac{f(u_0)\vol\left(G\setminus \{u_0\}\right) - \sum_{u\neq u_0} q_u f(u)\phi_0^2(u)}{\sum_u q_u f^2(u)\phi_0^2(u)}\right).\end{aligned}$$ Multiplying through by $\frac{1}{2}\sum_u q_u f^2(u)\phi_0^2(u)$, using $R(f) = \gamma$, and then using $f \perp q\phi_0^2$ to write $\sum_{u \neq u_0} q_u f(u)\phi_0^2(u) = -q_{u_0}f(u_0)\phi_0^2(u_0)$, $$\begin{aligned} 0 &=\sum_{u\neq u_0}\omega_{u_0 u}\left(f(u_0)-f(u) \right)\vol(G) - \gamma q_{u_0}\phi_0^2(u_0) \left( f(u_0)\vol(G\setminus \{u_0\}) - \sum_{u\neq u_0} q_u f(u) \phi_0^2(u)\right)\\ &=\sum_{u\neq u_0}\omega_{u_0 u}\left(f(u_0)-f(u) \right)\vol(G) - \gamma q_{u_0}\phi_0^2(u_0) \left( f(u_0)\vol(G\setminus \{u_0\}) + q_{u_0}f(u_0)\phi_0^2(u_0) \right)\\ &=\sum_{u\neq u_0}\omega_{u_0 u}\left(f(u_0)-f(u) \right)\vol(G) - \gamma q_{u_0}f(u_0)\phi_0^2(u_0)\vol(G)\\ &=\sum_{u\neq u_0}\omega_{u_0 u}\left(f(u_0)-f(u) \right) - \gamma q_{u_0}f(u_0)\phi_0^2(u_0) .\end{aligned}$$ Thus, for any $u$, $f(u)$ satisfies $$\begin{aligned} \gamma q_u f(u) \phi_0^2(u) &= \sum_{v \sim u} w_{uv}\phi_0(v)\phi_0(u)[f(u)-f(v)]\\ \gamma q_u f^2(u) \phi_0^2(u) &= \sum_{v \sim u} w_{uv}\phi_0(v)\phi_0(u)[f(u)-f(v)]f(u). \end{aligned}$$ Let $S \subseteq G$ be the subgraph of $G$ induced by the vertex set $V(S) = \left\{v \;\vert\; \phi_1(v) \geq 0 \right\}$ and recall $\omega_{uv} = w_{uv}\phi_0(u)\phi_0(v)$. Without loss of generality, we assume that $\sum_{u \in S}q_u \phi_0^2(u) \leq \sum_{u \notin S} q_u \phi_0^2(u)$. (If this is not the case, simply take $f \mapsto -f$.) Then, for any region $S' \subseteq G$ such that either $S' \subseteq S$ or $\overline{S'} \subseteq S$, define the Cheeger ratio $$\begin{aligned} h_{S'} &\equiv \frac{\abs{\partial S'}}{\min\{\vol(S'),\vol(\overline{S'})\}} \\ &= \begin{cases}\frac{\abs{\partial S'}}{\sum_{u \in V(S')} q_u \phi_0^2(u)} & S' \subseteq S\\ \frac{\abs{\partial S'}}{\sum_{u \in V(\overline{S'})} q_u \phi_0^2(u)} & \text{$\overline{S'}\subseteq S$} \end{cases}\\ &\geq h {\addtocounter{equation}{1}\tag{\theequation}}\label{eqn:local_Cheeger}.
\end{aligned}$$ Now, we let $$\begin{aligned} \gamma\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u) &= \sum_{u \in V(S)} \sum_{v \sim u}\omega_{uv}[f(u)-f(v)]f(u) \\ &= \sum_{\{v,u\} \in E(S)}\omega_{uv}[f(u)-f(v)]^2 + \sum_{\substack{\{u,v\} \in \partial S \\ u \in V(S)}}\omega_{uv}[f(u)-f(v)]f(u) \\ &\geq \sum_{\{v,u\} \in E(S)}\omega_{uv}[f(u)-f(v)]^2 + \sum_{\substack{\{u,v\} \in \partial S \\ u \in V(S)}} \omega_{uv}f^2(u) \end{aligned}$$ since $f(u)f(v)\leq0$ whenever $\{u,v\} \in \partial S$. Introducing the function $$g(u) = \begin{cases} f(u) & f(u) \geq 0 \\ 0 & \text{otherwise}, \end{cases}$$ we have that $${\gamma \geq \Phi} = \frac{\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2}{\displaystyle\sum_{u}q_u g^2(u)\phi_0^2(u)}$$ $$ =\frac{\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2}{\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)} \cdot \frac{\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)+g(v)]^2}{\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)+g(v)]^2}\\$$ $$\geq \frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right) \left(\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)+g(v)]^2\right)}\\$$ $$ = \frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u) \right)\left( \displaystyle 2 \sum_{u \in V(S)}f^2(u)\phi_0(u)\sum_{v \sim u}w_{uv} \phi_0(v) -\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2\right)}$$ $$= \frac{\left(\displaystyle\sum_{\{v,u\}}\omega_{uv} \abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_u g^2(u)\phi_0^2(u) \right)\left( \displaystyle 2 \sum_{u \in V(S)}f^2(u)\phi_0^2(u)q_u\left(W_u+\frac{d_u}{q_u} -\lambda_0 \right) -\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2\right)}$$ $$= \frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u \in V(S)}q_u g^2(u)\phi_0^2(u) \right)^2 \left( \frac{\displaystyle 2 \sum_{u \in 
V(S)}q_u\phi_1^2(u)\left(W_u+\frac{d_u}{q_u} -\lambda_0 \right)}{\displaystyle\sum_{u\in V(S)}q_u\phi_1^2(u)} - \Phi \right)}\\$$ \[eqn:final\] where the first inequality follows from Cauchy-Schwarz and the final inequality follows from \[cor:potential\]. Now, suppose that we label our vertices $u_i$ with integers $i\geq 1$ such that $f(u_{i+1}) \geq f(u_i)$. Then, clearly, for any $j < i$ $$\begin{aligned} g(u_i)-g(u_j) &= \sum_{k=j}^{i-1}(g(u_{k+1}) - g(u_k)). \end{aligned}$$ Now, consider the cut $S_k = \left\{u_j \;\vert\; j \leq k \right\}$, $$\begin{aligned} \omega_{u_i u_j} \abs*{g^2(u_i)-g^2(u_j)} &= \omega_{u_i u_j}\sum_{k=j}^{i-1} \abs*{g^2(u_{k+1})-g^2(u_k)}\\ \sum_{j < i}\omega_{u_i u_j} \abs*{g^2(u_i)-g^2(u_j)} &= \sum_{j < i}\sum_{k=j}^{i-1} \omega_{u_i u_j} \abs*{g^2(u_{k+1})-g^2(u_k)}\\ &=\sum_{k \leq \abs*{V}-1} \abs*{g^2(u_{k+1}) -g^2(u_{k}) }\sum_{j \leq k < i} \omega_{u_i u_j}\\ &\geq \sum_{k \leq \abs*{V}-1} \abs*{g^2(u_{k+1}) -g^2(u_{k}) }\left(h_{S_k}\sum_{j > k} \phi_0^2(u_j)q_{u_j} \right)\\ &\geq h\sum_{k \leq \abs*{V}} q_{u_k} g^2(u_k) \phi_0^2(u_k) \\&=h\sum_{u\in V(S)} q_u f^2(u) \phi_0^2(u). \end{aligned}$$ Above, both inequalities follow from \[eqn:local\_Cheeger\], where the second also utilizes summation by parts. Introducing this into \[eqn:final\], $$\Phi \geq \frac{\left(\displaystyle\sum_{\{v,u\}}\omega_{uv} \abs*{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u) \right)^2 \left( 2 \gamma + 2 Q - \Phi \right)}$$ $$\geq h^2 \frac{\left(\sum_{u \in V(S)} q_u f^2(u) \phi_0^2(u) \right)^2}{\left(\sum_{u\in V(S)}q_u f^2(u)\phi_0^2(u) \right)^2 \left( 2 \gamma + 2 Q - \Phi \right)}$$ $$=\frac{h^2}{2 \gamma + 2Q-\Phi}.$$ Now, $$h^2 \leq (2 \gamma + 2 Q -\Phi)\Phi$$ $$=2(\gamma +Q)\Phi -\Phi^2$$ $$\leq 2 Q \gamma - (\Phi-\gamma)^2 + \gamma^2$$ $$\leq 2 Q \gamma + \gamma^2,$$ so that $\gamma \geq \sqrt{h^2+Q^2}-Q$.
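The lower bound just proved can also be checked numerically. A minimal sketch with $q \equiv 1$, assuming `numpy` is available; the graph and potential are illustrative. Since the proof fixes an orientation of $\phi_1$, the sketch computes $Q$ for both sign regions and uses the larger value, which only weakens the bound and so keeps the assertion safe:

```python
# Check gamma >= sqrt(h^2 + Q^2) - Q on a small stoquastic H = L + W (q = 1).
# Graph, weights, and potential are illustrative.
from itertools import combinations

import numpy as np

n = 5
edges = {(0, 1): 1.0, (1, 2): 0.5, (2, 3): 1.0, (3, 4): 1.0, (1, 3): 0.7}
W = np.array([0.0, 1.2, 0.4, 0.9, 0.3])

L = np.zeros((n, n))
deg = np.zeros(n)
for (u, v), w in edges.items():
    deg[u] += w; deg[v] += w
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

lam, vecs = np.linalg.eigh(L + np.diag(W))
phi0 = np.abs(vecs[:, 0])
phi1 = vecs[:, 1]
gamma = lam[1] - lam[0]

def vol(S):
    return sum(phi0[u] ** 2 for u in S)

# Weighted Cheeger constant by brute force over all proper cuts.
h = min(
    sum(w * phi0[u] * phi0[v] for (u, v), w in edges.items()
        if (u in S) != (v in S)) / min(vol(S), vol(set(range(n)) - S))
    for k in range(1, n) for S in map(set, combinations(range(n), k))
)

# Q over each sign region of phi_1; the larger Q gives the weaker (safe) bound,
# since x -> sqrt(h^2 + x^2) - x is decreasing.
Q = max(
    np.sum(deg[R] * phi1[R] ** 2) / np.sum(phi1[R] ** 2)
    for R in (phi1 >= 0, phi1 < 0)
)
bound = np.sqrt(h ** 2 + Q ** 2) - Q
assert gamma >= bound - 1e-9
```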
At this point, it is worth pausing to recognize just how much tighter the bound one finds from \[thm:cheeger\] is than its expansion around $h=0$. Had we simply assumed $h$ was small, we would have arrived at the inequality $\gamma \geq h^2/(2Q) - h^4/(8Q^3)$. For $W=0$, one would expect the inequality $\gamma \geq h^2/(2Q)$, so our result is only slightly weaker than anticipated. At first glance, one might expect this to be our desired bound. Unlike the $W=0$ case, however, we *do not* expect that $h$ will usually be small. In fact, for strongly peaked distributions, we expect that $h$ can be rather large. Thus, retaining the expression of \[thm:cheeger\] can be essential to using this bound for most choices of $W$. The following inequality looks more like the standard Cheeger inequality and does not turn negative; however, it is weak for large $h$. It follows immediately from the inequality $2(\sqrt{x+1}-\sqrt{x}) > 1/\sqrt{x+1}$ when $x>0$. \[cor:cheeger2\] Suppose $\phi_i : V \longrightarrow \mathbb{R}$ satisfy $$\left[\lambda_i -W_u\right]q_u \phi_i(u) = \sum_{v \sim u} w_{uv} \left[\phi_i(u)-\phi_i(v) \right]$$ and let $\gamma = \lambda_1 - \lambda_0$. Then, $$\gamma \geq \frac{h^2}{2\sqrt{h^2+Q^2}}$$ where $Q$ is as in \[thm:cheeger\]. We can now adapt \[thm:cheeger\] to provide a Cheeger inequality for the case considered in \[cor:positive\]. \[thm:nonstoq\] For a graph $G=(V,E)$, suppose $$\gamma = \inf_{\tiny{g \perp q\phi^2}} \frac{\sum_{\{u,v\} \in E(G^+)}\widetilde\omega_{uv}[g(u)-g(v)]^2}{\sum_u q_u g^2(u)\phi^2(u)},$$ then $$\gamma \geq (Q + \rho) - \sqrt{(Q+\rho)^2 - h^2}$$ where $\rho = \lambda_{\abs{V}-1}-\lambda_0$. For a proof, see \[ap:proof\].
Now, \[thm:nonstoq\] with the appropriate choice of $Q$ yields the following corollaries: \[cor:combinatorial\] Suppose $H=L+W$ is an $n\times n$ real symmetric matrix with eigenvalues $\lambda_0 \leq \lambda_1 \leq \dots \leq \lambda_{n-1}$ and corresponding ground-state $\phi$ where $L$ is the combinatorial Laplacian of $G$. Then, if $G^+$ has degree at most $d_{\max}$, $$\lambda_1-\lambda_0 \geq (d_{\max} + \rho) - \sqrt{(d_{\max}+\rho)^2 - h^2}$$ for $\rho = \lambda_{n-1}-\lambda_0$ and $$h = \sup_{\substack{\alpha>0 \\ \sum_{p \in P(u,v)} \alpha_p = 1}} h_\alpha,$$ $$h_\alpha = \min_S \max_{S' \in \{S, \overline S\}}\frac{\sum_{\{u,v\} \in \partial S} \omega_{uv}(\alpha)}{\sum_{u \in S'} \phi^2(u)}$$ where $$\omega_{uv}(\alpha) = \left(w_{uv}\phi(u)\phi(v)-\sum_{w_{xy}<0}\sum_{\tiny{\substack{p \in P(x,y)\\\{u,v\} \in p}}}\abs*{w_{xy}}\ell_p\alpha_p\phi(x)\phi(y) \right)$$ as in \[thm:positive\]. \[cor:normalized\] Suppose $H=\mathcal{L}+W$ is a real symmetric matrix with eigenvalues $\lambda_0 < \lambda_1 \leq \dots \leq \lambda_{N-1}$ and corresponding ground-state $\phi$ where $\mathcal{L}$ is the normalized Laplacian of $G$. Then, $$\lambda_1 -\lambda_0 \geq (1 + \rho) - \sqrt{(1+\rho)^2 - h^2}$$ for $\rho = \lambda_{N-1}-\lambda_0$ and the distributed Cheeger constant $$h = \sup_{\substack{\alpha>0 \\ \sum_p \alpha_p = 1}} h_\alpha,$$ $$h_\alpha = \min_S \max_{S' \in \{S, \overline S\}}\frac{\sum_{\{u,v\} \in \partial S} \omega_{uv}(\alpha)}{\sum_{u \in S'} d_u\phi^2(u)}$$ where $$\omega_{uv}(\alpha) = \left(w_{uv}\phi(u)\phi(v)-\sum_{w_{xy}<0}\sum_{\tiny{\substack{p \in P(x,y)\\\{u,v\} \in p}}}\abs{w_{xy}}\ell_p\alpha_p \phi(x)\phi(y) \right)$$ as in \[thm:positive\]. The form of \[cor:combinatorial\] and \[cor:normalized\] is not as elegant as \[thm:cheeger\], but we shouldn’t be turned off so easily; each corollary has a pleasing interpretation.
Begin by taking the negative edge weights and redistributing them along positive paths as best you can. The Cheeger constant of the resulting graph is always a lower bound for the gap. Applications {#sec:applications} ============ Some simple reductions ---------------------- The approach of \[sec:neg\_edges\] is more general than one might desire. All we have effectively done in that section is apply Jensen’s inequality. Restricting to unique paths, we have the following corollary to \[thm:positive\]. \[cor:simpler\] Suppose that for the graph $G=(V,E)$, $\gamma(G)$ is the spectral gap of the combinatorial Laplacian of $G$. If $G^+$ is connected and there exists a set of non-overlapping paths $$\mathcal{P} = \{P(u,v) \in E^+ \;\vert\; \{u,v\} \in E^- \text{ and } w(e \in P(u,v)) - \abs{w_{uv}}\abs{P(u,v)} \geq 0 \},$$ then $\gamma(G) \geq \gamma(G\setminus \mathcal{P})$, where all constants are as in \[sec:cheeger\]. Comparison theorems like this are rather easy to derive by choosing the appropriate set of paths through $G$. One can also use this to derive a Cheeger inequality that uses the Cheeger constant $h$ of $G^+$ on both sides. Note that we obtain a tighter bound than that of \[thm:nonstoq\], since $\lambda_0(G) = \lambda_0(G^+) = 0$, so we only need to bound $\lambda_1(G) \geq \lambda_1(G\setminus \mathcal{P})$ and we can apply \[thm:cheeger\] directly. Suppose that, for the graph $G=(V,E)$, $\gamma(G)$ is the spectral gap of the combinatorial Laplacian of $G$. If $G^+$ is connected and there exists a set of non-overlapping paths $$\mathcal{P} = \{P(u,v) \in E^+ \;\vert\; \{u,v\} \in E^- \text{ and } w(e \in P(u,v)) - \abs{w_{uv}}\abs{P(u,v)} \geq \epsilon \},$$ then $$2h \geq \gamma \geq \epsilon(\sqrt{h^2+Q^2}-Q)$$ where all constants are as in \[sec:cheeger\]. This follows readily from \[thm:cheeger\]. First, it is obvious that the degree $Q'$ resulting from routing negative edge weights must satisfy $Q'\geq \epsilon Q$.
Thus, one must only show that $k$, the weighted Cheeger constant of $G^+$ after routing, satisfies $k \geq \epsilon h$. If we let $\omega$ be the edge-weights after appropriately routing the original weight function $w$, $$k = \frac{\sum_{\substack{u \in S \\ v \notin S}}\omega_{uv}}{\sum_{u \in V(S)} q_u} \geq \epsilon\frac{\sum_{\substack{u \in S \\ v \notin S}}w_{uv}}{\sum_{u \in V(S)} q_u} = \epsilon h.$$ Cheeger comparison theorems --------------------------- To obtain a somewhat useful comparison theorem, we require the following characterization of the weighted Cheeger constant $h$, which derives from a lengthy calculation beyond the scope of this paper. For a derivation that generalizes easily, see [@Chung2000]. Specifically, $$\label{eqn:functional_h} h = \inf_{f\not\equiv 0}\sup_C \frac{\sum_{\{u,v\} \in E(G)} w_{uv}\phi_0(u)\phi_0(v)\abs{f(u)-f(v)}}{\sum_u q_u \phi_0^2(u) \abs{f(u)-C}}$$ where $\phi_0$ is the ground-state of the corresponding Hamiltonian (Laplacian). Note that when $H$ is just a Laplacian, or $W=0$, $h$ is the standard Cheeger constant of the corresponding graph. With this, we can prove the following theorem: \[thm:comparison\] Suppose that $g$ is the Cheeger constant of $L_q$ corresponding to $G=(V,E)$ with weight function $w:V\times V \longrightarrow \mathbb{R}$. Further, suppose $h$ is the weighted Cheeger constant of $H=L_q+W$ resulting from imposing the Dirichlet condition as in \[sec:Dirichlet\]. Then, if $H$ has ground-state $\phi$ satisfying the curvature inequality, $$\sum_{v\sim u} w_{uv}\abs{\phi(u)-\phi(v)} \leq \frac{\epsilon}{2} d_u \phi(u)$$ the Cheeger constants $h$ and $g$ satisfy $$g \leq h + \lambda_0(H) + \epsilon Q$$ where $Q$ is the maximum degree of $G$ if $L_q$ is the combinatorial Laplacian and $1$ if $L_q$ is the normalized Laplacian. Note that $g$ corresponds to a case where $\phi_0(u \in V(G)) = 1$ in \[eqn:functional\_h\]. 
Thus, $$g = \inf_{f\not\equiv 0}\sup_C \frac{\sum_{\{u,v\} \in E(G)\cup \partial G} w_{uv}\abs{f(u)-f(v)}}{\sum_u q_u \abs{f(u)-C}}.$$ Now, let $S \subseteq G$ be the subset of $G$ that achieves $h$. We introduce $$f(u)-C = \begin{cases} \phi^2(u) & u \in S \\ - \phi^2(u) & u \notin S, \end{cases}$$ where $\phi$ is the ground-state of $H$. Now, $$g \leq \frac{\displaystyle \sum_{\{u,v\} \in \partial S} w_{uv} \left(\phi^2(u) +\phi^2(v)\right) + \sum_{\{u,v\} \notin \partial S} w_{uv}\abs{\phi^2(u)-\phi^2(v)}}{\sum_u q_u \phi^2(u) }$$ $$= \frac{\displaystyle \sum_{\{u,v\} \in \partial S} w_{uv} \left[\left(\phi(u) -\phi(v)\right)^2+2\phi(u)\phi(v) \right]+ \sum_{\{u,v\} \notin \partial S} w_{uv}\abs{\phi^2(u)-\phi^2(v)}}{\sum_u q_u \phi^2(u) }$$ $$\leq 2h\frac{\sum_{u\in V(S)}q_u\phi^2(u) }{\sum_u q_u \phi^2(u)}+ \frac{\displaystyle \sum_{\{u,v\} \in \partial S} w_{uv} \left(\phi(u) -\phi(v)\right)^2+ \sum_{\{u,v\} \notin \partial S} w_{uv}\abs{\phi^2(u)-\phi^2(v)}}{\sum_u q_u \phi^2(u) }$$ $$\leq h+ \frac{\displaystyle \sum_{\{u,v\} \in E(G)} w_{uv} \left(\phi(u) -\phi(v)\right)^2+ \sum_{\{u,v\} \notin \partial S} w_{uv}\left(\abs{\phi^2(u)-\phi^2(v)}-\left(\phi(u)-\phi(v)\right)^2\right)}{\sum_u q_u \phi^2(u) }$$ $$= h+ \lambda_0(H) + \frac{\displaystyle \sum_{\{u,v\} \notin \partial S} w_{uv}\bigg[2 \min\{\phi(u),\phi(v)\} \;\abs{\phi(u)-\phi(v)}\bigg]}{\sum_u q_u \phi^2(u) }$$ $$\leq h+ \lambda_0(H) + \frac{\displaystyle \sum_{\{u,v\}} w_{uv}\bigg[2 \min\{\phi(u),\phi(v)\} \;\abs{\phi(u)-\phi(v)}\bigg]}{\sum_u q_u \phi^2(u) }$$ $$\leq h+ \lambda_0(H) + 2\frac{\displaystyle \sum_{u}\phi(u)\sum_{v\sim u} w_{uv}\abs{\phi(u)-\phi(v)}}{\sum_u q_u \phi^2(u) }$$ $$\leq h + \lambda_0(H) + \epsilon\frac{\displaystyle \sum_{u} d_u \phi^2(u)}{\sum_u q_u \phi^2(u) }$$ $$\leq h + \lambda_0(H) + \epsilon Q .$$ Thus, $$g\leq h + \lambda_0(H) + \epsilon Q.$$ The above theorem is not as tight as we would ideally like. 
In the future, it would be advantageous to derive a better analogue of the results in [@cheng1997isoperimetric]. Although those results are for the continuous setting, they suggest that one could derive a comparison theorem such that $c h \geq g$ for some constant $c$ that depends only upon the structure of the space. Additionally, it seems likely that in the case that $\phi$ is unimodal, the weighted Cheeger constant is proportional to the Cheeger constant of the host graph. Nonetheless, a proof remains elusive. ### Subgraph Comparison We can prove something a bit better by comparing subgraphs of our Hamiltonian and applying \[lem:potential\]. For any $S$, let $h_S$ be as in \[eqn:local\_Cheeger\]. That is, $$\label{eqn:hs} h_S = \frac{\abs{\partial S}}{\min\{\vol(S),\vol(\overline{S})\}}$$ where all quantities are as in \[sec:cheeger\]. If we again restrict to the case that $H$ is stoquastic, then we can apply the technique of \[lem:potential\] to prove a theorem which makes clear the significance of the Cheeger constant for any particular cut $S \subset G$. In the following theorem, we make use of the Dirichlet representation of \[sec:Dirichlet\]. In other words, $$\lambda_0(H) = \inf_{\substack{f \\ f\vert_{\delta G}=0}} \frac{\sum_{\{u,v\} \in E(G)\cup \partial G}w_{uv}(f(u)-f(v))^2}{\sum_{u \in V(G)} q_u f^2(u)}.$$ Thus, for any subgraph $S \subseteq G$, we can consider $\delta G \subseteq \delta S$. Another way of stating this is that $$\lambda_0^D(H,S) = \inf_{\substack{f \\ f\vert_{\delta S}=0}} \frac{\sum_{\{u,v\} \in E(S)\cup \partial S}w_{uv}(f(u)-f(v))^2}{\sum_{u \in V(S)} q_u f^2(u)}$$ $$= \inf_{\substack{f \\ f\vert_{\delta S}=0}} \frac{\sum_{\{u,v\} \in E(S)\cup (\partial S \setminus \partial G)}w_{uv}(f(u)-f(v))^2+\sum_{u \in V(S)} q_uW_u f^2(u)}{\sum_{u \in V(S)} q_u f^2(u)}.$$ The following theorem compares the Dirichlet eigenvalues of the subgraph $S$ to those of $G$.
Suppose that $H$ is a stoquastic Hamiltonian with ground state $\phi>0$, corresponding to a graph $G$ with subgraph $S \subset G$. Then, $$h_S \geq \lambda_0^D(H,S) - \lambda_0(H).$$ Above, $h_S$ is as in \[eqn:hs\] and $\lambda_0^D(H,S)$ is the Dirichlet eigenvalue of the subgraph $S$ of the host graph $G\subseteq G'$, defined by $$\lambda_0^D(H,S) = \inf_{\substack{f \\ f\vert_{\delta S}=0}} \frac{\sum_{\{u,v\} \in E(S)\cup \partial S}w_{uv}(f(u)-f(v))^2}{\sum_{u \in V(S)} q_u f^2(u)}.$$ We proceed as in \[lem:potential\], using the definition of the Dirichlet eigenvalues of a subgraph given above. Without loss of generality, assume that $\vol(S) \leq \vol(\overline{S})$. Now, $$\sum_{u \in V(S)}(\lambda_0(H)-W_u)q_u\phi^2(u) = \sum_{u\in S}\sum_{\{v,u\}\in E(G)}w_{uv}(\phi(u)-\phi(v))\phi(u)$$ $$\lambda_0(H) \sum_{u \in V(S)}q_u\phi^2(u) = \sum_{\{u,v\} \in E(S)}w_{uv}(\phi(u)-\phi(v))^2 + \sum_{u \in V(S)}q_u W_u \phi^2(u) + \sum_{\{v,u\} \in \partial S \setminus \partial G}w_{uv}\left(\phi(u)-\phi(v)\right)\phi(u)$$ $$\geq \lambda_0^D(H,S) \sum_{u \in V(S)}q_u \phi^2(u) - \sum_{\{v,u\} \in \partial S} w_{uv} \phi(v)\phi(u)$$ $$= \left(\lambda_0^{D}(H,S) - h_S\right)\sum_{u \in V(S)}q_u\phi^2(u).$$ Since we know that $\sum_{u \in V(S)}q_u \phi^2(u) > 0$, $$h_S \geq \lambda_0^D(H,S) - \lambda_0(H).$$ In other words, whenever $h_S$ is exponentially small, there exists a Dirichlet eigenfunction for some subgraph that approximates the ground-state eigenvalue of $H$. This is equivalent to saying that there exists some block of $H$ that has approximately the same ground-state eigenvalue as $H$ itself. Physical implications {#sec:discussion} ===================== These results lead to a very concrete understanding of the nature of the spectral gap in most quantum systems. In a very strong sense, the presence of a spectral gap implies that the ground-state wave function *must not* contain bottlenecks.
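The contrapositive is easy to see numerically: a potential barrier bottlenecks the ground state, the weighted Cheeger constant collapses, and $\gamma \leq 2h$ forces the gap down with it. A minimal sketch, assuming `numpy` is available; the path graph and barrier heights are illustrative:

```python
# A barrier in W bottlenecks the ground state and collapses the gap.
# Path graph with unit weights; barrier heights are illustrative.
import numpy as np

def gap_and_h(barrier, n=7):
    W = np.zeros(n)
    W[n // 2] = barrier                # barrier at the middle vertex
    L = np.zeros((n, n))
    for u in range(n - 1):
        L[u, u] += 1.0; L[u + 1, u + 1] += 1.0
        L[u, u + 1] -= 1.0; L[u + 1, u] -= 1.0
    lam, vecs = np.linalg.eigh(L + np.diag(W))
    phi = np.abs(vecs[:, 0])
    vol = lambda S: float(sum(phi[u] ** 2 for u in S))
    # Interval cuts S = {0, ..., k} upper-bound the true Cheeger constant,
    # which is all the assertions below require.
    h = min(
        phi[k] * phi[k + 1] / min(vol(range(k + 1)), vol(range(k + 1, n)))
        for k in range(n - 1)
    )
    return lam[1] - lam[0], h

gamma_free, h_free = gap_and_h(0.0)
gamma_barrier, h_barrier = gap_and_h(50.0)
assert gamma_barrier <= 2 * h_barrier + 1e-9   # the upper bound still holds
assert h_barrier < h_free                      # the barrier creates a bottleneck
assert gamma_barrier < gamma_free              # ... and the gap closes with it
```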
Although this may be unsurprising, all prior results fail to confirm the intuition when $\norm{W}$ is sufficiently large. In this paper, I have eliminated the ability for physics to behave unexpectedly in such situations. That is, we now know that gapped Hamiltonians must not contain strong bottlenecks in their ground-states, and we now know the appropriate scaling of this claim. Equivalently, the presence of a bottleneck guarantees a small spectral gap. This conceptual point does not yet hold in reverse. That is, we have not shown that a small gap implies a strong bottleneck. It is possible that there exist Hamiltonians with ground-states without bottlenecks that nonetheless have small spectral gaps. This particular point may be of some physical interest and worth exploring; however, in the context that inspired this work, it is somewhat less interesting. Probably the major advantage of this characterization is that we can now definitively say that, for the standard adiabatic theorem to guarantee an efficient adiabatic process, at no point in the evolution must $H$ have a bottlenecked ground-state. Some results suggest that, at least with existing Monte Carlo techniques, states without bottlenecks can still be hard to simulate [@Hastings; @jarret2016adiabatic; @bringewatt2018diffusion]. Nonetheless, a guaranteed lack of bottlenecks reaffirms my agnosticism about whether one might be able to classically and efficiently sample from ground-state distributions arising from large-gap stoquastic Hamiltonians. Shifting dialogue away from spectral gaps and towards bottlenecked distributions, as also suggested in [@Jarret2014a; @crosson2017quantum], will hopefully shed light on this question one way or the other. The Bashful Adiabatic Algorithm =============================== In this section, I show how one might be able to exploit the weighted Cheeger constant to improve quantum adiabatic algorithms.
A quantum process solves the Schrödinger equation $$\begin{cases} i \frac{\partial \phi(t)}{\partial t} = H(t/T) \phi(t) \\ \phi(0) = \phi_0(0) \end{cases}$$ where $\phi_0(t)$ is the ground-state of $H(t/T)$. An adiabatic algorithm seeks to produce the distribution $\phi(T) \approx \phi_0(T)$, and the adiabatic theorem guarantees that this can be done provided that a quantity like $\gamma^{-2}(H(t/T))\norm{\frac{dH(t/T)}{dt}}$ is never too large [@Jansen2006]. Abusively, for this section, we call the Hamiltonian $H(t/T)$ the “schedule”. At least in the case of real Hamiltonians, our inequality opens up the possibility of adaptive adiabatic algorithms, or those where we adjust the rate of variation of $H$ in response to the size of the gap. In many cases, $h$ reduces the problem of bounding the spectral gap to determining information about the ground-state. This allows one to stop an evolution early, say at $t < T$ and bound the gap at that point. That is, suppose we know $\phi(t) \approx \phi_0(t)$ for some $t$. Then, if we can use $\phi(t)$ to approximate $h$, we can assume that we know $\gamma(H(t/T))$. One can use Weyl’s inequality or another perturbative argument to then guarantee that $\gamma(H(\tau/T)) \geq c$ for some choice of $c$ and $\tau > t$. Thus, we can restart the adiabatic evolution from $t=0$ and choose an appropriate $dH(t/T)/dt$ such that $\phi(\tau) \approx \phi_0(\tau)$. Repeating this until $\tau = T$ would give us the entire adiabatic path with, potentially, only polynomial overhead. This algorithm, which I am calling the Bashful Adiabatic Algorithm (BAA), is sketched below:[^7] \[alg:find\_eta\] 1. Assume $H_\tau(1) = H_0(1)$ for all choices of $\tau$. 2. Choose a schedule $H_{\tau}$ with $\min_{t < \tau}\gamma(H_{\tau}(t/T)) > \gamma_\min$. 3. Prepare the state $\phi_0(0)$ of $H_0(0)$. 4. Generate $N$ copies of $\phi(\tau)$ from $\phi(0)$ using the schedule $H_\tau(t/T)$. 5. Sample $\{\phi(\tau)\}$ and (if possible) approximate the weighted Cheeger constant of $H_{\tau}(\tau)$. 6. Use the result to bound $\min_{t <\tau + \delta \tau}\gamma(H_{\tau+\delta \tau}(t/T))$ for some new schedule $H_{\tau + \delta \tau}(t/T)$. 7. Set $\tau \gets \tau + \delta \tau$ and, while $\tau < T$, repeat from step 4. 8. Generate $\phi(T)$ using the schedule $H_T(t/T)$. This algorithm would run in time $\bigO{(T/\delta \tau)^2(X+N\delta \tau)}$, where $\delta \tau$ is the smallest timestep taken, $N$ is the number of copies needed, and $X$ the longest time it takes to compute $h$. The reader should note that even if $\delta \tau$ must get very small (because $\gamma$ gets very small), so long as it is only small for a sufficiently short period of time, we should be able to locally decrease $\norm{\frac{dH}{dt}}$ and obtain much tighter scaling than that proposed above. Furthermore, we can ensure that our $\norm{\frac{d H}{dt}}$ is taken as large as possible while remaining consistent with the adiabatic theorem, or that our path (through time) is chosen optimally. The ability to compute $h$ may allow one to predict when an adiabatic path needs to be changed, as suggested in [@crosson2014different]. Even given the ability to sample $\phi_0$, we would still require an efficient method for approximating $h$. Although I do not expect this to be possible for an arbitrary graph and $\phi_0$, this may indeed be possible for some classes of graphs and reasonable assumptions about $\phi_0$. It is likely that a statement like \[lem:potential\] will be useful in this regard. Additionally, while there will clearly be distributions where an approximation strategy for $h$ should fail, it is quite possible that these same instances correspond to otherwise intractable optimization problems. As an example, one can think of the graph $G=(V,E)$ with $V = \{u_i \; \vert \; i \in \intrange{1}{n} \}$ and $E = \{\{u_i,u_{i+1} \} \; \vert \; i \in \intrange{1}{n-1}\}$.
Suppose that for some $j \notin \{i,i+1\}$, the Hamiltonian has ground-state $$\phi_0(u_i,\tau) = \begin{cases} c_1 & i = 1 \\ c_j & i = j \\ C & i \notin \{1,j\} . \end{cases}$$ Choosing $C \sim e^{-n}$, if $c_1 > c_j \sim \mathrm{poly}(n)$, then there exists a cut such that $h$ is exponentially small in $n$. Using $L$ as the graph Laplacian for this graph, this is achieved by, for example, the ground-state of $H=L+W$ with diagonal matrix $W \equiv \diag{(W_u)_{u \in V}}$ $$W_{u_i} = \begin{cases} c x^{-1} & i=1\\ x c^{-1} & i = 2\\ c^{-1} & i = \abs{V}-1\\ c & i = \abs{V}\\ 1 & \text{otherwise} \end{cases}$$ and an appropriate choice of $c$ and $x$. (Take $c$ to be small and choose $x$ to produce the desired ratio of $c_1/c_{\abs{V}}$.) Distinguishing this from the case where $c_j \sim e^{-n}$, which implies that $h$ is only polynomially small in $n$ (see [@Jarret2014a]), seems to be close to efficiently solving unstructured search. Thus, if one were to investigate an algorithm for approximating $h$, one might need to consider a divide-and-conquer approach that considers separate adiabatic processes constrained to different subgraphs for sufficiently concentrated $\phi_0$. Another possibility would be to attempt to adapt existing algorithms for approximating the Cheeger constant in large networks [@spielman2004nearly]. Exploring this question is well beyond the scope of the present work, but would nonetheless be very interesting. Open questions and future work ============================== These inequalities lead to quite a few open questions. - First and foremost, I think, is the question of whether one can ever efficiently approximate the weighted Cheeger constant and what information/constraints would be necessary to do so. The standard combinatorial Cheeger constant has been the object of extensive study and we know determining it to be NP-hard [@matula1990sparsest]. 
Nonetheless, one can efficiently approximate the Cheeger constant; however, the scaling of such estimates is probably insufficient for quantum systems. Additionally, given that the weighted Cheeger constant depends on more information than the combinatorial Cheeger constant, estimating the weighted Cheeger constant might be considerably harder. Still, it is possible that in sparse graphs, such as those that would naturally arise from physical systems of interest, this quantity might not be too difficult to approximate, especially if one is willing to accept a poor estimate. If one can approximate $h$ efficiently enough in a large enough number of cases, one might potentially use this information to choose an adiabatic path for adiabatic quantum computation as discussed in the previous section [@crosson2014different]. - Also, because this work demonstrates the deficiencies in gap analysis, it would be interesting if one could prove a version of the adiabatic theorem specific to bottlenecked states. In particular, an adiabatic theorem that stresses Dirichlet eigenfunctions would probably be able to capture the “relevant” portion of the wavefunction. One can imagine a situation where the solution to some optimization problem is in a subgraph $S\subseteq G$ in which there is no bottleneck and $\phi$ is large, and yet $\overline{S}$ contains a strong bottleneck somewhere. It would be interesting to see if such situations arise frequently, infrequently, or never. I suspect they arise frequently, and thus deriving adiabatic theorems that restrict to the subgraph $S$ that we wish to explore would have a hope of providing much better runtime bounds. - Another question is whether one can derive useful comparison theorems between the gap of the host graph and the gap of the Hamiltonian, as alluded to in \[sec:applications\]. Desirable forms for comparison theorems can be found in many places, such as [@Chung2000; @Chung].
(The interested reader should beware, however, as [@Chung2000 Theorem 3] is incorrect due to a sign error, and the error is carried through to two of the main corollaries of the paper. Theorem 4 of that paper also appears to be incorrect, and the best one can hope for is a statement like the present \[thm:comparison\].) It seems likely that, at least for unimodal ground-states on strongly convex subgraphs of homogeneous graphs (see [@Chung]), one should be able to show that the gap of the Hamiltonian scales with the gap of the graph. Additionally, [@Jarret2014a] shows that a condition like log-concavity is not enough to guarantee unimodality. In that paper, a seemingly bimodal distribution can satisfy log-concavity due to the nature of the boundary, whereas the continuous definition of log-concavity would imply unimodality. - Finally, one might consider what useful information the frustration index can provide about the spectral gap. In [@Martin2017], the author derives isoperimetric inequalities that utilize the frustration index. It is entirely possible that a suitably defined index can yield tighter bounds than those derived through our reductions here. It also seems likely that this concept might be a key component to obtaining gap lower bounds in the general Hermitian case. Acknowledgements ================ The idea for using $h$ to adjust the adiabatic path was arrived at during exchanges with Antonio Martinez. Kianna Wan pointed out many small errors that would have otherwise gone unnoticed, helping me greatly improve my presentation. I thank Elizabeth Crosson, Stephen Jordan, Tsz Chiu Kwok, Brad Lackey, Lap Chi Lau, and Adrian Lupascu for helpful discussions. Proof of \[thm:nonstoq\] {#ap:proof} ======================== First, we note that in the proof of \[thm:cheeger\], we had the following corollary.
\[cor:nonstoq\] For a graph $G=(V,E)$, suppose $$\gamma = \inf_{\tiny{g \perp q\phi^2}} \frac{\sum_{\{u,v\} \in E(G^+)}\widetilde\omega_{uv}[g(u)-g(v)]^2}{\sum_u q_u g^2(u)\phi^2(u)}.$$ Then, for $f$ achieving the infimum above and $$g(u) = \begin{cases} f(u) & f(u)\geq 0 \\ 0 & \text{otherwise}, \end{cases}$$ we have $${\gamma \geq \frac{\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2}{\displaystyle\sum_{u}q_u g^2(u)\phi_0^2(u)} \geq \frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right) \left(\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)+g(v)]^2\right)} }.$$ Now, we can prove \[thm:nonstoq\] by adapting the proof of \[thm:cheeger\]. First, we note that by \[cor:nonstoq\], $${\gamma \geq \Phi} = \frac{\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2}{\displaystyle\sum_{u}q_u g^2(u)\phi_0^2(u)} \geq \frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right) \left(\displaystyle\sum_{\{v,u\}}\omega_{uv}[g(u)+g(v)]^2\right)}$$$$=\frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right) \left(\displaystyle 2\sum_{\{v,u\}}\omega_{uv}[g^2(u)+g^2(v)] -\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2\right)}$$$$= \frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right) \left(\displaystyle 2\sum_{u}g^2(u) \sum_{v\sim u}\omega_{uv} -\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2\right)}$$$$\geq\frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right) \left(\displaystyle 2\sum_{u}g^2(u) \sum_{v\sim u}\phi(u)\phi(v)w_{uv} -\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2\right)} $$$$\geq\frac{\left(\displaystyle\sum_{\{v,u\}} 
\omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right) \left(\displaystyle 2\sum_{u}q_u f^2(u)\phi^2(u)\left(W_u + \frac{d_u}{q_u}- \lambda_0\right) -\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^2\right)} \geq\frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right)^2 \left(\displaystyle 2Q + 2 \left(\lambda_{\abs{V}-1}-\lambda_0\right) - \Phi \right)} \geq\frac{\left(\displaystyle\sum_{\{v,u\}} \omega_{uv}\abs{g^2(u)-g^2(v)}\right)^2}{\left(\displaystyle\sum_{u\in V(S)}q_uf^2(u)\phi_0^2(u)\right)^2 \left(\displaystyle 2Q + 2\rho - \Phi \right)}.$$ The remainder of this proof follows identically the remaining portion of the proof of \[thm:cheeger\]. [10]{} . Quantum Optimization Workshop, Fields Institute, Toronto, ON, Canada, 8 2014. Abbas Al-Shimary and Jiannis K Pachos. Energy gaps of hamiltonians from graph laplacians. , 2010. Tameem Albash and Daniel A. Lidar. . , 90(1), 11 2018. Ben Andrews and Julie Clutterbuck. Proof of the fundamental gap conjecture. , 24(3):899–916, 2011. Fatihcan M. Atay and Shiping Liu. . , 2014. Fatihcan M Atay and Hande Tuncel. On the spectrum of the normalized laplacian for signed graphs: Interlacing, contraction, and replication. , 442:165–177, 2014. F Barahona. . , 15(10):3241, 1982. Frank Bauer. . , 436(11):4193–4222, 2012. Sergey Bravyi, David P Divincenzo, Roberto Oliveira, and Barbara M Terhal. The complexity of stoquastic local hamiltonian problems. , 8(5):361–385, 2008. Jacob Bringewatt, William Dorland, Stephen P Jordan, and Alan Mink. Diffusion monte carlo approach versus adiabatic computation for local hamiltonians. , 97(2):022323, 2018. T. H Hubert Chan, Zhihao Gavin Tang, and Chenzi Zhang. . , 9198:30–41, 2015. Shiu-Yuen Cheng and Kevin Oden. Isoperimetric inequalities and the gap between the first and second eigenvalues of an euclidean domain. , 7(2):217–239, 1997. F R K Chung. 
, volume 92 of [*CBMS Regional Conference Series in Mathematics*]{}. American Mathematical Society, Providence, Rhode Island, 12 1997. Fan Chung. . , 9(1):1–19, 2005. Fan R K Chung and Kevin Oden. . , 192(2):257–273, 2000. Bertrand Cloez and Marie-No[é]{}mie Thai. Fleming-viot processes: two explicit examples. 2016. Bertrand Cloez and Marie-No[é]{}mie Thai. Quantitative results for the fleming–viot particle system and quasi-stationary distributions in discrete space. , 126(3):680–702, 2016. Pierre Collet, Servet Mart[í]{}nez, and Jaime San Mart[í]{}n. . Springer Science & Business Media, 2012. Pierre Collet, Servet Mart[í]{}nez, and Jaime San Mart[í]{}n. Markov chains on finite spaces. In [*Quasi-Stationary Distributions*]{}, pages 31–44. Springer, 2013. Elizabeth Crosson and John Bowen. Quantum ground state isoperimetric inequalities for the energy spectrum of local hamiltonians. 2017. Elizabeth Crosson, Edward Farhi, Cedric Yen-Yu Lin, Han-Hsuan Lin, and Peter Shor. Different strategies for optimization using the quantum adiabatic algorithm. , 2014. Elizabeth Crosson and Aram W. Harrow. . In [*2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)*]{}, pages 714–723. IEEE, 8 2016. Persi Diaconis and Daniel Stroock. Geometric bounds for eigenvalues of markov chains. , pages 36–61, 1991. M R Garey, D S Johnson, and L Stockmeyer. . , 1(3):237–267, 1976. Frank Harary and Jerald A Kabell. . , 1(1):131–136, 1980. M. B. Hastings. Obstructions to classically simulating the quantum adiabatic algorithm. , 13(11/12):1038–1076, 2013. With appendix by M. H. Freedman. Sabine Jansen, Mary-Beth Ruskai, and Ruedi Seiler. . , 102111(2007):15, 2006. M Jarret, S P Jordan, and B Lackey. . , 2016. Michael Jarret and Stephen P Jordan. Adiabatic optimization without local minima. , 14(Quantum Information & Computation), 2015. Michael Jarret, Stephen P Jordan, and Brad Lackey. Adiabatic optimization versus diffusion monte carlo methods. , 94(4):042318, 2016. 
Michael Jarret and Brad Lackey. Substochastic monte carlo algorithms. 2017. Volker Kaibel. On the expansion of graphs of 0/1-polytopes. In [*The Sharpest Cut: The Impact of Manfred Padberg and His Work*]{}, pages 199–216. SIAM, 2004. Ravi Kannan, Santosh Vempala, and Adrian Vetta. On clusterings: Good, bad and spectral. , 51(3):497–515, 2004. Carsten Lange, Shiping Liu, Norbert Peyerimhoff, and Olaf Post. . , 54(4):4165–4196, 12 2015. Tom Leighton and Satish Rao. An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms. In [*Foundations of Computer Science, 1988., 29th Annual Symposium on*]{}, pages 422–431. IEEE, 1988. Florian Martin. . , 217:276–285, 1 2017. Milad Marvian, Daniel A. Lidar, and Itay Hen. . pages 1–12, 2018. David W Matula and Farhad Shahrokhi. Sparsest cuts and bottlenecks in graphs. , 27(1-2):113–123, 1990. J[é]{}r[é]{}mie Roland and Nicolas J Cerf. Quantum search by local adiabatic evolution. , 65(4):042308, 2002. Subir Sachdev. . Wiley Online Library, 2007. David Sherrington and Scott Kirkpatrick. Solvable model of a spin-glass. , 35:1792–1796, 12 1975. Alistair Sinclair. . Springer Science & Business Media, 2012. Daniel A Spielman and Shang-Hua Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In [*Proceedings of the thirty-sixth annual ACM symposium on Theory of computing*]{}, pages 81–90. ACM, 2004. Matthias Troyer and Uwe-Jens Wiese. Computational complexity and fundamental limitations to fermionic quantum monte carlo simulations. , 94(17):170201, 2005. Shing-Tung Yau. An estimate of the gap of the first two eigenvalues in the [S]{}chrödinger operator. 2009. [^1]: mjarret@pitp.ca [^2]: I would conjecture that, since the problem of determining a graph’s frustration index is NP-hard [@sher; @barahona], actually determining this unitary should be NP-hard.
That one can efficiently detect whether a signed graph is balanced implies, with only slight modification, that one can efficiently detect whether a Hamiltonian is stoquastic [@HARARY1980131] in this simple case. Finding a unitary which makes a general Hamiltonian stoquastic is NP-complete [@Marvian2018]. [^3]: Stoquastic Hamiltonians are typically one in a long list of names for matrices with nonpositive (or nonnegative) off-diagonal terms. Nonetheless, I would be incredibly surprised if the extended class here has escaped a pre-existing label. [^4]: The signed Laplacian typically has the degree of vertex $u$ equal to the absolute value of the sum of the edge-weights incident on the vertex. Because we are about to allow for an arbitrary diagonal perturbation, we will also be able to recover the standard combinatorial signed Laplacian by taking $W_u \mapsto W_u + \sum_v ( \abs{w_{uv}} - w_{uv})$. [^5]: Although directed graphs do not correspond to the physical systems that we are presently interested in and are thus omitted, extending these results to such a setting is still well-motivated. For some results on directed graphs, see, e.g. [@Chung2005; @Bauer2012; @Chan2015]. [^6]: In an earlier version of this paper, I did not pursue a variational approach, joking that I was not a masochist. However, masochism seems inevitable, as the previous approach was inconsistent with \[eqn:rgap\]. [^7]: BAA reminds me of its sheepishness.
--- abstract: 'A brief review on the physics beyond the Standard Model is given, as was presented in the High Energy Particle Physics workshop on the $12^{th}$ of February 2015 at the iThemba North Labs. Particular emphasis is given to the Minimal Supersymmetric Standard Model, with mention of extra-dimensional theories also.' address: 'National Institute for Theoretical Physics, School of Physics and Mandelstam Institute for Theoretical Physics, University of the Witwatersrand, Johannesburg, Wits 2050, South Africa' author: - 'Alan S. Cornell' title: Some theories beyond the Standard Model --- WITS-MITP-008 February 2015 Introduction ============ The Standard Model (SM) of elementary particle physics, developed in the 1970’s, describes the behaviour of all known elementary particles with impressive accuracy, and it all traces back to the relatively simple mathematical laws within the formalism of quantum field theory. As such, symmetry groups play a particularly important role in this context, where our three generations of quarks and leptons have their interactions mediated by the gauge bosons. However, the most mysterious field within the SM is the Higgs field, which forms a condensate filling the whole Universe. The motion of all particles is influenced by this condensate, which is how the particles gain their mass. Recall that the Higgs boson, a spin-0 particle produced from excitations of the Higgs field, was recently discovered, completing the particle spectrum of the SM, but that doesn’t mean the story is now over. The study of the Higgs boson’s couplings is now extremely important, and is connected to beyond the SM (BSM) physics. 
![*The one-loop contributions to the Higgs mass, where the left diagram contains the fermion loops, the middle contributions are from the gauge bosons, and the right from Higgs self-interactions.*[]{data-label="fig:1"}](Higgs-1.pdf "fig:"){width="5cm"} ![*The one-loop contributions to the Higgs mass, where the left diagram contains the fermion loops, the middle contributions are from the gauge bosons, and the right from Higgs self-interactions.*[]{data-label="fig:1"}](Higgs-2.pdf "fig:"){width="4cm"} ![*The one-loop contributions to the Higgs mass, where the left diagram contains the fermion loops, the middle contributions are from the gauge bosons, and the right from Higgs self-interactions.*[]{data-label="fig:1"}](Higgs-3.pdf "fig:"){width="4cm"} The structure of the radiative corrections to the Higgs boson is quite different from those of fermions and gauge bosons. To one loop, the renormalised Higgs mass is: $$\begin{aligned} m^2_h &=& m^2_{h0} - \frac{3}{8 \pi^2} y_t^2 \Lambda^2 + \frac{1}{16 \pi^2} g^2 \Lambda^2 + \frac{1}{16 \pi^2} \lambda^2 \Lambda^2 \; , \label{eqn:1}\end{aligned}$$ where $\Lambda$ is the scale at which the loop integrals are cut off. As such, the corrections to the Higgs mass, which are referred to as “quadratic divergences", are proportional to $\Lambda^2$. If $\Lambda$ is high, the correction is extremely large with respect to the on-shell Higgs mass. This is the “fine-tuning problem". Note that no such quadratic divergences arise in the radiative corrections to fermion or gauge boson masses, these being protected by chiral and gauge symmetries. If there is an intermediate scale at which new physics manifests, this problem is resolved and the radiative corrections from any new particles ameliorate the issues arising from the SM. Note that merely adding new particles that couple to the Higgs boson, to cancel the one-loop quadratic corrections, is insufficient, as such chance cancellations do not guarantee cancellations to all orders in a perturbative theory.
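To get a sense of the scales involved, one can evaluate the top-loop term of Eq. (\[eqn:1\]) numerically. This is a rough sketch only: $y_t \simeq 1$ is assumed, and the two cut-off values are merely illustrative.

```python
import math

def top_loop_correction(cutoff_gev, y_t=1.0):
    """Top-quark loop contribution to m_h^2 from Eq. (1): -3/(8 pi^2) y_t^2 Lambda^2."""
    return -3.0 / (8.0 * math.pi**2) * y_t**2 * cutoff_gev**2

m_h = 125.0  # observed Higgs mass in GeV

for cutoff in (1.0e4, 1.22e19):  # 10 TeV and (roughly) the Planck scale
    delta = top_loop_correction(cutoff)
    # |delta m_h^2| / m_h^2 measures how finely m_h0^2 must be tuned
    print(f"Lambda = {cutoff:.2e} GeV: |delta|/m_h^2 ~ {abs(delta) / m_h**2:.2e}")
```

Already at $\Lambda = 10$ TeV the correction exceeds the observed $m_h^2$ by roughly two orders of magnitude, and at the Planck scale the bare mass must be tuned to one part in $\sim 10^{32}$.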
To guarantee cancellations of these quadratic divergences we need a new symmetry. Some of the established approaches to handle this, which shall be reviewed here, are: - [**Supersymmetry**]{} (SUSY): a symmetry between bosons and fermions, where the chiral symmetry of fermions controls the Higgs mass divergences by cancelling the diagrams containing SM fields with diagrams featuring their sparticles (their supersymmetric partners). - [**Extra-dimensions**]{}: whilst we seem to reside in a four-dimensional spacetime, we may actually reside in five or more spacetime dimensions. The additional or extra-dimensions are somehow “inaccessible" to us (perhaps being compactified in some way), meaning that the true Planck mass could be of comparable scale to the electroweak scale. Current experimental measurements greatly constrain each of these models, while the SM remains our most successful model to date. However, we may find deviations in the future if we focus on the few fundamental parameters present in the SM, such as the gauge couplings and the vacuum expectation value (vev) of the Higgs; as the measurements of these parameters become more precise, greater constraints can be placed on any BSM theory. Returning to the quadratic divergences above, the divergence in Eq. (\[eqn:1\]) is estimated by $\Lambda$, a cut-off we introduced into the loop integrals. As this momentum cut-off is independent of the external momenta, the divergences are regularisation-dependent objects. For dimensional regularisation these divergences are then trivially zero. As such, is the fine-tuning problem something that should be taken seriously? Supersymmetry ============= This fine-tuning problem, as presented above with a momentum cut-off, is valid when at some higher scale a theory with a larger symmetry exists. For the case of SUSY models our regularisation scheme must respect the SUSY, and so we cannot remove all quadratic divergences.
Therefore, the radiative corrections to the Higgs sector are proportional to the mass scale of the sparticles (the SUSY scale) under a proper regularisation. Taking the sparticle masses as being much greater than the SM particle masses, the theory appears as the SM at low energies with $\Lambda$ being at the SUSY scale. This is an example where fine-tuning arguments hold for theories with an extended scale above which a new symmetry arises. SUSY is a symmetry which exchanges fermions for bosons, and vice-versa. The Minimal Supersymmetric SM (MSSM) extends the SM so that it possesses a SUSY in the limit where particle masses are negligible, and it is presumed to be an effective theory of some fully supersymmetric model. As the full theory undergoes a spontaneous SUSY breaking, the sparticles obtain masses that are much greater than those of their partner SM particles. The fermion’s superpartner is a spin-0 particle (a sfermion), whilst a gauge boson’s superpartner has spin-1/2 (a gaugino). For the Higgs boson, its superpartner has spin-1/2 also (a higgsino). In Tab. \[tab:1\] the particle content of the MSSM is listed, where the superpartners have the same charges as their SM counter-parts. This is due to the generators of the SUSY transformation commuting with the $SU(3)\times SU(2)\times U(1)$ transformations of the SM. Note also that in the MSSM we have two Higgs doublets and two higgsinos, as chiral fermions with charge $(1,2)_{\pm 1/2}$ are constrained by anomaly cancellation.
Names Spin $P_R$ Gauge Eigenstates Mass Eigenstates -------------- ------ ------- ---------------------------------------------------------------- ------------------------------------------------------------ Higgs bosons 0 +1 $H_u^0 \; H_d^0 \; H_u^+ \; H_d^-$ $h^0 \; H^0 \; A^0 \; H^\pm$ $\tilde{u}_L \; \tilde{u}_R \; \tilde{d}_L \; \tilde{d}_R$ (same) squarks 0 -1 $\tilde{s}_L \; \tilde{s}_R \; \tilde{c}_L \; \tilde{c}_R$ (same) $\tilde{t}_L \; \tilde{t}_R \; \tilde{b}_L \; \tilde{b}_R$ $\tilde{t}_1 \; \tilde{t}_2 \; \tilde{b}_1 \; \tilde{b}_2$ $\tilde{e}_L \; \tilde{e}_R \; \tilde{\nu}_e$ (same) sleptons 0 -1 $\tilde{\mu}_L \; \tilde{\mu}_R \; \tilde{\nu}_\mu$ (same) $\tilde{\tau}_L \; \tilde{\tau}_R \; \tilde{\nu}_\tau$ $\tilde{\tau}_1 \; \tilde{\tau}_2 \; \tilde{\nu}_\tau$ neutralinos 1/2 -1 $\tilde{B}^0 \; \tilde{W}^0 \; \tilde{H}_u^0 \; \tilde{H}_d^0$ $\tilde{N}_1 \; \tilde{N}_2 \; \tilde{N}_3 \; \tilde{N}_4$ charginos 1/2 -1 $\tilde{W}^\pm \; \tilde{H}_u^+ \; \tilde{H}_d^-$ $\tilde{C}_1^\pm \; \tilde{C}_2^\pm$ gluino 1/2 -1 $\tilde{g}$ (same) goldstino 1/2 -1 (same) (gravitino) 3/2 -1 $\tilde{G}$ (same) : \[tab:1\]The additional particle content of the MSSM. From Tab. \[tab:1\] we can see that the particle content has doubled from the SM. However, the SUSY does not determine the masses of the sparticles, even though all dimensionless couplings with these particles (such as Yukawa and four point couplings) are. The relationships between couplings can be understood only from the full supersymmetric theory. Let us consider some features of supersymmetric models: - As the quadratic divergence arising from the top loop is cancelled by the stop loop (from a Higgs-Higgs-stop-stop interaction), given that both diagrams are proportional to $y_t^2$, etc., there are no quadratic divergences in this theory. 
However, as scalar particles are in the same multiplet as fermions (fermion mass being logarithmically divergent), the Higgs quartic coupling is proportional to the square of the higgsino-gaugino loops. Therefore fine-tuning within the Higgs sector is greatly reduced. - As the four-point coupling of the Higgs is now a gauge coupling, it is always positive at the Planck scale. Please see the work, including proceedings, of Abdalgabar [*et al.*]{} [@Abdalgabar:2014bfa] and references therein. - From gauge invariance there are no baryon and lepton number violating processes in the SM. This is not the case in SUSY models as higgsinos carry the same quantum numbers as leptons. - Gauge coupling: In models such as the MSSM the number of particles can be doubled from the SM, with the running of the gauge couplings being modified above the sparticle mass scale. As such the gauge couplings unify much better than in the SM case at the GUT scale (see Fig. \[fig:2\]). This means that a supersymmetric GUT agrees with current experimental results, even though fine-tuning issues may persist, as the Higgs sector may violate the GUT symmetry. ![*The one loop renormalisation group evolution of the gauge couplings in the SM (dashed lines) and MSSM (solid lines).*[]{data-label="fig:2"}](Gauge_running.pdf){width="9cm"} Origins of SUSY breaking ------------------------ As a final discussion point on SUSY, models like the MSSM are incomplete theories as the necessary SUSY breaking mechanism comes from elsewhere. Note that the SUSY-breaking set-up has, in general, some hidden sector containing fields which spontaneously break the SUSY. The hidden sector does not couple directly to the visible sector but does so through some messenger sector, where the messenger particles have some mass scale. However, this messenger sector is already severely constrained for the MSSM by such measurements as flavour changing neutral currents ($K^0 - \bar{K}^0$ mixing for example).
Even so, obtaining information on the hidden sector remains difficult. In some cases the gravitino (the graviton’s superpartner), with mass $m_{3/2} = F_0/M_{Pl}$, where $F_0$ is the total energy of SUSY breaking, could be the lightest supersymmetric particle (LSP). Often the next-to-LSP (NLSP) would be long-lived in such cases. As such the NLSP may be detectable at colliders, with its life-time giving information on the hidden sector responsible for SUSY breaking. Note that when the gravitino is not the LSP, the gravitino will be sufficiently long-lived to affect big bang nucleosynthesis. The mechanisms of the messenger sector set the scale of the sparticle masses; the on-shell sparticle masses are then determined by running the renormalisation group equations down to the lower energy scale. A more complete discussion can be found in Ref. [@Hall:1983iz]. Extra-dimensions ================ Among the other possible models which address the fine-tuning problem (and which we shall review) are those with extra-dimensions. These models address this problem by noticing that the observed Planck scale is an effective one, and that the true (higher-dimensional) Planck scale can be smaller, even of the order of the electroweak scale. As such the parameters in the Higgs sector are of the order of the true Planck scale and not the effective one. Furthermore, in some extra-dimensional models the Higgs may be the fifth-dimensional component of the gauge field, and as such its parameters are protected by the gauge symmetry (no quadratic divergences). To give an overview of some of the ideas used in these models, let us consider the case where we have additional spatial dimensions that are compactified with radius $R$ [@Antoniadis:1998ig].
If we have one flat extra-dimension, then fields propagating in this extra-dimension must obey a periodic boundary condition, for example $$\begin{aligned} \phi (x, y) & = & \phi (x, y+R) \; ,\end{aligned}$$ with $x$ being our usual four spacetime dimensions, and $y$ our extra-dimension. With such a boundary condition wavefunctions can be written as $$\begin{aligned} \psi (x, y) & = & \psi'(x) \mathrm{exp}(ip_5y) \; ,\end{aligned}$$ where $p_5$ is the fifth component of our 5-momenta and satisfies $p_5 R = 2\pi n$ for an integer $n$. The equation of motion for a particle moving in the additional dimension becomes $$\begin{aligned} E_n^2 & = & p^2 + p_5^2 = p^2 + (2\pi)^2 \left( \frac{n}{R} \right)^2 \; .\end{aligned}$$ That is, we have an infinite tower of massive particles in the effective four-dimensional theory, with squared masses equal to the discrete values of $p_5^2$. The couplings in the higher-dimensional theory are related to those in the effective four-dimensional case; however, this relation can be non-trivial, depending on the model. For gauge couplings in our simple case above $$\begin{aligned} \int d^4x dx_5 \frac{1}{g_5^2} F_{\mu\nu}F^{\mu\nu} & \to & \int d^4x \frac{1}{g_4^2} F_{\mu\nu}F^{\mu\nu} \; ,\end{aligned}$$ where $g_4 = g_5/\sqrt{R}$. So as the extra-dimension becomes larger, $g_4$ is reduced. This is also true for gravitational interactions. The four-dimensional gravitational interaction may be very weak if the size of the extra-dimensions is very large. Large extra-dimensional models can thus solve the fine-tuning problem discussed above, by making the true Planck scale of the higher-dimensional theory much smaller. There are many other varieties of extra-dimensional models, and they are not all flat.
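To illustrate this last point concretely, one can use the schematic relation $M_{Pl}^2 \sim M_*^{2+d} R^d$ for $d$ flat compact extra-dimensions to ask how large the compactification radius must be for a fundamental scale $M_* = 1$ TeV. This is a back-of-the-envelope sketch: numerical factors of order $2\pi$ and the distinction between the reduced and non-reduced Planck mass are ignored.

```python
import math

HBARC_GEV_M = 1.973e-16  # conversion: 1 GeV^-1 expressed in metres
M_PLANCK = 1.22e19       # four-dimensional Planck mass in GeV

def required_radius_m(m_star_gev, d):
    """Compactification radius R (in metres) from M_Pl^2 ~ M_*^(2+d) R^d."""
    r_inv_gev = (M_PLANCK**2 / m_star_gev**(2 + d)) ** (1.0 / d)  # R in GeV^-1
    return r_inv_gev * HBARC_GEV_M

for d in (1, 2, 3):
    print(f"d = {d}: R ~ {required_radius_m(1.0e3, d):.2e} m")  # M_* = 1 TeV
```

The $d=1$ case gives a macroscopic (roughly solar-system-sized) radius and is plainly excluded, while $d=2$ lands at the sub-millimetre scale probed by short-distance tests of gravity.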
A famous example is that of the Randall-Sundrum model [@Randall:1999ee] where the additional spatial dimension has the “warped" metric $$\begin{aligned} ds^2 & = & e^{-2\sigma(\phi)} \eta_{\mu\nu} dx^\mu dx^\nu + r_c^2 d\phi^2 \; ,\end{aligned}$$ and where the boundaries of our additional dimension are $\phi = 0$ and $\pi$. In this case the action becomes $$\begin{aligned} S_{gravity} & = & \int d^4 x \int^{+\pi}_{-\pi} d\phi \sqrt{-G} \left\{ - \Lambda + 2 M^3 R \right\} \; ,\end{aligned}$$ with “warp" factor $$\begin{aligned} \sigma(\phi) & = & r_c |\phi| \sqrt{\frac{-\Lambda}{24M^3}} \; , \end{aligned}$$ together with appropriate actions on the boundaries. Concluding remarks ================== As a final thought, note that there are other indications of the existence of new physics between the weak scale and the Planck scale. Firstly, consider the Higgs potential and how it evolves with energy in the SM, in particular its stability. The potential is a function of the top and Higgs masses, and current top and Higgs mass measurements favour a metastable Higgs potential [@Alekhin:2012py]. Now, there is no reason that the Higgs vev should fall in such a metastable region, and this also suggests that additional particles that couple to the Higgs sector could change the shape of the potential dramatically; see Refs. [@Liu:2012mea] for example. So further analysis of this region of parameter space may give indications of BSM physics. Though we have not discussed it in this review, we note that another topical result which requires some new physics is dark matter. From various cosmological and astrophysical observations we know that $\sim 27$% of the energy content of the Universe is in the form of dark matter. Such particles must therefore be stable and neutral. Many candidates for dark matter particles exist in a range of BSM theories, including the MSSM, but more studies are required and this is an ongoing field of research.
The points that have been raised in this brief proceedings will hopefully motivate the participants of this workshop to further readings on these and related topics, such as Refs. [@Hall:1983iz; @Rattazzi:2003ea; @Arneodo:2013re; @Strege:2012bt]. References {#references .unnumbered} ========== [99]{} A. Abdalgabar, A. S. Cornell, A. Deandrea and M. McGarrie, JHEP [**1407**]{}, 158 (2014) \[arXiv:1405.1038 \[hep-ph\]\]. L. J. Hall, J. D. Lykken and S. Weinberg, Phys. Rev. D [**27**]{}, 2359 (1983). I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B [**436**]{}, 257 (1998) \[hep-ph/9804398\]; N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B [**429**]{}, 263 (1998) \[hep-ph/9803315\]; N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Rev. D [**59**]{}, 086004 (1999) \[hep-ph/9807344\]. L. Randall and R. Sundrum, Phys. Rev. Lett.  [**83**]{}, 3370 (1999) \[hep-ph/9905221\]; L. Randall and R. Sundrum, Phys. Rev. Lett.  [**83**]{}, 4690 (1999) \[hep-th/9906064\]. S. Alekhin, A. Djouadi and S. Moch, Phys. Lett. B [**716**]{}, 214 (2012) \[arXiv:1207.0980 \[hep-ph\]\]. L. X. Liu and A. S. Cornell, Phys. Rev. D [**86**]{}, 056002 (2012) \[arXiv:1204.0532 \[hep-ph\]\]; A. Abdalgabar, A. S. Cornell, A. Deandrea and A. Tarhini, Eur. Phys. J. C [**74**]{}, no. 5, 2893 (2014) \[arXiv:1307.6401 \[hep-ph\]\]. R. Rattazzi, \*Cargese 2003, Particle physics and cosmology\* 461-517 \[hep-ph/0607055\]. F. Arneodo, arXiv:1301.0441 \[astro-ph.IM\]. C. Strege, G. Bertone, F. Feroz, M. Fornasa, R. Ruiz de Austri and R. Trotta, JCAP [**1304**]{}, 013 (2013) \[arXiv:1212.2636 \[hep-ph\]\].
--- abstract: 'Increasingly sophisticated mathematical modelling processes from Machine Learning are being used to analyse complex data. However, the performance and explainability of these models within practical critical systems requires a rigorous and continuous verification of their safe utilisation. Working towards addressing this challenge, this paper presents a principled novel safety argument framework for critical systems that utilise deep neural networks. The approach allows various forms of predictions, e.g., future reliability of passing some demands, or confidence on a required reliability level. It is supported by a Bayesian analysis using operational data and the recent verification and validation techniques for deep learning. The prediction is conservative – it starts with partial prior knowledge obtained from lifecycle activities and then determines the worst-case prediction. Open challenges are also identified.' author: - Xingyu Zhao - Alec Banks - James Sharp - Valentin Robu - David Flynn - Michael Fisher - Xiaowei Huang bibliography: - 'references.bib' title: 'A Safety Framework for Critical Systems Utilising Deep Neural Networks[^1] ' --- Introduction ============ Deep learning (DL) has been applied broadly in industrial sectors including automotive, healthcare, aviation and finance. To fully exploit the potential offered by DL, there is an urgent need to develop approaches to their certification in safety critical applications. For traditional systems, safety analysis has aided engineers in *arguing* that the system is sufficiently safe. However, the deployment of DL in critical systems requires a thorough revisit of that analysis to reflect the novel characteristics of Machine Learning (ML) in general [@BKCF2019; @alves_considerations_2018; @KKB2019]. 
Compared with traditional systems, the behaviour of learning-enabled systems is much harder to predict, due to, *inter alia*, their “black-box” nature and the lack of traceable functional requirements of their DL components. The “black-box” nature hinders human operators from understanding the DL and makes it hard to predict the system behaviour when faced with new data. The lack of explicit requirement traceability through to code implementation is only partially offset by learning from a dataset, which at best provides an incomplete description of the problem. These characteristics of DL increase apparent non-determinism [@johnson_increasing_2018], which on the one hand emphasises the role of *probabilistic measures* in capturing uncertainty, but on the other hand makes it notoriously hard to estimate the probabilities (and also the consequences) of critical failures. Recently, progress has been made on formal verification [@HKWW2017] and coverage-guided testing [@sun2018concolic] to support the Verification and Validation (V&V) of DL. Whilst these methods are insufficient by themselves to justify overall system safety claims, they may provide evidence to support low-level claims, e.g. the local robustness of a neural network on a given input. In this paper, we present a novel safety argument framework for DL models (which may in turn support higher-level system safety arguments). We focus on deep neural networks (DNNs) that have been widely deployed as, e.g., perception and control units of autonomous systems. Due to the page limit, we also confine the framework to DNNs that are fixed in operation; this can be extended to online learning DNNs in future work. We consider safety-related properties including reliability, robustness, interpretability, fairness [@barocas-hardt-narayanan], and privacy [@Abadi_2016].
In particular, we emphasise the assessment of the DNN *generalisation error* (in terms of inaccuracy), as a major reliability measure, throughout our safety case. We build arguments in two steps. The first is to provide initial confidence that the DNN’s generalisation error is bounded, through the assurance activities conducted at each stage of its lifecycle, e.g., formal verification of DNN robustness. The second step is to adopt *proven-in-use/field-testing* arguments to boost the confidence and check whether the DNN is indeed sufficiently safe for the risk associated with its use in the system. This second step is done in a statistically principled way via Conservative Bayesian Inference (CBI) [@bishop_toward_2011; @strigini_software_2013; @zhao_assessing_2019]. CBI requires only *limited and partial* prior knowledge of reliability, which differs from normal Bayesian analysis that usually assumes a *complete* prior distribution on the failure rate. This has a unique advantage: partial prior knowledge is more convincing (i.e. constitutes a more realistic claim) and easier to obtain, while complete prior distributions usually require extra assumptions and introduce optimistic bias. CBI allows many forms of prediction, e.g., posterior expected failure rate [@bishop_toward_2011], future reliability of passing some demands [@strigini_software_2013] or a posterior confidence on a required reliability bound [@zhao_assessing_2019]. Importantly, CBI guarantees conservative outcomes: it finds the worst-case prior distribution yielding, say, a maximised posterior expected failure rate, and satisfying the partial knowledge. That said, we are aware that there are other extant dangerous pitfalls in safety arguments [@KKB2019; @johnson_increasing_2018], thus we also identify *open challenges* in our proposed framework and map them onto on-going research in the ML and software engineering communities.
The key contributions of this work are: *a)* A first safety case framework for DNNs that mainly concerns quantitative claims based on structured heterogeneous safety arguments. *b)* Identification of open challenges in building safety arguments for quantitative claims, and mapping them onto on-going research of potential solutions. Next, we present preliminaries. Sec. \[sec\_top\_level\_sc\] provides the top-level argument, and Sec. \[sec\_property\_and\_lifecycle\] presents how the CBI approach assures reliability. Other safety related properties are discussed in Sec. \[sec-other-propreties\]. We discuss related work in Sec. \[sec-related\] and conclude in Sec. \[sec-conclusions\]. Preliminaries {#sec_preliminaries} ============= Safety cases ------------ A safety case is a comprehensive, defensible, and valid justification of the safety of a system for a given application in a defined operating environment; thus, it is a means to provide the grounds for confidence and to assist decision making in certification [@bloomfield_safety_2010]. Early research in safety cases mainly focused on their formulation in terms of claims, arguments and evidence elements based on fundamental argumentation theories like the Toulmin model [@s_toulmin_uses_1958]. The two most popular notations are CAE [@bloomfield_safety_2010] and GSN [@kelly_arguing_1999]. In this paper, we choose the latter to present our safety case framework. ![The GSN core elements and an example of using GSN.[]{data-label="fig_GSN_example"}](fig_gsn_example.png){width="100.00000%"} Fig. \[fig\_GSN\_example\] shows the core GSN elements and a quick GSN example. Essentially, the GSN safety case starts with a top *goal* (claim) which is then decomposed through an argument *strategy* into sub-goals (sub-claims), and sub-goals can be further decomposed until they are supported by *solutions* (evidence). A claim may be subject to some *context* or *assumption*.
An *away goal* repeats a claim presented in another argument module. A description of all GSN elements used here can be found in [@kelly_arguing_1999]. Deep neural networks and lifecycle models ----------------------------------------- Let $(X,Y)$ be the training data, where $X$ is a vector of inputs and $Y$ is a vector of outputs such that $|X|=|Y|$. Let $\inputdomain$ be the input domain and $\outputdomain$ be the set of labels. Hence, $X\subset \inputdomain$. We may use $x$ and $y$ to range over $\inputdomain$ and $\outputdomain$, respectively. Let $\network$ be a DNN of a given architecture. A network $\network:\inputdomain\rightarrow \dist(\outputdomain)$ can be seen as a function mapping from $\inputdomain$ to probabilistic distributions over $\outputdomain$. That is, $\network(x)$ is a probabilistic distribution, which assigns for each possible label $y\in \outputdomain$ a probability value $(\network(x))_y$. We let $f_\network:\inputdomain\rightarrow \outputdomain$ be a function such that for any $x\in \inputdomain$, $f_\network(x) = \arg\max_{y\in \outputdomain}\{(\network(x))_y\}$, i.e. $f_\network(x)$ returns the classification label. The network is trained with a parameterised learning algorithm, in which there are (implicit) parameters representing, e.g., the number of epochs, the loss function, the learning rate, the optimisation algorithm, etc. A comprehensive ML *Lifecycle Model* can be found in [@ashmore_assuring_2019], which identifies assurance desiderata for each stage, and reviews existing methods that contribute to achieving these desiderata. In this paper, we refer to a simpler lifecycle model that includes several phases: initiation, data collection, model construction, model training, analysis of the trained model, and run-time enforcement. Generalisation error -------------------- Generalisability requires that a neural network works well on all possible inputs in $\inputdomain$, although it is only trained on the training dataset $(X,Y)$.
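The notation above can be made concrete with a small sketch: a network maps an input to a probability distribution over labels, and $f_\network$ returns the argmax label. The toy two-label “network” below is purely illustrative (a hand-written logistic score, not a trained DNN).

```python
import math

# Illustrative stand-in for a network N: maps a 1-D input x to a probability
# distribution over the label set {'cat', 'dog'}. Not a trained DNN.
def toy_network(x):
    p_dog = 1.0 / (1.0 + math.exp(-x))  # logistic score: larger x leans 'dog'
    return {'cat': 1.0 - p_dog, 'dog': p_dog}

# f_N(x) = argmax_y (N(x))_y -- the classification decision.
def f(network, x):
    dist = network(x)
    return max(dist, key=dist.get)

print(f(toy_network, 2.0))   # 'dog'
print(f(toy_network, -2.0))  # 'cat'
```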
Assume that there is a ground truth function $f: \inputdomain\rightarrow \outputdomain$ and a probability function $O_p: \inputdomain\rightarrow [0,1]$ representing the operational profile. A network $\network$ trained on $(X,Y)$ has a generalisation error: $$G^{0-1}_\network = \sum_{x\in \inputdomain} {\bf 1}_{\{f_\network(x) \neq f(x)\}} \times O_p(x) \label{eq_gen_error_01}$$ where ${\bf 1}_{\tt S}$ is an indicator function – it is equal to 1 when [S]{} is true and 0 otherwise. We use the notation $O_p(x)$ to represent the probability of an input $x$ being selected, which aligns with the *operational profile* notion [@musa_operational_1993] in software engineering. Moreover, we use the 0-1 loss function (i.e., it assigns loss 0 to a correct classification and 1 to an incorrect classification) so that, for a given $O_p$, $G^{0-1}_\network$ is equivalent to the reliability measure *pfd* (the expected probability of the system failing on a random demand) defined in the safety standard IEC-61508. A “frequentist” interpretation of *pfd* is that it is the limiting relative frequency of demands for which the DNN fails in an infinite sequence of independently selected demands [@zhao_modeling_2017]. The primary safety measure we study here is *pfd*, which is equivalent to the generalisation error $G^{0-1}_\network$ in Eq. \[eq\_gen\_error\_01\]. Thus, we may use the two terms interchangeably in our safety case, depending on the context. The Top-level Argument {#sec_top_level_sc} ====================== Fig. \[fig\_top\_level\] gives a top-level safety argument for the top claim **G1** – the DNN is sufficiently safe. We first argue **S1**: that all safety related properties are satisfied. The list of all properties of interest for the given application can be obtained by utilising the Property Based Requirements (PBR) [@Micouin2008] approach. The PBR method is a way to specify requirements as a set of properties of system objects in either structured language or formal notations.
PBR is recommended in [@alves_considerations_2018] as a method for the safety argument of autonomous systems. Without loss of generality, in this paper, we focus on the major quantitative property: reliability (**G2**). Due to space constraints, other properties (interpretability, robustness, etc.) are discussed in Sec. \[sec-other-propreties\] but remain an undeveloped goal (**G3**) here. More properties that have a safety impact can be incorporated in the framework as new requirements emerge from, e.g., ethical aspects of the DNN. ![image](fig_top_level){width="85.00000%"} Despite the controversy over the use of probabilistic measures (e.g., *pfd*) for the safety of conventional software systems [@littlewood_validation_2011], we believe probabilistic measures are useful when dealing with ML systems since arguments involving their inherent uncertainty are naturally stated in probabilistic terms. Setting a reliability goal (**G2**) for a DNN varies from one application to another. Questions we need to ask include: (i) What is the appropriate reliability measure? (ii) What is the quantitative requirement stated in that reliability measure? (iii) How can confidence be gained in that reliability claim? Reliability of safety critical systems, as a probabilistic claim, will be about the probabilities/rates of occurrence of failures that have safety impacts, e.g., a dangerous misclassification in a DNN. Generally, systems can be classified as either: continuous-time systems that are being continuously operated in the active control of some process; or on-demand systems, which are only called upon to act on receipt of discrete demands. Normally we study the failure rate (number of failures in one time unit) of the former (e.g., flight control software) and the probability of failure per demand (*pfd*) of the latter (e.g., the emergency shutdown system of a nuclear plant).
In this paper, we focus on *pfd*, which aligns with DNN classifiers for perception, where demands are, e.g., images from cameras. Given the fact that most safety critical systems adopt a *defence in depth design* with safety backup channels [@littlewood_reasoning_2012], the required reliability (e.g., $p_{\mathit{req}}$ in **G2**) should be derived from the higher level system, e.g., a 1-out-of-2 (1oo2) system in which the other channel could be either hardware-only, conventional software-based, or another ML software. The required reliability of the whole 1oo2 system may be obtained from regulators or compared to human level performance (e.g., a target of 100 times safer than average human drivers, as studied in [@zhao_assessing_2019]). We remark that deriving a required reliability for individual channels to meet the whole 1oo2 reliability requirement is still an open challenge due to the dependencies among channels [@littlewood_conceptual_1989; @littlewood_conservative_2013] (e.g., a “hard” demand is likely to cause both channels to fail). That said, there is ongoing research towards rigorous methods to decompose the reliability of 1oo2 systems into those of individual channels, which may apply and provide insights for future work, e.g., [@bishop_conservative_2014] for 1oo2 systems with one hardware-only and one software-based channel, [@littlewood_reasoning_2012; @zhao_modeling_2017] for a 1oo2 system with one possibly-perfect channel, and [@chen_diversity_2016] utilising a fault-injection technique. In particular, for systems with duplicated DL channels, we note that there are similar techniques, e.g., (i) the ensemble method [@Ponti2011], where a set of DL models run in parallel and the result is obtained by applying a voting protocol; (ii) the simplex architecture [@Sha2001], where there is a main classifier and a safer classifier, with the latter being simple enough so that its safety can be formally verified.
Whenever confidence in the main classifier is low, decision making is taken over by the safer classifier, which can be implemented with, e.g., a smaller DNN. As discussed in [@bishop_toward_2011], the reliability measure, *pfd*, concerns system behaviour subject to *aleatory* uncertainty (“uncertainty in the world”). On the other hand, *epistemic* uncertainty concerns the uncertainty in the “beliefs about the world”. In our context, it is about the human assessor’s *epistemic* uncertainty of the reliability claim obtained through assurance activities. For example, we may not be *certain* whether a claim – the *pfd* is smaller than $10^{-4}$ – is true due to our imperfect understanding of the assurance activities. All assurance activities in the lifecycle with supportive evidence would increase our *confidence* in the reliability claim, whose formal quantitative treatment has been proposed in [@bloomfield_confidence:_2007; @littlewood_use_2007]. Similarly to the idea proposed in [@strigini_software_2013], we argue that all “process” evidence generated from the DNN lifecycle activities provides initial confidence of a desired *pfd* bound. Then the confidence in a *pfd* claim is acquired incrementally through operational data of the trained DNN via CBI – which we describe next. Reliability with Lifecycle Assurance {#sec_property_and_lifecycle} ==================================== CBI utilising operational data ------------------------------ In Bayesian reliability analysis, assessors normally have a prior distribution of *pfd* (capturing the *epistemic* uncertainties), and update their beliefs – the prior distribution – by using evidence of the observed operational data. Given the safety-critical nature, the systems under study will typically see *failure-free* operation or very *rare failures*.
Bayesian inference based on such rare or absent failures may introduce dangerously optimistic bias if using a *Uniform* or *Jeffreys prior* which describes not only one’s prior knowledge, but adds extra, unjustified assumptions [@zhao_assessing_2019]. Alternatively, CBI is a technique, first described in [@bishop_toward_2011], which applies Bayesian analysis with only *partial* prior knowledge; by partial prior knowledge, we mean the following typical forms: - ${{\mathbb E}}[\textit{pfd}] \leq m$: the prior mean *pfd* cannot be worse than a stated value; - $Pr(\textit{pfd}\leq\epsilon)=\theta$: a prior confidence bound on *pfd*; - $Pr(\textit{pfd}=0)=\theta$: a prior confidence in the perfection of the system; - ${{\mathbb E}}[(1-\textit{pfd})^n] \geq \gamma$: prior confidence in the reliability of passing $n$ tests. These can be used by CBI either solely or in combination (e.g., several confidence bounds). The partial prior knowledge is far from a complete prior distribution, thus it is easier to obtain from DNN lifecycle activities (**C4**). For instance, there are studies on generalisation error bounds, based on how the DNN was constructed, trained and verified [@he_control_2019; @bagnall_certifying_2019]. We present examples of how to obtain such partial prior knowledge (**G6**) using evidence, e.g. from formal verification of DNN robustness, in the next section. CBI has also been investigated for various objective functions with a “posterior” flavour: - ${{\mathbb E}}[\textit{pfd}\mid\mbox{pass }n\mbox{ tests}] $: the posterior expected *pfd* [@bishop_toward_2011]; - $Pr(\textit{pfd} \leq p_{req}\mid k \mbox{ failures }\mbox{in }n\mbox{ tests})$: the posterior confidence bound on *pfd* [@zhao_modeling_2017; @zhao_assessing_2019]; the $p_{req}$ is normally a small *pfd*, stipulated at higher level; - ${{\mathbb E}}[(1-\textit{pfd})^t\mid \mbox{pass }n\mbox{ tests}] $: the future reliability of passing $t$ demands in [@strigini_software_2013].
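To illustrate the conservative flavour of these objectives, the toy sketch below computes a worst-case posterior confidence bound numerically. It is an illustrative simplification (not the published CBI theorems), assuming a single piece of prior knowledge $Pr(\textit{pfd}\leq\epsilon)=\theta$, i.i.d. failure-free demands, and an adversarial two-point prior; all numbers are made up.

```python
# Toy sketch of CBI's conservatism (an illustrative simplification, not the
# published theorems). Partial prior knowledge: Pr(pfd <= eps) = theta.
# Objective: Pr(pfd <= p_req | n failure-free i.i.d. demands).
# The adversarial prior is two-point: mass theta at eps (the worst pfd still
# satisfying the prior bound) and mass 1 - theta pushed down towards p_req
# from above (the most plausible way of violating the claim).

def posterior_confidence(p1, p2, theta, n):
    """Posterior mass on pfd = p1 after n failure-free demands, for a
    two-point prior {p1: theta, p2: 1 - theta}."""
    like1 = theta * (1.0 - p1) ** n
    like2 = (1.0 - theta) * (1.0 - p2) ** n
    return like1 / (like1 + like2)

def worst_case_confidence(eps, theta, n, p_req):
    assert eps <= p_req  # illustrative assumption: prior bound at least as strong
    # Evaluate at the limiting worst case p2 -> p_req (from above).
    return posterior_confidence(eps, p_req, theta, n)

# Prior: 80% confident that pfd <= 1e-4; then 1,000 failure-free demands;
# claim of interest: pfd <= 1e-3.
conf = worst_case_confidence(eps=1e-4, theta=0.8, n=1000, p_req=1e-3)
print(round(conf, 3))  # 0.908: still conservative, despite 1,000 successes
```

Note how the adversarial prior keeps the posterior confidence well below what a complete (e.g., Uniform) prior would yield for the same failure-free evidence.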
Depending on the objective function of interest (**G2** is an example of a posterior confidence bound) and the set of partial prior knowledge obtained (**G6**), we choose a corresponding CBI model[^2] for **S2**. Note that we also need to explicitly assess the impact of the CBI model assumptions (**G5**). Published CBI theorems abstract the stochastic failure process as a sequence of independent and identically distributed (i.i.d.) Bernoulli trials given the unknown *pfd*, and assume the operational profile is constant [@bishop_toward_2011; @strigini_software_2013; @zhao_assessing_2019]. Although we identify how to justify/relax those assumptions as open challenges, we note some promising ongoing research: *a)* The i.i.d. assumption means a constant *pfd* (a frozen system in an unchanging environment), which may not hold for a system update or deployment in a new environment. In [@littlewood_reliability_2020], CBI is extended to a *multivariate* prior distribution case, which deals with scenarios of a changing *pfd*. The multivariate CBI may provide the basis of arguments for online learning DNNs. *b)* The effect of assuming independence between successive demands has been studied, e.g., in [@strigini_testing_1996; @galves_rare_1998]. It is believed that the effect is negligible given rare or absent failures; note that this requires further (preferably conservative) studies. *c)* Changes to the operational profile are a major challenge for all proven-in-use/field-testing safety arguments [@KKB2019]. Recent research [@bishop_deriving_2017] provides a novel conservative treatment of the problem, which can be retrofitted for CBI. The safety argument via CBI is presented in Fig. \[fig\_CBI\_case\]. In summary, we collect a set of partial prior knowledge from various lifecycle activities, then boost our posterior confidence in a reliability claim of interest through operational data, in a conservative Bayesian manner.
We believe this aligns with the practice of applying management systems in reality – a system is built with claims of sufficient confidence that it may be deployed; these claims are then independently assessed to confirm said confidence is justified. Once deployed, the system safety performance is then monitored for continuing validation of the claims. Where there is insufficient evidence, systems can be fielded with the risk held by the operator, but that risk must be minimised through operational restrictions. As confidence grows, these restrictions may be relaxed. ![The CBI safety argument.[]{data-label="fig_CBI_case"}](fig_cbi_case){width="85.00000%"} Partial prior knowledge on the generalisation error --------------------------------------------------- Our novel CBI safety argument for the reliability of DNNs is essentially inspired by the idea proposed in [@strigini_software_2013] for conventional software, in which the authors seek prior confidence in the (quasi-)perfection of the software from “process” evidence like formal proofs and effective development activities. In our case, to make clear the connection between lifecycle activities and their contributions to the generalisation error, we decompose the generalisation error into three components: $$\label{eq_decomp_ge} G^{0-1}_\network = \underbrace{G^{0-1}_{\network} -\inf_{\network \in \networks}G^{0-1}_\network}_\text{Estimation error of $\network$} + \underbrace{\inf_{\network \in \networks}G^{0-1}_\network-G^{0-1,*}_{f,(X,Y)}}_\text{Approximation error of $\networks$} +\underbrace{G^{0-1,*}_{f,(X,Y)}}_\text{Bayes error}$$ *a)* The *Bayes error* is the lowest and irreducible error rate over all possible classifiers for the given classification problem [@fukunaga_introduction_2013].
It is non-zero if the true labels are not deterministic (e.g., an image being labelled as $y_1$ by one person but as $y_2$ by others), thus intuitively it captures the uncertainties in the dataset $(X,Y)$ and true distribution $f$ when aiming to solve a real-world problem with DL. We estimate this error (implicitly) at the **initiation** and **data collection** stages in activities like necessity consideration and dataset preparation. *b)* The *Approximation error of $\networks$* measures how far the best classifier in $\networks$ is from the overall optimal classifier, after isolating the Bayes error. The set $\networks$ is determined by the architecture of DNNs (e.g., numbers of layers), thus lifecycle activities at the **model construction** stage are used to minimise this error. *c)* The *Estimation error of $\network$* measures how far the learned classifier $\network$ is from the best classifier in $\networks$. Lifecycle activities at the **model training** stage essentially aim to reduce this error, i.e., performing optimisation within the set $\networks$. Both the Approximation and Estimation errors are reducible. We believe the *ultimate goal* of all lifecycle activities is to reduce the two errors to 0, especially for safety-critical DNNs. This is analogous to the “possible perfection” notion of traditional software as pointed to by Rushby and Littlewood [@littlewood_reasoning_2012; @rushby_software_2009]. That is, assurance activities, e.g., performed in support of DO-178C, can be best understood as developing evidence of possible perfection – a confidence in $pfd=0$. Similarly, for safety critical DNNs, we believe ML lifecycle activities should be considered as aiming to train a “possibly perfect” DNN in terms of the reducible Approximation and Estimation errors.
Thus, we may have some confidence that the two errors are both 0 (equivalently, a prior confidence in the irreducible Bayes error since the other two are 0, that can be used by CBI), which indeed is supported by on-going research into finding globally optimised DNNs [@du_gradient_2018]. Meanwhile, on the **trained model**, V&V also provides prior knowledge as shown in Ex. \[example\_robustness\] below, and **online monitoring** continuously validates the assumptions for the prior knowledge being obtained. \[example\_robustness\] We present an illustrative example of how to obtain a prior confidence bound on the generalisation error from formal verification of DNN robustness [@ruan2018global; @HKWW2017]. *Robustness* requires that the decision making of a neural network cannot be drastically changed due to a small perturbation on the input. Formally, given a real number $d > 0$ and a distance measure $\distance{\cdot}{p}$, for any input $x\in \inputdomain$, we have that $f_\network(x) = f_\network(x')$ whenever $\distance{x'-x}{p}\leq d$. Fig. \[fig\_illustrate\_robust\_veri\] shows an example of the robustness verification in a one-dimensional space. Each blue triangle represents an input $x$, and the green region around each input $x$ represents all the neighbours, $x'$ of $x$, which satisfy $\distance{x'-x}{p}\leq d$ and $f_\network(x) = f_\network(x')$. Now if we assume $O_p(x)$ is uniformly distributed (an assumption for illustrative purposes which can be relaxed for other given $O_p(x)$ distributions), the generalisation error has an upper bound – the chance that the next randomly selected input does not fall into the green regions. That is, if $\epsilon$ denotes the ratio of the length not being covered by the green regions to the total length of the black line, then $G^{0-1}_\network \leq \epsilon$.
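Under the stated assumptions (a uniform operational profile on a one-dimensional domain, and correctness inside each verified neighbourhood), the bound $\epsilon$ is simply the uncovered fraction of the input line. A minimal sketch, with made-up interval endpoints:

```python
# Minimal sketch of the one-dimensional argument above: inputs live on [0, 1]
# with a uniform operational profile, and verification has certified robust
# (and, by assumption, correct) neighbourhoods -- the "green regions".
# The uncovered fraction eps then upper-bounds the generalisation error.
# The interval endpoints are made-up illustrative numbers.

def uncovered_fraction(verified_intervals, lo=0.0, hi=1.0):
    """Fraction of [lo, hi] not covered by the (possibly overlapping) intervals."""
    covered, prev_end = 0.0, lo
    for a, b in sorted(verified_intervals):
        a, b = max(a, prev_end), min(b, hi)
        if b > a:
            covered += b - a
            prev_end = b
    return (hi - lo - covered) / (hi - lo)

# Three verified neighbourhoods of total length 0.6:
eps = uncovered_fraction([(0.0, 0.2), (0.3, 0.6), (0.7, 0.8)])
print(round(eps, 6))  # 0.4 -> under the assumptions, the error is at most 0.4
```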
This said, we cannot be certain about the bound $G^{0-1}_\network \leq \epsilon$ due to assumptions like: (i) the formal verification tool itself is perfect, which may not hold; (ii) any neighbour $x'$ of $x$ has the same ground truth label as $x$. For a more comprehensive list, cf. [@burton_confidence_2019]. Assessors need to capture the doubt (say $1-\theta$) in those assumptions, which leads to: $$\label{equation-robustness} Pr(G^{0-1}_\network \leq \epsilon)=\theta .$$ So far, we have presented an instance of the safety argument template in Fig. \[fig\_partial\_prior\_knowledge\]. The solution **So2** is the formal verification result showing $G^{0-1}_\network \leq \epsilon$, and **G9** in Fig. \[fig\_partial\_prior\_knowledge\] quantifies the confidence $\theta$ in that result. It is indeed an open challenge to rigorously develop **G8** further, which may involve scientific ways of eliciting expert judgement [@ohagan_uncertain_2006] and systematically collecting process data (e.g., statistics on the reliability of verification tools). However, we believe this challenge – evaluating confidence in claims, either quantitatively or qualitatively (e.g., ranking with low, medium, high), explicitly or implicitly – is a fundamental problem for all safety case based decision-making [@denney_towards_2011; @bloomfield_confidence:_2007; @zhao_new_2012; @wang_confidence_2017], rather than a specific problem of our framework. The sub-goal **G9** represents the mechanism of online monitoring of the validity of offline activities, e.g., validating the environmental assumptions used by offline formal verifications against the real environment at runtime [@ferrando_verifying_2018].
![Formal verification on DNN robustness in a one-dimensional space.[]{data-label="fig_illustrate_robust_veri"}](fig_illustrate_robust_veri){width="80.00000%"} ![A template of safety arguments for obtaining partial prior knowledge.[]{data-label="fig_partial_prior_knowledge"}](fig_partial_prior_knowledge){width="85.00000%"} Other Safety Related Properties {#sec-other-propreties} =============================== So far we have seen a reliability-centric safety case for DNNs. Recall that, in this paper, reliability is the probability of misclassification (i.e. the generalisation error in Eq. \[eq\_gen\_error\_01\]) that has safety impacts. However, there are other DNN safety related properties concerning risks not directly caused by a misclassification, like interpretability, fairness, and privacy; discussed as follows. *Interpretability* is about an explanation procedure to present an interpretation of a single decision within the overall model in a way that is easy for humans to understand. There are different explanation techniques aiming to work with different objects; see [@Huangsurvey2018] for a survey. Here we take the instance explanation as an example – the goal is to find another representation $\explain(f_\network,x)$ of an input $x$, with the expectation that $\explain(f_\network,x)$ carries simple, yet essential, information that can help the user understand the decision $f_\network(x)$. We use $f(x)\Leftrightarrow\explain(f_\network,x)$ to denote that the explanation is consistent with a human’s explanation in $f(x)$. Thus, similarly to Eq. \[eq\_gen\_error\_01\], we can define a probabilistic measure for the instance-wise interpretability: $$I_\network = \sum_{x\in \inputdomain} {\bf 1}_{\{f(x) \centernot\iff \explain(f_\network,x)\}} \times O_p(x) \label{eq_interpretability}$$ Then, similarly to the argument for reliability, we can perform statistical inference with the probabilistic measure $I_\network$. For instance, as in Ex.
\[example\_robustness\], we (i) firstly define the robustness of explanations in norm balls, measuring the percentage of space that has been verified as a bound on $I_\network$, (ii) then estimate the confidence of the robust explanation assumption and obtain a prior confidence in interpretability, (iii) finally apply Bayesian inference with runtime data. *Fairness* requires that, when using DL to predict an output, the prediction remains unbiased with respect to some protected features. For example, a financial service company may use DL to decide whether or not to provide loans to an applicant, and it is expected that such a decision should not rely on sensitive features such as race and gender. *Privacy* requires that an observer cannot determine whether or not a sample was in the model’s training dataset, when the observer is not allowed to observe the dataset directly. Training methods such as [@Abadi_2016] have been applied to pursue differential privacy. The lack of fairness or privacy may cause not only significant monetary loss but also ethical issues. Ethics has been regarded as a long-term challenge for AI safety. For these properties, we believe the general methodology suggested here still works – we first introduce bespoke probabilistic measures according to their definitions, obtain prior knowledge on the measures from lifecycle activities, then conduct statistical inference during the continuous monitoring of the operation. Related Work {#sec-related} ============ Alves *et al.* [@alves_considerations_2018] present a comprehensive discussion on the aspects that need to be considered when developing a safety case for increasingly autonomous systems that contain ML components. In [@BKCF2019], a safety case framework with specific challenges for ML is proposed. [@SS2020] reviews available certification techniques from the aspects of lifecycle phases, maturity and applicability to different types of ML systems.
In [@KKB2019], safety arguments that are being widely used for conventional systems – including conformance to standards, proven in use, field testing, simulation and formal proofs – are recapped for autonomous systems with discussions on the potential pitfalls. Similar to our CBI arguments that exploit operational data, [@matsuno_tackling_2019; @ishikawa_continuous_2018] propose utilising continuously updated arguments to monitor the weak points and the effectiveness of their countermeasures. The work [@asaadi_towards_2019] is also interested in quantitative claims in safety assurance arguments, after identifying the applicable quantitative measures of assurance and characterising the associated uncertainty probabilistically. Regarding the safety of automated driving, [@SS2020b; @rudolph_consistent_2018; @SC2018] discuss the extension and adaptation of ISO-26262, and [@burton_making_2017] considers functional insufficiencies in the perception functions based on DL. Additionally, [@picardi_pattern_2019; @picardi_perspectives_2019] explore safety case patterns that are reusable for DL in the context of medical applications, while in [@osborne_uas_2019] the safety case approach is reviewed as useful for assuring the safety of drones. Formal verification [@HKWW2017; @katz2017reluplex; @xiang2017output; @GMDTCV2018; @LM2017; @wicker2018feature; @RHK2018; @wu2018game; @ruan2018global; @LLYCH2018] and coverage-guided testing [@sun2018concolic; @PCYJ2017; @sun2018testing-b; @ma2018deepgauge; @SHKSHA2019; @sun2018concolicb] currently form the two major classes of V&V techniques for DL, from which a collection of evidence may be obtained that supports the partial prior knowledge. The readers are referred to a recent survey [@Huangsurvey2018] for an introduction and summary of the techniques.
Discussions, Conclusions and Future Work {#sec-conclusions} ======================================== In this paper, we present a novel safety argument framework for DNNs using probabilistic risk assessment, mainly considering quantitative reliability claims and generalising the idea to other safety-related properties. We emphasise the use of probabilistic measures to describe the inherent uncertainties of DNNs in safety arguments, and conduct Bayesian inference to strengthen the top-level claims from safe operational data through to continuous monitoring after deployment. Bayesian inference requires prior knowledge, so we propose a novel view by (i) decomposing the DNN generalisation error into a composition of distinct errors and (ii) trying to map each lifecycle activity to the reduction of these errors. Although we have shown an example of obtaining priors from robustness verification of DNNs, it is non-trivial (and identified as an open challenge) to establish a quantitative link between other lifecycle activities and the generalisation error. Expert judgement and past experience (e.g., a repository on DNNs developed by similar lifecycle activities) seem to be inevitable in overcoming such difficulties. Thanks to the CBI approach – Bayesian inference with limited and partial prior knowledge – even with sparse prior information (e.g., a single confidence bound on the generalisation error obtained from robustness verification), we can still apply probabilistic inference given the operational data. Whenever there are sound arguments to obtain additional partial prior knowledge, CBI can incorporate them as well, and reduce the conservatism in the reasoning [@bishop_toward_2011]. On the other hand, CBI as a type of proven-in-use/field-testing argument has some of the fundamental limitations highlighted in [@KKB2019; @johnson_increasing_2018], for which we have identified on-going research towards potential solutions. 
We concur with [@KKB2019] that, despite the dangerous pitfalls for various existing safety arguments, credible safety cases require a heterogeneous approach. Our new quantitative safety case framework provides a novel supplementary approach to existing frameworks rather than replacing them. We plan to conduct concrete case studies and continue to work on the open challenges identified. This document is an overview of UK MOD (part) sponsored research and is released for informational purposes only. The contents of this document should not be interpreted as representing the views of the UK MOD, nor should it be assumed that they reflect any current or future UK MOD policy. The information contained in this document cannot supersede any statutory or contractual requirements or liabilities and is offered without prejudice or commitment. Content includes material subject to Crown copyright (2018), Dstl. This material is licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gsi.gov.uk. [^1]: Supported by the UK EPSRC through the Offshore Robotics for Certification of Assets (ORCA) \[EP/R026173/1\] and ORCA’s Partnership Resource Fund through Continual Verification and Assurance of Robotic Systems under Uncertainty (COVE), UK Dstl projects on Test Coverage Metrics for Artificial Intelligence, and Assuring Autonomy International Programme (AAIP). [^2]: CBI is ongoing research that proves theorems for some combinations of objective functions and partial prior knowledge. Some combinations have not yet been investigated; these remain open challenges.
--- abstract: | We determine the isomorphism class of the Brauer groups of certain nonrational genus zero extensions of number fields. In particular, for all genus zero extensions $E$ of the rational numbers ${\mathbb{Q}}$ that are split by ${\mathbb{Q}}(\sqrt{2})$, $\operatorname{Br}(E)\cong \operatorname{Br}({\mathbb{Q}}(t))$. address: - 'Department of Mathematics, Technion—Israel Institute of Technology, Haifa  32000  ISRAEL' - 'Department of Mathematics, Davidson College, Box 7046, Davidson, North Carolina  28035-7046  USA' author: - 'Jack Sonn$^*$' - 'John Swallow$^{**}$' date: 'January 29, 2003' title: Brauer Groups of Genus Zero Extensions of Number Fields --- plus 2pt minus 2pt [^1] [^2] Introduction {#introduction .unnumbered} ============ Let $K$ be a countable field, $\operatorname{Br}(K)$ its Brauer group. Then $\operatorname{Br}(K)$ is a countable abelian torsion group; hence, as an abstract group, it is completely determined by its Ulm invariants $U_p(\lambda, \operatorname{Br}(K))$, where $p$ is a prime number and $\lambda$ is an ordinal number (see definition below). With this observation, Fein and Schacher initiated the investigation of the Ulm invariants of algebraic function fields over global fields, which culminated in the determination of all the Ulm invariants of the Brauer group of a *rational* function field $K$ in finitely many variables over a global field $k$ (see [@FSS1]). For nonrational function fields in one variable over a global field $k$, the closest thing to a rational function field would be a nonrational function field $K$ of genus zero over $k$. In [@FSS2], all the Ulm invariants $U_p(\lambda, \operatorname{Br}(K))$ for such a field $K$ are determined, except for one (!), namely $U_2(\omega2, \operatorname{Br}(K))$. It turns out that all of the Ulm invariants of $\operatorname{Br}(K)$, except for the missing one, coincide with those of $\operatorname{Br}(k(t))$. 
The problem of this missing Ulm invariant has remained open; in fact, until now, there was not even a single example known of a nonrational genus zero function field over a global field for which the missing Ulm invariant $U_2(\omega2, \operatorname{Br}(K))$ had been computed. In view of the fact that $U_2(\omega2, \operatorname{Br}(k(t)))=0$, $\operatorname{Br}(K)\cong \operatorname{Br}(k(t))$ if and only if $U_2(\omega2, \operatorname{Br}(K))=0$. In this paper we determine the first known isomorphism classes of Brauer groups of nonrational genus zero extensions $E/k$ of number fields $k$. We prove the following. Let $k$ be a totally real number field and $l/k$ the quadratic subfield of the cyclotomic ${\mathbb{Z}}_{2}$-extension $k^{cyc}/k$. Suppose that $E$ is a genus zero extension of $k$ which is split by $l$, and suppose that the Leopoldt Conjecture holds for the field $l$ and the prime $2$. Then $\operatorname{Br}(E)\cong \operatorname{Br}(k(t))$. Since the Leopoldt Conjecture (see [@NSW §10.3]) is known to be true for all abelian number fields [@Br] (see also [@NSW Thms. 10.3.14 and 10.3.16]), we have the following. Suppose $k$ is a totally real abelian number field not containing $\sqrt{2}$. If $E/k$ is a genus zero extension split by $k(\sqrt{2})$, then $\operatorname{Br}(E)\cong \operatorname{Br}(k(t))$. In particular, $\operatorname{Br}(E)\cong \operatorname{Br}({\mathbb{Q}}(t))$ for all genus zero extensions $E$ of ${\mathbb{Q}}$ split by ${\mathbb{Q}}(\sqrt{2})$. We approach $\operatorname{Br}(E)$ via $\operatorname{Br}(El)$, where $l/k$ is a quadratic extension such that $El$ is a rational function field. The Auslander-Brumer-Faddeev theorem establishes an isomorphism between $\operatorname{Br}(El)$ and a direct sum of $\operatorname{Br}(l)$ and character groups of extensions of $l$, and we study an order two action on $\operatorname{Br}(El)$ in terms of standard cohomological maps on these summands. 
Section \[se:prelim\] introduces this approach, and section \[se:actions\] establishes properties of the order two action. Then in section \[se:subgroup\] we present a technical analysis of heights of elements in the fixed subgroup of this order two action. Finally, in section \[se:proof\] we prove the Main Theorem. Preliminaries {#se:prelim} ============= Let $k$ be a perfect field. For any Galois field extension $K/k$, let $G_{K/k}$ denote the Galois group. Let $\bar K$ denote the algebraic closure of $K$ and $G_{K}$ the absolute Galois group $G_{\bar K/K}$. We denote by $k^*$ the multiplicative group of $k$, by ${\mathrm{X}}(k)$ its character group $H^1(G_k, {\mathbb{Q}}/{\mathbb{Z}})$, and by $\operatorname{Br}(k)$ its Brauer group $H^{2}(G_{k}, \bar k^{*})$. Similarly, $\operatorname{Br}(K/k)$ denotes the relative Brauer group of $K/k$, identified with $H^{2}(G_{K/k}, K^{*})$. All cohomology groups will be written additively, and all modules will be left modules. For an additive abelian torsion group $A$, we write $A_2$ for the $2$-primary component of $A$. The Ulm subgroups (for the prime 2) of $A_2$ are defined for any ordinal $\lambda$ by $A_2(0)=A_2$, $A_2(\lambda+1)=2A_2(\lambda)$, and for $\lambda$ a limit ordinal, $A_2(\lambda)=\cap_{\lambda'<\lambda} A_2(\lambda')$. The least $\lambda$ such that $A_2(\lambda)=A_2(\lambda+1)$ is the Ulm length of $A_2$. Now let $P(\lambda)=\{\alpha\in A_2(\lambda) : 2\alpha=0\}$. The Ulm invariant of $A_2$ at $\lambda$, denoted $U_2(\lambda, A_2)$, is $[P(\lambda)/P(\lambda+1):{\mathbb{Z}}/2{\mathbb{Z}}]$. We write $\operatorname{ht}_{A_2}(\alpha)$ or $\operatorname{ht}_A(\alpha)$ for the height of $\alpha\in A_2$ in $A_2$, defined to be $\lambda$ such that $\alpha\in A_2(\lambda)\setminus A_2(\lambda+1)$ if such a $\lambda$ exists; otherwise, $\alpha\in A_2$ is divisible and we write $\operatorname{ht}_{A}(\alpha)=\infty$. We denote by $D(A)$ the divisible subgroup of $A$. 
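For intuition, the Ulm machinery can be made concrete on *finite* abelian 2-groups, where only finite ordinals occur and, by the classification of finite abelian groups, $U_2(n, A_2)$ counts the cyclic summands of order $2^{n+1}$. The following sketch is our own illustration (not part of the paper's argument) and computes the invariants directly from the definitions above:

```python
def ulm_invariants(exponents):
    """Ulm invariants U_2(n, A) of the finite abelian 2-group
    A = direct sum of Z/2^e, one summand per entry of `exponents`,
    computed straight from the definitions: A(n) = 2^n A,
    P(n) = 2-torsion of A(n), and U_2(n, A) = dim_F2 P(n)/P(n+1)."""
    def dim_P(n):
        # a summand Z/2^e becomes Z/2^(e-n) inside A(n) = 2^n A, and its
        # 2-torsion contributes one F_2-dimension exactly when e - n >= 1
        return sum(1 for e in exponents if e - n >= 1)
    length = max(exponents, default=0)  # here the Ulm length is max e
    return [dim_P(n) - dim_P(n + 1) for n in range(length)]

# A = Z/2 + Z/4 + Z/4 + Z/16: U_2(n, A) counts the Z/2^(n+1) summands
print(ulm_invariants([1, 2, 2, 4]))  # [1, 2, 0, 1]
```

For the groups treated in the paper the invariants at transfinite ordinals carry the interesting information, but the finite case already shows how $P(\lambda)/P(\lambda+1)$ isolates one "layer" of the group at a time.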
For $\alpha\neq 0$, we will say that a *divisible tower over* $\alpha$ is a set $\{\alpha_i\}_{i=0}^\infty$ satisfying $2^j\alpha_i= \alpha_{i-j}$ for $j\le i$ and $\alpha_0=\alpha$. Given nonzero $\alpha\in A_2$, $\alpha\in D(A_2)$ if and only if there exists a divisible tower over $\alpha$. The Genus Zero Extension $E$ ---------------------------- A genus zero extension $E$ over a field $k$ is the quotient field of $k[x,y]/\langle 1-cx^{2}-dy^{2} \rangle$ for $c, d\in k^*$. We set $l=k(\sqrt{d})$. The quaternion algebra $(c,d)$ is split in $\operatorname{Br}(k)$ iff $E$ is isomorphic to a rational function field in one variable over $k$. We will assume throughout that $(c,d)$ is not split. Hence $l\not\subset E$. Let $El$ denote the compositum of $l$ and $E$. We determine an element $u$ such that $El=l(u)$ as follows. Let $u = (1+\sqrt{d}y) / x$. Then $u(1-\sqrt{d}y)/x=c$, so that $c/u = (1-\sqrt{d}y)/ x$. Moreover, $x=2/(u+c/u)$ and $y = (2u/(u+c/u)-1)/ \sqrt{d}$, establishing that $El=l(u)$. We denote by $\sigma$ the unique nontrivial element of $G_{l/k}$ and by $s$ the nontrivial element of $G_{El/E}$. We may extend $\sigma$ to an element of $G_k$, which by abuse of language we also denote by $\sigma$, and we similarly denote by $s$ the corresponding extension of $s$ to $G_{E\bar k/E}$. In what follows $p$ will always be restricted to monic, irreducible $p\in l[u]\setminus \{u\}$, and for each $p$ we fix a root $a_p$ of $p$ for the remainder of the paper. If $\{a_{i}\}\subset \bar k^{*}$ are the roots of $p$, we set $\tilde p\in l[X]$ to be the monic, irreducible polynomial with roots $\{c/ \sigma(a_{i})\}$. Since $\sigma^2\in G_l$, $p\mapsto \tilde p$ is an order two action. \[le:sact\] The map $s$ acts triangularly on $${\bar k}(u)^* = \bar k^* \times \langle u\rangle \times \coprod_{a\in \bar k^*} \langle u+a\rangle$$ by 1. $s(a)=\sigma(a)$, $a\in \bar k^*$; 2. $s(u)=c\cdot \frac{1}{u}$; and 3. 
$s(u+a) = \sigma(a) \cdot \frac{1}{u} \cdot (u+\frac{c}{\sigma(a)})$, $a\in \bar k^{*}$. Moreover, for all $p$, $s(p) = \sigma(p(0))\cdot u^{-\deg p}\cdot \tilde p(u)$. For later use we define componentwise homomorphisms $s_{uu} \colon \langle u\rangle \to \langle u\rangle$, $s_{u+a,u} \colon \langle u+a\rangle\to \langle u\rangle$, and $s_{p\tilde p}\colon \prod_{p(a)=0} \langle u+a\rangle\to \prod_{\tilde p(a)=0} \langle u+a\rangle$ by $s_{uu}(u^e) = s_{u+a,u}((u+a)^e)=1/u^e$ and $$s_{p\tilde p} \left( \prod_{p(a)=0} (u+a)^{e_a} \right) = \prod_{p(a)=0} \left(u+\frac{c}{\sigma(a)}\right)^{e_a} = \prod_{\tilde p(a)=0} (u+a)^{e_{c/\sigma^{-1}(a)}}.$$ The Brauer Group of $E$ {#se:brgrp} ----------------------- We have identified $\operatorname{Br}(E)$ with $H^{2}(G_{E}, \bar E^{*})$. The algebraic closure $\bar E$ of $E$ is identical to the algebraic closure of $\bar k(u)$, and since by Tsen’s theorem the Brauer group of $\bar k(u)$ is trivial, we have that $H^2(G_{E}, \bar E^{*})\cong H^{2}(G_{\bar k(u)/E}, \bar k(u)^{*})$. Now every element $\gamma\in G_{k}$ lifts to $\gamma' \in G_{\bar k(u)/E}$ by extending the automorphism trivially on $x$ and $y$. Conversely, since $\bar k(u) = E\otimes_{k} \bar k$ and $k$ is algebraically closed in $E$, any element $\gamma' \in G_{\bar k(u)/E}$ sends $\bar k$ to $\bar k$. Therefore we may and do identify $G_{\bar k(u)/E}$ with $G_{k}$, and we have $\operatorname{Br}(E)\cong H^{2}(G_{k}, \bar k(u)^{*})$. Now by Hilbert’s Theorem 90, $H^1(G_l,\bar k(u)^*)$ is trivial. Moreover, since $G_{l/k}$ is finite cyclic, $H^3(G_{l/k},l(u)^*) \cong H^1(G_{l/k},l(u)^*)$, which is also trivial by Theorem 90. The standard inflation-restriction five-term exact sequence [@NSW Prop. 
1.6.6] beginning with $H^2$ then begins $$\label{eq:brexactseq} 0\to H^2(G_{l/k},l(u)^*)\to H^2(G_k,\bar k(u)^*) \xrightarrow{\phi} H^2(G_l,\bar k(u)^*)^{G_{l/k}}\to 0,$$ or, equivalently, $$0\to \operatorname{Br}(El/E) \to \operatorname{Br}(E)\to \operatorname{Br}(l(u))^{G_{l/k}} \to 0.$$ Now $\sigma \in G_k$ acts naturally on $\operatorname{Br}(l(u))=H^2(G_l,\bar k(u)^*)$: given a 2-cocycle $h$, this action is $h^s(g_1, g_2) = s(h(\sigma^{-1}g_1 \sigma, \sigma^{-1}g_2 \sigma))$, where $s$ acts on $\bar k(u)^*$ as above. We denote this action on $\operatorname{Br}(l(u))$ by $s^*$, and the fixed group on the right is therefore $\operatorname{Br}(l(u))^{\langle s^*\rangle}$. The Auslander-Brumer-Faddeev Decomposition ------------------------------------------ Recall that the Auslander-Brumer-Faddeev theorem ([@AB Prop. 4.1], [@F Thms. 15.2, 15.3]) establishes an isomorphism $$\label{eq:fund} \operatorname{Br}(l(u)) \cong \operatorname{Br}(l) \oplus {\mathrm{X}}(l) \oplus \left(\oplus_p {\mathrm{X}}(l(a_p))\right).$$ We will need the particular isomorphisms contained in the proof of this result, which we review as follows. Let $G=G_l$ and for an arbitrary fixed $p$, $H=G_{l(a_p)}$. Let $A= \operatorname{Ind}_{G}^{H}({\mathbb{Z}}) = \operatorname{Hom}_H(G, {\mathbb{Z}})$ be the $G$-module of $H$-module homomorphisms from $G$ to the trivial $G$-module ${\mathbb{Z}}$; $g\in G$ acts on $x\in \operatorname{Ind}_{G}^{H}({\mathbb{Z}})$ via $(gx)(g_1)= x(g_1 g)$. $A$ may be considered the set of functions from $G$ to ${\mathbb{Z}}$ defined on right cosets of $H$ in $G$. Define $\bar x(g_1)= x(g_1^{-1})$ for $x\in A$. Then $\bar x$ is defined on left cosets of $H$ in $G$ and $\overline{gx}(g_1) = \bar x(g^{-1}g_1)$. Consider $B = B(p) := \prod_{p(a)=0} \langle u-a\rangle \subset \bar k(u)^*$. 
We claim that $A$ and $B$ are isomorphic $G$-modules under the map $\iota\colon A\to B$ given by $$\iota(x) = \prod_{\tau H\in G/H} (u-\tau(a_p))^{\bar x(\tau)}.$$ To check that $\iota$ respects $G$-action, we calculate $$\begin{aligned} \iota({gx}) &= \prod_{\tau H\in G/H} (u-\tau(a_p))^{\overline{gx}(\tau)} = \prod_{\tau H\in G/H} (u-\tau(a_p))^{\bar x(g^{-1}\tau)} \\ &= \prod_{g\tau H\in G/H} (u-g\tau(a_p))^{\bar x(\tau)} = g (\prod_{\tau H\in G/H} (u-\tau(a_p))^{\bar x(\tau)}) = g\iota(x).\end{aligned}$$ The proof of the Auslander-Brumer-Faddeev theorem proceeds by splitting the $G_l$-module $\bar k(u)^*$ of $H^2(G_l,\bar k(u)^*)$ as $$\label{eq:sp} \bar k(u)^* = \bar k^* \times \langle u\rangle \times \coprod_{p} \prod_{p(a)=0} \langle u-a\rangle,$$ and then realizing the summands ${\mathrm{X}}(l)$ and ${\mathrm{X}}(l(a_p))$ with the isomorphisms $$\label{eq:chiofl} {\mathrm{X}}(l) = H^1(G_l,{\mathbb{Q}}/{\mathbb{Z}})\xrightarrow{\delta} H^2(G_l,{\mathbb{Z}}) \xrightarrow{\pi^*} H^2(G_l,\langle u\rangle)$$ and $$\label{eq:chioflap} {\mathrm{X}}(l(a_p)) = H^1(H, {\mathbb{Q}}/{\mathbb{Z}}) \xrightarrow{\delta} H^2(H, {\mathbb{Z}}) \xrightarrow{sh} H^2(G,A) \xrightarrow{\iota^*} H^2(G, B).$$ Here $\delta$ denotes the standard coboundary map, $sh$ the map of Shapiro’s Lemma, and $\pi\colon {\mathbb{Z}}\to \langle u\rangle$ the homomorphism $\pi(e)=u^e$ of trivial $G_l$-modules ${\mathbb{Z}}$ and $\langle u\rangle$. We identify $\operatorname{Br}(l(u))$ with the decomposition and denote an arbitrary element of this group by $\beta+\chi_u+ \sum \chi_p$ or $\beta \oplus\chi_u \oplus \sum \chi_p$. By Lemma \[le:sact\], since $s$ acts triangularly on $\bar k(u)^*$ on the factors in and $s_{p\tilde p}$ sends $B(p)$ to $B(\tilde p)$, we have \[le:sstaract\] The map $s^*$ acts triangularly on $\operatorname{Br}(l(u))$. 
In particular, for arbitrary $\beta \oplus \chi_u \oplus \sum\chi_p \in \operatorname{Br}(l(u))$, $$\label{eq:sstar} \begin{split} s^*(\beta \oplus \chi_u \oplus \sum \chi_p) &= \left( s_{11}^*(\beta) + s_{u1}^*(\chi_u) + \sum s_{p1}^*(\chi_p) \right) \oplus \\ &\phantom{=} \ \left( s_{uu}^*(\chi_u) + \sum s_{pu}^*(\chi_p) \right) \oplus \\ &\phantom{=} \ \sum s_{p\tilde p}(\chi_p), \end{split}$$ where we denote the component parts of $s^*$ as follows: 1. $s_{p\tilde p}^*\colon {\mathrm{X}}(l(a_p))\to {\mathrm{X}}(l(a_{\tilde p}))$; 2. $s_{pu}^* \colon {\mathrm{X}}(l(a_p))\to {\mathrm{X}}(l)$; 3. $s_{uu}^*\colon {\mathrm{X}}(l)\to {\mathrm{X}}(l)$; 4. $s_{p1}^* \colon {\mathrm{X}}(l(a_p))\to \operatorname{Br}(l)$; 5. $s_{u1}^* \colon {\mathrm{X}}(l)\to \operatorname{Br}(l)$; and 6. $s_{11}^* \colon \operatorname{Br}(l)\to \operatorname{Br}(l)$. For later use we further define $s_u^*=\sum_p s_{pu}^*$ and $s_1^*=s_{u1}^*+\sum_p s_{p1}^*$. Decomposing $s^*$ on $\operatorname{Br}(l(u))$ {#se:actions} ============================================== In this section we study the component functions of $s^*$, comparing them to natural Galois actions on $\operatorname{Br}(l)$ and ${\mathrm{X}}(l)$. We denote by $\sigma$ the natural Galois action of $\sigma$ on $\operatorname{Br}(l)\cong H^2(G_l, \bar k^*)$ and on ${\mathrm{X}}(l)\cong H^1(G_l, {\mathbb{Q}}/{\mathbb{Z}})$. Note that in $H^1(G_l, {\mathbb{Q}}/{\mathbb{Z}})$, ${\mathbb{Q}}/{\mathbb{Z}}$ is a trivial $G_k$-module. 
$s_{11}^*$ on $\operatorname{Br}(l)$ and $\operatorname{Br}(l_{\mathfrak{p}})$ ------------------------------------------------------------------------------ Since $s$ acts on $\bar k^*$ via $\sigma$, the action of $s_{11}^*$ on $\operatorname{Br}(l)$ is the natural Galois action: $$ s_{11}^*(\beta) = \sigma(\beta), \qquad \beta\in \operatorname{Br}(l).$$ We now determine the action of $\sigma$ on the local invariants $b_{\mathfrak{p}}$ of an element $b$ of $\operatorname{Br}(l)$ with the following general result. \[pr:action\] Let $K/k$ be a finite Galois extension of global fields. Let $g\in G_{K/k}$ and ${\mathfrak{p}}$ be a prime of $K$, and let $b_{\mathfrak{p}}$ denote the local invariant of $b\in \operatorname{Br}(K)$ at ${\mathfrak{p}}$. Then $$\label{eq:action} (g(b))_{\mathfrak{p}}= b_{g^{-1}({\mathfrak{p}})}.$$ For an arbitrary field $K$, a $K$-algebra $A$ is an associative ring $A$ with unity together with an embedding $\alpha\colon K\hookrightarrow A$ of $K$ into the center of $A$. Thus a $K$-algebra should be considered a pair $(A,\alpha)$. Now let $(B,\beta)$ be a second $K$-algebra. Then the tensor product $$(C, \gamma)=(A, \alpha)\otimes_K(B, \beta)$$ is generated by elements $a\otimes b$, $a\in A$, $b \in B$, satisfying $\alpha(r)a\otimes b=a\otimes \beta(r)b$, $r \in K$, and where $\gamma=\alpha \odot \beta \colon K \hookrightarrow C$ is defined by $\gamma(r) = \alpha(r) \otimes 1_B = 1_A \otimes \beta(r)$. Let $g$ be an automorphism of $K$. An action of $g$ on $(A, \alpha)$ may be defined by $(A, \alpha)^{g} := (A, \alpha^{g})$, where $\alpha^{g}(r)=\alpha g(r)$ for $r\in K$. Note that as rings, $(A,\alpha)$ and $(A,\alpha)^{g}$ are isomorphic. 
It follows that $\gamma^{g}(r)=\gamma g(r)=\alpha g(r)\otimes 1 =1\otimes\beta g(r)$, *i.e.*, $\alpha^{g}(r)\otimes 1 = 1\otimes \beta^{g}(r)$, so $$\label{eq:tensoract} \gamma^{g}= (\alpha\odot\beta)^{g} = \alpha^{g} \odot \beta^{g}.$$ We now specialize to the case where $(A, \alpha)$ is a finite-dimensional central simple $K$-algebra and $B$ is a complete discrete valued field $K_{\mathfrak{p}}$, where ${\mathfrak{p}}$ will denote the valuation of $K_{\mathfrak{p}}$ as well as the valuation induced on $K$ by the embedding $\beta$ of $K$ into $K_{\mathfrak{p}}$. By , $$(\alpha^{{g}^{-1}}\odot \beta)^{g} = \alpha\odot\beta^{g}.$$ Note again that, as rings, $(C, (\alpha^{{g}^{-1}} \odot \beta)^{g})$ and $(C, \alpha^{{g}^{-1}} \odot \beta)$ are isomorphic. Moreover, $(C, (\alpha^{{g}^{-1}} \odot \beta)^{g})$ and $(C, \alpha\odot\beta^{g})$ both represent finite-dimensional central simple algebras over complete discrete valued fields. Now assume the hypotheses of the Proposition. By a lemma of Janusz [@Ja Lemma, p. 385], $(C, (\alpha^{{g}^{-1}} \odot \beta)^{g})$ and $(C, \alpha^{{g}^{-1}}\odot\beta)$ have the same local invariant. It follows that $(C, \alpha^{{g}^{-1}} \odot \beta)$ and $(C, \alpha \odot \beta^{g})$ have the same local invariant. In the proof of Janusz’ lemma, the assertion is made that a field isomorphism of $p$-adic local fields is an isomorphism of $p$-adic local fields, *i.e.*, that prime elements are mapped to prime elements. To prove this statement, the essential idea is the same as in the proof that the only algebraic automorphism of ${\mathbb{Q}}_p$ is the identity. Suppose $L_i$, $i=1, 2$ are finite extensions of ${\mathbb{Q}}_p$ with ramification indices $e_i$ and normalized valuations $v_i$, $f\colon L_1\to L_2$ is a field isomorphism, and $\pi$ is a prime element of $L_1$. Then $\pi^{e_1}=pu$ with $u\in L_1$ a unit. Write $f(\pi)^{e_1} = f(pu) = f(p) f(u) = p f(u)$. 
Now if $f(u)$ is a unit in $L_2$, we are done: since $e_1v_2(f(\pi))=v_2(p)=e_2$, $e_1 \mid e_2$ and by symmetry $e_1=e_2$ and $v_2(f(\pi))=1$. That $f$ maps units to units follows from the fact that the units of a $p$-adic field may be characterized algebraically, as the only elements that have $n$th roots in the field for infinitely many $n$. $s_{uu}^*$ on ${\mathrm{X}}(l)$ ------------------------------- \[pr:suuact\] In the decomposition , $$\label{eq:suuact} s_{uu}^*(\chi) = -\sigma(\chi), \quad \chi\in {\mathrm{X}}(l),$$ where $\sigma$ denotes the natural Galois action on ${\mathrm{X}}(l)=H^{1}(G_{l},{\mathbb{Q}}/{\mathbb{Z}})$. The homomorphism $s_{uu}^*$ acts on a 2-cocycle $f\in H^{2}(G_{l},\langle u\rangle)$ by $$f(g_1, g_2) \mapsto s_{uu}(f(\sigma^{-1}g_1\sigma, \sigma^{-1}g_2\sigma)) = -f(\sigma^{-1}g_1\sigma, \sigma^{-1}g_2\sigma),$$ since the image of $f$ lies in $\langle u\rangle$ and $s_{uu}$ is the inversion map. Now ${\mathrm{X}}(l)$ appears in under the isomorphisms in . Using the facts that in the natural Galois actions on $H^1(G_l, {\mathbb{Q}}/{\mathbb{Z}})$ and $H^2(G_l, {\mathbb{Z}})$, the action of $\sigma$ on the modules ${\mathbb{Q}}/{\mathbb{Z}}$ and ${\mathbb{Z}}$ is trivial, and that $\sigma$ commutes with coboundary maps $\delta$ [@NSW Prop. 1.5.2], a direct calculation yields $$\delta^{-1} \circ (\pi^*)^{-1} \circ s_{uu}^* \circ \pi^* \circ \delta = -\sigma.$$ $s_{pu}^*$ on ${\mathrm{X}}(l(a_p))$ ------------------------------------ \[pr:spuact\] In the decomposition , $$\label{eq:spuact} s_{pu}^*(\chi) = -\sigma(\operatorname{Cor}_{l(a)/l} (\chi_p)), \quad \chi_p\in {\mathrm{X}}(l(a_p)),$$ where $\sigma$ denotes the natural Galois action on ${\mathrm{X}}(l)=H^{1}(G_{l},{\mathbb{Q}}/{\mathbb{Z}})$. Let $B=\prod_{p(a)=0} \langle u-a\rangle$. 
The homomorphism $s_{pu}^*$ acts on a 2-cocycle $f\in Z^{2}(G_{l}, B)$ by $$f(g_1, g_2) \mapsto (\prod s_{u+a,u}) (f(\sigma^{-1}g_1\sigma, \sigma^{-1}g_2\sigma)),$$ where for each $a$, $s_{u+a,u}(u+a)=\frac{1}{u}$. Further let $\rho\colon B\to \langle u\rangle$ be defined by $$\rho\left( \prod_{p(a)=0} (u-a)^{e_a}\right)= u^{(\sum_{p(a)=0} e_a)},$$ and let $\rho^*$ be the induced map $\rho^*\colon H^2(G_l, B)\to H^2(G_l, \langle u\rangle)$ of cohomology. Observe that $s_{pu}=s_{uu}\circ \rho$, hence $s_{pu}^* = s_{uu}^* \circ \rho^*$. Keeping the notation for $G$, $H$, and $A$ as in , define a $G$-module homomorphism $\nu\colon A \to {\mathbb{Z}}$ by $$\nu(x)=\sum_{\tau H\in G/H} \tau x(\tau^{-1}) =\sum_{\tau H\in G/H} x(\tau^{-1}).$$ We calculate $$\begin{aligned} \rho(\iota(x)) &= \rho\left(\prod_{\tau H\in G/H} (u-\tau(a_p))^{\bar x(\tau)}\right) = \prod_{\tau H\in G/H} u^{\bar x(\tau)} \\ &= u^{(\sum_{\tau H\in G/H} x(\tau^{-1}))} = \pi\left(\sum_{\tau H\in G/H} x(\tau^{-1})\right) = \pi(\nu(x)). \end{aligned}$$ Hence we have a commutative diagram $$\begin{CD} H^2(G, A) @>{\iota^*}>> H^2(G, B)\\ @V{\nu^*}VV @VV{\rho^*}V \\ H^2(G, {\mathbb{Z}}) @>{\pi^*}>> H^2(G, \langle u\rangle)\\ \end{CD}$$ Now $\nu^*\circ sh$ is the corestriction map from $H^2(G_{l(a_p)},{\mathbb{Z}})\to H^2(G_l,{\mathbb{Z}})$ [@NSW Prop. 1.6.4]. Moreover, the coboundary map $\delta$ from $H^1$ to $H^2$ commutes with the corestriction map [@NSW Prop. 1.5.2]. Hence under the isomorphisms in and , $$\begin{aligned} \delta^{-1} \circ (\pi^*)^{-1} \circ \rho^* \circ \iota^* \circ sh \circ \delta & = \delta^{-1} \circ \nu^* \circ sh \circ \delta \\ &= \delta^{-1} \circ (\operatorname{Cor}_{H^2}) \circ \delta = \operatorname{Cor}_{H^1}. \end{aligned}$$ Therefore in the decomposition , $\rho^*$ is the corestriction map, and since $s_{pu}^*=s_{uu}^*\circ \rho^*$, by , $s_{pu}^*=-\sigma \operatorname{Cor}_{l(a)/l}$. 
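The module map $\nu\colon A\to {\mathbb{Z}}$ and the induced-module action $(gx)(g_1)=x(g_1 g)$ can be sanity-checked on a small finite group. The sketch below is our own toy model (it is not the paper's setting: $G=S_3$ stands in for the profinite $G_l$, and $H$, a subgroup of index 3, for $G_{l(a_p)}$); it verifies that $\nu$ is a homomorphism of $G$-modules into the trivial module ${\mathbb{Z}}$:

```python
from itertools import permutations

# G = S_3 as permutation tuples; compose(g, h) is the function g∘h
G = list(permutations(range(3)))
def compose(g, h):                 # (g∘h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))
def inv(g):
    r = [0] * 3
    for i, gi in enumerate(g):
        r[gi] = i
    return tuple(r)

H = [(0, 1, 2), (1, 0, 2)]         # subgroup generated by the transposition (0 1)

# A = Ind_H^G(Z): integer-valued functions on G constant on right cosets Hg
def right_coset(g):
    return frozenset(compose(h, g) for h in H)
cosets = sorted({right_coset(g) for g in G}, key=sorted)

x = {c: v for c, v in zip(cosets, [5, -2, 7])}   # an arbitrary x in A
def ev(x, g):                      # evaluate x at g via its right coset
    return x[right_coset(g)]
def act(g, x):                     # the action (gx)(g1) = x(g1 g)
    return {c: ev(x, compose(next(iter(c)), g)) for c in cosets}

# nu(x) = sum over left cosets tH of x(t^{-1}); the elements t^{-1}
# represent the distinct right cosets, so the sum is coset-independent
left_reps, seen = [], set()
for g in G:
    lc = frozenset(compose(g, h) for h in H)
    if lc not in seen:
        seen.add(lc)
        left_reps.append(g)
def nu(x):
    return sum(ev(x, inv(t)) for t in left_reps)

# nu lands in the trivial module: nu(gx) = nu(x) for every g in G
assert all(nu(act(g, x)) == nu(x) for g in G)
print(nu(x))  # 10
```

The invariance holds because right multiplication by $g$ permutes the right cosets of $H$, which is exactly the observation used in identifying $\nu^*\circ sh$ with the corestriction.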
$s_{p\tilde p}^*$ on ${\mathrm{X}}(l(a_p))$ ------------------------------------------- Let $\operatorname{Cor}\colon \oplus {\mathrm{X}}(l(a_p))\to {\mathrm{X}}(l)$ be the sum $\sum \operatorname{Cor}_{l(a_p)/l}$ of the corestrictions on each summand. \[pr:sppact\] In the decomposition , $$\label{eq:corsigma} \operatorname{Cor}_{l(a_{\tilde p})/l}(s_{p\tilde p}^*(\chi_p)) = \sigma(\operatorname{Cor}_{l(a)/l} (\chi_p)), \quad \chi_p\in {\mathrm{X}}(l(a_p))$$ and if $p=\tilde p$, then $$\label{eq:spppistp} s_{pp}^*(\chi_p) = \tilde \sigma(\chi_p), \quad \chi_p\in {\mathrm{X}}(l(a_p)),$$ where $\tilde\sigma=\sigma\tau$ for a $\tau\in G_l$ such that $\tilde\sigma(a_p)=c/a_p$. If for all $p$, $s_{p\tilde p}^*(\chi_p)= \chi_{\tilde p}$, then $$\label{eq:corsum} \sigma(\operatorname{Cor}\sum \chi_p) = \operatorname{Cor}\sum \chi_p$$ and if additionally $\chi_u\in {\mathrm{X}}(l)$ such that $(1+\sigma)\chi_u = - \operatorname{Cor}\sum \chi_p$, then $$\label{eq:corsum2} \sigma(s_1^*(\chi_u+\sum \chi_p)) = -s_1^*(\chi_u+\sum \chi_p).$$ Equations , , and each follow from the fact that $s^*$ has order 2. For the first result, let $\chi_p\in {\mathrm{X}}(l(a_p))$ be arbitrary and set $\chi_{\tilde p}=s_{p\tilde p}^*(\chi_p)$. Then by applying twice, we see that the component of $s^*(s^*(\chi_p))=\chi_p$ in ${\mathrm{X}}(l)$ is $$s_{uu}^*s_{pu}^*(\chi_p)+s_{\tilde pu}^*(\chi_{\tilde p}),$$ which, by and , is $\operatorname{Cor}\chi_p -\sigma \operatorname{Cor}\chi_{\tilde p}$. Then since this component must be trivial, the first result follows. A similar argument establishes and . For , first observe that if $p=\tilde p$, then the action $a\mapsto c/\sigma(a)$ permutes the roots of $p$. Hence $c/\sigma(a)=a_p$ for some root $a$. Since $G_l$ is transitive on the roots of $p$, $a=\tau(a_p)$ for some $\tau \in G_l$. Hence $a_p=c/\sigma\tau(a_p)$. Let $\tilde \sigma = \sigma \tau$. 
Keeping the notation for $G$, $H$, $A$, and $B$ as in , observe that $s_{pp}^*$ acts on $H^2(G,B)$ by sending an arbitrary 2-cocycle $f(g_1, g_2)$ to $$s_{pp}(f(\sigma^{-1}g_1 \sigma, \sigma^{-1}g_2 \sigma)).$$ Now $\tau^*$ acts on $H^2(G, B)$ trivially [@NSW Prop. 1.6.2], so $s_{pp}^*\circ \tau^*=s_{pp}^*$. However, direct calculation shows that $s_{pp}^*\circ \tau^*$ sends $f(g_1, g_2)$ to $(s_{pp}\circ \tau)(f({\tilde\sigma}^{-1}g_1 \tilde\sigma, \ {\tilde\sigma}^{-1}g_2 \tilde\sigma))$, where $$\begin{aligned} (s_{pp}\circ \tau)\left(\prod_{p(a)=0} (u-a)^{e_a}\right) &= \prod \left(u-\frac{c}{\sigma \tau(a)}\right)^{e_a} \\ &= (u-a_p)^{e_{a_p}} \prod_{a\neq a_p} \left(u-\frac{c}{\sigma\tau(a)}\right)^{e_a}. \end{aligned}$$ Hence $s_{pp}\circ \tau$ acts trivially on $\langle u-a_p\rangle \subset B$, and consequently $\iota^{-1} \circ (s_{pp}\circ \tau) \circ \iota$ acts trivially on the values $x(1)$, $x\in A$. Recall that $(sh)^{-1}$ is induced by the module map $A\to {\mathbb{Z}}$ given by $x\mapsto x(1)=x(H)$ [@NSW Prop. 1.6.3]; hence $(sh)^{-1} \circ (\iota^*)^{-1} \circ s_{pp}^* \circ \tau^* \circ \iota^* \circ sh$ acts on $H^2(H, {\mathbb{Z}})$ by sending a 2-cocycle $f(g_1, g_2)$ to $f({\tilde \sigma}^{-1}g_1 \tilde\sigma, \ {\tilde \sigma}^{-1}g_2 \tilde\sigma)$. As a result, $${\tilde \sigma} = sh^{-1} \circ (\iota^*)^{-1} \circ s_{pp}^* \circ \iota^* \circ sh$$ and since $\tilde\sigma$ commutes with coboundary maps [@NSW Prop. 1.5.2], $s_{pp}^*(\chi_p) = \tilde\sigma(\chi_p)$. The Fixed Subgroup $\operatorname{Br}(l(u))^{\langle s^*\rangle}$ {#se:subgroup} ==================================================== Notation -------- Let ${\mathrm{B}}^G$ denote the fixed subgroup $\operatorname{Br}(l(u))_2^{\langle s^* \rangle}$ of $\operatorname{Br}(l(u))_2$. By and , a direct calculation shows that this group consists of precisely those elements satisfying 1. \[cond1\] $s_{p\tilde p}^* \chi_p = \chi_{\tilde p}$, $\forall p$; 2. 
\[cond2\] $(1+\sigma)\chi_u = - \operatorname{Cor}\sum \chi_p$; and 3. \[cond3\] $(1-\sigma)\beta = s_1^*(\chi_u+\sum \chi_p)$. Let ${\mathrm{X}_{\neq u}}=\oplus_p {\mathrm{X}}(l(a_p))_2$ and ${\mathrm{X}}={\mathrm{X}}(l)_2 \oplus {\mathrm{X}_{\neq u}}$. We further define ${\mathrm{X}_{\neq u}}^G$ and ${\mathrm{X}}^G$ as follows: $$\begin{aligned} {\mathrm{X}_{\neq u}}^G &= \left\{ \sum \chi_p \in {\mathrm{X}_{\neq u}}\colon s_{p\tilde p}^*(\chi_p) = \chi_{\tilde p} \ \forall p\right\}; \\ {\mathrm{X}}^G &= \left\{ \chi = \chi_u + \sum \chi_p \in {\mathrm{X}}\colon (1+\sigma)\chi_u = - \operatorname{Cor}\sum \chi_p, \right. \\ &\phantom{=} \left. \ \ s_{p\tilde p}^* \chi_p = \chi_{\tilde p} \ \forall p \right\}.\end{aligned}$$ Now assume that $k$, and so also $l$, is a number field. Then $D({\mathrm{X}}(l)_2)$ is of finite rank [@NSW Thm. 11.1.2] and its dual $\operatorname{Hom}(D({\mathrm{X}}(l)_2), {\mathbb{Q}}/{\mathbb{Z}})$ is a free ${\mathbb{Z}}_2$-module of finite rank. Applying [@CR Thm. 74.3] and passing to the dual again, we may then decompose $D({\mathrm{X}}(l)_2)$ as $$\label{eq:dchil2} D({\mathrm{X}}(l)_2) = I\oplus N\oplus P,$$ where each of $I$, $N$, and $P$ is a finite direct sum of ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ summands and the natural Galois action $\sigma$ is trivial on $I$, negation on $N$, and permutation of pairs of ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ summands on $P$. Then $D({\mathrm{X}}(l)_2)^{\langle \sigma\rangle}=I\oplus {}_2N\oplus P^{\langle \sigma\rangle}$. A sketch of an elementary proof of is as follows. Let $M$ be a direct sum of finitely many copies of ${\mathbb{Q}}_2/{\mathbb{Z}}_2$, and let $\sigma$ be an order 2 action on $M$. Denote by $V$ the ${\mathbb{F}}_2$-vector space given by the 2-torsion ${}_2M$ of $M$. By linear algebra $V$ decomposes into $V_I\oplus V_P \oplus \sigma(V_P)$. 
Let $W_1$ be a complement of $(V_P+\sigma(V_P))\cap (1+\sigma)M$ in $V\cap (1+\sigma)M$ and $W_{-1}$ a complement of $(V_P+\sigma(V_P))\cap (1-\sigma)M$ in $V\cap (1-\sigma)M$. Using the fact that $M=2M=(1+\sigma)M+(1-\sigma)M$, one shows that $V=W_1 \oplus W_{-1} \oplus V_P \oplus \sigma(V_P)$. Now since $M$ is divisible, the homomorphic image $(1+\sigma)M$ is divisible; we may therefore construct a divisible tower inside $(1+\sigma)M$ over each element of a basis of $W_1$. Denote by $I$ the subgroup generated by these towers. Similarly denote by $N$ the subgroup generated by towers inside $(1-\sigma)M$ constructed over each element of a basis of $W_{-1}$. Finally construct towers over each basis element of $V_P$ and let $P$ be the subgroup generated by these towers and their $\sigma$-conjugates. Observe that a sum $\sum M_i$ of subgroups $M_i$ of $M$ is direct if and only if $\sum (M_i\cap V)=\oplus (M_i\cap V)$. Moreover, if each $M_i$ is divisible and $\sum \dim_{{\mathbb{F}}_2} (M_i\cap V) = \dim_{{\mathbb{F}}_2} V$, then $M=\oplus M_i$. Hence $M=I\oplus N\oplus P$. Each $w\in D({\mathrm{X}}(l)_2)^{\langle \sigma\rangle}$ of order 2 represents a cyclic extension of $l$ of degree 2 which is fixed under the action of $\sigma$. By Kummer theory $w=l(\sqrt{e})$ for $e\in l^*\setminus {l^{*2}}$, and since $w$ is fixed by $\sigma$, $l(\sqrt{e})$ is Galois over $k$. The group $G_{l(\sqrt{e})/k}$ is isomorphic either to ${\mathbb{Z}}/4{\mathbb{Z}}$ or to ${\mathbb{Z}}/2{\mathbb{Z}}\oplus {\mathbb{Z}}/2{\mathbb{Z}}$. One may choose a representative in $k^*$ for the class of $e$ in $l^*/{l^{*2}}$ if and only if $G_{l(\sqrt{e})/k} \cong {\mathbb{Z}}/2{\mathbb{Z}}\oplus {\mathbb{Z}}/2{\mathbb{Z}}$. In this case we write $w=l(\sqrt{e})$, $e\in k^*\setminus k^{*2}$. We also write that $w=0$ satisfies $w=l(\sqrt{1})$. 
Define $W$ to be the exponent 2 subgroup of $D({\mathrm{X}}(l)_2)^{\langle \sigma\rangle}$ consisting of extensions with Klein 4-group over $k$, together with the identity element: $$W=\{ w\in D({\mathrm{X}}(l)_2)^{\langle \sigma\rangle}: \vert w\vert \le 2, \ w=l(\sqrt{e}), \ e\in k^*\}.$$ For an element $w\in D({\mathrm{X}}(l)_2)$, write $w_I$ for the component of $w$ in $I$, and set $W_I = \{ w_I\in I : w\in W\}$. Now define a function $\lambda$ from $W_I$ to the set of ordinals as follows: $$\lambda_w = \sup \left\{ \operatorname{ht}_{{\mathrm{X}_{\neq u}}^G}\left(\sum \chi_p\right) : \left\vert\sum \chi_p \right\vert = 2, \ \ \left(\operatorname{Cor}\sum \chi_p\right)_I = w\right\}.$$ We will repeatedly use the fact that the Ulm length of ${\mathrm{X}}(K)$ for $K$ a number field is less than $\omega 2$ [@FS Thm. 1], and hence that the Ulm lengths of ${\mathrm{X}}_{\neq u}$ and ${\mathrm{X}}$ are at most $\omega 2$. Here, this result implies that for any $w\in W_I$ we have that $\lambda_w \le \omega 2$ or $\lambda_w=\infty$. Reducing to ${\mathrm{X}}^G$ ---------------------------- \[pr:equiv1\] Suppose $k$ is a number field. Then there exists an order 2 element in ${\mathrm{B}}^G$ of height $\omega 2$ if and only if there exists an order 2 element in ${\mathrm{X}}^G$ of height $\omega 2$. \[le:bggei\] Suppose $k$ is a number field and that $\alpha=\beta + \chi\in {\mathrm{B}}^G$ has $\beta\in D(\operatorname{Br}(l))$ and $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)\ge i+1$ for some $i\in {\mathbb{N}}_{\ge 0}$. Then $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)\ge i$. We are given that there exists $\chi_{i+1}\in {\mathrm{X}}^G$ with $2^{i+1}\chi_{i+1}=\chi$. Let $\chi_i=2\chi_{i+1}$. By specifying the element by invariants at each place ${\mathfrak{p}}$ of $l$, we will construct $\beta_i\in \operatorname{Br}(l)$ such that $2^i\beta_i=\beta$ and $(1-\sigma)\beta_i = s_1^*\chi_i$. 
Then $\alpha_i = \beta_i+\chi_i$ satisfies $\alpha_i\in {\mathrm{B}}^G$ and $2^i\alpha_i=\alpha$. Consider places ${\mathfrak{p}}_1, {\mathfrak{p}}_2$ permuted by $\sigma$. Let $x=\beta_{{\mathfrak{p}}_1}$, $y=\beta_{{\mathfrak{p}}_2}$. Since $\alpha\in {\mathrm{B}}^G$, $(1-\sigma)\beta = s_1^*\chi$. Then $(s_1^*\chi)_{{\mathfrak{p}}_1} = ((1-\sigma)\beta)_{{\mathfrak{p}}_1} =x-y$. Let $z=(s_1^* \chi_{i+1})_{{\mathfrak{p}}_1}$. Since $s_1^*$ is a homomorphism, $2^{i+1}z=x-y$. Set $(\beta_i)_{{\mathfrak{p}}_1} = 2z + \frac{1}{2^i}y$ and $(\beta_i)_{{\mathfrak{p}}_2} = \frac{1}{2^i} y$. Then $(2^i\beta_i)_{{\mathfrak{p}}_1} =2^{i+1}z+y=x-y+y=x$ and $(2^i\beta_i)_{{\mathfrak{p}}_2}=y$. Now $$((1-\sigma)\beta_i)_{{\mathfrak{p}}_1} = 2z + \frac{1}{2^i}y - \frac{1}{2^i}y = 2z = (s_1^*\chi_i)_{{\mathfrak{p}}_1}.$$ Moreover, using the fact that $\sigma(s_1^*(\chi)) = -s_1^*(\chi)$ for $\chi\in {\mathrm{X}}^G$, we obtain $$((1-\sigma)\beta_i)_{{\mathfrak{p}}_2} = -2z = (s_1^*\chi_i)_{{\mathfrak{p}}_2}.$$ Thus, for all pairs ${\mathfrak{p}}_j$, $j=1,2$, considered so far, we have $(2^i\beta_i)_{{\mathfrak{p}}_j}=\beta_{{\mathfrak{p}}_j}$ and $((1-\sigma)\beta_i)_{{\mathfrak{p}}_j} = (s_1^*\chi_i)_{{\mathfrak{p}}_j}$. Now consider archimedean, inert, or ramified places ${\mathfrak{p}}$. For any $\gamma\in \operatorname{Br}(l)$, $((1-\sigma)\gamma)_{{\mathfrak{p}}}=0$ at each such ${\mathfrak{p}}$. Moreover, for any $\delta\in {\mathrm{X}}^G$, $\sigma(s_1^*(\delta)) = -s_1^*(\delta)$. Hence $(s_1^*(\delta))_{\mathfrak{p}}\in \{0,1/2\}$. If $\delta$ is divisible by $2$ in ${\mathrm{X}}^G$, then since $s_1^*$ is a homomorphism we have that $(s_1^*(\delta))_{\mathfrak{p}}=0$ at any such ${\mathfrak{p}}$. Hence the condition $((1-\sigma)\beta_i)_{{\mathfrak{p}}}=(s_1^*\chi_i)_{{\mathfrak{p}}}$ is always satisfied for any choice of $(\beta_i)_{{\mathfrak{p}}}$. We choose $(\beta_i)_{{\mathfrak{p}}}$ for these ${\mathfrak{p}}$ as follows.
Since $\beta\in D(\operatorname{Br}(l))$, $\beta_{\mathfrak{p}}=0$ at any archimedean ${\mathfrak{p}}$. We define $(\beta_i)_{\mathfrak{p}}= 0$ at all archimedean ${\mathfrak{p}}$. Now consider the other ${\mathfrak{p}}$, which are inert or ramified over $k$. Let ${\mathfrak{q}}$ denote some such place with $\beta_{{\mathfrak{q}}}=0$; such a place exists because $\beta$ has only finitely many nonzero invariants, while infinitely many places of $l$ are inert over $k$. Now for all such ${\mathfrak{p}}\neq {\mathfrak{q}}$, define $(\beta_i)_{\mathfrak{p}}= \frac{1}{2^i}\beta_{\mathfrak{p}}$. Then for all these ${\mathfrak{p}}$ considered, we have $(2^i\beta_i)_{\mathfrak{p}}=\beta_{\mathfrak{p}}$ and $((1-\sigma)\beta_i)_{\mathfrak{p}}= (s_1^*\chi_i)_{\mathfrak{p}}$. Now $\sum_{{\mathfrak{p}}} \beta_{{\mathfrak{p}}} = \sum_{{\mathfrak{p}}\neq {\mathfrak{q}}} \beta_{{\mathfrak{p}}} = 0$ since $\beta_{{\mathfrak{q}}}=0$. Moreover, $2^i(\beta_i)_{{\mathfrak{p}}} = \beta_{{\mathfrak{p}}}$ for all ${\mathfrak{p}}\neq {\mathfrak{q}}$. Hence $2^i(\sum_{{\mathfrak{p}}\neq {\mathfrak{q}}} (\beta_i)_{{\mathfrak{p}}}) = 0$. Set $(\beta_i)_{{\mathfrak{q}}} = - \sum_{{\mathfrak{p}}\neq {\mathfrak{q}}} (\beta_i)_{{\mathfrak{p}}}$. Then $2^i(\beta_i)_{{\mathfrak{q}}} = 0 = \beta_{{\mathfrak{q}}}$ and $\sum_{{\mathfrak{p}}} (\beta_i)_{{\mathfrak{p}}} = 0$. Hence $\{(\beta_i)_{{\mathfrak{p}}}\}$ defines an element $\beta_i$ of $\operatorname{Br}(l)_2$ satisfying $2^i\beta_i=\beta$ and $(1-\sigma)\beta_i = s_1^*\chi_i$, and we are done. \[le:doublele\] Suppose $k$ is a number field, and let $\alpha=\beta+\chi \in {\mathrm{B}}^G$. Suppose that $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)>0$. Then $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)-1\le \operatorname{ht}_{{\mathrm{B}}^G}(\alpha) \le \operatorname{ht}_{{\mathrm{X}}^G}(\chi)$. By projection from ${\mathrm{B}}^G$ to ${\mathrm{X}}^G$ we see that $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)\le \operatorname{ht}_{{\mathrm{X}}^G}(\chi)$. Suppose that $h=\operatorname{ht}_{{\mathrm{X}}^G}(\chi)$ lies in ${\mathbb{N}}$.
Since $\operatorname{ht}_{{\mathrm{B}}^G} (\alpha)>0$, $\beta_{{\mathfrak{p}}}=0$ at all archimedean ${\mathfrak{p}}$ and $\beta\in D(\operatorname{Br}(l))$. By Lemma \[le:bggei\], $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha) \ge \operatorname{ht}_{{\mathrm{X}}^G}(\chi)-1.$ Now suppose that $h=\omega$. In this case Lemma \[le:bggei\] shows that in fact $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha) = \operatorname{ht}_{{\mathrm{X}}^G}(\chi).$ Since the Ulm length of ${\mathrm{X}}^G$ is less than or equal to $\omega 2$ [@FS Thm. 1], we are left with the cases $h=\omega+n$ for some $n$, $h= \omega 2$, and $h=\infty$. For each $i\ge 1$ such that $h\ge \omega + i$ we do the following. Let $\chi_i\in {\mathrm{X}}^G$ be such that $2^i\chi_i=\chi$ and for each $j \in {\mathbb{N}}$ let $\delta_j\in {\mathrm{X}}^G$ satisfy $2^j\delta_j = \chi_i$. The proof of Lemma \[le:bggei\] shows that we may find $\beta_{i-1}$ such that, with $\chi_{i-1}=2\chi_i$, $\alpha_{i-1}=\beta_{i-1}+\chi_{i-1}\in {\mathrm{B}}^G$ and $2^{i-1}\alpha_{i-1}=\alpha$. By construction $(\beta_{i-1})_{\mathfrak{p}}= 0$ at all archimedean ${\mathfrak{p}}$, and so $\beta_{i-1}\in D(\operatorname{Br}(l))$. Now $\operatorname{ht}_{{\mathrm{X}}^G}(\chi_{i-1})\ge \omega$, and hence we may apply Lemma \[le:bggei\] again to show that $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha_{i-1}) \ge \omega$. Hence $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)\ge \omega + (i-1)$. If $h=\omega+n$ for some $n$, we have that $\omega+(n-1)=\operatorname{ht}_{{\mathrm{X}}^G}(\chi)-1\le \operatorname{ht}_{{\mathrm{B}}^G}(\alpha)$. If $h=\omega 2$, we have shown that $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)=\omega 2$. A similar argument handles the case $h=\infty$, where $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)=\infty$ as well. \[le:constructbeta\] Suppose that $k$ is a number field, $\chi\in {\mathrm{X}}^G$, and $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)>1$.
Then there exists $\alpha=\beta+\chi\in {\mathrm{B}}^G$ such that $\vert \alpha \vert = \vert \chi \vert$ and $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)-1\le \operatorname{ht}_{{\mathrm{B}}^G}(\alpha) \le \operatorname{ht}_{{\mathrm{X}}^G}(\chi)$. Since $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)>1$, $\chi=4\epsilon$ for some $\epsilon\in {\mathrm{X}}^G$. Let $\delta=2\epsilon$ and $\gamma'=s_1^*(\delta)$. Since $\operatorname{ht}_{{\mathrm{X}}^G}(\delta)>0$, $\gamma'_{{\mathfrak{p}}}=0$ at each archimedean place ${\mathfrak{p}}$ of $l$. Similarly, since $\operatorname{ht}_{{\mathrm{X}}^G}(\delta)>0$ and since the image $s_1^*({\mathrm{X}}^G)$ is $\sigma$-negated, we have that $\gamma'_{{\mathfrak{p}}}=0$ at every inert or ramified place ${\mathfrak{p}}$ of $l$, and that at every pair ${\mathfrak{p}}_1, {\mathfrak{p}}_2$ of $\sigma$-permuted places of $l$, $\gamma'_{{\mathfrak{p}}_1} = -\gamma'_{{\mathfrak{p}}_2}$ as well. We will define a $\gamma\in \operatorname{Br}(l)_2$ via its invariants. Let $\gamma_{{\mathfrak{p}}}=0$ at every archimedean place of $l$. For every pair ${\mathfrak{p}}_1, {\mathfrak{p}}_2$ of $\sigma$-permuted places of $l$, let $\gamma_{{\mathfrak{p}}_1}=\gamma'_{{\mathfrak{p}}_1}$ and $\gamma_{{\mathfrak{p}}_2}=0.$ Now there are only finitely many such pairs ${\mathfrak{p}}_1, {\mathfrak{p}}_2$ at which $\gamma'$ has nontrivial invariants. For each such pair ${\mathfrak{p}}_1, {\mathfrak{p}}_2$ choose an inert or ramified place ${\mathfrak{q}}$ of $l$ and set $\gamma_{{\mathfrak{q}}}=-\gamma_{{\mathfrak{p}}_1}$. At all other inert or ramified places ${\mathfrak{q}}$ set $\gamma_{\mathfrak{q}}=0$. By construction $\sum_{\mathfrak{p}}\gamma_{\mathfrak{p}}= 0$, so there exists a $\gamma\in \operatorname{Br}(l)_2$ with invariants $\{\gamma_{\mathfrak{p}}\}$. Furthermore, $((1-\sigma)\gamma)_{\mathfrak{p}}=0$ at all inert, ramified, or archimedean places ${\mathfrak{p}}$.
At pairs ${\mathfrak{p}}_1, {\mathfrak{p}}_2$ of $\sigma$-permuted places, we have $$((1-\sigma)\gamma)_{{\mathfrak{p}}_1}=\gamma'_{{\mathfrak{p}}_1}=-\gamma'_{{\mathfrak{p}}_2}= ((1-\sigma)\gamma)_{{\mathfrak{p}}_2}.$$ Hence $(1-\sigma)\gamma = 2(1-\sigma)\gamma' = 2s_1^*(\delta) = s_1^*(2\delta) = s_1^*\chi$ and $\gamma+\delta\in {\mathrm{B}}^G$. Since the invariants of $\gamma$ are either 0 or equal to a corresponding invariant of $\gamma'$, which is a homomorphic image of $\delta$, $\vert \gamma \vert \le \vert \delta \vert$, and hence $\vert \gamma+\delta \vert = \vert \delta\vert$. Setting $\beta=2\gamma$ and $\alpha= 2(\gamma+\delta)= \beta+\chi$, we then have $\vert \alpha \vert = \vert \chi\vert$. Moreover, $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)>0$ since $\alpha=2(\gamma+\delta)$. Using Lemma \[le:doublele\], we have that $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)-1\le \operatorname{ht}_{{\mathrm{B}}^G}(\alpha) \le \operatorname{ht}_{{\mathrm{X}}^G}(\chi)$. ($\Rightarrow$) Let $\alpha=\beta + \chi$ be an order 2 element in ${\mathrm{B}}^G$ with $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)=\omega 2$. By restriction, we have that $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)\le \operatorname{ht}_{{\mathrm{X}}^G}(\chi)$. Since the Ulm length of ${\mathrm{X}}$ is at most $\omega 2$ [@FS Thm. 1], we have that $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)\in \{\omega 2, \infty\}$. Suppose that $\chi\in D({\mathrm{X}}^G)$. By Lemma \[le:constructbeta\], there exists $\beta'\in \operatorname{Br}(l)_2$ such that $\alpha'=\beta'+\chi\in {\mathrm{B}}^G$, $\vert \alpha'\vert = \vert \chi\vert$, and $\alpha'\in D({\mathrm{B}}^G)$. But then $\gamma=\alpha-\alpha' = \beta-\beta'\in {\mathrm{B}}^G$ satisfies $\operatorname{ht}_{{\mathrm{B}}^G}(\gamma)=\omega 2$. Now an element $b\in \operatorname{Br}(l)_2$ lies in ${\mathrm{B}}^G$ if and only if $(1-\sigma)b=0$. 
But $\operatorname{Br}(l)_2^{\langle \sigma\rangle}$ consists of a restricted direct sum of ${\mathbb{Z}}/2{\mathbb{Z}}$ and ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ summands. Therefore there is no element in $\operatorname{Br}(l)_2\cap {\mathrm{B}}^G$ of height $\omega 2$ and we have a contradiction. Therefore $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)=\omega 2$. ($\Leftarrow$) Now suppose that $\chi\in {\mathrm{X}}^G$ is an order 2 element of height $\omega 2$. By Lemma \[le:constructbeta\], there exists a $\beta\in \operatorname{Br}(l)$ such that $\alpha=\beta+\chi\in {\mathrm{B}}^G$ is of order 2 and $\operatorname{ht}_{{\mathrm{B}}^G}(\alpha)=\omega 2$, and we are done. Reducing to $\lambda$ --------------------- \[pr:order2iflambda\] Suppose $k$ is a number field. Then there exists an order 2 element in ${\mathrm{B}}^G$ of height $\omega 2$ if and only if $\lambda_w=\omega 2$ for some nontrivial $w\in W_I$. \[le:kpinside\] Suppose $k$ is a number field and $\sum\chi_p\in {\mathrm{X}_{\neq u}}^G$ is an order 2 element with height greater than the Ulm length of ${\mathrm{X}}(l)_2$. Then $\operatorname{Cor}\sum \chi_p\in W$. If $\operatorname{Cor}\sum\chi_p=0$, then $0\in W$ and we are done. Otherwise, let $w=\operatorname{Cor}\sum\chi_p$. The order of $w$ is 2, and, since $\operatorname{Cor}$ is a homomorphism from ${\mathrm{X}_{\neq u}}^G$ to the $\sigma$-invariant subgroup of ${\mathrm{X}}(l)$, $w\in D({\mathrm{X}}(l)_2^{\langle \sigma\rangle})\subset D({\mathrm{X}}(l)_2)^{\langle \sigma\rangle}$. Hence $w\in I\oplus {}_2N\oplus P^{\langle\sigma\rangle}$. Consider $p\neq \tilde p$ for which $\operatorname{Cor}\chi_p$ is not trivial. We have $$\operatorname{Cor}(\chi_p + \chi_{\tilde p}) = \operatorname{Cor}(\chi_p + s_{p\tilde p}^*(\chi_p)) = (1+\sigma)\operatorname{Cor}\chi_p.$$ Now $\operatorname{Cor}\chi_p$ is an element of order at most two in ${\mathrm{X}}(l)_2$, therefore represented by $l(\sqrt{e})$ for $e\in l^*$.
Then $(1+\sigma)\operatorname{Cor}\chi_p$ is represented by $l(\sqrt{N_{l/k}(e)})$. Set $z_{p\tilde p}=N_{l/k}(e)$. Now consider $p=\tilde p$ for which $\operatorname{Cor}\chi_p$ is not trivial. Then as in the proof of , there exists a $\tau\in G_l$ such that $a_p=c/\sigma\tau(a_p)$ and we let $\tilde \sigma = \sigma \tau$. Then $\tilde \sigma$ is an automorphism of $l(a_p)$ of order 2 with fixed field $k_p := k(a_p+c/a_p)$. We claim that $[k_p:k]$ is even. Since $\sqrt{d}\notin k_p$ and $[l(a_p):k_p]=2$, $l(a_p)= k_p(\sqrt{d})$. Then $a_p\in l(a_p)$ satisfies $N_{l(a_p)/k_p} a_p=c$, so the quaternion algebra $(c,d)$ splits over $k_p$. But then $[k_p\colon k]$ is even. Since $\chi_p\in {\mathrm{X}}(l(a_p))$ is of order 2, it is represented by $l(a_p)(\sqrt{f})$, where $f$ is determined up to its class in $l(a_p)^*/{l(a_p)^{*2}}$. Moreover, since $s_{pp}^*(\chi_p)= \chi_p$, by , $\tilde\sigma(\chi_p)= \chi_p$, or $\tilde\sigma(f)=f$ in $l(a_p)^*/{l(a_p)^{*2}}$. Hence $N_{l(a_p)/k_p}(f) \in l(a_p)^{*2}$. By Kummer theory, $(k_p^*\cap l(a_p)^{*2})/k_p^{*2}$ consists only of the classes $1$ and $d$. Therefore, modulo $k_p^{*2}$, $N_{l(a_p)/k_p}(f)$ is either $1$ or $d$. Now $\operatorname{Cor}\chi_p$ is represented by $l(\sqrt{e})$, where $e=N_{l(a_p)/l}(f)$, and then $N_{l/k}(e)=N_{k_p/k} N_{l(a_p)/k_p}(f)$. Since $N_{l(a_p)/k_p}(f)$ is $1$ or $d$ mod $k_p^{*2}$ and $N_{k_p/k}(dz^2) = d^{[k_p:k]} N_{k_p/k}(z)^2 \in k^{*2}$, we deduce that $N_{l/k}(e)$ is a square in $k^*$. Then, from the square-class exact sequence ([@La Thm. 3.4]) $$1\to \langle d\cdot {k^*}^2\rangle \to k^*/{k^*}^2 \to l^*/{l^*}^2 \xrightarrow{N_{l/k}} k^*/{k^*}^2$$ we have that, up to squares in $l^*$, $e$ is represented by an element of $k^*$. Set $z_{pp}$ to be this value. For all remaining $p$, set $z_{p\tilde p}=1$. Now $\operatorname{Cor}\sum\chi_p$ is represented by $l(\sqrt{e})$, where $e$ is the product in $l^*$ of $z_{p\tilde p}$ for all $\{p,\tilde p\}$. 
Hence $\operatorname{Cor}\sum\chi_p$ is an element of order at most 2 represented by $l(\sqrt{e})$, where $e$ is a product of elements from $k$. \[le:lambdaomega2\] Suppose $k$ is a number field and $\chi\in {\mathrm{X}}^G$ is an order 2 element with $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)= \omega 2$. Then there exists $\hat\chi \in D({\mathrm{X}}^G)$ with $\vert\hat\chi\vert=2$ such that $w=\chi-\hat\chi\in W_I$ is nontrivial and $\lambda_w=\omega 2$. Write $\chi=\chi_u + \chi_{\neq u}$ with $\chi_{\neq u} = \sum \chi_p$. Since for each $p$ the Ulm length of ${\mathrm{X}}(l(a_p))_2$ is $\omega+n$ for some $n\in {\mathbb{N}}_{\ge 0}$ [@FS Thm. 1] and since $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)=\omega 2$, we have $\chi_{\neq u}\in D({\mathrm{X}_{\neq u}})$. For each summand ${\mathrm{X}}(l(a_p))_2$ with $p=\tilde p$, and for each pair of summands ${\mathrm{X}}(l(a_p))_2\oplus {\mathrm{X}}(l(a_{\tilde p}))_2$ for $p\neq \tilde p$, the divisible subgroup is a finite direct sum of ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ components [@NSW Thm. 11.1.2]. The fixed subgroup of an order 2 action on a finite direct sum of ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ components is a direct sum of ${\mathbb{Z}}/2{\mathbb{Z}}$ and ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ components. Hence for $\tilde p=p$ the $s_{pp}^*$-fixed subgroup of $D({\mathrm{X}}(l(a_p))_2)$, and when $\tilde p\neq p$, the $(s_{p\tilde p}+s_{\tilde pp})^*$-fixed subgroup of $D({\mathrm{X}}(l(a_p))_2\oplus {\mathrm{X}}(l(a_{\tilde p}))_2)$, is a direct sum of ${\mathbb{Z}}/2{\mathbb{Z}}$ and ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ components. Since $\chi_{\neq u}\in 2{\mathrm{X}}^G_{\neq u}$, in each summand or pair of summands, then, the components of $\chi_{\neq u}$ lie in the divisible part of the $\sum_p s_{p\tilde p}^*$-fixed subgroup. Hence $\chi_{\neq u}\in D({\mathrm{X}_{\neq u}}^G)$. 
For each $p$, let $\{\chi_p^{(i)}\}_{i=0}^\infty$ be a divisible tower over $\chi_p$, so that $\{\sum\chi_p^{(i)}\}_{i=0}^\infty \subset {\mathrm{X}_{\neq u}}^G$ is a divisible tower over $\chi_{\neq u}$. Since $\operatorname{ht}_{{\mathrm{X}}^G}(\chi)=\omega 2$ and the Ulm length of ${\mathrm{X}}(l)_2$ is $\omega+n$ for some $n\in {\mathbb{N}}_{\ge 0}$, $\chi_u\in D({\mathrm{X}}(l)_2)$. Following the decomposition $D({\mathrm{X}}(l)_2)=I\oplus N\oplus P$, write $\chi_u=w_I+w_N+w_P$. For each pair ${\mathbb{Q}}_2/{\mathbb{Z}}_2\oplus {\mathbb{Q}}_2/{\mathbb{Z}}_2$ of $\sigma$-permuted summands in $P$, denote the components of $z\in P$ in these summands by $z_s$ and $z_t$. Define $$(w_P^{(i)})_s = - (\operatorname{Cor}\sum \chi_p^{(i)})_s - \frac{1}{2^i} (w_P)_t$$ and $(w_P^{(i)})_t = \frac{1}{2^i} (w_P)_t$. (We denote by $(1/2^i)(w_P)_t$ some element yielding $(w_P)_t$ under multiplication by $2^i$, and we fix this element for the duration.) Then $\vert w_P^{(i)}\vert \le 2^{i+1}$ and $\{w_P^{(i)}\}_{i=0}^\infty$ is a divisible tower over $w_P^{(0)}=w_P$ since $(1+\sigma)w_P = (-\operatorname{Cor}\sum \chi_p)_P$ implies $((1+\sigma)w_P)_s = (-\operatorname{Cor}\sum \chi_p)_s$ and $(w_P)_s = (-\operatorname{Cor}\sum \chi_p)_s - (w_P)_t$. Now $$((1+\sigma)w_P^{(i)})_s = - (\operatorname{Cor}\sum \chi_p^{(i)})_s = ((1+\sigma)w_P^{(i)})_t.$$ Furthermore, the image of $\operatorname{Cor}$ on the divisible tower $\{\sum\chi_p^{(i)}\}_{i=0}^\infty$ over $\chi_{\neq u}=\sum \chi_p$ lies in the $\sigma$-invariant part of $P$, hence has components lying in the diagonals of the pairs ${\mathbb{Q}}_2/{\mathbb{Z}}_2\oplus {\mathbb{Q}}_2/{\mathbb{Z}}_2$. Hence $((1+\sigma)w_P^{(i)})_t = -(\operatorname{Cor}\sum \chi_p^{(i)})_t$ as well. Since $N$ is divisible, we may choose a divisible tower $\{w_N^{(i)}\}_{i=0}^\infty\subset N$ over $w_N$. Now $(1+\sigma)w_N^{(i)}=0$ since $\sigma$ acts by negation on $N$.
Furthermore, the image of $\operatorname{Cor}$ on the divisible tower over $\chi_{\neq u}$ lies in $D({\mathrm{X}}(l)_2)$ and is $\sigma$-invariant, hence has zero component in $N$. Hence $(1+\sigma)(w_N^{(i)}+w_P^{(i)})= - (\operatorname{Cor}\sum \chi_p^{(i)})_{N\oplus P}$. Finally set $\hat w_I^{(i)} = (-\operatorname{Cor}\sum \chi_p^{(i+1)})_I$. Then since $\sigma$ acts trivially on $I$, $$(1+\sigma)(\hat w_I^{(i)})= 2\hat w_I^{(i)}=2(-\operatorname{Cor}\sum \chi_p^{(i+1)})_I=(-\operatorname{Cor}\sum\chi_p^{(i)})_I.$$ Moreover, $\{\hat w_I^{(i)}\}_{i=0}^\infty$ is a divisible tower over $\hat w_I := \hat w_I^{(0)}$. Note that $\hat w_I$ has order at most 2 because $(-\operatorname{Cor}\sum \chi_p)_I = ((1+\sigma)\chi_u)_I = (1+\sigma)w_I = 0$ since $\chi\in {\mathrm{X}}^G$ and $w_I$ is of order 2. Let $\hat\chi_u^{(i)}=\hat w_I^{(i)}+w_N^{(i)}+w_P^{(i)}$ and $\hat\chi^{(i)}=\hat\chi_u^{(i)}+\sum \chi_p^{(i)}$. Then $\{\hat\chi^{(i)}\}_{i=0}^\infty \subset {\mathrm{X}}^G$ is a divisible tower over $\hat\chi := \hat\chi_u^{(0)} + \sum \chi_p$. Clearly $w := \chi-\hat \chi=w_I-\hat w_I\in I$ and $\vert w\vert \le 2$. If $w=0$ then $\chi=\hat\chi$ and we have a contradiction: $\chi\in D({\mathrm{X}}^G)$. Hence $\vert w\vert = 2$ and $\operatorname{ht}_{{\mathrm{X}}^G}(w)=\omega 2$. Now since $\operatorname{ht}_{{\mathrm{X}}^G}(w)=\omega 2$, for any $n\in {\mathbb{N}}$ there exists a $\chi'\in {\mathrm{X}}^G$ of height $\omega+n$ with $w=2\chi'$. We restrict $\omega+n$ to ordinals greater than the Ulm length of ${\mathrm{X}}(l)_2$. Write $\chi'=\chi'_u+\sum\chi'_p$. Then $\chi'_u\in D({\mathrm{X}}(l)_2)$. Moreover, $(1+\sigma)\chi'_u=-\operatorname{Cor}\sum \chi'_p$. Now $\vert \sum\chi'_p \vert$ is at most 2 since $w=2\chi'$ has no component in ${\mathrm{X}_{\neq u}}$. If the order is 2, then by Lemma \[le:kpinside\], $-\operatorname{Cor}\sum\chi'_p$ lies in $W$; if the order is 1, then $-\operatorname{Cor}\sum\chi'_p=0\in W$.
Since $\chi'_u\in D({\mathrm{X}}(l)_2)$, write $\chi'_u = \chi'_I+\chi'_N+\chi'_P$ according to the decomposition $I\oplus N\oplus P$. Since $2\chi'=w\in I$, $\chi'_N$ and $\chi'_P$ are of order at most 2. Hence $(1+\sigma)\chi'_I= 2\chi'_I=w_I=w$ and $(1+\sigma)\chi'_N= 2\chi'_N=0$. Therefore $(1+\sigma)\chi'_u=-\operatorname{Cor}\sum \chi'_p = w+(1+\sigma)\chi'_P\in W$. Therefore $(-\operatorname{Cor}\sum \chi'_p)_I = w$, and $\operatorname{ht}_{{\mathrm{X}_{\neq u}}^G}(\sum\chi'_p)\ge \operatorname{ht}_{{\mathrm{X}}^G}(\chi')=\omega+n$. Hence, by considering $-\sum\chi'_p$ instead of $\sum\chi'_p$, we have $\lambda_w\ge \omega 2$. Now suppose that $\lambda_w=\infty$. Then since the Ulm length of ${\mathrm{X}}^G_{\neq u}$ is at most $\omega 2$ [@FS Thm. 1], there exists $\chi_{\neq u}=\sum \chi_p\in {\mathrm{X}_{\neq u}}^G$ of order at most 2 with $\operatorname{ht}_{{\mathrm{X}_{\neq u}}^G}(\chi_{\neq u})=\infty$ and $(\operatorname{Cor}\chi_{\neq u})_I=w$. Let $\{\sum \chi_p^{(i)}\}_{i=0}^\infty \subset {\mathrm{X}}^G_{\neq u}$ be a divisible tower over $\chi_{\neq u}$. Now let $\hat P$ be a finite direct sum of ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ summands in $P$ such that $\hat P \oplus \sigma(\hat P) = P$. For each $i\ge 0$, set $\chi_u^{(i)}=(-\operatorname{Cor}\sum\chi_p^{(i+1)})_I + (-\operatorname{Cor}\sum\chi_p^{(i)})_{\hat P}$. Then $$((1+\sigma)(\chi_u^{(i)}))_I = 2(\chi_u^{(i)})_I= (-2\operatorname{Cor}\sum\chi_p^{(i+1)})_I= (-\operatorname{Cor}\sum\chi_p^{(i)})_I,$$ and $$((1+\sigma)(\chi_u^{(i)}))_{\hat P} = (\chi_u^{(i)})_{\hat P}= (-\operatorname{Cor}\sum\chi_p^{(i)})_{\hat P}.$$ Moreover, since the image of $\operatorname{Cor}$ on ${\mathrm{X}_{\neq u}}^G$ is $\sigma$-invariant, the equality holds over $\sigma(\hat P)$ as well, and holds over $N$ since both sides must be trivial on $N$. Then for $i\ge 0$, $\chi_u^{(i)}+ \sum\chi_p^{(i)}\in {\mathrm{X}}^G$.
Now $$\begin{aligned} 2\chi_u^{(0)} &= 2((-\operatorname{Cor}\sum\chi_p^{(1)})_I+ (-\operatorname{Cor}\sum\chi_p^{(0)})_{\hat P})\\ &=(-\operatorname{Cor}\sum\chi_p^{(0)})_I=-w. \end{aligned}$$ Since $\vert w\vert \le 2$, $-w=w$. Let $\chi'_0=w$ and for $i\ge 1$, $\chi'_i=\chi_u^{(i-1)}+\sum\chi_p^{(i-1)}$. Then $2\chi'_1=w$ and $\{\chi'_i\}_{i=0}^\infty \subset {\mathrm{X}}^G$ is a divisible tower over $w$, so $w\in D({\mathrm{X}}^G)$. But then $\chi=w+\hat\chi\in D({\mathrm{X}}^G)$, a contradiction. ($\Rightarrow$) By Proposition \[pr:equiv1\], if there is an order 2 element in ${\mathrm{B}}^G$ of height $\omega 2$, then there is an order 2 element in ${\mathrm{X}}^G$ of height $\omega 2$. By Lemma \[le:lambdaomega2\], there exists a nontrivial $w\in W_I$ with $\lambda_w=\omega 2$. ($\Leftarrow$) Suppose that $\lambda_w=\omega 2$ for some nontrivial $w\in W_I$. We claim that there is an order 2 element $\hat w$ in ${\mathrm{X}}^G$ with $\operatorname{ht}_{{\mathrm{X}}^G}(\hat w)=\omega 2$. Since $\lambda_w=\omega 2$, for each ordinal $\omega+i$ greater than the Ulm length of ${\mathrm{X}}(l)_2$ there exists an element $\chi_{\neq u}=\sum\chi_p\in {\mathrm{X}_{\neq u}}^G$ with $\operatorname{ht}_{{\mathrm{X}_{\neq u}}^G}(\chi_{\neq u})=\omega+i$ and $-(\operatorname{Cor}\chi_{\neq u})_I=-w=w$. For each such ordinal $\omega+i$, we proceed as follows. Let $\chi_{\neq u}^{(i)}\in {\mathrm{X}_{\neq u}}^G$ be an element such that $2^i\chi_{\neq u}^{(i)}=\chi_{\neq u}$ and for each $j\in {\mathbb{N}}$ let $\delta_{\neq u}^{(j)}\in {\mathrm{X}_{\neq u}}^G$ satisfy $2^j\delta_{\neq u}^{(j)}=\chi_{\neq u}^{(i)}$. Now set $\chi_u^{(i)}=-\operatorname{Cor}\chi_{\neq u}^{(i)}$ and $\chi^{(i)}=\chi_u^{(i)}+\chi_{\neq u}^{(i-1)}$. 
Then, since the image of $-\operatorname{Cor}$ on ${\mathrm{X}_{\neq u}}^G$ is $\sigma$-invariant, $$(1+\sigma)\chi_u^{(i)}=2\chi_u^{(i)}=-2 \operatorname{Cor}\chi_{\neq u}^{(i)}=-\operatorname{Cor}\chi_{\neq u}^{(i-1)}$$ and we have $\chi^{(i)}\in {\mathrm{X}}^G$. Moreover, $2^i \chi^{(i)} = -\operatorname{Cor}\chi_{\neq u}$ with $-(\operatorname{Cor}\chi_{\neq u})_I = w$. Continuing on, for each $j\ge 2$, set $\delta_u^{(j)}=-\operatorname{Cor}\delta_{\neq u}^{(j)}$ and $\delta^{(j)}=\delta_u^{(j)}+ 2\delta_{\neq u}^{(j)}$; for $j=1$ set $\delta_u^{(1)}=-\operatorname{Cor}\delta_{\neq u}^{(1)}$ and $\delta^{(1)}=\delta_u^{(1)}+ \chi_{\neq u}^{(i)}$. Then $2^j\delta^{(j)}=\chi^{(i)}$ and as before, $\delta^{(j)}\in {\mathrm{X}}^G$ for each $j$. Hence we have shown that $w'=-\operatorname{Cor}\chi_{\neq u}$ is an order 2 element in ${\mathrm{X}}(l)^{\langle\sigma\rangle}$ with $\operatorname{ht}_{{\mathrm{X}}^G}(w')\ge \omega+i$ and $w'_I=w$. In fact, because the height is greater than the Ulm length of ${\mathrm{X}}(l)_2$, $w'\in D({\mathrm{X}}(l)_2)^{\langle\sigma\rangle}$. Hence for every $\omega+i$ we have produced an order 2 element $w'_i\in D({\mathrm{X}}(l)_2)^{\langle\sigma\rangle}$ which satisfies $\operatorname{ht}_{{\mathrm{X}}^G}(w'_i)\ge \omega+i$ and $(w'_i)_I=w$. Now the divisible subgroup of ${\mathrm{X}}(l)_2$ is a finite direct sum of ${\mathbb{Q}}_2/{\mathbb{Z}}_2$ summands [@NSW Thm. 11.1.2], and hence its exponent 2 subgroup is finite. Hence for some $\hat w\in D({\mathrm{X}}(l)_2)^{\langle \sigma\rangle}$, $\operatorname{ht}_{{\mathrm{X}}^G}(\hat w)\ge \omega + i_n$ for an unbounded strictly increasing sequence $\{i_n\}_{n=1}^\infty$ of natural numbers. Therefore there exists a $\hat w$ of order 2 in $D({\mathrm{X}}(l)_2)^{\langle\sigma\rangle} \subset {\mathrm{X}}^G$ with $\operatorname{ht}_{{\mathrm{X}}^G}(\hat w)\ge \omega 2$ and $\hat w_I=w$. Now suppose that $\hat w$ is divisible in ${\mathrm{X}}^G$.
Let $\{\chi^{(i)}\}_{i=0}^\infty \subset {\mathrm{X}}^G$ be a divisible tower over $\hat w$. Write $\chi^{(i)}=\chi_u^{(i)}+\chi_{\neq u}^{(i)}$ for each $i$, and consider $\chi'=\chi_{\neq u}^{(1)}$, necessarily of order less than or equal to 2 since $\chi_{\neq u}^{(0)}=0$ because $\hat w\in {\mathrm{X}}(l)_2$. Then $\{\chi_{\neq u}^{(i+1)}\}_{i=0}^\infty \subset {\mathrm{X}_{\neq u}}^G$ is a divisible tower over $\chi'$. Since $\chi^{(1)}\in {\mathrm{X}}^G$, we have that $(1+\sigma)\chi_u^{(1)}=-\operatorname{Cor}\chi_{\neq u}^{(1)}$. Restricting the equation to $I$, we have $(\chi_u^{(0)})_I = (-\operatorname{Cor}\chi')_I$, and the left hand side is in fact $(\hat w)_I=w$. Hence $\chi'$ is an order 2 element in ${\mathrm{X}_{\neq u}}^G$ with $(\operatorname{Cor}\chi')_I=w$; we have then that $\lambda_{w}=\infty$, a contradiction. Proof of Main Theorem {#se:proof} ===================== We show that any order two element $\alpha$ in $\operatorname{Br}(E)$ of height at least $\omega 2$ is divisible. Suppose $\alpha$ is an order two element in $\operatorname{Br}(E) = H^{2}(G_{k},\bar k(u)^{*})$ of height at least $\omega 2$. Then the image $\phi(\alpha)$ of $\alpha$ is an element of ${\mathrm{B}}^G$ of height at least $\omega 2$. Since the Leopoldt conjecture holds for $l$ and $2$, there is only one ${\mathbb{Z}}_2$-extension of $l$. Since $l$ is the quadratic subextension of $k^{cyc}/k$, the ${\mathbb{Z}}_2$-extension of $l$ is precisely $k^{cyc}$. Hence $D({\mathrm{X}}(l)_2)\cong {\mathbb{Q}}_2/{\mathbb{Z}}_2$ and $\sigma$ acts trivially on $D({\mathrm{X}}(l)_2)$. In the decomposition $D({\mathrm{X}}(l)_2)=I\oplus N\oplus P$, we have $N=P=0$ and $I\cong {\mathbb{Q}}_2/{\mathbb{Z}}_2$. But since the quadratic subextension of $k^{cyc}/l$ is cyclic of order 4 over $k$, $W=0$. Hence by Proposition \[pr:order2iflambda\], there exists no order 2 element of ${\mathrm{B}}^G$ of height $\omega 2$. Hence $\phi(\alpha)$ is divisible.
If $\phi(\alpha)\neq 0$ and $\{\tilde \alpha_i\}$ is a divisible tower over $\phi(\alpha)$, then we may find preimages $\alpha_{n}\in \operatorname{Br}(E)$ such that $\phi(\alpha_{n})=\tilde\alpha_{n}$ and $2^{m}\alpha_{n} = \alpha_{n-m}$, as follows. Let $\alpha_{n}=2\cdot \phi^{-1}(\tilde\alpha_{n+1})$. Since the kernel of $\phi$ is the relative Brauer group $\operatorname{Br}(El/E)$ and is therefore of exponent 2, this map is well-defined. We calculate $$\phi(\alpha_{n})=\phi(2\cdot \phi^{-1}(\tilde \alpha_{n+1}))= 2 \cdot \phi(\phi^{-1}(\tilde \alpha_{n+1}))=2 \cdot \tilde \alpha_{n+1}=\tilde \alpha_{n}$$ and $$\begin{aligned} 2^{m}\alpha_{n} & =2^{m}\cdot 2\cdot \phi^{-1}(\tilde\alpha_{n+1}) =2\cdot 2^{m}\cdot \phi^{-1}(\tilde\alpha_{n+1}) \\ &=2\cdot \phi^{-1}(2^{m}\tilde\alpha_{n+1})= 2\cdot \phi^{-1}(\tilde\alpha_{n-m+1}) =\alpha_{n-m}. \end{aligned}$$ Hence $\{\alpha_n\}$ is a divisible tower over $\alpha$ and $\alpha$ is a divisible element of $\operatorname{Br}(E)$. If $\phi(\alpha)=0$, $\alpha$ lies in the kernel of $\phi$ and so is split by base change to $El=l(u)$. Hence $\alpha$ is represented by the class of a central simple algebra with $E(\sqrt{d})/E$ as a maximal subfield. Suppose that $\alpha=\epsilon_1$ for a quaternion algebra $\epsilon_{1}= (d,e)_{E}$ with $e\in E^{\times}$. Now since $k^{cyc}$ and $E$ are linearly disjoint over $k$, $k^{cyc}E$ is a ${\mathbb{Z}}_{2}$-extension of $E$ containing $El$. Let $k^{cyc}_nE$ be the ${\mathbb{Z}}/2^{n}{\mathbb{Z}}$ layer of $k^{cyc}E/E$, and choose generators $\sigma_{n}\in G_{k^{cyc}_{n}E/E}$ satisfying ${\sigma_{n}}\vert_{k_{n-1}^{cyc}E} = \sigma_{n-1}$. Then the cyclic algebras $\epsilon_{n} = (k^{cyc}_{n}E/E,\langle \sigma_{n}\rangle, e)$ are each of order $2^{n}$ in $\operatorname{Br}(E)$ and, moreover, $2^{m}\epsilon_{n}=\epsilon_{n-m}$. Hence $\alpha$ is divisible in $\operatorname{Br}(E)$. We have shown that $U_2(\omega 2, \operatorname{Br}(E)) = 0$. 
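The key point in the preimage construction $\alpha_n = 2\cdot\phi^{-1}(\tilde\alpha_{n+1})$ above is that doubling kills the ambiguity in the choice of $\phi^{-1}$, because the kernel of $\phi$ has exponent 2. A toy sketch with $\phi$ the reduction map ${\mathbb{Z}}/16\to {\mathbb{Z}}/8$, whose kernel $\{0,8\}$ has exponent 2 (the groups and the truncated tower are illustrative stand-ins, not the paper's $\operatorname{Br}(E)$ and ${\mathrm{B}}^G$):

```python
# Toy model of lifting a divisible tower through a map whose kernel
# has exponent 2: phi is reduction Z/16 -> Z/8, with kernel {0, 8}.
# Illustrative only; the paper's phi is Br(E) -> B^G with kernel
# the exponent-2 group Br(El/E).

A_MOD, B_MOD = 16, 8

def phi(a):
    return a % B_MOD

def preimages(b):
    return [a for a in range(A_MOD) if phi(a) == b]

# Doubling a preimage is independent of the choice, since any two
# preimages differ by a kernel element of order dividing 2.
for b in range(B_MOD):
    doubled = {(2 * a) % A_MOD for a in preimages(b)}
    assert len(doubled) == 1

def lift(b):
    """alpha = 2 * phi^{-1}(b), well-defined by the check above."""
    return (2 * preimages(b)[0]) % A_MOD

# A (truncated) divisible tower over 4 in Z/8: 2*1 = 2 and 2*2 = 4.
tower = [4, 2, 1]

# alpha_n = 2 * phi^{-1}(tower[n+1]) satisfies phi(alpha_n) = tower[n]
# and 2 * alpha_{n+1} = alpha_n.
alphas = [lift(tower[n + 1]) for n in range(len(tower) - 1)]
for n, a in enumerate(alphas):
    assert phi(a) == tower[n]
assert (2 * alphas[1]) % A_MOD == alphas[0]
```

In the finite model the tower must stop, but each step exhibits the two identities $\phi(\alpha_n)=\tilde\alpha_n$ and $2\alpha_{n+1}=\alpha_n$ used in the proof.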
By [@FSS1] and [@FSS2], $U_2(\omega 2, \operatorname{Br}(k(t))) = 0$ and all other Ulm invariants of $\operatorname{Br}(E)$ and $\operatorname{Br}(k(t))$ are identical. Therefore $\operatorname{Br}(E)\cong \operatorname{Br}(k(t))$. Acknowledgments {#acknowledgments .unnumbered} =============== The second author thanks the Department of Mathematics at the Technion—Israel Institute of Technology for its hospitality during 1998–1999. [NSW]{} M. Auslander and A. Brumer. Brauer groups of discrete valuation rings. *Nederl. Akad. Wetensch. Proc. Ser. A* **30** (1968), 286–296. A. Brumer. On the units of algebraic number fields. *Mathematika* **14** (1967), 121–124. C. Curtis and I. Reiner. *Representation theory of finite groups and associative algebras*. Pure and Applied Mathematics 11. New York: Wiley-Interscience, 1962. D. K. Faddeev. Simple algebras over a field of algebraic functions of one variable. *Trudy Mat. Inst. Steklov* **38** (1951), 321–344; *Amer. Math. Soc. Transl.* II **3** (1956), 15–38. B. Fein and M. Schacher. Brauer groups and character groups of function fields. *J. Algebra* **61** (1979), 249–255. B. Fein, M. Schacher, and J. Sonn. Brauer groups of rational function fields. *Bull. Amer. Math. Soc. (N.S.)* **1** (1979), no. 5, 766–768. B. Fein, M. Schacher, and J. Sonn. Brauer groups of fields of genus zero. *J. Algebra* **114** (1988), no. 2, 479–483. G. Janusz. Automorphism groups of simple algebras and group algebras. *Representation theory of algebras (Proc. Conf., Temple Univ., Philadelphia, Pa., 1976)*. Lecture Notes in Pure Appl. Math. 37. New York: Dekker, 1978, pp. 381–388. T. Y. Lam. *The algebraic theory of quadratic forms*, revised 2nd printing. Mathematics Lecture Note Series. Reading, Mass.: Benjamin/Cummings Publishing Co., Inc., 1980. J. Neukirch, A. Schmidt, and K. Wingberg. *Cohomology of number fields*. Grundlehren der mathematischen Wissenschaften 323. Berlin: Springer-Verlag, 2000. 
[^1]: $^*$Research supported by the Fund for Promotion of Research at the Technion. [^2]: $^{**}$Research supported in part by an International Research Fellowship, awarded by the National Science Foundation (INT–980199) and held at the Technion—Israel Institute of Technology during 1998–1999, and a Young Investigator Grant from the National Security Agency (MDA904-02-1-0061).
--- author: - 'Philip D. Engelke' - 'Department of Physics and Astronomy, The Johns Hopkins University' bibliography: - 'mybib.bib' title: ' MOND Fit of Nature Physics 11:245 Mass Distribution Model to Rotation Curve Data' --- In a recent note, @ioc15b analyze the consistency of Modified Newtonian Dynamics (MOND) with their compiled Milky Way data and baryonic mass distribution models from @ioc15a, looking especially at whether they recover the canonical value of the MOND critical acceleration $a_0$ when fitting two alternate versions of the MOND interpolation function with $a_0$ treated as an adjustable parameter. In this way, they tested the “standard” interpolation function (the original, proposed by @mil83) and the “simple” interpolation function [@fam05]. They report that the standard interpolation function requires a different value of $a_0$ from that used for external galaxies in order to fit their Milky Way data, whereas the simple interpolation function can fit the observed rotation curve for “a subset of models” [@ioc15b] using the traditional $a_0$ value. However, they do not explicitly show in their paper a plot of the resulting MOND fit through the rotation curve data. We plot it here using the simple interpolation function, which, in comparisons against the standard interpolation, has been found to give better fits to the rotation curves of a number of external galaxies [@gen11]. We read 450 points from the Newtonian rotation curve shown as a thin black line directly from Figure 2 in @ioc15a using online image pixel coordinate software, and fed the values into the MOND formula. The result, when superimposed on the same figure, passes strikingly right through the red points showing the compilation of real observational Milky Way rotation curve data. We display these results in Figure 1 of this paper, with our MOND rotation curve prediction plotted in green on top of a copy of Figure 2 from @ioc15a.
![image](MONDResults.jpg){width="100.00000%"} Our MOND rotation curve prediction was calculated as follows. We begin with the MOND equation for a general interpolation function $\mu(x)$, where $x = a/a_0$ and $a_0 = 1.2 \times 10^{-10}$ m/s$^2$. $$a = \frac{F}{m\mu(x)}.$$ Defining $g$ as the acceleration predicted under Newtonian mechanics and gravity, the acceleration $a$ predicted by MOND is given by $$a\mu(x) = g.$$ The simple interpolation function is given by $$\mu(x) = \frac{x}{1 + x}$$ which, when inserted into the MOND acceleration equation, yields $$a\frac{\frac{a}{a_0}}{1 + \frac{a}{a_0}} = g.$$ Rearranging the equation, we are faced with a quadratic equation in $a$: $$a^2 - ga - ga_0 = 0.$$ Solving for $a$, we find $$a = \frac{g + \sqrt{g^2 + 4ga_0}}{2}.$$ To find the circular velocities, we write $$a = \frac{v_c^2}{R}$$ which means that $$v_c = \sqrt{aR}.$$ @ioc15a plot angular circular velocity $v_c / R$ in terms of km/s per kpc, so to compare the MOND predicted circular velocities to the data that they plotted, we find the MOND angular circular velocities in the same units as they do by converting the circular velocities into km/s and dividing by the distance from the Galactic center in kpc. This one visually striking fit is of course not a conclusive demonstration that MOND using the simple interpolation function describes the rotation curve of the Milky Way, because the baryonic mass distribution shown as a thin black line in @ioc15a Figure 2 is but one of many possible models, and there could be systematic errors in the compilation of the data, as pointed out by @mcg15. The analysis performed in @ioc15b is valuable and extensive, and shows that MOND with the simple interpolation function cannot be ruled out as a fit to the Milky Way rotation curve. However, we believe that this visual fit to the rotation curve data is insightful as a supplement to the reports of @ioc15b.
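The steps above can be collected into a short numerical sketch (the function names are ours; $a_0$ is the canonical value quoted above, and all quantities are in SI units):

```python
import math

A0 = 1.2e-10  # MOND critical acceleration a_0 in m/s^2, canonical value


def mond_acceleration(g, a0=A0):
    """Positive root of a^2 - g*a - g*a0 = 0: the MOND acceleration for
    Newtonian acceleration g under the simple interpolation function."""
    return 0.5 * (g + math.sqrt(g * g + 4.0 * g * a0))


def circular_velocity(g, R):
    """Circular velocity v_c = sqrt(a * R) at galactocentric radius R."""
    return math.sqrt(mond_acceleration(g) * R)
```

In the Newtonian regime ($g \gg a_0$) the root reduces to $a \approx g$, while deep in the MOND regime ($g \ll a_0$) it approaches $a \approx \sqrt{g a_0}$, which is what boosts the predicted circular velocities in the outer Galaxy.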
The results are consistent with the findings of @mcg08, a similar MOND fit to the Milky Way rotation curve. They are also consistent with studies of MOND fits to other galaxies using the simple interpolation function, such as @gen11. NOTE: Several corrections have been made in this update to arXiv:1505.06174. We apologize for the previous mischaracterizations. Famaey, B. & Binney, J. 2005, MNRAS, 363, 603 Gentile, G., Famaey, B., & de Blok, W. J. G. 2011, A&A, 527, A76 Iocco, F., Pato, M., & Bertone, G. 2015, Nature Physics, 11, 245 Iocco, F., Pato, M., & Bertone, G. 2015, arXiv:1505.05181 McGaugh, S. 2008, ApJ, 683, 137 McGaugh, S., Lelli, F., Pawlowski, M., Angus, G., Bienaymé, O., Bland-Hawthorn, J., de Blok, E., Famaey, B., Fraternali, F., Freeman, K., Gentile, G., Ibata, R., Kroupa, P., Lüghausen, F., McMillan, P., Merritt, D., Minchev, I., Monari, G., D’Onghia, E., Quillen, A., Sanders, B., Sellwood, J., Siebert, A., & Zhao, H. 2015, arXiv:1503.07813 Milgrom, M. 1983, ApJ, 270, 365
--- author: - | [**Stephen Mussmann[^1]**]{} , [**Daniel Levy$^*$**]{}, [**Stefano Ermon**]{}\ Department of Computer Science\ Stanford University\ Stanford, CA 94305\ `{mussmann,danilevy,ermon}@cs.stanford.edu`\ bibliography: - 'bibliography.bib' title: 'Fast Amortized Inference and Learning in Log-linear Models with Randomly Perturbed Nearest Neighbor Search' --- ### References {#references .unnumbered} [^1]: Both authors contributed equally.
--- abstract: | Reputation mechanisms offer an effective alternative to verification authorities for building trust in electronic markets with moral hazard. Future clients guide their business decisions by considering the feedback from past transactions; if truthfully exposed, cheating behavior is sanctioned and thus becomes irrational. It therefore becomes important to ensure that rational clients have the right incentives to report honestly. As an alternative to side-payment schemes that explicitly reward truthful reports, we show that honesty can emerge as a rational behavior when clients have a repeated presence in the market. To this end we describe a mechanism that supports an equilibrium where truthful feedback is obtained. Then we characterize the set of pareto-optimal equilibria of the mechanism, and derive an upper bound on the percentage of false reports that can be recorded by the mechanism. An important role in the existence of this bound is played by the fact that rational clients can establish a reputation for reporting honestly. author: - | Radu Jurca radu.jurca@epfl.ch\ Boi Faltings boi.faltings@epfl.ch\ Ecole Polytechnique Fédérale de Lausanne (EPFL)\ Artificial Intelligence Laboratory (LIA)\ CH-1015 Lausanne, Switzerland\ <http://liawww.epfl.ch> title: Obtaining Reliable Feedback for Sanctioning Reputation Mechanisms --- Introduction ============ The availability of ubiquitous communication through the Internet is driving the migration of business transactions from direct contact between people to electronically mediated interactions. People interact electronically either through human-computer interfaces or through programs representing humans, so-called agents. In either case, no physical interactions among entities occur, and the systems are much more susceptible to fraud and deception. Traditional methods to avoid cheating involve cryptographic schemes and *trusted third parties* (TTP’s) that oversee every transaction.
Such systems are very costly, introduce potential bottlenecks, and may be difficult to deploy due to the complexity and heterogeneity of the environment: e.g., agents in different geographical locations may be subject to different legislation, or different interaction protocols. Reputation mechanisms offer a novel and effective way of ensuring the necessary level of trust, which is essential to the functioning of any market. They are based on the observation that agent strategies change when we consider that interactions are repeated: the other party remembers past cheating, and changes its terms of business accordingly in the future. Therefore, the expected gains due to future transactions in which the agent has a higher reputation can offset the loss incurred by not cheating in the present. This effect can be amplified considerably when such reputation information is shared among a large population, and thus multiplies the expected future gains made accessible by honest behavior. Existing reputation mechanisms enjoy huge success. Systems such as eBay[^1] or Amazon[^2] implement reputation mechanisms which are partly credited for the businesses’ success. Studies show that human users seriously take into account the reputation of the seller when placing bids in online auctions [@Houser/Wooders:2006], and that despite the incentive to free ride, feedback is provided in more than half of the transactions on eBay [@RZ:2002]. One important challenge associated with designing reputation mechanisms is to ensure that truthful feedback is obtained about the actual interactions, a property called *incentive-compatibility*. Rational users can regard the private information they have observed as a valuable asset, not to be freely shared. Even worse, agents can have external incentives to misreport and thus manipulate the reputation information available to other agents [@Harmon:2004].
Without proper measures, the reputation mechanism will obtain unreliable information, biased by the strategic interests of the reporters. Honest reporting incentives should be addressed differently depending on the predominant role of the reputation mechanisms. The *signaling* role is useful in environments where the service offered by different providers may have different quality, but all clients interacting with the same provider are treated equally (markets with *adverse selection*). This is the case, for example, in a market of web-services. Different providers possess different hardware resources and employ different algorithms; this makes certain web-services better than others. Nevertheless, all requests issued to the same web-service are treated by the same program. Some clients might experience worse service than others, but these differences are random, and not determined by the provider. The feedback from previous clients statistically estimates the quality delivered by a provider in the future, and hence signals to future clients which provider should be selected. The *sanctioning* role, on the other hand, is present in settings where service requests issued by clients must be individually addressed by the provider. Think of a barber, who must skillfully shave every client who walks into his shop. The problem here is that providers must exert care (and costly effort) for satisfying every service request. Good quality can result only when enough effort is exerted, but the provider is better off by exerting less effort: e.g., clients will pay for the shave anyway, so the barber is better off by doing a sloppy job as fast as possible in order to have time for more customers. This *moral hazard* situation can be eliminated by a reputation mechanism that punishes providers for not exerting effort. Low effort results in negative feedback that decreases the reputation, and hence the future business opportunities of the provider.
The future loss due to a bad reputation offsets the momentary gain obtained by cheating, and makes cooperative behavior profitable. There are well-known solutions for providing honest reporting incentives for signaling reputation mechanisms. Since all clients interacting with a service receive the same quality (in a statistical sense), a client’s private observation influences her belief regarding the experience of other clients. In the web-services market mentioned before, the fact that one client had a bad experience with a certain web-service makes her more likely to believe that other clients will also encounter problems with that same web-service. This correlation between the client’s private belief and the feedback reported by other clients can be used to design feedback payments that make honesty a Nash equilibrium. When submitting feedback, clients get paid an amount that depends both on the value they reported and on the reports submitted by other clients. As long as others report truthfully, the expected payment of every client is maximized by the honest report – thus the equilibrium. and show that incentive-compatible payments can be designed to offset both reporting costs and lying incentives. For *sanctioning* reputation mechanisms the same payment schemes are not guaranteed to be incentive-compatible. Different clients may experience different service quality because the provider decided to exert different effort levels. The private beliefs of the reporter may no longer be correlated to the feedback of other clients, and therefore, the statistical properties exploited by are no longer present. As an alternative, we propose different incentives to motivate honest reporting based on the repeated presence of the client in the market. Game-theoretic results (i.e., the *folk theorems*) show that repeated interactions support new equilibria where present deviations are made unattractive by future penalties.
Even without a reputation mechanism, a client can guide her future play depending on the experience of previous interactions. As a first result of this paper, we describe a mechanism that indeed supports a cooperative equilibrium where providers exert effort all the time. The reputation mechanism correctly records when the client received low quality. There are certainly some applications where clients repeatedly interact with the same seller with a potential moral hazard problem. The barber shop mentioned above is one example, as most people prefer going to the same barber (or hairdresser). Another example is a market of delivery services. Every package must be scheduled for timely delivery, and this involves a cost for the provider. Some of this cost may be saved by occasionally dropping a package, hence the moral hazard. Moreover, business clients typically rely on the same carrier to dispatch their documents or merchandise. As their own business depends on the quality and timeliness of the delivery, they do have the incentive to form a lasting relationship and get good service. Yet another example is that of a business person who repeatedly travels to an offshore client. The business person has a direct interest in repeatedly obtaining good service from the hotel closest to the client’s offices. We assume that the quality observed by the clients is also influenced by environmental factors that are outside the control of the provider but observable by him. Despite the barber’s best effort, a sudden movement of the client can always generate an accidental cut that will make the client unhappy. Likewise, the delivery company may occasionally lose or damage some packages due to transportation accidents. Nevertheless, the delivery company (like the barber) eventually learns with certainty about any delays, damages or losses that entitle clients to complain about unsatisfactory service. The mechanism we propose is quite simple.
Before asking feedback from the client, the mechanism gives the provider the opportunity to acknowledge failure, and reimburse the client. Only when the provider claims good service does the reputation mechanism record the feedback of the client. Contradictory reports (the provider claims good service, but the client submits negative feedback) may only appear when one of the parties is lying, and therefore, both the client and the provider are sanctioned: the provider suffers a loss as a consequence of the negative report, while the client is given a small fine. One equilibrium of the mechanism is when providers always do their best to deliver the promised quality, and truthfully acknowledge the failures caused by the environmental factors. Their “honest” behavior is motivated by the threat that any mistake will drive the unsatisfied client away from the market. When future transactions generate sufficient revenue, the provider cannot afford to risk losing a client, hence the equilibrium. Unfortunately, this socially desired equilibrium is not unique. Clients can occasionally accept bad service and keep returning to the same provider because they don’t have better alternatives. Moreover, since complaining about bad service is sanctioned by the reputation mechanism, clients might be reluctant to report negative feedback. Penalties for negative reports and the clients’ lack of choice drive the provider to occasionally cheat in order to increase his revenue. As a second result, we characterize the set of pareto-optimal equilibria of our mechanism and prove that the amount of unreported cheating that can occur is limited by two factors. The first factor limits the amount of cheating in general, and is given by the quality of the alternatives available to the clients. Better alternatives increase the expectations of the clients, therefore the provider must cheat less in order to keep his customers.
The second factor limits the amount of unreported cheating, and represents the cost incurred by clients to establish a reputation for reporting the truth. By stubbornly exposing bad service when it happens, despite the fine imposed by the reputation mechanism, the client signals to the provider that she is committed to always report the truth. Such signals will eventually drive the provider to full cooperation, as he seeks to avoid the punishment for negative feedback. Having a reputation for reporting truthfully is, of course, valuable to the client; therefore, a rational client agrees to lie (and give up the reputation) only when the cost of building a reputation for reporting honestly is greater than the occasional loss created by tolerated cheating. This cost is given by the ease with which the provider switches to cooperative play, and by the magnitude of the fine imposed for negative feedback. Concretely, this paper proceeds as follows. In Section \[related\_work\] we describe related work, followed by a more detailed description of our setting in Section \[setting\]. Section \[GTanalysis\] presents a game-theoretic model of our mechanism and an analysis of reporting incentives and equilibria. Here we establish the existence of the cooperative equilibrium, and derive an upper bound on the amount of cheating that can occur in any pareto-optimal equilibrium. In Section \[buidingReputation\] we establish the cost of building a reputation for reporting honestly, and hence compute an upper bound on the percentage of false reports recorded by the reputation mechanism in any equilibrium. We continue in Section \[evilBuyers\] by analyzing the impact of malicious buyers that explicitly try to destroy the reputation of the provider. We give some initial approximations on the worst-case damage such buyers can cause to providers. Further discussion, open issues, and directions for future work are presented in Section \[future\_work\].
Finally, Section \[conclusions\] concludes our work. Related Work {#related_work} ============ The notion of *reputation* is often used in Game Theory to signal the commitment of a player towards a fixed strategy. This is what we mean by saying that *clients establish a reputation for reporting the truth*: they commit to always report the truth. Building a reputation usually requires some incomplete information repeated game, and can significantly impact the set of equilibrium points of the game. This is commonly referred to as the *reputation effect*, first characterized by the seminal papers of , and . The reputation effect can be extended to all games where a player ($A$) could benefit from committing to a certain strategy $\sigma$ that is not credible in a complete information game: e.g., a monopolist seller would like to commit to fight all potential entrants in a chain-store game [@Selten:1978], however, this commitment is not credible due to the cost of fighting. In an incomplete information game where the commitment type has positive probability, $A$’s opponent ($B$) can at some point become convinced that $A$ is playing as if she were the commitment type. At that point, $B$ will play a best response against $\sigma$, which gives $A$ the desired payoff. Establishing a reputation for the commitment strategy requires time and cost. When the higher future payoffs offset the cost of building reputation, the reputation effect prescribes minimum payoffs any equilibrium strategy should give to player $A$ (otherwise, $A$ can profitably deviate by playing as if she were a commitment type). study the class of all repeated games in which a long-run player faces a sequence of single-shot opponents who can observe all previous games. 
If the long-run player is sufficiently patient and the single-shot players have a positive prior belief that the long-run player might be a commitment type, the authors derive a lower bound on the payoff received by the long-run player in any Nash equilibrium of the repeated game. This result holds for both finitely and infinitely repeated games, and is robust against further perturbations of the information structure (i.e., it is independent of what other types have positive probability). provides a generalization of the above result for the two long-run player case in a special class of games of “conflicting interests”, when one of the players is sufficiently more patient than the opponent. A game is of conflicting interests when the commitment strategy of one player ($A$) holds the opponent ($B$) to his minimax payoff. The author derives an upper limit on the number of rounds $B$ will not play a best response to $A$’s commitment type, which in turn generates a lower bound on $A$’s equilibrium payoff. For a detailed treatment of the reputation effect, the reader is directed to the work of . In computer science and information systems research, *reputation* information defines some aggregate of feedback reports about past transactions. This is the sense in which we use the term when referring to the reputation of the provider. Reputation information encompasses a unitary appreciation of the personal attributes of the provider, and influences the trusting decisions of clients. Depending on the environment, reputation has two main roles: to *signal* the capabilities of the provider, and to *sanction* cheating behavior [@Kuwabara:2003]. *Signaling* reputation mechanisms allow clients to learn which providers are the most capable of providing good service. Such systems have been widely used in computational trust mechanisms. and describe systems where agents use their direct past experience to recognize trustworthy partners.
The global efficiency of the market is clearly increased; however, the time needed to build the reputation information prohibits the use of this kind of mechanisms in a large scale online market. A number of signaling reputation mechanisms also take into consideration indirect reputation information, i.e., information reported by peers. and use social networks in order to obtain the reputation of an unknown agent. Agents ask acquaintances several hops away about the trustworthiness of an unknown agent. Recommendations are afterwards aggregated into a single measure of the agent’s reputation. This class of mechanisms, however intuitive, does not provide any rational participation incentives for the agents. Moreover, there is little protection against untruthful reporting, and no guarantee that the mechanism cannot be manipulated by a malicious provider in order to obtain higher payoffs. Truthful reporting incentives for signaling reputation mechanisms are described by . Honest reports are explicitly rewarded by payments that take into account the value of the submitted report, and the value of a report submitted by another client (called the *reference reporter*). The payment schemes are designed based on *proper scoring rules*, mathematical functions that make possible the revelation of private beliefs [@Cooke:1991]. The essence behind honest reporting incentives is the observation that the private information a client obtains from interacting with a provider changes her belief regarding the reports of other clients. This change in beliefs can be exploited to make honesty an ex-ante Nash equilibrium strategy. extend the above result by taking a computational approach to designing incentive compatible payment schemes. Instead of using closed form scoring rules, they compute the payments using an optimization problem that minimizes the total budget required to reward the reporters.
By also using several reference reports and filtering mechanisms, they render the payment mechanisms cheaper and more practical. presents a comprehensive investigation of binary *sanctioning* reputation mechanisms. As in our setting, providers are equally capable of providing high quality, however, doing so requires costly effort. The role of the reputation mechanism is to encourage cooperative behavior by punishing cheating: negative feedback reduces future revenues either by excluding the provider from the market, or by decreasing the price the provider can charge in future transactions. Dellarocas shows that simple information structures and decision rules can lead to efficient equilibria, given that clients report honestly. Our paper builds upon such mechanisms by addressing reporting incentives. We will abstract away the details of the underlying reputation mechanism through an explicit penalty associated with a negative feedback. Given that such high enough penalties exist, any reputation mechanism (i.e., feedback aggregation and trusting decision rules) can be plugged in our scheme. In the same group of work that addresses reporting incentives, we mention the work of , and . @Braynov/Sandholm:2002 consider exchanges of goods for money and prove that a market in which agents are trusted to the degree they deserve to be trusted is equally efficient as a market with complete trustworthiness. By scaling the amount of the traded product, the authors prove that it is possible to make it rational for sellers to truthfully declare their trustworthiness. Truthful declaration of one’s trustworthiness eliminates the need of reputation mechanisms and significantly reduces the cost of trust management. However, the assumptions made about the trading environment (i.e. the form of the cost function and the selling price which is supposed to be smaller than the marginal cost) are not common in most electronic markets. 
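The proper-scoring-rule payments for signaling mechanisms described above can be made concrete with a minimal sketch using the binary quadratic (Brier) rule. The posterior values, normalization, and function names here are hypothetical illustrations of the general idea, not the constructions of the cited schemes:

```python
# Hypothetical posterior: probability that the reference reporter observed
# high quality, conditioned on one's own observation (1 = q1, 0 = q0).
POSTERIOR = {1: 0.8, 0: 0.3}


def quadratic_score(p, ref_report):
    """Binary quadratic (Brier) scoring rule: score the announced
    probability p of 'reference reports 1' against the actual report."""
    q = p if ref_report == 1 else 1.0 - p
    return 2.0 * q - (p ** 2 + (1.0 - p) ** 2)


def feedback_payment(my_report, ref_report):
    """Pay the reporter the score of the posterior induced by her report."""
    return quadratic_score(POSTERIOR[my_report], ref_report)
```

Because the rule is proper, a client who actually observed $q_1$ (so her true posterior under this toy model is 0.8) maximizes her expected payment by reporting 1: the expected payments work out to 0.68 for the truthful report versus 0.18 for the lie.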
For eBay-like auctions, the Goodwill Hunting mechanism [@Dellarocas:2002_LNCS2531] provides a way to make sellers indifferent between lying and truthfully declaring the quality of the good offered for sale. Momentary gains or losses obtained from misrepresenting the good’s quality are later compensated by the mechanism, which has the power to modify the announcement of the seller. describe an incentive-compatible reputation mechanism that is particularly suited for peer-to-peer applications. Their mechanism is similar to ours, in the sense that both the provider and the client are punished for submitting conflicting reports. The authors experimentally show that a class of common lying strategies are successfully deterred by their scheme. Unlike their results, our paper considers *all* possible equilibrium strategies and sets bounds on the amount of untruthful information recorded by the reputation mechanism. The Setting {#setting} =========== We assume an online market, where rational clients (she) repeatedly request the same service from one provider (he). Every client repeatedly interacts with the service provider; however, successive requests from the same client are always interleaved with enough requests generated by other clients. Transactions are assumed sequential; the provider has no capacity constraints, and accepts all requests. The price of service is $p$ monetary units, and the service can have either high ($q_1$) or low ($q_0$) quality. Only high quality is valuable to the clients, and has utility $u(q_1)=u$. Low quality has utility 0, and can be precisely distinguished from high quality. Before each round, the client can decide to request the service from the provider, or quit the market and resort to an outside provider that is completely trustworthy. The outside provider always delivers high quality service, but for a higher price $p(1+\rho)$.
If the client decides to interact with the online provider, she issues a request to the provider, and pays for the service. The provider can now decide to exert low ($e_0$) or high ($e_1$) effort when treating the request. Low effort has a normalized cost of 0, but generates only low quality. High effort is expensive (normalized cost equals $c(e_1)=c$) and generates high quality with probability $\alpha < 1$. $\alpha$ is fixed, and depends on the environmental factors outside the control of the provider. $\alpha p > c$, so that it is individually rational for providers to exert effort. After exerting effort, the provider can observe the quality of the resulting service. He can then decide to deliver the service as it is, or to acknowledge failure and roll back the transaction by fully reimbursing[^3] the client. We assume perfect delivery channels, such that the client perceives exactly the same quality as the provider. After delivery, the client inspects the quality of service, and can accuse low quality by submitting a negative report to the reputation mechanism. The reputation mechanism (RM) is unique in the market, and trusted by all participants. It can oversee monetary transactions (i.e., payments made between clients and the provider) and can impose fines on all parties. However, the RM does not observe the effort level exerted by the provider, nor does it know the quality of the delivered service. The RM asks feedback from the client only if she chose to transact with the provider in the current round (i.e., paid the price of service to the provider) and the provider delivered the service (i.e., provider did not reimburse the client). When the client submits negative feedback, the RM punishes both the client and the provider: the client must pay a fine $\varepsilon$, and the provider accumulates a negative reputation report. 
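A minimal simulation of one round on the “honest” path just described (the provider exerts high effort and acknowledges failures, and the client reports truthfully) may help fix the bookkeeping; the function and variable names are ours, and payoffs are normalized as in the text:

```python
import random


def play_round(p, u, c, alpha, rng):
    """One transaction on the honest path: the client pays p, the provider
    exerts high effort at cost c, and nature yields high quality with
    probability alpha. Failures are acknowledged and fully reimbursed.
    Returns (client_payoff, provider_payoff, recorded_feedback)."""
    if rng.random() < alpha:
        # High quality delivered; the truthful client reports 1.
        return u - p, p - c, 1
    # Failure acknowledged and the price refunded; the reputation
    # mechanism asks for no feedback, so the fine epsilon is never due.
    return 0.0, -c, None
```

Per round the provider then expects $\alpha p - c > 0$, which is exactly the individual-rationality condition stated above; the fine $\varepsilon$ and the reputation penalty only matter off this path, when the two parties submit contradictory reports.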
Examples {#setting_example} -------- Although simplistic, this model retains the main characteristics of several interesting applications. A delivery service for perishable goods (goods that lose value past a certain deadline) is one of them. Pizza, for example, must be delivered within 30 minutes; otherwise it gets cold and loses its taste. Hungry clients can order at home, or drive to a more expensive local restaurant, where they’re sure to get a hot pizza. The price of a home-delivered pizza is $p=1$, while at the restaurant, the same pizza would cost $p(1+\rho) = 1.2$. In both cases, the utility of a warm meal is $u = 2$. The pizza delivery provider must exert costly effort to deliver orders within the deadline. A courier must be dispatched immediately (high effort), for an estimated cost of $c=0.8$. While such action usually results in good service (the probability of a timely delivery is $\alpha = 99\%$), traffic conditions and unexpected accidents (e.g., the address is not easily found) may still delay some deliveries past the deadline. Once at the destination, the delivery person, as well as the client, knows whether the delivery was late. As is common practice, the provider can acknowledge being late, and reimburse the client. Clients may provide feedback to a reputation mechanism, but their feedback counts only if they were not reimbursed. The client’s fine for submitting a negative report can be set, for example, at $\varepsilon = 0.01$. The future loss to the provider caused by the negative report (and quantified through $\bar{\varepsilon}$) depends on the reputation mechanism. A simplified market of car mechanics or plumbers could fit the same model. The provider is commissioned to repair a car (respectively the plumbing) and the quality of the work depends on the exerted effort. High effort is more costly but ensures a lasting result with high probability. Low effort is cheap, but the resulting fix is only temporary.
In both cases, however, the warranty convention may specify the right of the client to ask for a reimbursement if problems reoccur within the warranty period. Reputation feedback may be submitted at the end of the warranty period, and is accepted only if reimbursements did not occur. An interesting emerging application comes with a new generation of web services that can optimally decide how to treat every request. For some service types, a high quality response requires the exclusive use of costly resources. For example, computation jobs require CPU time, storage requests need disk space, and information requests need queries to databases. Sufficient resources are a prerequisite, but not a guarantee, of good service. Software and hardware failures may still occur; however, these failures are properly signaled to the provider. Once monetary incentives become sufficiently important in such markets, intelligent providers will identify the moral hazard problem, and may act strategically as described in our model. Behavior and Reporting Incentives {#GTanalysis} ================================= From a game-theoretic point of view, one interaction between the client and the provider can be modeled by the extensive-form game ($G$) with imperfect public information, shown in Figure \[fig:game\]. The client moves first and decides (at node 1) whether to play $in$ and interact with the provider, or to play $out$ and resort to the trusted outside option. Once the client plays $in$, the provider can choose at node 2 whether to exert high or low effort (i.e., play $e_1$ or $e_0$ respectively). When the provider plays $e_0$ the generated quality is low. When the provider plays $e_1$, nature chooses between high quality ($q_1$) with probability $\alpha$, and low quality ($q_0$) with probability $1-\alpha$. The constant $\alpha$ is assumed common knowledge in the market.
Having seen the resulting quality, the provider delivers (i.e., plays $d$) the service, or acknowledges low quality and rolls back the transaction (i.e., plays $l$) by fully reimbursing the client. If the service is delivered, the client can report positive ($1$) or negative ($0$) feedback. ![The game representing one interaction. Empty circles represent decision nodes, edge labels represent actions, full circles represent terminal nodes and the dotted oval represents an information set. Payoffs are represented in rectangles, the top row describes the payoff of the client, the second row describes the payoff of the provider.[]{data-label="fig:game"}](figs/game.eps){width="0.9\columnwidth"} A pure strategy is a deterministic mapping describing an action for each of the player’s information sets. The client has three information sets in the game $G$. The first information set is a singleton containing node 1 at the beginning of the game, where the client must decide between playing $in$ or $out$. The second information set contains nodes 7 and 8 (the dotted oval in Figure $\ref{fig:game}$), where the client must decide between reporting $0$ or $1$, given that she has received low quality, $q_0$. The third information set is a singleton containing node 9, where the client must decide between reporting $0$ or $1$, given that she received high quality, $q_1$. The strategy $in 0^{q_0}1^{q_1}$, for example, is the honest reporting strategy, specifying that the client enters the game, reports $0$ when she receives low quality, and reports $1$ when she receives high quality.
The set of pure strategies of the client is: $$A_C = \{ out 1^{q_0}1^{q_1}, out 1^{q_0}0^{q_1}, out 0^{q_0}1^{q_1}, out 0^{q_0}0^{q_1}, in 1^{q_0}1^{q_1}, in 1^{q_0}0^{q_1}, in 0^{q_0}1^{q_1}, in 0^{q_0}0^{q_1} \};$$ Similarly, the set of pure strategies of the provider is: $$A_P = \{ e_0 l, e_0 d, e_1 l^{q_0} l^{q_1}, e_1 l^{q_0} d^{q_1}, e_1 d^{q_0} l^{q_1}, e_1 d^{q_0} d^{q_1}\};$$ where $e_1 l^{q_0} d^{q_1}$, for example, is the socially desired strategy: the provider exerts effort at node 2, acknowledges low quality at node 5, and delivers high quality at node 6. A pure strategy profile $s$ is a pair $(s_C,s_P)$ where $s_C \in A_C$ and $s_P \in A_P$. If $\Delta(A)$ denotes the set of probability distributions over the elements of $A$, $\sigma_C \in \Delta(A_C)$ and $\sigma_P \in \Delta(A_P)$ are mixed strategies for the client, respectively the provider, and $\sigma = (\sigma_C, \sigma_P)$ is a mixed strategy profile. The payoffs to the players depend on the chosen strategy profile, and on the move of nature. Let $g(\sigma) = \big(g_C(\sigma), g_P(\sigma)\big)$ denote the pair of expected payoffs received by the client, respectively by the provider, when playing strategy profile $\sigma$. The function $g : \Delta(A_C) \times \Delta(A_P) \rightarrow \mathbb{R}^2$ is characterized in Table \[tab:normalForm\] and describes the normal-form transformation of $G$. Besides the corresponding payments made between the client and the provider, Table \[tab:normalForm\] also reflects the influence of the reputation mechanism, as further explained in Section \[reputation\_mech\]. The four strategies of the client that involve playing $out$ at node 1 generate the same outcomes, and have therefore been collapsed for simplicity into a single column of Table \[tab:normalForm\].
|                        | $in 1^{q_0} 1^{q_1}$ | $in 1^{q_0} 0^{q_1}$ | $in 0^{q_0} 1^{q_1}$ | $in 0^{q_0} 0^{q_1}$ | $out$ |
|------------------------|----------------------|----------------------|----------------------|----------------------|-------|
| $e_0 l$ | $(0,\ 0)$ | $(0,\ 0)$ | $(0,\ 0)$ | $(0,\ 0)$ | $(u-p(1+\rho),\ 0)$ |
| $e_0 d$ | $(-p,\ p)$ | $(-p,\ p)$ | $(-p-\varepsilon,\ p-\bar{\varepsilon})$ | $(-p-\varepsilon,\ p-\bar{\varepsilon})$ | $(u-p(1+\rho),\ 0)$ |
| $e_1 l^{q_0} l^{q_1}$ | $(0,\ -c)$ | $(0,\ -c)$ | $(0,\ -c)$ | $(0,\ -c)$ | $(u-p(1+\rho),\ 0)$ |
| $e_1 l^{q_0} d^{q_1}$ | $(\alpha(u-p),\ \alpha p - c)$ | $(\alpha(u-p-\varepsilon),\ \alpha(p-\bar{\varepsilon}) - c)$ | $(\alpha(u-p),\ \alpha p - c)$ | $(\alpha(u-p-\varepsilon),\ \alpha(p-\bar{\varepsilon}) - c)$ | $(u-p(1+\rho),\ 0)$ |
| $e_1 d^{q_0} l^{q_1}$ | $(-(1-\alpha)p,\ (1-\alpha)p - c)$ | $(-(1-\alpha)p,\ (1-\alpha)p - c)$ | $(-(1-\alpha)(p+\varepsilon),\ (1-\alpha)(p-\bar{\varepsilon}) - c)$ | $(-(1-\alpha)(p+\varepsilon),\ (1-\alpha)(p-\bar{\varepsilon}) - c)$ | $(u-p(1+\rho),\ 0)$ |
| $e_1 d^{q_0} d^{q_1}$ | $(\alpha u - p,\ p - c)$ | $(\alpha u - p - \alpha\varepsilon,\ p - c - \alpha\bar{\varepsilon})$ | $(\alpha u - p - (1-\alpha)\varepsilon,\ p - c - (1-\alpha)\bar{\varepsilon})$ | $(\alpha u - p - \varepsilon,\ p - c - \bar{\varepsilon})$ | $(u-p(1+\rho),\ 0)$ |

: Normal transformation of the extensive form game, $G$; each cell lists the expected payoffs (client, provider)[]{data-label="tab:normalForm"}

The Reputation Mechanism {#reputation_mech} ------------------------ For every interaction, the reputation mechanism records one of three signals: *positive* feedback when the client reports $1$, *negative* feedback when the client reports $0$, and *neutral* feedback when the provider rolls back the transaction and reimburses the client. In Figure \[fig:game\] (and Table \[tab:normalForm\]) positive and neutral feedback do not influence the payoff of the provider, while negative feedback imposes a punishment equivalent to $\bar{\varepsilon}$. Two considerations made us choose this representation. First, we associate neutral and positive feedback with the same reward (0 in this case) because, intuitively, the acknowledgement of failure may also be regarded as “honest” behavior on behalf of the provider. Failures occur despite best effort, and the provider should not suffer for acknowledging them. However, neutral feedback may also result because the provider did not exert effort. The lack of punishment for these instances contradicts the goal of the reputation mechanism to encourage exertion of effort.
Fortunately, the action $e_0 l$ can be the result of rational behavior only in two circumstances, both excusable: one, when the provider defends himself against a malicious client that is expected to falsely report negative feedback (details in Section \[evilBuyers\]), and two, when the environmental noise is too big ($\alpha$ is too small) to justify exertion of effort. Neutral feedback can be used to estimate the parameter $\alpha$, or to detect coalitions of malicious clients, and indirectly, may influence the revenue of the provider. However, for the simplified model presented above, positive and neutral feedback are considered the same in terms of generated payoffs. The second argument relates to the role of the RM to constrain the revenue of the provider depending on the feedback of the client. There are several ways of doing that; prior work describes two principles, and two corresponding mechanisms, that punish the provider when clients submit negative reports. The first works by exclusion. After each negative report the reputation mechanism bans the provider from the market with probability $\pi$. This probability can be tuned such that the provider has the incentive to cooperate almost all the time, and the market stays efficient. The second works by changing the conditions of future trade. Every negative report triggers a decrease of the price the next $N$ clients will pay for the service. For lower values of $N$ the price decrease is higher; nonetheless, $N$ can take any value in an efficient market. Both mechanisms work because the future losses offset the momentary gain the provider would have had by intentionally cheating on the client. Note that these penalties are given endogenously by lost future opportunities, and require some minimum premiums for trusted providers. When margins are not high enough, providers do not care enough about future transactions, and will exploit the present opportunity to cheat. Another option is to use exogenous penalties for cheating.
For example, the provider may be required to buy a licence for operating in the market[^4]. The licence is *partially destroyed* by every negative feedback. Totally destroyed licences must be restored through a new payment, and remaining parts can be sold if the provider quits the market. The price of the licence and the amount that is destroyed by a negative feedback can be scaled such that rational providers have the incentive to cooperate. Unlike the previous solutions, this mechanism does not require minimum transaction margins, as punishments for negative feedback are directly subtracted from the upfront deposit. One way or another, all reputation mechanisms foster cooperation because the provider associates value to client feedback. Let $V(R^+)$ and $V(R^-)$ be the value of a positive, respectively a negative, report. In the game in Figure \[fig:game\], $V(R^+)$ is normalized to 0, and $V(R^-)$ is $\bar{\varepsilon}$. By using this notation, we abstract away the details of the reputation mechanism, and retain only the essential punishment associated with negative feedback. Any reputation mechanism can be plugged into our scheme, as long as the particular constraints (e.g., minimum margins for transactions) are satisfied. One last aspect to be considered is the influence of the reputation mechanism on the future transactions of the client. If negative reports attract lower prices, rational long-run clients might be tempted to falsely report in order to purchase cheaper services in the future. Fortunately, some of the mechanisms designed for single-run clients do not influence the reporting strategy of long-run clients. The reputation mechanism that only keeps the last $N$ reports [@Dellarocas:2005] is one of them.
A false negative report only influences the next $N$ transactions of the provider; given that more than $N$ other requests are interleaved between any two successive requests of the same client, a dishonest reporter cannot decrease the price for her future transactions. The licence-based mechanism we have described above is another example. The price of service remains unchanged, therefore reporting incentives are unaffected. On the other hand, when negative feedback is punished by exclusion, clients may be more reluctant to report negatively, since they also lose a trading partner. Analysis of Equilibria {#eqAnalysis} ---------------------- The one-time game presented in Figure \[fig:game\] has a unique subgame-perfect equilibrium, in which the client opts $out$. When asked to report feedback, the client always prefers to report $1$ (reporting $0$ attracts the penalty $\varepsilon$). Knowing this, the best strategy for the provider is to exert low effort and deliver the service. Knowing the provider will play $e_0 d$, it is strictly better for the client to play $out$. The repeated game between the same client and provider may, however, have other equilibria. Before analyzing the repeated game, let us note that every interaction between the provider and a particular client can be strategically isolated and considered independently. As the provider accepts all clients and views them identically, he will maximize his expected revenue in each of the isolated repeated games. From now on, we will only consider the repeated interaction between the provider and one client. This can be modeled by a repetition of the stage game $G$, denoted $G^T$, where $T$ is finite or infinite. In this paper we deal with the infinite horizon case; however, the results obtained can also be applied, with minor modifications, to finitely repeated games where $T$ is large enough.
If $\hat{\delta}$ is the per-period discount factor reflecting the probability that the market continues to exist after each round (or the present value of future revenues), let us denote by $\delta$ the expected discount factor in the game $G^T$. If our client interacts with the provider on average every $N$ rounds, $\delta = \hat{\delta}^N$. The life-time expected payoff of the players is computed as: $$\sum_{\tau =0}^T \delta^{\tau}g_i^\tau;$$ where $i \in \{C,P\}$ is the client, respectively the provider, $g_i^\tau$ is the expected payoff obtained by player $i$ in the $\tau^{th}$ interaction, and $\delta^{\tau}$ is the discount applied to compute the present day value of $g_i^\tau$. We will consider *normalized* life-time expected payoffs, so that payoffs in $G$ and $G^T$ can be expressed using the same measure: $$V_i = (1 - \delta) \sum_{\tau =0}^T \delta^{\tau}g_i^\tau;$$ We define the *average continuation payoff* for player $i$ from period $t$ onward (and including period $t$) as: $$V_i^t= (1-\delta) \sum_{\tau =t}^T \delta^{\tau -t}g_i^\tau;$$ The set of outcomes publicly perceived by both players after each round is: $$Y = \{out, l, q_0 1, q_0 0, q_1 1, q_1 0\}$$ where: - $out$ is observed when the client opts $out$, - $l$ is observed when the provider acknowledges low quality and rolls back the transaction, - $q_i \,j$ is observed when the provider delivers quality $q_i \in \{q_0,q_1\}$ and the client reports $j \in \{0,1\}$. We denote by $h^t$ a specific public history of the repeated game out of the set $H^t=(\times Y)^t$ of all possible histories up to and including period $t$. In the repeated game, a public strategy $\sigma_i$ of player $i$ is a sequence of maps $(\sigma_i^t)$, where $\sigma_i^t:H^{t-1} \rightarrow \Delta(A_i)$ prescribes the (mixed) strategy to be played in round $t$, after the public history $h^{t-1} \in H^{t-1}$.
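The discounting conventions above can be checked with a short sketch; the values of $\hat{\delta}$, $N$, and the per-round payoff are assumed for illustration.

```python
# Sketch of the normalized life-time payoff V_i = (1 - delta) * sum_t delta^t * g_t.
# hat_delta, N and the per-round payoff g are assumed values for illustration.
def normalized_payoff(g_stream, delta):
    return (1 - delta) * sum(delta ** t * g for t, g in enumerate(g_stream))

hat_delta, N = 0.996, 10
delta = hat_delta ** N          # effective discount when interacting every N rounds

# A constant stream g, g, g, ... normalizes back to g (truncated at T rounds,
# which approximates the infinite horizon for large T).
T = 10_000
g = 0.19
v = normalized_payoff([g] * T, delta)
```

Normalization makes payoffs in $G$ and $G^T$ directly comparable: a constant per-round payoff $g$ yields a normalized life-time payoff of exactly $g$ in the limit.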
A *perfect public equilibrium* (PPE) is a profile of public strategies $\sigma = (\sigma_C,\sigma_P)$ that, beginning at any time $t$ and given any public history $h^{t-1}$, form a Nash equilibrium from that point on [@Fudenberg/Levine/Maskin:1994]. $V_i^t(\sigma)$ is the continuation payoff to player $i$ given by the strategy profile $\sigma$. $G$ is a game with *product structure* since any public outcome can be expressed as a vector of two components $(y_C,y_P)$ such that the distribution of $y_i$ depends only on the actions of player $i \in \{C,P\}$, the client, respectively the provider. For such games, a Folk Theorem establishes that any feasible, individually rational payoff profile is achievable as a PPE of $G^\infty$ when the discount factor is close enough to 1. The set of feasible, individually rational payoff profiles is characterized by: - the minimax payoff to the client, obtained by the option $out$: $\underline{V_C} = u - p(1+\rho)$; - the minimax payoff to the provider, obtained when the provider plays $e_0 l$: $\underline{V_P} = 0$; - the Pareto-optimal frontier (graphically presented in Figure \[fig:paretoOptFront\]) delimited by the payoffs given by (linear combinations of) the strategy profiles $(in 1^{q_0} 1^{q_1}$, $e_1 l^{q_0} d^{q_1})$, $(in 1^{q_0} 1^{q_1}, e_1 d^{q_0} d^{q_1})$ and $(in 1^{q_0} 1^{q_1}, e_0 d)$. This set contains more than one point (beyond the payoff obtained when the client plays $out$) when $\alpha(u-p) > u - p(1+\rho)$ and $\alpha p -c > 0$. Both conditions impose restrictions on the minimum margin generated by a transaction such that the interaction is profitable. The PPE payoff profile that gives the provider the maximum payoff is $(\underline{V_C}, \overline{V_P})$ where: $$\overline{V_P} = \left \{ \begin{array}{ll} \alpha u -c - u + p(1+\rho) & \mbox{if $\rho \leq \frac{u(1-\alpha)}{p}$}\\ p+ \frac{c(p\rho -u)}{\alpha u} & \mbox{if $\rho > \frac{u(1-\alpha)}{p}$} \end{array} \right.$$ and $\underline{V_C}$ is defined above.
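As a quick numerical check of the piecewise formula for $\overline{V_P}$, here is a sketch using the delivery-example values, which fall in the second branch since $\rho > u(1-\alpha)/p$:

```python
# Sketch: minimax payoffs and the provider-optimal PPE payoff bar_V_P from
# the piecewise formula above; parameter values are those of the delivery
# example (p=1, rho=0.2, u=2, c=0.8, alpha=0.99).
def provider_max_ppe_payoff(u, p, c, alpha, rho):
    if rho <= u * (1 - alpha) / p:
        return alpha * u - c - u + p * (1 + rho)
    return p + c * (p * rho - u) / (alpha * u)

u, p, c, alpha, rho = 2, 1, 0.8, 0.99, 0.2
min_client = u - p * (1 + rho)    # client's minimax: the outside option
min_provider = 0.0                # provider's minimax: playing e0 l
bar_V_P = provider_max_ppe_payoff(u, p, c, alpha, rho)

# feasibility conditions for a non-degenerate equilibrium set:
assert alpha * (u - p) > u - p * (1 + rho) and alpha * p - c > 0
```

With these numbers $\overline{V_P}$ exceeds the socially efficient per-round payoff $\alpha p - c$, reflecting the surplus the provider can extract at the client's expense.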
![The Pareto-optimal frontier of the set of feasible, individually rational payoff profiles of $G$.[]{data-label="fig:paretoOptFront"}](figs/paretoOptFront.eps){width="0.55\columnwidth"} While completely characterizing the set of PPE payoffs for discount factors strictly smaller than 1 is outside the scope of this paper, let us note the following results: First, if the discount factor is high enough (but strictly less than 1) with respect to the profit margin obtained by the provider from one interaction, there is at least one PPE such that the reputation mechanism records only honest reports. Moreover, this equilibrium is Pareto-optimal. When $\delta > \frac{p}{p(1+\alpha) -c}$, the strategy profile: - the provider always exerts high effort, and delivers only high quality; if the client deviates from the equilibrium, the provider switches to $e_0 d$ for the rest of the rounds; - the client always reports $1$ when asked to submit feedback; if the provider deviates (i.e., she receives low quality), the client switches to $out$ for the rest of the rounds; is a Pareto-optimal PPE. \[prop:lowerBoundDelta\] It is not profitable for the client to deviate from the equilibrium path. Reporting $0$ attracts the penalty $\varepsilon$ in the present round, and the termination of the interaction with the provider (the provider stops exerting effort from that round onwards). The provider, on the other hand, can momentarily gain by deviating to $e_1 d^{q_0} d^{q_1}$ or $e_0 d$. A deviation to $e_1 d^{q_0} d^{q_1}$ gives an expected momentary gain of $p(1-\alpha)$ and an expected continuation loss of $(1-\alpha)(\alpha p -c)$. A deviation to $e_0 d$ brings an expected momentary gain equal to $(1-\alpha)p +c$ and an expected continuation loss of $\alpha p -c$. For a discount factor satisfying our hypothesis, neither deviation is profitable.
The discount factor is high enough with respect to profit margins, such that the future revenues given by the equilibrium strategy offset the momentary gains obtained by deviating. The equilibrium payoff profile is $(V_C,V_P)=(\alpha(u-p), \alpha p -c)$, which is Pareto-optimal and socially efficient. Second, we can prove that the client never reports negative feedback in any Pareto-optimal PPE, regardless of the value of the discount factor. The restriction to Pareto-optimal equilibria is justified on practical grounds: assuming that the client and the provider can somehow negotiate the equilibrium they are going to play, it makes most sense to choose one of the Pareto-optimal equilibria. The probability that the client reports negative feedback on the equilibrium path of any Pareto-optimal PPE strategy is zero. \[prop:noZero\] The full proof presented in Appendix \[ap:noZero\] proceeds in the following steps. Step 1: all equilibrium payoffs can be expressed by adding the present round payoff to the discounted continuation payoff from the next round onward. Step 2: take the PPE payoff profile $V = (V_C,V_P)$, such that there is no other PPE payoff profile $V' = (V_C', V_P)$ with $V_C < V_C'$. The client never reports negative feedback in the first round of the equilibrium that gives $V$. Step 3: the equilibrium continuation payoff after the first round also satisfies the conditions set for $V$. Hence, the probability that the client reports negative feedback on the equilibrium path that gives $V$ is 0. Pareto-optimal PPE payoff profiles clearly satisfy the definition of $V$, hence the result of the proposition. The third result we want to mention here is that there is an upper bound on the percentage of false reports recorded by the reputation mechanism in any of the Pareto-optimal equilibria.
The upper bound on the percentage of false reports recorded by the reputation mechanism in any Pareto-optimal PPE is: $$\gamma \leq \left\{ \begin{array}{ll} \frac{(1-\alpha)(p-u) + p\rho}{p} & \mbox{if $p\rho \leq u(1-\alpha)$}; \\ \frac{p \rho}{u} & \mbox{if $p\rho > u(1-\alpha)$} \end{array} \right. \label{eq:boundNoRep}$$ \[prop:boundNoRep\] The full proof presented in Appendix \[ap:boundNoRep\] builds directly on the result of Proposition \[prop:noZero\]. Since clients never report negative feedback along Pareto-optimal equilibria, the only false reports recorded by the reputation mechanism appear when the provider delivers low quality, and the client reports positive feedback. However, any PPE profile must give the client at least $\underline{V_C} = u - p(1+\rho)$, otherwise the client is better off resorting to the outside option. Every round in which the provider deliberately delivers low quality gives the client a payoff strictly smaller than $u - p(1+\rho)$. An equilibrium payoff greater than $\underline{V_C}$ is therefore possible only when the percentage of rounds where the provider delivers low quality is bounded. The same bound limits the percentage of false reports recorded by the reputation mechanism. For a more intuitive understanding of the results presented in this section, let us refer to the pizza delivery example detailed in Section \[setting\_example\]. The price of a home delivered pizza is $p=1$, while at the local restaurant the same pizza would cost $p(1+\rho) = 1.2$. The utility of a warm pizza to the client is $u=2$, the cost of delivery is $c=0.8$, and the probability that unexpected traffic conditions delay the delivery beyond the 30-minute deadline (despite the best effort of the provider) is $1- \alpha = 0.01$. The client can secure a minimax payoff of $\underline{V_C} = u - p(1+\rho) = 0.8$ by always going out to the restaurant.
However, the socially desired equilibrium happens when the client orders pizza at home, and the pizza service exerts effort to deliver pizza in time: in this case the payoff of the client is $V_C = \alpha (u - p) = 0.99$, while the payoff of the provider is $V_P = \alpha p - c = 0.19$. Proposition \[prop:lowerBoundDelta\] gives a lower bound on the discount factor of the pizza delivery service such that repeated clients can expect the socially desired equilibrium. This bound is $\delta = \frac{p}{p(1+\alpha) -c} = 0.84$; assuming that the daily discount factor of the pizza service is $\hat{\delta} = 0.996$, the same client must order pizza at home at least once every 6 weeks. The values of the discount factors can also be interpreted in terms of the minimum number of rounds the client (and the provider) will likely play the game. For example, the discount factor can be viewed as the probability that the client (respectively the provider) will “live” for another interaction in the market. It follows that the average lifetime of the provider is at least $1/(1-\hat{\delta}) = 250$ interactions (with all clients), while the average lifetime of the client is at least $1/(1-\delta) = 7$ interactions (with the same pizza delivery service). These are clearly realistic numbers. Proposition \[prop:boundNoRep\] gives an upper bound on the percentage of false reports that our mechanism may record in equilibrium from the clients. As $u(1-\alpha) = 0.02 < 0.2 = p\rho$, this limit is: $$\gamma = \frac{p\rho}{u} = 0.1;$$ It follows that at least $90\%$ of the reports recorded by our mechanism (in any equilibrium) are correct. The false reports (false positive reports) result from rare cases where the pizza delivery is intentionally delayed to save some cost but clients do not complain. The false report can be justified, for example, by the provider’s threat to refuse future orders from clients that complain. 
Given that late deliveries are still rare enough, clients are better off with the home delivery than with the restaurant, hence they accept the threat. As other options become available to the clients (e.g., competing delivery services) the bound $\gamma$ will decrease. Please note that the upper bound defined by Proposition \[prop:boundNoRep\] only depends on the outside alternative available to the client, and is not influenced by the punishment $\bar{\varepsilon}$ introduced by the reputation mechanism. This happens because the revenue of a client is independent of the interactions of other clients, and therefore of the reputation information reported by other clients. Equilibrium strategies are exclusively based on the direct experience of the client. In the following section, however, we will refine this bound by considering that clients can build a reputation for reporting honestly. There, the punishment $\bar{\varepsilon}$ plays an important role. Building a Reputation for Truthful Reporting {#buidingReputation} ============================================ An immediate consequence of Propositions \[prop:noZero\] and \[prop:boundNoRep\] is that the provider can extract all of the surplus created by the transactions by occasionally delivering low quality, and convincing the clients not to report negative feedback (providers can do so by promising sufficiently high continuation payoffs that prevent the client from resorting to the outside provider). Assuming that the provider has more “power” in the market, he could influence the choice of the equilibrium strategy to one that gives him the most revenue, and holds the clients close to the minimax payoff $\underline{V_C} = u - p(1+\rho)$ given by the outside option.[^5] However, a client who could commit to report honestly (i.e., commit to play the strategy $s_C^* = in 0^{q_0} 1^{q_1}$) would benefit from cooperative trade.
The provider’s best response against $s^*_C$ is to play $e_1 l^{q_0} d^{q_1}$ repeatedly, which leads the game to the socially efficient outcome. Unfortunately, the commitment to $s_C^*$ is not credible in the complete information game, for the reasons explained in Section \[eqAnalysis\]. Following known results on reputation effects in repeated games, such honest reporting commitments may become credible in a game with incomplete information. Suppose that the provider has incomplete information in $G^{\infty}$, and believes with positive probability that he is facing a committed client that always reports the truth. A rational client can then “fake” the committed client, and “build a reputation” for reporting honestly. When the reputation becomes credible, the provider will play $e_1 l^{q_0} d^{q_1}$ (the best response against $s_C^*$), which is better for the client than the payoff she would obtain if the provider knew she was the “rational” type. As an effect of reputation building, the set of equilibrium points is reduced to a set where the payoff to the client is higher than the payoff obtained by a client committed to report honestly. As anticipated from Proposition \[prop:boundNoRep\], a smaller set of equilibrium points also reduces the bound on false reports recorded by the reputation mechanism. In certain cases, this bound can be reduced to almost zero. Formally, incomplete information can be modeled by a perturbation of the complete information repeated game $G^{\infty}$ such that in period 0 (before the first round of the game is played) the “type” of the client is drawn by nature out of a countable set $\Theta$ according to the probability measure $\mu$. The client’s payoff now additionally depends on her type. We say that in the perturbed game $G^{\infty}(\mu)$ the provider has incomplete information because he is not sure about the true type of the client.
Two types from $\Theta$ have particular importance: - The “normal” type of the client, denoted by $\theta_0$, is the rational client who has the payoffs presented in Figure \[fig:game\]. - The “commitment” type of the client, denoted by $\theta^*$, always prefers to play the commitment strategy $s_C^*$. From a rational perspective, the commitment type client obtains an arbitrarily high supplementary reward for reporting the truth. This external reward makes $s_C^*$ the dominant strategy, and therefore, no commitment type client will play anything other than $s_C^*$. In Theorem \[th:equilibrium\] we give an upper bound $k_P$ on the number of times the provider delivers low quality in $G^{\infty}(\mu)$, given that he always observes the client reporting honestly. The intuition behind this result is the following. The provider’s best response to an honest reporter is $e_1 l^{q_0} d^{q_1}$: always exert high effort, and deliver only when the quality is high. This gives the commitment type client her maximum attainable payoff in $G^{\infty}(\mu)$, corresponding to the socially efficient outcome. The provider, however, would be better off playing against the normal type client, against whom he can obtain an expected payoff greater than $\alpha p -c$. The normal type client may be distinguished from a commitment type client only in the rounds when the provider delivers low quality: the commitment type always reports negative feedback, while the normal type might decide to report positive feedback in order to avoid the penalty $\varepsilon$. The provider can therefore decide to deliver low quality to the client in order to test her real type. The question is how many times the provider should test the true type of the client. Every failed test (i.e., the provider delivers low quality and the client reports negative feedback) generates a loss of $\bar{\varepsilon}$ to the provider, and reinforces the belief that the client reports honestly.
Since the provider cannot wait infinitely for future payoffs, there must be a time when the provider stops testing the type of the client, and accepts to play the socially efficient strategy, $e_1 l^{q_0} d^{q_1}$. The switch to the socially efficient strategy is not triggered by a revelation of the client’s type. The provider believes that the client *behaves* as if she were a commitment type, not that the client *is* a commitment type. The client may very well be a normal type who chooses to mimic the commitment type, in the hope that she will obtain better service from the provider. However, further trying to determine the true type of the client is too costly for the provider. Therefore, the provider chooses to play $e_1 l^{q_0} d^{q_1}$, which is the best response to the commitment strategy $s_C^*$. If the provider has incomplete information in $G^{\infty}$, and assigns positive probability to the normal and commitment types of the client ($\mu(\theta_0)>0$, $\mu_0^* = \mu(\theta^*)>0$), there is a finite upper bound, $k_P$, on the number of times the provider delivers low quality in any equilibrium of $G^{\infty}(\mu)$. This upper bound is: $$k_P = \left\lfloor \frac{\ln(\mu_0^*)}{\ln \left( \frac{\delta (\overline{V_P} - \alpha p + c) + (1-\delta)p} {\delta (\overline{V_P} - \alpha p + c) + (1-\delta)\bar{\varepsilon}} \right)} \right\rfloor \label{eq:k_P}$$ \[th:equilibrium\] First, we use an important result about statistical inference obtained in prior work (Lemma 1): if every previously delivered low quality service was sanctioned by a negative report, the provider must expect with increasing probability that his next low quality delivery will also be sanctioned by negative feedback. Technically, for any $\pi < 1$, the provider can deliver at most $n(\pi)$ low quality services (sanctioned by negative feedback) before expecting that the $(n(\pi)+1)$-th low quality delivery will also be sanctioned by negative feedback with probability greater than $\pi$.
This number equals: $$n(\pi) = \left \lfloor \frac{\ln \mu_0^*}{\ln \pi} \right \rfloor;$$ As stated earlier, this lemma does not prove that the provider will become convinced that he is facing a commitment type client. It simply proves that after a finite number of rounds the provider becomes convinced that the client is playing as if she were a commitment type. Second, if $\pi > \frac{\delta \overline{V_P}}{\delta \overline{V_P} + (1-\delta)\bar{\varepsilon}}$ but is strictly smaller than $1$, the rational provider does not deliver low quality (it is easy to verify that the maximum discounted future gain does not compensate for the risk of getting a negative feedback in the present round). By the previously mentioned lemma, it must be that, in any equilibrium, the provider delivers low quality only a finite number of times. Third, let us analyze the round, $\bar{t}$, when the provider is about to deliver a low quality service (play $d^{q_0}$) for the last time. If $\pi$ is the belief of the provider that the client reports honestly in round $\bar{t}$, his expected payoff (just before deciding to deliver the low quality service) can be computed as follows: - with probability $\pi$ the client reports 0. Her reputation for reporting honestly becomes credible, so the provider plays $e_1 l^{q_0} d^{q_1}$ in all subsequent rounds. The provider gains $p-\bar{\varepsilon}$ in the current round, and expects $\alpha p - c$ for the subsequent rounds; - with probability $1-\pi$, the client reports $1$ and deviates from the commitment strategy; the provider knows he is facing a rational client, and can choose a continuation PPE strategy from the complete information game.
He gains $p$ in the current round, and expects at most $\overline{V_P}$ in the subsequent rounds; $$V_P \leq (1-\delta) (p - \pi \bar{\varepsilon}) + \delta (\pi (\alpha p -c) + (1-\pi) \overline{V_P})$$ On the other hand, had the provider acknowledged the low quality and rolled back the transaction (i.e., played $l^{q_0}$), his expected payoff would have been at least: $$V_P' \geq (1-\delta) 0 + \delta (\alpha p -c)$$ Since the provider nonetheless chooses to play $d^{q_0}$, it must be that $V_P \geq V_P'$, which is equivalent to: $$\pi \leq \overline{\pi} = \frac{\delta (\overline{V_P} - \alpha p + c) + (1-\delta)p} {\delta (\overline{V_P} - \alpha p + c) + (1-\delta)\bar{\varepsilon}} \label{eq:overlinePi}$$ Finally, by replacing Equation (\[eq:overlinePi\]) in the definition of $n(\pi)$ we obtain the upper bound on the number of times the provider delivers low quality service to a client committed to report honestly. The existence of $k_P$ further reduces the possible equilibrium payoffs a client can get in $G^\infty(\mu)$. Consider a rational client who receives low quality for the first time. She has the following options: - report negative feedback and attempt to build a reputation for reporting honestly. Her payoff for the current round is $-p-\varepsilon$.
Moreover, her worst case expectation for the future is that the next $k_P -1$ rounds will also give her $-p-\varepsilon$, followed by the commitment payoff equal to $\alpha(u-p)$: $$V_C|0 = (1-\delta) (-p -\varepsilon) + \delta(1 - \delta^{k_P-1})(-p-\varepsilon) + \delta^{k_P} \alpha (u-p);$$ - on the other hand, by reporting positive feedback she reveals herself to be a normal type, loses only $p$ in the current round, and expects a continuation payoff equal to $\hat{V}_C$ given by a PPE strategy profile of the complete information game $G^\infty$: $$V_C|1 = (1-\delta) (-p) + \delta \hat{V}_C;$$ The reputation mechanism records false reports only when clients do not have the incentive to build a reputation for reporting honestly, i.e., when $V_C|1 > V_C|0$; this holds for: $$\hat{V}_C > \delta^{k_P -1} \alpha (u-p) - (1-\delta^{k_P-1})(p+\varepsilon) - \frac{1-\delta}{\delta} \varepsilon;$$ Following the argument of Proposition \[prop:boundNoRep\] we can obtain a bound on the percentage of false reports recorded by the reputation mechanism in a pareto-optimal PPE that gives the client at least $\hat{V}_C$: $$\hat{\gamma} = \left\{ \begin{array}{ll} \frac{\alpha(u-p) -\hat{V}_C}{p} & \mbox{if $\hat{V}_C \geq \alpha u -p$}; \\ \frac{u-p - \hat{V}_C}{u} & \mbox{if $\hat{V}_C < \alpha u -p$} \end{array} \right. \label{eq:boundWithRep}$$ Of particular importance is the case when $k_P = 1$. $\hat{V}_C$ and $\hat{\gamma}$ become: $$\hat{V}_C = \alpha(u-p) - \frac{1-\delta}{\delta} \varepsilon; \quad \hat{\gamma} = \frac{(1-\delta)\varepsilon}{\delta p};$$ so the probability of recording a false report (after the first one) can be arbitrarily close to 0 as $\varepsilon \rightarrow 0$. For the pizza delivery example introduced in Section \[setting\_example\], Figure \[fig:k\_P\] plots the bound, $k_P$, defined in Theorem \[th:equilibrium\], as a function of the prior belief ($\mu^*_0$) of the provider that the client is an honest reporter.
We have used a discount factor $\delta = 0.95$, such that on average every client interacts $1/(1- \delta) = 20$ times with the same provider. The penalty for negative feedback was taken to be $\bar{\varepsilon} = 2.5$. When the provider believes that $20\%$ of the clients always report honestly, he will deliver low quality at most 3 times. When the belief goes up to $\mu^*_0 = 40\%$, no rational provider will deliver low quality more than once.

![The upper bound $k_P$ as a function of the prior belief $\mu_0^*$.[]{data-label="fig:k_P"}](figs/k_P.eps){width="0.6\columnwidth"}

In Figure \[fig:gamma\] we plot the values of the bounds $\gamma$ (Equation (\[eq:boundNoRep\])) and $\hat{\gamma}$ (Equation (\[eq:boundWithRep\])) as a function of the prior belief $\mu_0^*$. Both bounds hold simultaneously; therefore, the maximum percentage of false reports recorded by the reputation mechanism is the minimum of the two. When $\mu_0^*$ is less than $0.25$, $k_P \geq 2$, $\gamma \leq \hat{\gamma}$, and the reputation effect does not significantly reduce the worst case percentage of false reports recorded by the mechanism. However, when $\mu_0^* \in (0.25, 0.4)$ the reputation mechanism records (in the worst case) only half as many false reports, and once $\mu_0^* > 0.4$, the percentage of false reports drops to $0.005$. This probability can be further decreased by decreasing the penalty $\varepsilon$. In the limit, as $\varepsilon$ approaches 0, the reputation mechanism will register a false report with vanishing probability. The result of Theorem \[th:equilibrium\] has to be interpreted as a worst case scenario. In real markets, providers that already have even a small predisposition to cooperate will defect fewer times.
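The threshold $\overline{\pi}$ and the bound $k_P$ are easy to evaluate numerically. The sketch below is our own illustration: the parameter values ($p$, $c$, $\alpha$, $\overline{V_P}$) are assumed, not the paper's actual pizza-example parameters, and the closed forms of Equation (\[eq:overlinePi\]) and Theorem \[th:equilibrium\] require $\overline{\pi} < 1$ (i.e., $\bar{\varepsilon} > p$ when $\overline{V_P}$ is close to $\alpha p - c$) for the bound to be meaningful.

```python
from math import floor, log

def k_P(mu_star, delta, p, eps_bar, V_P_max, alpha, c):
    """Upper bound on low quality deliveries (Theorem th:equilibrium).

    pi_bar is the largest belief in the client's honesty at which
    delivering low quality can still be rational for the provider.
    """
    gain = delta * (V_P_max - alpha * p + c)
    pi_bar = (gain + (1 - delta) * p) / (gain + (1 - delta) * eps_bar)
    assert pi_bar < 1, "the bound is meaningful only for pi_bar < 1"
    return floor(log(mu_star) / log(pi_bar))

# Assumed illustrative parameters: price p, cost c, success rate alpha,
# best provider payoff V_P_max, discount delta, feedback penalty eps_bar.
params = dict(delta=0.95, p=1.0, eps_bar=2.5, V_P_max=0.75, alpha=0.9, c=0.2)
for mu in (0.1, 0.2, 0.4, 0.6):
    print(mu, k_P(mu, **params))
```

As in Figure \[fig:k\_P\], the bound decreases as the provider's prior belief $\mu_0^*$ that the client reports honestly grows.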
Moreover, the mechanism is self-enforcing, in the sense that the more clients act as commitment types, the higher the prior beliefs of the providers that new, unknown clients will report truthfully, and therefore the easier it will be for new clients to act as truthful reporters. As mentioned at the end of Section \[eqAnalysis\], the bound $\hat{\gamma}$ strongly depends on the punishment $\bar{\varepsilon}$ imposed by the reputation mechanism for a negative feedback. The higher $\bar{\varepsilon}$, the easier it is for clients to build a reputation, and therefore, the lower the amount of false information recorded by the reputation mechanism.

![The maximum probability of recording a false report as a function of the prior belief $\mu_0^*$.[]{data-label="fig:gamma"}](figs/gamma.eps){width="0.6\columnwidth"}

The Threat of Malicious Clients {#evilBuyers}
===============================

The mechanism described so far encourages service providers to do their best and deliver good service. The clients were assumed to be rational, or committed to report honestly, and in either case they never report negative feedback unfairly. In this section, we investigate what happens when clients explicitly try to “hurt” the providers by submitting fake negative ratings to the reputation mechanism. An immediate consequence of fake negative reports is that clients lose money. However, the cost $\varepsilon$ of a negative report would probably be too small to deter clients with separate agendas from hurting the provider. Fortunately, the mechanism we propose naturally protects service providers from consistent attacks initiated by malicious clients. Formally, a *malicious type* client, $\theta_\beta \in \Theta$, obtains a supplementary (external) payoff $\beta$ for reporting negative feedback. Obviously, $\beta$ has to be greater than the penalty $\varepsilon$, otherwise the results of Proposition \[prop:noZero\] would apply.
In the incomplete information game $G^\infty(\mu)$, the provider now assigns non-zero initial probability to the belief that the client is malicious. When only the normal type, $\theta_0$, the honest reporter type, $\theta^*$, and the malicious type, $\theta_\beta$, have non-zero initial probability, the mechanism we describe is robust against unfair negative reports. The first false negative report exposes the client as malicious, since neither the normal nor the commitment type reports $0$ after receiving high quality. By Bayes’ Law, the provider’s updated belief following a false negative report must assign probability 1 to the malicious type. Although providers are not allowed to refuse service requests, they can protect themselves against malicious clients by playing $e_0 l$: i.e., exert low effort and reimburse the client afterwards. The RM records neutral feedback in this case, and does not sanction the provider. Against $e_0 l$, malicious clients are better off quitting the market (opting $out$), thus stopping the attack. The RM records at most one false negative report for every malicious client, and assuming that identity changes are difficult, providers are not vulnerable to unfair punishments. When other types (besides $\theta_0$, $\theta^*$ and $\theta_\beta$) have non-zero initial probability, malicious clients are harder to detect. They could masquerade as normal client types that accidentally misreport. It is not rational for the provider to immediately exclude (by playing $e_0 l$) normal clients that rarely misreport: the majority of the cooperative transactions rewarded by positive feedback still generate positive payoffs. Let us now consider the client type $\theta_0(\nu) \in \Theta$ that behaves exactly like the normal type, but misreports $0$ instead of $1$ independently with probability $\nu$.
When interacting with the client type $\theta_0(\nu)$, the provider receives the maximum number of unfair negative reports when playing the efficient equilibrium: i.e., $e_1 l^{q_0} d^{q_1}$. In this case, the provider’s expected payoff is: $$V_P = \alpha p - c - \nu \bar{\varepsilon};$$ Since $V_P$ has to be positive (the minimax payoff of the provider is 0, given by $e_0 l$), it must be that $\nu \leq \frac{\alpha p - c}{\bar{\varepsilon}}$. The maximum value of $\nu$ is also a good approximation for the maximum percentage of false negative reports the malicious type can submit to the reputation mechanism. Any significantly higher number of harmful reports exposes the malicious type and allows the provider to defend himself. Note, however, that the malicious type can submit a fraction $\nu$ of false reports only when the type $\theta_0(\nu)$ has positive prior probability. When the provider does not believe that a normal client can make so many mistakes (even if the percentage of false reports is still low enough to generate positive revenues), he attributes the false reports to a malicious type, and disengages from cooperative behavior. Therefore, one way to reduce the impact of malicious clients is to make sure that normal clients make few or no mistakes. Technical means (for example, automated tools for formatting and submitting feedback) or improved user interfaces (that make it easier for human users to spot reporting mistakes) will greatly limit the percentage of mistakes made by normal clients, and therefore also reduce the amount of harm done by malicious clients. One concrete method for reducing mistakes is to solicit only negative feedback from the clients (the principle that no news is good news). As reporting involves a conscious decision, mistakes will be less frequent.
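The tolerance bound on $\nu$ derived above is a one-liner; the sketch below evaluates it with the same assumed illustrative parameter values used earlier (not values taken from the paper).

```python
def max_misreport_rate(alpha, p, c, eps_bar):
    """Largest misreporting probability nu the provider can absorb:
    V_P = alpha*p - c - nu*eps_bar must stay non-negative, since the
    provider's minimax payoff (playing e_0 l forever) is 0."""
    return (alpha * p - c) / eps_bar

# Assumed illustrative values: alpha=0.9, p=1.0, c=0.2, eps_bar=2.5.
nu_max = max_misreport_rate(0.9, 1.0, 0.2, 2.5)
V_P = 0.9 * 1.0 - 0.2 - nu_max * 2.5   # provider payoff at the boundary
print(nu_max, V_P)  # nu_max is about 0.28; V_P is numerically ~0
```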
On the other hand, the reporting effort adds to the penalty for a negative report, and makes it harder for normal clients to establish a reputation as honest reporters. Alternative methods for reducing the harm done by malicious clients (e.g., filtering mechanisms), as well as tighter bounds on the percentage of false reports introduced by such clients, will be addressed in future work.

Discussion and Future Work {#future_work}
==========================

Further benefits can be obtained if the clients’ reputation for reporting honestly is shared within the market. The reports submitted by a client while interacting with other providers will change the initial beliefs of a new provider. As we have seen in Section \[buidingReputation\], providers cheat less if they a priori expect with higher probability to encounter honest reporting clients. A client that has once built a reputation for truthfully reporting the provider’s behavior will benefit from cooperative trade during her entire lifetime, without having to convince each provider separately. Therefore, the upper bound on the loss a client has to withstand in order to convince a provider that she is a commitment type becomes an upper bound on the total loss a client has to withstand during her entire lifetime in the market. How to effectively share the reputation of clients within the market remains an open issue. Correlated with this idea is the observation that clients that use our mechanism are motivated to keep their identity. In generalized markets where agents are encouraged to play both roles (e.g., a peer-to-peer file sharing market, where the fact that an agent acts only as “provider” can be interpreted as a strong indication of “double identity” with the intention of cheating), our mechanism also solves the well-known problem of cheap online pseudonyms. The price to pay for a new identity is the loss incurred while building a reputation as a truthful reporter when acting as a client.
Unlike incentive-compatible mechanisms that pay reporters depending on the feedback provided by peers, the mechanism described here is less vulnerable to collusion. The only reason individual clients would collude is to badmouth (i.e., artificially decrease the reputation of) a provider. However, as long as the punishment for negative feedback is not super-linear in the number of reports (this is usually the case), coordinating within a coalition brings no benefits for the colluders: individual actions are just as effective as actions taken as part of a coalition. Collusion between the provider and the client can only accelerate the synchronization of strategies on one of the PPE profiles (collusion on a non-PPE strategy profile is not stable), which is rather desirable. The only profitable collusion can happen when competing providers incentivize normal clients to unfairly downrate their current provider. Colluding clients become *malicious* in this case, and the limits on the harm they can do are presented in Section \[evilBuyers\]. The mechanism we describe here is not a general solution for all online markets. In general retail e-commerce, clients don’t usually interact with the same service provider more than once. As we have shown throughout this paper, the assumption of a repeated interaction is crucial for our results. Nevertheless, we believe there are several scenarios of practical importance that do meet our requirements (e.g., interactions that are part of a supply chain). For these, our mechanism can be used in conjunction with other reputation mechanisms to guarantee reliable feedback and improve the overall efficiency of the market. Our mechanism can be further criticized for being centralized. The reputation mechanism acts as a central authority by supervising monetary transactions, collecting feedback and imposing penalties on the participants. However, we see no problem in implementing the reputation mechanism as a distributed system.
Different providers can use different reputation mechanisms, or can even switch mechanisms, given that some safeguarding measures are in place. Concrete implementations remain to be addressed by future work. Although we present a setting where the service always costs the same amount, our results can be easily extended to scenarios where the provider may deliver different kinds of services, having different prices. As long as the provider believes that requests are randomly drawn from some distribution, the bounds presented above can be computed using the average values of $u$, $p$ and $c$. The constraint on the provider’s belief is necessary in order to exclude some unlikely situations where the provider cheats on a one-time high value transaction, knowing that the following interactions carry little revenue, and therefore cannot impose effective punishments. In this paper, we systematically overestimate the bounds on the worst case percentage of false reports recorded by the mechanism. The computation of tight bounds requires a precise quantitative description of the actual set of PPE payoffs the client and the provider can have in $G^\infty$. The theoretical grounds for computing the set of PPE payoffs in an infinitely repeated game with discount factors strictly smaller than 1 have been laid in the literature. However, efficient algorithms that allow us to find this set are still an open problem. As research in this domain progresses, we expect to be able to significantly lower the upper bounds described in Sections \[GTanalysis\] and \[buidingReputation\]. One direction of future research is to study the behavior of the above mechanism when there is two-sided incomplete information: i.e., the client is also uncertain about the type of the provider. A provider type of particular importance is the “greedy” type, who always tries to hold the client to a continuation payoff arbitrarily close to the minimal one.
In this situation we expect to be able to find an upper bound, $k_C$, on the number of rounds in which a rational client would be willing to test the true type of the provider. The condition $k_P < k_C$ describes the constraints on the parameters of the system for which the reputation effect works in favor of the client: i.e., the provider is the first to give up the “psychological” war and reverts to a cooperative equilibrium. The problem of involuntary reporting mistakes, briefly mentioned in Section \[evilBuyers\], needs further attention. Besides false negative mistakes (reporting $0$ instead of $1$), normal clients can also make false positive mistakes (reporting $1$ instead of the intended $0$). In our present framework, one such mistake is enough to ruin a normal type client’s reputation for reporting honestly. This is one of the reasons why we chose a sequential model where the feedback of the client is not required if the provider acknowledges low quality. Once the reputation of the client becomes credible, the provider always rolls back the transactions that generate (accidentally or not) low quality, so the client is not required to continuously defend her reputation. Nevertheless, the consequences of reporting mistakes in the reputation building phase must be considered in more detail. Similarly, mistakes made by the provider, monitoring errors and communication errors will also influence the results presented here. Last, but not least, practical implementations of the mechanism we propose must address the problem of persistent online identities. One possible attack created by easy identity changes has been mentioned in Section \[evilBuyers\]: malicious buyers can continuously change identity in order to discredit the provider. In another attack, the provider can use fake identities to increase his revenue.
When punishments for negative feedback are generated endogenously by decreased prices in a fixed number of future transactions, the provider can adopt the following strategy: he cheats on all real customers, but generates a sufficient number of fake transactions in between two real transactions, such that the effect created by the real negative report disappears. An easy fix for this latter attack is to charge transaction or entrance fees. However, these measures also affect the overall efficiency of the market, and therefore different applications will most likely need individual solutions.

Conclusions
===========

Effective reputation mechanisms must provide appropriate incentives in order to obtain honest feedback from self-interested clients. For environments characterized by adverse selection, direct payments can explicitly reward honest information by conditioning the amount to be paid on the information reported by other peers. The same technique unfortunately does not work when service providers are subject to moral hazard and can individually decide which requests to satisfy. Sanctioning reputation mechanisms must therefore use other means to obtain reliable feedback. In this paper we describe an incentive-compatible reputation mechanism for settings where the clients also have a repeated presence in the market. Before asking the clients for feedback, we allow the provider to acknowledge failures and reimburse the price paid for the service. When future transactions generate sufficient profit, we prove that there is an equilibrium where the provider behaves as socially desired: he always exerts effort, and reimburses clients that occasionally receive bad service due to uncontrollable factors. Moreover, we analyze the set of pareto-optimal equilibria of the mechanism, and establish a limit on the maximum amount of false information recorded by the mechanism.
The bound depends both on the external alternatives available to clients and on the ease with which they can commit to reporting the truth.

Proof of Proposition \[prop:noZero\] {#ap:noZero}
====================================

*The probability that the client reports negative feedback on the equilibrium path of any pareto-optimal PPE strategy is zero.* *Step 1*. Following the principle of dynamic programming [@Abreu/Pearce/Stacchetti:1990], the payoff profile $V = (V_C,V_P)$ is a PPE of $G^\infty$ if and only if there is a strategy profile $\sigma$ in $G$, and continuation PPE payoff profiles $\{W(y) | y \in Y\}$ of $G^\infty$, such that: - $V$ is obtained by playing $\sigma$ in the current round, and a PPE strategy that gives $W(y)$ as a continuation payoff, where $y$ is the public outcome of the current round, and $Pr[y|\sigma]$ is the probability of observing $y$ after playing $\sigma$: $$\begin{split} V_C &= (1-\delta)g_C(\sigma) + \delta \Big( \sum_{y \in Y} Pr[y|\sigma] \cdot W_C(y) \Big); \\ V_P &= (1-\delta)g_P(\sigma) + \delta \Big( \sum_{y \in Y} Pr[y|\sigma] \cdot W_P(y) \Big); \end{split}$$ - no player finds it profitable to deviate from $\sigma$: $$\begin{split} V_C & \geq (1-\delta)g_C \big( (\sigma_C',\sigma_P) \big) + \delta \Big( \sum_{y \in Y} Pr \big[y|(\sigma_C',\sigma_P)\big] \cdot W_C(y) \Big); \quad \forall \sigma_C' \neq \sigma_C \\ V_P & \geq (1-\delta)g_P \big( (\sigma_C,\sigma_P') \big) + \delta \Big( \sum_{y \in Y} Pr \big[y|(\sigma_C,\sigma_P')\big] \cdot W_P(y) \Big); \quad \forall \sigma_P' \neq \sigma_P\\ \end{split}$$ The strategy $\sigma$ and the payoff profiles $\{W(y) | y \in Y\}$ are said to *enforce* $V$. *Step 2.* Take a PPE payoff profile ${V} = (V_C,V_P)$ such that there is no other PPE payoff profile $V' = (V_C', V_P)$ with $V_C < V_C'$. Let $\sigma$ and $\{W(y) | y \in Y\}$ enforce $V$, and assume that $\sigma$ assigns positive probability $\beta_0 = Pr[q_0 0|\sigma] > 0$ to the outcome $q_0 0$.
If $\beta_1 = Pr[q_0 1|\sigma]$ (possibly equal to 0), let us consider: - the strategy profile $\sigma' = (\sigma_C',\sigma_P)$ where $\sigma'_C$ is obtained from $\sigma_C$ by asking the client to report $1$ instead of $0$ when she receives low quality (i.e., $q_0$); - the continuation payoffs $\{W'(y) | y \in Y\}$ such that $W'_i(q_0 1) = \beta_0 W_i(q_0 0) + \beta_1 W_i(q_0 1)$ and $W'_i(y \neq q_0 1) = W_i(y)$ for $i \in \{C,P\}$. Since the set of correlated PPE payoff profiles of $G^\infty$ is convex, if the $W(y)$ are PPE payoff profiles, so are the $W'(y)$. The payoff profile $(V'_C, V_P)$, $V'_C = V_C + (1-\delta)\beta_0 \varepsilon$, is a PPE payoff profile because it can be enforced by $\sigma'$ and $\{W'(y) | y \in Y\}$. However, $V'_C > V_C$ contradicts our assumption that no PPE payoff profile gives the client more than $V_C$, so $Pr[q_0 0|\sigma]$ must be 0. Following exactly the same argument, we can prove that $Pr[q_1 0|\sigma] = 0$. *Step 3.* Taking $V$, $\sigma$ and $\{W(y) | y \in Y\}$ from Step 2, we have: $$V_C = (1-\delta)g_C(\sigma) + \delta \Big( \sum_{y \in Y} Pr[y|\sigma] \cdot W_C(y) \Big); \label{eq:step3}$$ If no other PPE payoff profile $V' = (V_C', V_P)$ can have $V'_C > V_C$, it must be that the continuation payoffs $W(y)$ satisfy the same property. (Assume otherwise that there is a PPE $(W_C'(y),W_P(y))$ with $W_C'(y) > W_C(y)$. Replacing $W'_C(y)$ in (\[eq:step3\]) we obtain a $V'$ that contradicts the hypothesis.) By continuing the recursion, we obtain that the client never reports $0$ on the equilibrium path that enforces a payoff profile as defined in Step 2. Pareto-optimal payoff profiles clearly enter this category, hence the result of the proposition.

Proof of Proposition \[prop:boundNoRep\] {#ap:boundNoRep}
========================================

Since clients never report negative feedback along pareto-optimal equilibria, the only false reports recorded by the reputation mechanism appear when the provider delivers low quality and the client reports positive feedback.
Let $\sigma = (\sigma_C, \sigma_P)$ be a pareto-optimal PPE strategy profile. $\sigma$ induces a probability distribution over public histories and, therefore, over expected outcomes in each of the following transactions. Let $\mu_t$ be the probability distribution induced by $\sigma$ over the outcomes in round $t$. $\mu_t(q_0 0) = \mu_t(q_1 0) = 0$ as proven by Proposition \[prop:noZero\]. The payoff received by the client when playing $\sigma$ is therefore at most: $$V_C(\sigma) \leq (1-\delta) \sum_{t=0} ^\infty \delta^t \Big( \mu_t(q_0 1) (-p) + \mu_t(q_1 1) (u-p) + \mu_t(l) 0 + \mu_t(out) (u-p-p\rho) \Big);$$ where $\mu_t(q_0 1) + \mu_t(q_1 1) + \mu_t(l) + \mu_t(out) = 1$ and $\mu_t(q_0 1) + \mu_t(l) \geq (1-\alpha) \mu_t(q_1 1) / \alpha$, because the probability of $q_0$ is at least $(1-\alpha)/\alpha$ times the probability of $q_1$. When the discount factor, $\delta$, is the probability that the repeated interaction will continue after each transaction, the expected probability of the outcome $q_0 1$ is: $$\gamma = (1-\delta) \sum_{t=0}^\infty \delta^t \mu_t(q_0 1);$$ Since any PPE profile must give the client at least $\underline{V_C} = u - p(1+\rho)$ (otherwise the client is better off by resorting to the outside option), $V_C(\sigma) \geq \underline{V_C}$. By replacing the expression of $V_C(\sigma)$, and taking into account the constraints on the probability of $q_1$, we obtain: $$\gamma (-p) + (u-p) \cdot \min\big(1-\gamma, \alpha\big) \leq \underline{V_C};$$ $$\gamma \leq \left\{ \begin{array}{ll} \frac{(1-\alpha)(p-u) + p\rho}{p} & \mbox{if $p\rho \leq u(1-\alpha)$}; \\ \frac{p \rho}{u} & \mbox{if $p\rho > u(1-\alpha)$} \end{array} \right.$$

Abreu, D., Pearce, D., and Stacchetti, E. 1990. Econometrica, 58(5), 1041–1063.

Bernheim, B. D. and Ray, D. 1989. Games and Economic Behavior, 1, 295–326.

Birk, A. 2001. In Falcone, R., Singh, M., and Tan, Y.-H. (eds.), Trust in Cyber-societies, LNAI 2246, 133–144. Springer-Verlag, Berlin Heidelberg.

Biswas, A., Sen, S., and Debnath, S. 2000. Applied Artificial Intelligence, 14, 785–797.
Braynov, S. and Sandholm, T. 2002. In Proceedings of the AAMAS, Bologna, Italy.

Cooke, R. 1991. Oxford University Press: New York.

Dellarocas, C. 2002. In Padget, J., et al. (eds.), LNCS 2531, 238–252. Springer Verlag.

Dellarocas, C. 2005. Information Systems Research, 16(2), 209–230.

Farrell, J. and Maskin, E. 1989. Games and Economic Behavior, 1, 327–360.

Friedman, E. and Resnick, P. 2001. Journal of Economics and Management Strategy, 10(2), 173–199.

Fudenberg, D. and Levine, D. 1989. Econometrica, 57, 759–778.

Fudenberg, D., Levine, D., and Maskin, E. 1994. Econometrica, 62(5), 997–1039.

Harmon, A. 2004.

Houser, D. and Wooders, J. 2006. Journal of Economics and Management Strategy, 15, 353–369.

Jurca, R. and Faltings, B. 2006. In Proceedings of the ACM Conference on Electronic Commerce (EC’06), 190–199, Ann Arbor, Michigan, USA.

Kreps, D. M., Milgrom, P., Roberts, J., and Wilson, R. 1982. Journal of Economic Theory, 27, 245–252.

Kreps, D. M. and Wilson, R. 1982. Journal of Economic Theory, 27, 253–279.

Kuwabara, K. 2003. Working paper.

Mailath, G. and Samuelson, L. 2006. Oxford University Press.

Milgrom, P. and Roberts, J. 1982. Journal of Economic Theory, 27, 280–312.

Miller, N., Resnick, P., and Zeckhauser, R. 2005. Management Science, 51, 1359–1373.

Papaioannou, T. G. and Stamoulis, G. D. 2005. In Proceedings of IEEE/ACM CCGRID 2005.

Resnick, P. and Zeckhauser, R. 2002. In Baye, M. (ed.), Advances in Applied Microeconomics, vol. 11. Elsevier Science, Amsterdam.

Schillo, M., Funk, P., and Rovatsos, M. 2000. Applied Artificial Intelligence, 14, 825–848.

Schmidt, K. M. 1993. Econometrica, 61, 325–351.

Selten, R. 1978. Theory and Decision, 9, 127–159.

Yu, B. and Singh, M. 2002. In Proceedings of the AAMAS, Bologna, Italy.

Yu, B. and Singh, M. 2003. In Proceedings of the AAMAS, Melbourne, Australia.

[^1]: www.ebay.com

[^2]: www.amazon.com

[^3]: In reality, the provider might also pay a penalty for rolling back the transaction. As long as this penalty is small, the qualitative results we present in this paper remain valid.

[^4]: The reputation mechanism can buy and sell market licences.

[^5]: All pareto-optimal PPE payoff profiles are also renegotiation-proof [@Bernheim/Ray:1989; @Farrell/Maskin:1989].
This follows from the proof of Proposition \[prop:boundNoRep\]: the continuation payoffs enforcing a pareto-optimal PPE payoff profile are also pareto-optimal. Therefore, clients falsely report positive feedback even under the more restrictive notion of renegotiation-proof equilibrium.
---
abstract: 'Following Sarason’s classification of the densely defined multiplication operators over the Hardy space, we classify the densely defined multipliers over the Sobolev space, $W^{1,2}[0,1]$. In this paper we find that the collection of such multipliers for the Sobolev space is exactly the Sobolev space itself. This sharpens a result of Shields concerning bounded multipliers. The densely defined multiplication operators over the subspace $W_0 = \{ f \in W^{1,2}[0,1] : f(0)=f(1)=0 \}$ are also classified. In this case the densely defined multiplication operators can be written as a ratio of functions in $W_0$ where the denominator is non-vanishing. This is proved using a constructive argument.'
address: '358 Little Hall, PO Box 118105 Gainesville FL 32611-8105'
author:
- 'Joel A. Rosenfeld'
bibliography:
- 'sobolev.bib'
date: 'June 7, 2013'
title: Densely Defined Multiplication on the Sobolev Space
---

Introduction
============

Recall that a reproducing kernel Hilbert space (RKHS), $H$, over a set $X$ is a Hilbert space of functions $f: X \to \mathbb{C}$ for which the evaluation functionals $E_x f = f(x)$ are bounded. By the Riesz representation theorem, this tells us that for each $x \in X$ there is a function $k_x \in H$ such that $f(x) = \langle f | k_x \rangle$ for all $f \in H$. Given a RKHS $H$ over a set $X$, a function $\phi: X \to \mathbb{C}$ is a densely defined multiplier if the set $$D(M_\phi) = \{ f \in H : \phi f \in H \}$$ is dense in $H$. The multiplication operator, $M_\phi f = \phi f$, is a closed operator [@franek], so if $D(M_\phi) = H$, $M_\phi$ is a bounded operator by the closed graph theorem. Moreover, for all $x \in X$, the reproducing kernel $k_x$ is an eigenvector for the (densely defined) adjoint, $M_\phi^*$, with eigenvalue $\overline{\phi(x)}$ [@franek]. Bounded multiplication operators are a well studied class of operators in Operator Theory.
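For concreteness, the reproducing kernel of $W^{1,2}[0,1]$ with the inner product $\langle f | g \rangle = \int_0^1 f\bar{g} + f'\bar{g}'\,dx$ has the standard closed form $k_x(y) = \cosh(\min(x,y))\cosh(1-\max(x,y))/\sinh(1)$; this formula is our addition, not something derived in the paper. The following numerical sketch checks the reproducing property $\langle f | k_x \rangle = f(x)$ for a sample function.

```python
import numpy as np

def k(x, y):
    # Candidate reproducing kernel for W^{1,2}[0,1] (standard formula,
    # assumed here): K(x, y) = cosh(min(x,y)) cosh(1 - max(x,y)) / sinh(1).
    lo, hi = np.minimum(x, y), np.maximum(x, y)
    return np.cosh(lo) * np.cosh(1 - hi) / np.sinh(1)

def k_dy(x, y):
    # Piecewise derivative of K(x, .) in y; the unit jump at y = x is
    # exactly what produces the point evaluation f(x).
    return np.where(y < x,
                    np.cosh(1 - x) * np.sinh(y) / np.sinh(1),
                    -np.cosh(x) * np.sinh(1 - y) / np.sinh(1))

x = 0.3
y = np.linspace(0.0, 1.0, 200001)
f, fp = y**2, 2*y                       # test function f(y) = y^2
vals = f * k(x, y) + fp * k_dy(x, y)    # integrand of <f | k_x>
h = y[1] - y[0]
inner = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
print(inner)   # close to f(0.3) = 0.09
```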
They provide straightforward examples for use in spectral theory, are viewed as transfer functions for linear systems, and, in reproducing kernel Hilbert spaces, they interact nicely with the reproducing kernels. The classification of these operators is an important part of operator theory, and this classification has been carried out for the Hardy space [@aglermccarthy], Fock space [@aglermccarthy], Dirichlet space [@stegenga], and the Bergman space [@luecking]. For the classical Hardy space, $H^2$, for example, the bounded multiplication operators are those operators with symbol $\phi \in H^\infty$. Bounded multipliers have also been classified in the case of the Sobolev space, $$W^{1,2}[0,1] = \{ f: [0,1] \to \mathbb{C} : f \text{ is absolutely continuous }, f' \in L^2[0,1] \}$$ equipped with the inner product: $$\langle f | g \rangle = \int_0^1 \left( f(x)\overline{g(x)} + f'(x)\overline{g'(x)} \right) dx.$$ Alan Shields showed that the Sobolev space is a space of functions where the collection of multipliers is exactly the space itself [@halmos]. This contrasts with the Hardy space, where the collection of multipliers is strictly contained within the Hardy space. Jim Agler [@agler] showed that, like the multipliers for the Hardy space, the multiplication operators for the Sobolev space have the Nevanlinna-Pick property. A thorough investigation of the bounded multipliers of Sobolev spaces in higher dimensions is carried out in [@mazya]. In the present work, densely defined multiplication operators are investigated over the RKHS $W^{1,2}[0,1]$. $W^{1,2}[0,1]$ is special among Sobolev spaces in that it has a reproducing kernel; in general, elements of a Sobolev space are equivalence classes of functions that agree almost everywhere [@evans]. The main result of this paper, Theorem \[allbdd\], states that Shields’ result holds when you relax bounded to densely defined.
That is, the collection of *densely defined* multipliers over the Sobolev space is the space itself. In particular, the Sobolev space provides an example of a space where every densely defined multiplier is in fact bounded. In contrast, the densely defined multipliers over the subspace $$W_0 = \{ f \in W^{1,2}[0,1] : f(0)=f(1)=0 \}$$ are not necessarily bounded operators, Theorem \[ddms\]. It is demonstrated in Theorem \[zerofree\] that every function $\phi$ that is a symbol for a densely defined multiplication operator over $W_0$ can be written as a ratio of functions $h, f \in W_0$ where $f$ does not vanish on $(0,1)$. This can be compared to Sarason’s characterization of densely defined multipliers over the Hardy space $H^2$, where the densely defined multipliers are those functions in the Smirnov class, $N^{+} = \{ b/a : b, a \in H^\infty, a \text{ outer} \}$ [@sarason]. In particular, densely defined multipliers over the Hardy space can be expressed as a ratio of functions in $H^2$ such that the denominator is non-vanishing. Densely Defined Multipliers for the Sobolev Space ================================================= As we will see in the following theorem, the densely defined multipliers of $W_0$ are those functions that are well behaved everywhere but at the endpoints of $[0,1]$. Take for instance the topologist’s sine curve $\phi(x) = \sin(1/x)$. On any interval bounded away from zero, $\sin(1/x)$ is smooth. To determine that $D(M_\phi)$ is dense, it is enough to recognize that the functions that vanish in a neighborhood of zero are in $D(M_\phi)$ and that this collection of functions is dense in $W_0$. We can apply the same reasoning to show that two other poorly behaved functions are symbols for densely defined multiplication operators: $\phi(x) = 1/x$ and $\exp(1/x)$. Here $1/x$ has a simple pole at $0$ and $e^{1/x}$ has an essential singularity. The following theorem can be compared to Theorem 2.3.2 in [@mazya] for bounded multiplication operators. 
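The examples above can be probed numerically. The sketch below (a rough midpoint-rule approximation written for illustration here, not part of the paper) checks, for the symbol $\phi(x)=1/x$, that one function in $W_0$ lands in $D(M_\phi)$ while another does not.

```python
import math

# A rough numerical sketch (not from the paper): approximate the W^{1,2} norm
# on [eps, 1] by a midpoint rule, given a function f and its derivative fp.
def sobolev_norm(f, fp, eps=1e-6, n=200_000):
    h = (1.0 - eps) / n
    total = 0.0
    for k in range(n):
        x = eps + (k + 0.5) * h
        total += (f(x) ** 2 + fp(x) ** 2) * h
    return math.sqrt(total)

# Symbol phi(x) = 1/x, a densely defined multiplier on W_0.
# f(x) = x^2(1-x) is in W_0 and decays fast enough at 0 that
# (phi f)(x) = x(1-x) is again in W_0, so f is in D(M_phi).
phi_f  = lambda x: x * (1 - x)   # phi(x) * f(x), computed by hand
phi_fp = lambda x: 1 - 2 * x     # its derivative

print(sobolev_norm(phi_f, phi_fp))   # a finite W^{1,2} norm

# g(x) = x(1-x) is also in W_0, but (phi g)(x) = 1 - x tends to 1 at 0,
# so phi*g fails to vanish there and g is NOT in D(M_phi).
print((1 / 1e-9) * (1e-9 * (1 - 1e-9)))
```

The point of the contrast is that membership in $D(M_\phi)$ is decided by behavior at the endpoints, exactly as the theorem below makes precise.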
\[ddms\] A function $\phi: (0,1) \to {\mathbb{C}}$ is the symbol for a densely defined multiplier on $W_0$ iff $\phi \in W^{1,2}[a,b]$ for all $[a,b] \subset (0,1)$. First suppose that $\phi$ is a densely defined multiplier on $W_0$. For each $x_0 \in (0,1)$ there is a function $f \in D(M_\phi)$ such that $f(x_0) \neq 0$; this follows from the density of the domain. If every function $f \in D(M_\phi)$ vanished at $x_0$, then $D(M_\phi) \subset \{ k_{x_0}\}^\perp$. This would mean that $\overline{D(M_\phi)} = \left(D(M_\phi)^\perp\right)^\perp \subset \{k_{x_0}\}^\perp$, and this contradicts the density of the domain. Let $h = M_\phi f$, so that $\phi(x) = h(x)/f(x)$ in a neighborhood of $x_0$. The functions $h$ and $f$ are differentiable almost everywhere in a neighborhood of $x_0$, and hence so is $\phi$. Since $x_0$ is arbitrary, $\phi$ is differentiable almost everywhere on $(0,1)$. Fix $[a,b] \subset (0,1)$. By compactness, there exists a finite collection of functions $\{f_1, f_2, ..., f_k\} \subset D(M_\phi)$ together with subsets $$[a,t_1), \ (s_2,t_2), \ (s_3,t_3), \ ... , \ (s_{k-1}, t_{k-1}), \ (s_k, b]$$ so that the subsets cover $[a,b]$ and $f_i$ is bounded away from zero on $[s_i,t_i]$, where we take $s_1 = a$ and $t_k = b$. Since $f_i$ is bounded away from zero on $[s_i, t_i]$, $\phi$ is absolutely continuous on each $[s_i, t_i]$ and hence on $[a,b]$. We wish to show that $\phi \in W^{1,2}[a,b]$, so we set out to show $\phi$ and $\phi^\prime$ are in $L^2[a,b]$. Set $h_i = \phi f_i$, and by the product rule we find $h_i^\prime = \phi^\prime f_i + f_i^\prime \phi$ almost everywhere. Since $\phi$ is continuous on $[s_i, t_i]$, we have $\phi \in L^2[s_i,t_i]$. The function $f_i^\prime$ is also in $L^2[s_i,t_i]$, which implies $\phi f_i^\prime \in L^2[s_i,t_i]$, because $\phi$ is bounded there. Therefore $h^\prime_i-\phi f^\prime_i = \phi^\prime f_i \in L^2[s_i,t_i]$. 
By construction, $f_i$ does not vanish on $[s_i,t_i]$, so $$\inf_{[s_i, t_i]} |f_i(x)|^2\int_{s_i}^{t_i} |\phi^\prime|^2 dx \le \int_{s_i}^{t_i} |\phi^\prime f_i|^2 dx < \infty.$$ Thus $\phi^\prime \in L^2[s_i,t_i]$ for each $i=1,2,...,k$, and $\phi \in W^{1,2}[a,b]$. For the other direction, suppose that $\phi \in W^{1,2}[a,b]$ for all $[a,b] \subset (0,1)$. Let $f \in W_0$ such that $f$ has compact support in $(0,1)$. Let $[a,b]$ be a compact subset of $(0,1)$ containing the support of $f$. Outside of $[a,b]$, $f$ is identically zero and so $f^\prime \equiv 0$ as well. The function $\phi f$ is in $L^2[a,b]$ since it is continuous. Also the function $\phi f^\prime \in L^2[a,b]$ since $\phi$ is continuous and $f^\prime \in L^2[a,b]$, and $\phi^\prime f \in L^2[a,b]$ for the opposite reason. Thus $h := \phi f \in W^{1,2}[a,b]$, and since it vanishes outside the interval, $h \in W_0$. Therefore $f \in D(M_\phi)$, and compactly supported functions are dense in $W_0$. Thus $\phi$ is the symbol of a densely defined multiplication operator. This proof can be extended to prove the following: \[allbdd\]For the Sobolev space, $W^{1,2}[0,1]$, the collection of symbols of densely defined multipliers is $W^{1,2}[0,1]$. In particular, all the densely defined multipliers are bounded. Taking advantage of compactness, we can find a finite collection of functions $f_1, f_2, ... , f_k \in D(M_\phi)$ and intervals $$[0,t_1), \ (s_2,t_2), \ (s_3,t_3), \ ... , \ (s_{k-1}, t_{k-1}), \ (s_k, 1]$$ that cover $[0,1]$ for which $f_i$ is bounded away from zero on $[s_i, t_i]$. This time we are allowed to let $s_1=0$ and $t_k=1$, since the Sobolev space does not require that functions vanish at the endpoints. This means we can find two functions that do not vanish at $0$ and $1$ respectively. Running the same argument as in the previous theorem, we see that $\phi \in W^{1,2}[0,1]$. This means that $D(M_\phi) = W^{1,2}[0,1]$ and $M_\phi$ is a bounded multiplication operator. 
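The closed graph theorem gives boundedness abstractly; a standard product-rule estimate (derived here for illustration, not stated in the paper) makes it quantitative. From $(\phi f)' = \phi' f + \phi f'$ and Minkowski's inequality, $\|\phi f\|_{W^{1,2}} \le \|\phi\|_\infty \|f\|_{W^{1,2}} + \|f\|_\infty \|\phi'\|_{L^2}$. A rough grid check, with a sample symbol $\phi(x)=x^2$ chosen for the demonstration:

```python
import math

# Sketch of the product-rule estimate (not from the paper):
#   ||phi f||_{W^{1,2}} <= ||phi||_inf ||f||_{W^{1,2}} + ||f||_inf ||phi'||_2.
# All norms are approximated on a common midpoint grid over [0,1].

N = 20_000
xs = [(k + 0.5) / N for k in range(N)]

def l2(vals):                      # discrete L^2[0,1] norm
    return math.sqrt(sum(v * v for v in vals) / N)

def w12(f, fp):                    # discrete W^{1,2}[0,1] norm
    return math.sqrt(l2([f(x) for x in xs]) ** 2 + l2([fp(x) for x in xs]) ** 2)

phi, phip = (lambda x: x * x), (lambda x: 2 * x)   # sample symbol phi(x) = x^2
phi_inf = max(abs(phi(x)) for x in xs)
phip_l2 = l2([phip(x) for x in xs])

samples = [
    (lambda x: math.sin(math.pi * x), lambda x: math.pi * math.cos(math.pi * x)),
    (lambda x: x * (1 - x),           lambda x: 1 - 2 * x),
    (lambda x: math.exp(x),           lambda x: math.exp(x)),
]
for f, fp in samples:
    pf  = lambda x: phi(x) * f(x)
    pfp = lambda x: phip(x) * f(x) + phi(x) * fp(x)
    lhs = w12(pf, pfp)
    rhs = phi_inf * w12(f, fp) + max(abs(f(x)) for x in xs) * phip_l2
    assert lhs <= rhs + 1e-9
    print(round(lhs, 3), "<=", round(rhs, 3))
```

The inequality holds exactly for the discrete norms as well, since it only uses pointwise bounds and the triangle inequality in the discrete $\ell^2$ norm.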
For the Hardy space there are many more densely defined multipliers than bounded ones [@sarason]. In fact the Hardy space is properly contained inside its collection of densely defined multipliers, the Smirnov class. In the Sobolev space we see that the collection of densely defined multipliers is exactly the Sobolev space itself, and they are all bounded. The same methods can be used to show the following corollary: Given the Sobolev space $W^{1,2}({\mathbb{R}})$, a function $\phi$ is a densely defined multiplier for $W^{1,2}({\mathbb{R}})$ iff $\phi \in W^{1,2}(E)$ for all compact intervals $E$ of ${\mathbb{R}}$. Local to Global Non-Vanishing Denominator ========================================= We saw in Theorem \[ddms\] that for any point $x \in (0,1)$ we can find a function in the domain that does not vanish in a neighborhood of that point. In other words, we used a local non-vanishing property. Now that we have an explicit description of the densely defined multipliers of $W_0$, we can sharpen this to finding a globally non-vanishing function inside the domain. This means that the symbol $\phi$ of a densely defined multiplication operator on $W_0$ can be expressed as a ratio of two functions in $W_0$ where the denominator is non-vanishing. Ideally, given any densely defined multiplication operator over a Hilbert function space $H$, we would like to express its symbol as a ratio of two functions from $H$ such that the denominator is non-vanishing. As we saw in the proofs of Theorems \[ddms\] and \[allbdd\], we can always do this locally. In [@sarason], this was achieved for the Hardy space through an application of the inner-outer factorization, but there is no such factorization theorem for functions in the Sobolev space. This means we need to try something a little different. 
Looking at our three poorly behaved functions, we can rewrite them as quotients of functions in $W_0$ as follows: $$\begin{array}{ccc} \sin(1/x)&=&{\displaystyle}\frac{x^2(1-x)\sin(1/x)}{x^2(1-x)}\vspace{.2in}\\ {\displaystyle}1/x&=&{\displaystyle}\frac{x(1-x)}{x^2(1-x)}\vspace{.2in}\\ \exp(1/x)&=&{\displaystyle}\frac{x(1-x)}{x(1-x)\exp(-1/x)}\end{array}$$ In the following theorem, a constructive method is described that finds such a ratio of $W_0$ functions. \[zerofree\] If $\phi$ is a densely defined multiplier for $W_0$, then there exists $f \in D(M_\phi)$ such that $f(x) \neq 0$ on $(0,1)$. First, if $\phi \in W^{1,2}[0,1]$, we are finished trivially by writing $\phi(x)= \frac{x(1-x)\phi(x)}{x(1-x)}$. We will assume that $\phi \not\in W^{1,2}[0,1]$. By Theorem \[ddms\], $\phi$ is locally absolutely continuous on $(0,1)$, so if $\phi \not \in W^{1,2}[0,1]$ then either $\phi \not \in W^{1,2}[0,1/2]$ and/or $\phi \not \in W^{1,2}[1/2,1]$. Assume without loss of generality that $\phi \not \in W^{1,2}[0,1/2]$. The function $\phi$ is absolutely continuous on $(0,1/2)$ but not in $W^{1,2}[0,1/2]$, which means either $\phi$ or $\phi^\prime$ is not in $L^2[0,1/2]$. We know by Theorem \[ddms\] that $\phi \in W^{1,2}[a,\frac12]$ for each $a > 0$, but $\phi \not\in W^{1,2}[0,\frac12]$. Both of the functions $\phi$ and $\phi^\prime$ are in $L^2[a, \frac12]$ for all $a > 0$: $\int_a^{.5} |\phi|^2 + |\phi^\prime|^2 dx < \infty$. Construct the sequence: $$a_n = \int_{\frac{1}{2^{n+1}}}^{\frac{1}{2^n}} |\phi|^2 + |\phi^\prime|^2 dx.$$ Define $b_n = \min\left\{ (a_n)^{-1}, (a_{n-1})^{-1}, 1\right\}$. Notice that $a_n b_{n+1} \le 1$, $a_n b_n \le 1$, and $b_n \le 1$ for all $n$. Now we can begin constructing our non-vanishing function $f$. Let $f$ be the function that linearly interpolates the points $\left\{(\frac{1}{2^n}, \frac{b_n}{2^{2n}})\right\}_{n=1}^{\infty}$. Also define $f(0)=0$, and note that $\lim_{x\to0^+} f(x) = 0$. 
To be more precise, we define auxiliary functions $L_n(x)$ by: $$L_n(x) = \left\{ \begin{array}{lcl}\frac{4(b_n) - (b_{n+1})}{2^{n+1}}(x-2^{-n}) + (2^{-2n}(b_n)) &:& x \in (2^{-(n+1)}, 2^{-n}]\\ 0 &:& \text{otherwise} \end{array}\right.$$ Now $f$ can be written as $f = \sum_{n=1}^{\infty} L_n(x)$. The function $f$ is continuous on $[0,\frac12]$ and differentiable almost everywhere. By direct calculations with the $L_n$ it is straightforward to show $f, f^\prime \in L^2[0,\frac12]$, since the slopes of these functions were chosen to decrease geometrically. Thus $f \in W^{1,2}[0,\frac12]$ and $f(0) = 0$. The function $\phi f$ is continuous on $(0,1/2)$ and differentiable almost everywhere. We wish to show that both $\phi f$ and $(\phi f)^\prime = \phi^\prime f + f^\prime \phi$ are in $L^2[0,\frac12]$. First we have: $$\begin{array}{rcl}{\displaystyle}\int_0^{0.5}|\phi f|^2dx &=&{\displaystyle}\sum_{n=1}^{\infty}\int_{1/2^{n+1}}^{1/2^n}|\phi L_n|^2dx\\ &\le&{\displaystyle}\sum_{n=1}^{\infty} a_n \max\left\{\left(\frac{b_n}{2^{2n}}\right)^2, \left(\frac{b_{n+1}}{2^{2(n+1)}}\right)^2\right\}<\infty.\end{array}$$ Similarly $$\begin{array}{rcl}{\displaystyle}\int_0^{0.5}|\phi^\prime f|^2dx &=&{\displaystyle}\sum_{n=1}^{\infty}\int_{1/2^{n+1}}^{1/2^n}|\phi^\prime L_n|^2dx\\ &\le&{\displaystyle}\sum_{n=1}^{\infty} a_n \max\left\{\left(\frac{b_n}{2^{2n}}\right)^2, \left(\frac{b_{n+1}}{2^{2(n+1)}}\right)^2\right\}<\infty,\end{array}$$ and $$\begin{array}{rcl}{\displaystyle}\int_0^{0.5}|\phi f^\prime|^2dx &=&{\displaystyle}\sum_{n=1}^{\infty}\int_{1/2^{n+1}}^{1/2^n}|\phi L_n^\prime|^2dx\\ &\le&{\displaystyle}\sum_{n=1}^\infty a_n \left( \frac{4(b_n) - (b_{n+1})}{2^{n+1}} \right)^2 < \infty.\end{array}$$ Here we see each integral is dominated by a geometric series, and so $\phi f, (\phi f)^\prime \in L^2[0,1/2]$. Next we show that $x\,\phi(x) f(x)$ is absolutely continuous on $[0,\frac12]$. 
Notice that $\phi f$ and $(\phi f)^\prime$ are in $L^2[0,1/2]$, and $\phi f$ is absolutely continuous on $[a,1/2]$ for any $0<a<1/2$. For the moment, call $h = \phi f$. We will show that $xh(x)$ is absolutely continuous on $[0,\frac12]$ and vanishes at $0$. On every interval $(a,1/2]$, $xh(x)$ is absolutely continuous, which is equivalent to $$ah(a) = \frac12 h\left( \frac12 \right) - \int_{a}^{\frac12} \frac{d}{dt} (t h(t) ) dt$$ for all $0<a<\frac12$. Showing that this holds for $a=0$ demonstrates that $xh(x)$ is absolutely continuous on $[0,\frac12]$. First note that $$\begin{array}{rcl}{\displaystyle}\lim_{x\to0^+} |x h(x)| &=&{\displaystyle}\lim_{x\to0^+} \left|x h\left( \frac12 \right) + x \int_{\frac12}^{x} h^\prime(t) dt \right| \vspace{.05in}\\ &\le&{\displaystyle}\lim_{x\to0^+} \left( |x| \left|h\left( \frac12 \right)\right| + |x| \int_x^{\frac12} |h^\prime(t)|dt\right).\end{array}$$ The last limit is zero, since $h^\prime \in L^2[0,1/2] \subset L^1[0,1/2]$. Thus $xh(x)$ can be defined as a continuous function on $[0,\frac12]$ by declaring this function to be zero at $x=0$. Finally, $$\begin{array}{rcl}{\displaystyle}\frac12 h\left(\frac12\right) - \int_{0}^{1/2} \frac{d}{dt} \left( th(t) \right) dt &=&{\displaystyle}\lim_{a \to 0^+} \left(\frac12 h\left(\frac12\right) - \int_{a}^{1/2} \frac{d}{dt} \left( th(t) \right) dt\right) \vspace{.1in}\\ &=&{\displaystyle}\lim_{a \to 0^+} \left(\frac12 h \left( \frac12 \right) - \left( \frac12 h\left( \frac12 \right) - ah(a) \right)\right) \vspace{.1in}\\ &=&{\displaystyle}\lim_{a \to 0^+} ah(a) = 0\end{array}.$$ This proves that $xh(x)$ is absolutely continuous on $[0,1/2]$. Therefore, the function $xh$ is in $W^{1,2}[0,1/2]$ and $(xh)(0)=0$. Thus if we construct $f$ as above for both the left half of $[0,1]$ and the right half, then the function $x(1-x) f$ does not vanish on $(0,1)$ and is in $D(M_\phi) \subset W_0$. Remarks ======= We leave with one last note concerning densely defined multipliers on the Sobolev space. 
We know that if a multiplier is bounded, then its symbol is bounded by the norm of the operator. The question arises: if $\phi$ is known to be the symbol of a densely defined multiplication operator over $W_0$ and $\sup_{x\in (0,1)} |\phi(x)| < \infty$, is $M_\phi$ a bounded multiplier? The answer is: not necessarily. We can produce a counterexample by examining $$\phi(x) = \sqrt{1/4-(x-1/2)^2},$$ which is bounded on $[0,1]$ by $1/2$. By the work above, we know that $\phi$ is the symbol of a densely defined multiplier, since $\phi \in W^{1,2}[a,b]$ for every $[a,b] \subset (0,1)$. However, $\phi^\prime$ is not square integrable on $[0,1]$, so $\phi$ itself does not belong to $W^{1,2}[0,1]$. Therefore even though $\phi$ is a bounded function, the multiplier $M_\phi$ is not bounded.
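The endpoint behavior of this $\phi$ can be made concrete. The sketch below works out the closed form of the tail integral of $|\phi'|^2$ (a computation done here for illustration, not taken from the paper) and shows its logarithmic divergence, confirming that $\phi' \notin L^2[0,1]$.

```python
import math

# A numerical sketch (the closed form is derived here, not taken from the
# paper).  With phi(x) = sqrt(1/4 - (x - 1/2)^2) = sqrt(x(1-x)) one has
#   |phi'(x)|^2 = (1-2x)^2 / (4x(1-x)) = 1/(4x(1-x)) - 1,
# whose integral over [eps, 1/2] diverges like (1/4) log(1/eps) as eps -> 0,
# so phi' is not square integrable on [0,1].

def tail(eps):
    # exact integral of |phi'|^2 over [eps, 1/2]
    return 0.25 * math.log((1 - eps) / eps) - (0.5 - eps)

# cross-check the closed form with a midpoint rule on [0.01, 0.5]
phip2 = lambda x: (1 - 2 * x) ** 2 / (4 * x * (1 - x))
n, a, b = 100_000, 0.01, 0.5
h = (b - a) / n
approx = sum(phip2(a + (k + 0.5) * h) for k in range(n)) * h
print(approx, tail(0.01))

for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    print(eps, tail(eps))          # grows without bound as eps shrinks
```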
--- abstract: | A quantum circuit is a computational unit that transforms an input quantum state into an output one. A natural way to reason about its behavior is to compute explicitly the unitary matrix implemented by it. However, when the number of qubits increases, the matrix dimension grows exponentially and the computation becomes intractable. In this paper, we propose a symbolic approach to reasoning about quantum circuits. It is based on a small set of laws involving some basic manipulations on vectors and matrices. This symbolic reasoning scales better than the explicit one and is well suited to be automated in Coq, as demonstrated with some typical examples. author: - Wenjun Shi - Qinxiang Cao - | \ Yuxin Deng - Hanru Jiang - Yuan Feng title: Symbolic Reasoning about Quantum Circuits in Coq --- Introduction ============ A quantum circuit is a natural model of quantum computation [@NC11]. It is a computational unit that transforms an input quantum state into an output one. Once a quantum circuit is designed to implement an algorithm, it is indispensable to analyze the circuit and ensure that it indeed conforms to the requirements of the algorithm. When a large number of qubits are involved, manually reasoning about a circuit’s behavior is tedious and error-prone. One way of reasoning about quantum circuits (semi-)automatically and reliably is to mechanize the reasoning procedure in an interactive theorem prover, such as the Coq proof assistant [@Coq]. For example, Rand et al. [@RPZ18] verified a few quantum algorithms in Coq, using some semi-automated strategies to generate machine-checkable proofs. Existing approaches have apparent drawbacks in both efficiency and human readability. Quantum states and operations are represented and computed using matrices explicitly, and their comparison is done in an element-wise way, which is highly non-scalable with respect to the number of qubits. 
Furthermore, as the system dimension grows, it is almost impossible for human beings to read the matrices printed by the theorem prover. In this paper, we propose a symbolic approach for reasoning about the behavior of quantum circuits in Coq, which improves both efficiency of the reasoning procedure and readability of matrix representations. The main contributions of this paper include: - A matrix representation in Coq using the Dirac notation [@Dir], which is commonly used in quantum mechanics. Matrices are represented as combinations of $\ket{0}$, $\ket{1}$, scalars, and a set of basic operators such as tensor product and matrix multiplication. Here $\ket{0}$ and $\ket{1}$ are the Dirac notation for 2-dimensional column vectors $[1\ 0]^T$ and $[0\ 1]^T$, respectively. In this way, we have a concise representation for sparse matrices which are common in quantum computation. - A tactic library for (semi-)automated symbolic reasoning about matrices. The tactics are based on a small set of inference laws (lemmas in Coq). The key idea is to reduce matrix multiplications in the form of $\braket{i|j}$ into scalars, and simplify the matrix representation by absorbing ones and eliminating zeros. In this way, our approach reasons about matrices by (semi-)automated rewriting instead of actually computing the matrices, and outperforms the computational approach of Rand et al. [@RPZ18], as shown in proving the functional correctness of some typical quantum algorithms in Section \[sec:casestudy\]. We illustrate the intuition of our tactics by the following simple example which computes the result of applying the Pauli-X gate to the $\ket{0}$ state. 
In an explicit matrix-vector multiplication form, it reads as follows: $$X\ket{0} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \times 1 + 1 \times 0 \\ 1 \times 1 + 0 \times 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \ket{1}$$ and four multiplications are required for the whole computation. By contrast, if we use the Dirac notation for $X$ and apply distribution and associativity laws, then $$\begin{array}{rcl} X\ket{0} & = & \left(\ket{0}\bra{1} + \ket{1}\bra{0}\right)\ket{0} \\ & = & \ket{0}\braket{1|0} + \ket{1}\braket{0|0} \\ & = & 0 \ket{0} + 1 \ket{1} \\ & = & \ket{1}. \end{array}$$ Note that the two terms $\braket{1|0}$ and $\braket{0|0}$ are reduced (symbolically) to 0 and 1, respectively. Consequently, no multiplication is required at all. The rest of the paper is structured as follows. In Section \[sec:pre\] we recall some basic notation from linear algebra and quantum mechanics. In Section \[sec:symbolic\] we introduce a symbolic approach to reasoning about quantum circuits. In Section \[sec:prob\] we discuss some problems of representing matrices using Coq’s type system and our solutions. In Section \[sec:cir\] we propose two notions of equivalence for quantum circuits. In Section \[sec:casestudy\] we conduct a few case studies. In Section \[sec:related\] we discuss some related works. Finally, we conclude in Section \[sec:concl\]. The Coq scripts of our tactic library and the examples used in our case studies are available at the following link <https://github.com/Vickyswj/DiracRepr>. Preliminaries {#sec:pre} ============= For the convenience of the reader, we briefly recall some basic notions from linear algebra and quantum theory which are needed in this paper. For more details, we refer to [@NC11]. 
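Before turning to the linear-algebra background, the contrast between the explicit and the Dirac-style computation of $X\ket{0}$ from the introduction can be reproduced in a few lines. The sketch below is in Python rather than Coq, purely for illustration.

```python
# A sketch in Python (the paper's development is in Coq): build X both as an
# explicit matrix and from the Dirac outer products |0><1| + |1><0|, and
# check that both forms send |0> to |1> and |1> to |0>.

ket0, ket1 = [1, 0], [0, 1]

def outer(u, v):                      # |u><v| for real vectors
    return [[ui * vj for vj in v] for ui in u]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

X_explicit = [[0, 1], [1, 0]]
X_dirac = mat_add(outer(ket0, ket1), outer(ket1, ket0))

assert X_dirac == X_explicit
assert mat_vec(X_dirac, ket0) == ket1          # X|0> = |1>
assert mat_vec(X_dirac, ket1) == ket0          # X|1> = |0>
print("X|0> =", mat_vec(X_dirac, ket0))
```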
Basic linear algebra -------------------- A [*Hilbert space*]{} $\h$ is a complete vector space equipped with an inner product $$\langle\cdot|\cdot\rangle:\h\times \h\rightarrow \mathbb{C}$$ such that 1. $\langle\psi|\psi\rangle\geq 0$ for any $|\psi{\rangle}\in\h$, with equality if and only if $|\psi\rangle =0$; 2. $\langle\phi|\psi\rangle=\langle\psi|\phi\rangle^{\ast}$; 3. $\langle\phi|\sum_i c_i|\psi_i\rangle= \sum_i c_i\langle\phi|\psi_i\rangle$, where $\mathbb{C}$ is the set of complex numbers, and for each $c\in \mathbb{C}$, $c^{\ast}$ stands for the complex conjugate of $c$. A vector $|\psi\rangle\in\h$ is [*normalised*]{} if its length $\sqrt{\langle\psi|\psi\rangle}$ is equal to $1$. Two vectors $|\psi{\rangle}$ and $|\phi{\rangle}$ are [*orthogonal*]{} if ${\langle}\psi|\phi{\rangle}=0$. An [*orthonormal basis*]{} of a Hilbert space $\h$ is a basis $\{|i\rangle\}$ where each $|i{\rangle}$ is normalised and any pair of them are orthogonal. Let $\lh$ be the set of linear operators on $\h$. For any $A\in \lh$, $A$ is [*Hermitian*]{} if $A^\dag=A$ where $A^\dag$ is the adjoint operator of $A$ such that ${\langle}\psi|A^\dag|\phi{\rangle}={\langle}\phi|A|\psi{\rangle}^*$ for any $|\psi{\rangle},|\phi{\rangle}\in\h$. A linear operator $A\in \lh$ is [*unitary*]{} if $A^\dag A=A A^\dag=I_\h$ where $I_\h$ is the identity operator on $\h$. The [*trace*]{} of $A$ is defined as ${{\rm tr}}(A)=\sum_i {\langle}i|A|i{\rangle}$ for some given orthonormal basis $\{|i{\rangle}\}$ of $\h$. It is worth noting that the trace function is actually independent of the orthonormal basis selected. It is also easy to check that the trace function is linear and ${{\rm tr}}(AB)={{\rm tr}}(BA)$ for any $A,B\in \lh$. Let $\h_1$ and $\h_2$ be two Hilbert spaces. 
Their [*tensor product*]{} $\h_1\otimes \h_2$ is defined as a vector space consisting of linear combinations of the vectors $|\psi_1\psi_2\rangle=|\psi_1{\rangle}|\psi_2\rangle =|\psi_1{\rangle}\otimes |\psi_2{\rangle}$ with $|\psi_1\rangle\in \h_1$ and $|\psi_2\rangle\in \h_2$. Here the tensor product of two vectors is defined by a new vector such that $$\left(\sum_i \lambda_i |\psi_i{\rangle}\right)\otimes \left(\sum_j\mu_j|\phi_j{\rangle}\right)=\sum_{i,j} \lambda_i\mu_j |\psi_i{\rangle}\otimes |\phi_j{\rangle}.$$ Then $\h_1\otimes \h_2$ is also a Hilbert space where the inner product is defined in the following way: for any $|\psi_1{\rangle},|\phi_1{\rangle}\in\h_1$ and $|\psi_2{\rangle},|\phi_2{\rangle}\in \h_2$, $${\langle}\psi_1\otimes \psi_2|\phi_1\otimes\phi_2{\rangle}={\langle}\psi_1|\phi_1{\rangle}_{\h_1}{\langle}\psi_2|\phi_2{\rangle}_{\h_2}$$ where ${\langle}\cdot|\cdot{\rangle}_{\h_i}$ is the inner product of $\h_i$. For any $A_1\in \mathcal{L}(\h_1)$ and $A_2\in \mathcal{L}(\h_2)$, $A_1\otimes A_2$ is defined as a linear operator in $\mathcal{L}(\h_1 \otimes \h_2)$ such that for each $|\psi_1\rangle \in \h_1$ and $|\psi_2\rangle \in \h_2$, $$(A_1\otimes A_2)|\psi_1\psi_2\rangle = A_1|\psi_1\rangle\otimes A_2|\psi_2\rangle.$$ Basic quantum mechanics ----------------------- According to von Neumann’s formalism of quantum mechanics [@vN55], an isolated physical system is associated with a Hilbert space which is called the [*state space*]{} of the system. A [*pure state*]{} of a quantum system is a normalised vector in its state space, and a [*mixed state*]{} is represented by a density operator on the state space. Here a density operator $\rho$ on Hilbert space $\h$ is a positive linear operator such that ${{\rm tr}}(\rho)= 1$. Another equivalent representation of a density operator is a probabilistic ensemble of pure states. 
In particular, given an ensemble $\{(p_i,|\psi_i\rangle)\}$ where $p_i \geq 0$, $\sum_{i}p_i=1$, and $|\psi_i\rangle$ are pure states, the operator $\rho=\sum_{i}p_i|\psi_i{\rangle}\langle\psi_i|$ is a density operator. Conversely, each density operator can be generated by an ensemble of pure states in this way. Finally, a pure state can be regarded as a special mixed state. The state space of a composite system (for example, a quantum system consisting of many qubits) is the tensor product of the state spaces of its components. Note that in general, the state of a composite system cannot be decomposed into the tensor product of the reduced states on its component systems. A well-known example is the 2-qubit state $$|\Psi{\rangle}=\frac{1}{\sqrt{2}}(|00{\rangle}+|11{\rangle}) .$$ This kind of state is called an [*entangled state*]{}. Entanglement is an outstanding feature of quantum mechanics which has no counterpart in the classical world, and is the key to many quantum information processing tasks. The evolution of a closed quantum system is described by a unitary operator on its state space. If the states of the system at times $t_1$ and $t_2$ are $|\psi_1{\rangle}$ and $|\psi_2{\rangle}$, respectively, then $|\psi_2{\rangle}=U |\psi_1{\rangle}$ for some unitary operator $U$ which depends only on $t_1$ and $t_2$. A convenient way to understand unitary operators is in terms of their matrix representations. In fact, the unitary operator and matrix viewpoints turn out to be completely equivalent. An $n$ by $n$ complex unitary matrix $U$ with entries $U_{ij}$ can be considered as a unitary operator on the vector space $\mathbb{C}^n$, acting by matrix multiplication of the matrix $U$ with a vector in $\mathbb{C}^n$. We often denote the state of a single qubit as a vector $|\psi{\rangle}= a|0{\rangle}+ b|1{\rangle}$ parameterized by two complex numbers satisfying $|a|^2+|b|^2=1$. 
A unitary operator for a qubit is then described by a $2\times 2$ unitary matrix. Quantum circuits are a popular model for quantum computation, where quantum gates usually stand for basic unitary operators whose mathematical meanings are given by appropriate unitary matrices. Some commonly used quantum gates appearing in the current work include the $1$-qubit Hadamard gate $H$, the Pauli gates $I_2, X, Y, Z$, and the controlled-NOT gate $CX$ performed on two qubits. Their matrix representations are given below: $$I_2=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right) , \qquad X=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right), \qquad Y=\left( \begin{array}{cc} 0 & -i \\ i & 0 \\ \end{array} \right), \qquad Z=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right).$$ $$H=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \\ \end{array} \right), \qquad \qquad CX=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) .$$ For example, in Figure \[GHZ\] we display a circuit that can generate the $3$-qubit GHZ state [@GHZ]. In the circuit, a Hadamard gate is applied on the first qubit, then two controlled-NOT gates are used, with the first qubit controlling the second, which in turn controls the third. $$\Qcircuit @C=.8em @R=1.7em { \lstick{\ket{0}} & \qw & \gate{H} & \qw & \ctrl{1} & \qw & \qw & \qw & \qw \\ \lstick{\ket{0}} & \qw & \qw & \qw & \targ & \qw & \ctrl{1} & \qw & \qw \\ \lstick{\ket{0}} & \qw & \qw & \qw & \qw & \qw & \targ & \qw & \qw \\ }$$ A quantum [*measurement*]{} is described by a collection $\{M_m\}$ of measurement operators, where the indices $m$ refer to the measurement outcomes. It is required that the measurement operators satisfy the completeness equation $\sum_{m}M_m^{\dag}M_m=I_\h$. 
If the state of the quantum system is $|\psi{\rangle}$ immediately before the measurement, then the probability that result $m$ occurs is given by $$p(m)={\langle}\psi| M_m^\dag M_m|\psi{\rangle},$$ and the state of the system after the measurement is $\frac{M_m |\psi{\rangle}}{\sqrt{p(m)}} .$ If the states of the system at times $t_1$ and $t_2$ are mixed, say $\rho_1$ and $\rho_2$, respectively, then $\rho_2=U\rho_1U^{\dag}$ after the unitary operation $U$ is applied on the system. For the same measurement $\{M_m\}$ as above, if the system is in the mixed state $\rho$, then the probability that measurement result $m$ occurs is given by $$p(m)={{\rm tr}}(M_m^{\dag}M_m\rho),$$ and the state of the post-measurement system is $\frac{M_m\rho M_m^{\dag}}{p(m)}$ provided that $p(m)>0$. Symbolic reasoning {#sec:symbolic} ================== \[sec:sym\] ---------------- ---------------------------------------------------------------------------------------------------------------------------------------------- Scalars: $\mathbb{C}$ Basic vectors: $|0{\rangle}$, $|1{\rangle}$ Operators: ${\cdot}$, $\times$, $+$, $\otimes$, $\dag$ Laws: [[**L**]{}]{}1${\langle}0|0{\rangle}= {\langle}1|1{\rangle}= 1$,  ${\langle}0|1{\rangle}= {\langle}1|0{\rangle}= 0$ [[**L**]{}]{}2Associativity of ${\cdot},\ \times,\ +,\ \otimes$ [[**L**]{}]{}3$0 {\cdot}A_{m\times n} = {\textbf{0}}_{m\times n}$,  $c{\cdot}{\textbf{0}}={\textbf{0}}$,  $1 {\cdot}A = A$ [[**L**]{}]{}4$c {\cdot}(A + B) = c {\cdot}A + c {\cdot}B$ [[**L**]{}]{}5$c {\cdot}(A \times B) = (c {\cdot}A)\times B = A \times (c {\cdot}B)$ [[**L**]{}]{}6$c {\cdot}(A \otimes B) = (c {\cdot}A)\otimes B = A \otimes (c {\cdot}B)$ [[**L**]{}]{}7${\textbf{0}}_{m\times n} \times A_{n\times p} = {\textbf{0}}_{m\times p} = A_{m\times n} \times {\textbf{0}}_{n\times p}$ [[**L**]{}]{}8$I_m \times A_{m\times n} =A_{m\times n} = A_{m\times n}\times I_n $,   $I_m\otimes I_n = I_{mn}$ [[**L**]{}]{}9${\textbf{0}}+ A = A = A + {\textbf{0}}$ 
[[**L**]{}]{}10${\textbf{0}}_{m\times n} \otimes A_{p\times q} = {\textbf{0}}_{mp\times nq} = A_{p\times q}\otimes {\textbf{0}}_{m\times n}$ [[**L**]{}]{}11$(A + B) \times C = A \times C + B\times C$,  $C\times (A + B) = C \times A + C \times B$ [[**L**]{}]{}12$(A + B) \otimes C = A \otimes C + B\otimes C$,  $C\otimes (A + B) = C \otimes A + C \otimes B$ [[**L**]{}]{}13$(A\otimes B)\times (C\otimes D) = (A\times C)\otimes (B \times D)$ [[**L**]{}]{}14$(c {\cdot}A)^\dag = c^* {\cdot}A^\dag$,  $(A \times B)^\dag = B^\dag \times A^\dag$ [[**L**]{}]{}15$(A + B)^\dag = A^\dag + B^\dag$,  $(A \otimes B)^\dag = A^\dag \otimes B^\dag$ [[**L**]{}]{}16$(A^\dag)^\dag = A$ ---------------- ---------------------------------------------------------------------------------------------------------------------------------------------- : Terms and laws[]{data-label="t:core"} Our symbolic reasoning is based on terms constructed from scalars and basic vectors using some constructors: - Scalars are complex numbers. We write $\mathbb{C}$ for the set of complex numbers. Our formal treatment of complex numbers is based on the definitions and rewriting tactics from the Coquelicot [@BLM] library of Coq. - Basic vectors are the base states of a qubit, i.e., $|0{\rangle}$ and $|1{\rangle}$ in the Dirac notation. Mathematically, $|0{\rangle}$ stands for the vector $[1\ 0]^T$ and $|1{\rangle}$ for $[0\ 1]^T$. - Constructors include the scalar product ${\cdot}$, matrix product $\times$, matrix addition $+$, tensor product $\otimes$, and the conjugate transpose $A^\dag$ of a matrix $A$. In Dirac notations, ${\langle}0|$ represents the dual of $|0{\rangle}$, i.e. $|0{\rangle}^\dag$; similarly for ${\langle}1|$. The term ${\langle}j|\times |k{\rangle}$ is abbreviated into ${\langle}j|k{\rangle}$, for any $j,k\in\{0,1\}$. This notation introduces an intuitive explanation of quantum operation. 
For example, the effect of the $X$ operator is to map $ |0{\rangle}$ into $|1{\rangle}$ and $|1{\rangle}$ into $|0{\rangle}$. Thus we define $X$ in Coq as $|0{\rangle}{\langle}1| + |1{\rangle}{\langle}0|$, instead of $\begin{bmatrix} 0\, 1\\ 1\, 0\end{bmatrix}$. Then it is obvious that $X |0{\rangle}= |1{\rangle}{\langle}0|0{\rangle}+ |0{\rangle}{\langle}1|0{\rangle}= |1{\rangle}$ and similarly for $X|1{\rangle}$. Some commonly used vectors and gates can be derived from the basic terms. For example, we define the vectors $|+{\rangle},\ |-{\rangle}$, the Hadamard matrix $H$, the Pauli-X gate, and the controlled-NOT gate $CX$ as follows: $$\begin{array}{rcl} |+{\rangle}& = & \frac{1}{\sqrt{2}} {\cdot}|0{\rangle}+ \frac{1}{\sqrt{2}} {\cdot}|1{\rangle}\\ |-{\rangle}& = & \frac{1}{\sqrt{2}} {\cdot}|0{\rangle}+ (-\frac{1}{\sqrt{2}}) {\cdot}|1{\rangle}\\ B_0 & = & |0{\rangle}\times {\langle}0|\\ B_1 & = & |0{\rangle}\times {\langle}1|\\ B_2 & = & |1{\rangle}\times {\langle}0| \\ B_3 & = & |1{\rangle}\times {\langle}1|\\ H & = & \frac{1}{\sqrt{2}} {\cdot}B_0 + \frac{1}{\sqrt{2}} {\cdot}B_1 + \frac{1}{\sqrt{2}} {\cdot}B_2 + (-\frac{1}{\sqrt{2}}) {\cdot}B_3 \\ X & = & B_1 + B_2 \\ CX & = & B_0 \otimes I_2 + B_3\otimes X \end{array}$$ Notice that using the matrices $B_j$ ($j\in \{0,1,2,3\}$), linear combination and tensor product, we can represent any matrix implemented by a quantum circuit without measurements. Suppose the state of a quantum system is represented by a vector. The central idea of our symbolic reasoning is to employ the laws in Table \[t:core\] to rewrite terms, trying to put together the basic vectors and simplify them using the laws in [[[**L**]{}]{}]{}1. Technically, we design a series of strategies for that purpose. Firstly, we define a strategy called [orthogonal$\_$reduce]{} to verify that the laws in [[[**L**]{}]{}]{}1 are sound. In this case, we explicitly represent $|0{\rangle}$ and $|1{\rangle}$ as matrices. 
Both of them are $2\times 1$ matrices, so we can use matrix multiplication to check that the entries of the matrices on both sides of each equation coincide. For example, the law ${\langle}0|0{\rangle}= 1$ is actually shown via $[1\ 0] \begin{bmatrix} 1 \\ 0\end{bmatrix} = 1$. We add the laws in L1 to the set named [S\_db]{}. The laws in L2–16 are also proved through explicit matrix representation. We establish their soundness and then collect them in a library that contains many useful properties about matrices. Secondly, we design a strategy called [base$\_$reduce]{} to prove some equations about the four basic matrices $B_0, ..., B_3$ acting on the base states $|0{\rangle}$ and $|1{\rangle}$. For example, let us consider the commonly used equation $B_0 \times |0{\rangle}= |0{\rangle}$. We first represent $B_0$ by $|0{\rangle}\times {\langle}0|$, then use the associativity of matrix multiplication to form the subterm ${\langle}0| \times |0{\rangle}$. Now we can use the proved laws in L1 to rewrite ${\langle}0| \times |0{\rangle}$ into $1$. The last step is to deal with scalar multiplications. We add these equations to the set named [B\_db]{}. Thirdly, we introduce a strategy called [gate$\_$reduce]{} to prove some equations about the matrices $I,X,Y,Z,H$ acting on base states. For example, consider the equation $X \times |0{\rangle}= |1{\rangle}$. We first expand $X$ into $B_1 + B_2$. In order to prove the equation $(B_1 + B_2) \times |0{\rangle}= |1{\rangle}$, we use the distributivity of matrix multiplication over addition to rewrite the left-hand side of the equation into the sum of $B_1 \times |0{\rangle}$ and $B_2 \times |0{\rangle}$. Then we employ the proved laws in [B\_db]{} to rewrite them. Eventually, we deal with scalar multiplications and cancel zero matrices. We add these equations to the set named [G\_db]{}.
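For readers who want to sanity-check the equations collected in [B\_db]{} and [G\_db]{} outside Coq, the following Python sketch replays a few of them numerically. All helper names ([mul]{}, [dag]{}, and so on) are our own illustrative choices, not part of the Coq development; the $B_j$ matrices are built from $|0{\rangle}$ and $|1{\rangle}$ exactly as in the definitions above.

```python
import math

# Base states |0>, |1> as 2x1 matrices, as in the text.
ket0, ket1 = [[1], [0]], [[0], [1]]

def mul(A, B):    # matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):    # matrix addition
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def dag(A):       # conjugate transpose
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

def scale(c, A):  # scalar product
    return [[c * x for x in r] for r in A]

# B_j = |j><k|, built from the base states.
B0, B1 = mul(ket0, dag(ket0)), mul(ket0, dag(ket1))
B2, B3 = mul(ket1, dag(ket0)), mul(ket1, dag(ket1))
X = add(B1, B2)
s = 1 / math.sqrt(2)
H = add(add(scale(s, B0), scale(s, B1)), add(scale(s, B2), scale(-s, B3)))
plus = add(scale(s, ket0), scale(s, ket1))   # |+>

assert mul(B0, ket0) == ket0   # B0 x |0> = |0>  (an equation in B_db)
assert mul(X, ket0) == ket1    # X x |0> = |1>   (an equation in G_db)
assert mul(H, ket0) == plus    # H x |0> = |+>
```

Running the block raises no assertion, confirming the three sample equations.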
Lastly, we propose a strategy called [operate$\_$reduce]{} that puts together all the results above to reason about circuits applied to input states represented in vector form. For example, let us revisit the 3-qubit GHZ state. The state can be generated by applying the circuit in Figure \[GHZ\] to the initial state $|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}$. We would like to verify that the output state is indeed what we expect by establishing the following equation. $$\label{e:i}\begin{array}{rl} & (I_2 \otimes CX) \times (CX \otimes I_2) \times (H \otimes I_2 \otimes I_2) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}) \\ = & \frac{1}{\sqrt{2}} {\cdot}(|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}) + \frac{1}{\sqrt{2}} {\cdot}(|1{\rangle}\otimes |1{\rangle}\otimes |1{\rangle}) \end{array}$$ We first expand $CX$ into $B_0 \otimes I_2 + B_3\otimes X$. So the left-hand side of the equation turns into $$\label{e:ii}\begin{array}{l} (I_2 \otimes (B_0 \otimes I_2 + B_3\otimes X)) \times ((B_0 \otimes I_2 + B_3\otimes X) \otimes I_2) \\ \times (H \otimes I_2 \otimes I_2) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}). \end{array}$$ Next, we apply all the distributivity laws for matrix addition to rewrite (\[e:ii\]) into a sum of matrices without the addition operator.
So it takes the following form: $$\label{e:iii} \begin{array}{l} (I_2 \otimes B_0 \otimes I_2) \times (B_0 \otimes I_2 \otimes I_2) \times (H \otimes I_2 \otimes I_2) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}) \\ + (I_2 \otimes B_0 \otimes I_2) \times (B_3 \otimes X \otimes I_2) \times (H \otimes I_2 \otimes I_2) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}) \\ + (I_2 \otimes B_3 \otimes X) \times (B_0 \otimes I_2 \otimes I_2) \times (H \otimes I_2 \otimes I_2) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}) \\ + (I_2 \otimes B_3 \otimes X) \times (B_3 \otimes X \otimes I_2) \times (H \otimes I_2 \otimes I_2) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}). \end{array}$$ Then we use the distributivity law for the scalar product to pull scalars to the front of each summand. In this simple example, this operation changes nothing, but in more complex algorithms we find this step necessary. To continue the reasoning, we use associativity laws to do matrix-vector multiplications from right to left. We also exploit the law L13 to change a matrix product of two tensored terms into a tensor product of two matrix multiplications. To illustrate the idea, we show a few intermediate steps in simplifying the first summand in (\[e:iii\]); the other summands are similar. $$\begin{array}{rl} & (I_2 \otimes B_0 \otimes I_2) \times (B_0 \otimes I_2 \otimes I_2) \times (H \otimes I_2 \otimes I_2) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}) \\ = & (I_2 \otimes (B_0 \otimes I_2)) \times (B_0 \otimes (I_2 \otimes I_2)) \times (H \otimes (I_2 \otimes I_2)) \times (|0{\rangle}\otimes (|0{\rangle}\otimes |0{\rangle})) \\ = & (I_2 \times (B_0 \times (H \times |0{\rangle}))) \otimes (B_0 \times (I_2 \times (I_2 \times |0{\rangle}))) \otimes (I_2 \times (I_2 \times (I_2 \times |0{\rangle}))) \\ = & \frac{1}{\sqrt{2}} {\cdot}(|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}).
\end{array}$$ In the above reasoning, we have employed the laws established in [G\_db]{} and [B\_db]{} to rewrite terms. We also deal with scalar multiplications and the cancellation of zero matrices. The above steps appear a bit complex, but fortunately they can be fully automated in Coq. The script implementing the strategy [operate$\_$reduce]{} is as follows:

Ltac operate\_reduce :=
  autounfold with G2\_db;
  distribute\_plus;
  isolate\_scale;
  assoc\_right;
  repeat mult\_kron;
  repeat autorewrite with G\_db;
  reduce\_scale.

In summary, using the strategy [operate$\_$reduce]{}, we can simplify the first summand in (\[e:iii\]) into $\frac{1}{\sqrt{2}} {\cdot}(|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle})$. By the same strategy, the second and third summands turn out to be ${\textbf{0}}$, and the fourth one becomes $\frac{1}{\sqrt{2}} {\cdot}(|1{\rangle}\otimes |1{\rangle}\otimes |1{\rangle})$. As a result, we have formally proved the equation in (\[e:i\]). Although it is very intuitive to represent pure quantum states by vectors, there is an inconvenience. In quantum mechanics, the global phase of a qubit is often ignored. For example, we would not distinguish $|\psi{\rangle}$ from $e^{i\theta}|\psi{\rangle}$ for any $\theta$. However, when written in vector form, $|\psi{\rangle}$ and $e^{i\theta}|\psi{\rangle}$ may be different because of the coefficient $e^{i\theta}$ present in the latter but not in the former. Therefore, we use the symbol $\approx$ to denote this equivalence, i.e. $e^{i\theta}|\psi{\rangle}\approx |\psi{\rangle}$. In fact, we can be more general and define the equivalence for matrices, as given below.

Definition phase\_equiv [m n : nat]{} (A B : Matrix m n) : Prop :=
  exists c : C, Cmod c = R1 /\ c .\* A = B.
Infix “≈” := phase\_equiv.
In the above definition, the condition [Cmod c = R1]{} means that the norm of the complex number [c]{} is one and [c .\* A = B]{} says that the matrix [A]{} is equal to [B]{} after a scalar product with the coefficient [c]{}. See Section \[sec:deu\] for more concrete examples that use the relation $\approx$. Note that if quantum states are represented by density matrices, we have $$(e^{i\theta}|\psi{\rangle})(e^{i\theta}|\psi{\rangle})^\dag ~=~ (e^{i\theta}|\psi{\rangle})(e^{-i\theta}{\langle}\psi|) ~=~ |\psi{\rangle}{\langle}\psi| .$$ Therefore, the discrepancy entailed by the global phase disappears and the two vectors correspond to the same density matrix. Representing states by density matrices is a small but useful trick in formal verification of quantum circuits, which does not seem to have been exploited in the literature. If the state of a quantum system is represented by a density matrix, the reasoning strategies discussed above can still be used. For instance, suppose a system is in the initial state given by density matrix $\rho$. After the execution of a quantum circuit implementing some unitary transformation $U$, the system changes into the new state $\rho'=U\rho U^\dag$. Let $\rho=\sum_j p_j|j{\rangle}{\langle}j|$ be its spectral decomposition, where $p_j$ are eigenvalues of $\rho$ and the vectors $|j{\rangle}$ the corresponding eigenvectors. It follows that $$\label{e:iv}\rho' ~=~ U(\sum_j p_j|j{\rangle}{\langle}j|)U^\dag ~=~ \sum_jp_j U|j{\rangle}(U|j{\rangle})^\dag\ .$$ Therefore, we can first simplify $U|j{\rangle}$ into a vector, take its dual and then obtain $\rho'$ easily. Our approach of symbolic reasoning also applies in this setting. We define two functions [density]{} and [super]{} in advance. The former converts states in the vector form into corresponding states in the density matrix form. The latter formalizes the transformation process between states in the density matrix form. 
Definition density [n]{} (φ : Matrix n 1) : Matrix n n := φ × φ†.
Definition super [m n]{} (M : Matrix m n) : Matrix n n -> Matrix m m := fun ρ => M × ρ × M†.

We introduce the simplification strategy called [super$\_$reduce]{} for states in the density matrix form.

Ltac super\_reduce :=
  unfold super, density;                      (\* Expand super and density \*)
  match goal with                             (\* Match the pattern of the target
                                                 with U × φ × φ† × U† \*)
  | |- context [ @Mmult ?n ?m ?n
        (@Mmult ?n ?m ?m ?A ?B) (@adjoint ?n ?m ?A) ] =>
    match B with
    | @Mmult ?m ?one ?m ?C (@adjoint ?m ?one ?C) =>
      transitivity (@Mmult n one n            (\* Cast uniform types \*)
        (@Mmult n m one A C)
        (@Mmult one m m (@adjoint m one C) (@adjoint n m A)))
    end
  end;
  [repeat rewrite <- Mmult\_assoc; reflexivity | ..];
  rewrite <- Mmult\_adjoint;                  (\* Extract adjoint \*)
  let Hs := fresh “Hs” in
  match goal with
  | |- context [ @Mmult ?n ?o ?n (@Mmult ?n ?m ?o ?A ?B) (@adjoint ?m ?o ?C)
        = @Mmult ?n ?p ?n ?D (@adjoint ?n ?p ?D) ] =>
    match C with
    | @Mmult ?n ?m ?o ?A ?B => assert (@Mmult n m o A B = D) as Hs
    end
  end;
  [try reflexivity; try operate\_reduce       (\* Use operate\_reduce to prove
                                                 states in vector form \*)
  | repeat rewrite Hs; reflexivity].          (\* and rewrite back in density
                                                 matrix form \*)

In the above strategy, we first expand the [density]{} and [super]{} functions in the target. Next, we match the pattern of the target to see whether it is in the right form $U \times |\psi{\rangle}\times {\langle}\psi| \times U^\dagger$ (the middle of the equation in (\[e:iv\])) and cast uniform types, for reasons to be discussed in Section \[sec:prob\]. Then we exploit the law L14 to extract the adjoint of multiplication terms, so that the target becomes $U \times |\psi{\rangle}\times (U \times |\psi{\rangle})^\dagger$ (the right-hand side of the equation in (\[e:iv\])). Finally, we use the strategy [operate$\_$reduce]{} to conduct the proof for states in vector form and rewrite the result back in density matrix form.
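The two definitions and the global-phase argument above can be checked numerically. The following Python sketch (our own illustration, independent of the Coq code; the helper names, the angle $\theta = 0.7$, and the trailing underscore in [super\_]{} are our choices) mirrors [density]{} and [super]{}, and confirms that $e^{i\theta}|\psi{\rangle}$ and $|\psi{\rangle}$ yield the same density matrix.

```python
import math, cmath

ket0 = [[1], [0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dag(A):
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

def scale(c, A):
    return [[c * x for x in r] for r in A]

def density(phi):            # mirrors: density φ := φ × φ†
    return mul(phi, dag(phi))

def super_(M, rho):          # mirrors: super M ρ := M × ρ × M†
    return mul(mul(M, rho), dag(M))

def eq(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A[0])))

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
psi = mul(H, ket0)                 # |+>
phase = cmath.exp(0.7j)            # an arbitrary global phase e^{i theta}

# The global phase disappears at the density-matrix level.
assert eq(density(scale(phase, psi)), density(psi))

# super U (density phi) coincides with density (U x phi), as in (e:iv).
assert eq(super_(H, density(ket0)), density(mul(H, ket0)))
```

Both assertions pass, matching the argument that the discrepancy entailed by the global phase disappears for density matrices.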
Problems from Coq’s type system and our solution {#sec:prob}
================================================

In principle, the Dirac notation is fully symbolic, i.e. no matter how we formalize it, the relevant laws and their proofs should remain unchanged. However, it turns out that different design choices in the formalization do make a difference. Coq is a typed system. Thus, we should decide whether $2^{n+1}\times 2^{n+1}$ matrices and $2^{1+n}\times 2^{1+n}$ matrices, as two different Coq types, should be $\beta\eta$-reducible to each other. From a proof engineering point of view, the answer should be yes. These two kinds of matrices are mathematically the same object, and they should be used interchangeably. However, $(1+n)$ and $(n+1)$ are not $\beta\eta$-reducible to each other in Coq for a general variable $n$. Thus, we have to carefully define the Coq type of matrices so that the two kinds above are reducible to each other. We define a matrix (no matter how large it is) to be a function from two natural numbers (the row and column indices) to complex numbers. But still, there are two problems that need to be solved.

#### The elements outside the range of a matrix.

We could choose to only reason about [*well-formed*]{} matrices whose “outside elements” are all zero. Rand et al. [@PRZ17] heavily used this approach in their work. However, working with well-formed matrices imposes a heavy burden on formal proofs, because the condition of well-formedness needs to be checked each time we manipulate matrices. In our development, we define a relaxed notion of matrix equivalence, so that two matrices are deemed equivalent if they are equal component-wise within the dimensions; outside the dimensions the corresponding elements may differ.
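The relaxed equivalence can be illustrated concretely. In the Python sketch below (our own illustration; the names are hypothetical), matrices are modeled, as in our Coq definition, by total functions from index pairs to numbers, and two functions that disagree outside the declared $2\times 2$ range are still identified.

```python
# Matrices as total functions from index pairs to numbers,
# mirroring our Coq definition of the matrix type.
def A(i, j):
    return 1 if i == j else 0          # identity pattern everywhere

def B(i, j):
    if i >= 2 or j >= 2:               # arbitrary junk outside the 2x2 range
        return 42
    return 1 if i == j else 0

def mat_equiv(M, N, m, n):
    """Relaxed equivalence: compare entries only within the m x n range."""
    return all(M(i, j) == N(i, j) for i in range(m) for j in range(n))

# A and B differ as functions, but are equal as 2x2 matrices.
assert A(3, 3) != B(3, 3)
assert mat_equiv(A, B, 2, 2)
```

No well-formedness side condition is needed: the comparison itself is restricted to the declared dimensions.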
With a slight abuse of notation, we still use the symbol $=$ to denote the newly defined matrix equivalence[^1], and prove its elementary properties with respect to the scalar product, matrix product, matrix addition, tensor product and conjugate transpose. Reasoning about matrices modulo this equivalence turns out to be convenient in Coq. Specifically, the automation of the rewriting strategies mentioned above does not require side-condition proofs about well-formedness.

#### Coq type casting. {#par:type}

In mathematics, $|0{\rangle}\otimes|0{\rangle}$ is a $4\times 1$ matrix, and it is merely more verbose to say that it is a $(2\cdot2) \, \times (1\cdot1)$ matrix. Even though $(2\cdot2) \, \times (1\cdot1)$ is convertible to $4\times 1$, the two typing claims are different in Coq, and this difference is significant in rewriting. For example, the associativity of multiplication says:

forall m n o p (A: matrix m n) (B: matrix n o) (C: matrix o p),
  @Mmult m n p A (@Mmult n o p B C) = @Mmult m o p (@Mmult m n o A B) C.

However, rewriting does not work in the following case:

@Mmult 1 1 1 A (@Mmult (1\*1) (1\*1) (1\*1) B C)

because rewriting performs exact syntactic matching rather than unification. This problem of type mismatch often occurs after we use the law L13 for rewriting. We choose to build a customized rewrite tactic to overcome this problem. Using the example above, we want to rewrite via the associativity of multiplication. We first do a pattern matching for expressions of the form

@Mmult ?m ?n1 ?p1 ?A (@Mmult ?n2 ?o ?p2 ?B ?C)

no matter whether [n1]{} and [p1]{} coincide with [n2]{} and [p2]{}. We then use Coq’s built-in unification to unify [n1]{}, [p1]{} with [n2]{}, [p2]{}. This unification must succeed, or else the original expression of matrix computation is not well-formed. After the expression is changed to

@Mmult ?m ?n1 ?p1 ?A (@Mmult ?n1 ?o ?p1 ?B ?C)

we can use Coq’s original rewrite tactic via the associativity of multiplication.
We handle the above-mentioned type problems silently, so users who formalize their own proofs with our system will not even notice them.

Circuit equivalence {#sec:cir}
===================

In order to judge whether two circuits have the same behavior, we need to formulate reasonable notions of circuit equivalence. We will propose two candidate relations: one is called matrix equivalence and the other operator equivalence.

Matrix equivalence
------------------

A natural way of interpreting a quantum circuit without measurements is to consider each quantum gate as a unitary matrix and the whole circuit as a composition of matrices that eventually reduces to a single matrix. From this viewpoint, two circuits are equivalent if they denote the same unitary matrix; that is, matrix equivalence $=$ suffices to stand for circuit equivalence.

  --------------------------- -----------------------------------------------------
  $XX = I_2$                  $\frac{1}{\sqrt{2}} {\cdot}(X + Z) = H$
  $YY = I_2$                  $H_2 \times CX \times H_2 = CZ$
  $ZZ = I_2$                  $CX \times X_1 \times CX = X_1 \times X_2$
  $HH = I_2$                  $CX \times Y_1 \times CX = Y_1 \times X_2$
  $CX \times CX = I_4$        $CX \times Z_1 \times CX = Z_1$
  $HXH = Z$                   $CX \times X_2 \times CX = X_2$
  $HYH = -Y$                  $CX \times Y_2 \times CX = Z_1 \times Y_2$
  $HZH = X$                   $CX \times Z_2 \times CX = Z_1 \times Z_2$
  --------------------------- -----------------------------------------------------

Directly showing that two matrices are equivalent requires inspecting their elements and comparing them component-wise. Instead, we can take a functional view of matrix equivalence. Let $A, B$ be two matrices; then $A= B$ if and only if $A|v{\rangle}= B|v{\rangle}$ for any vector $|v{\rangle}$.

Lemma MatrixEquiv\_spec: forall [n]{} (A B: Matrix n n),
  A = B <-> (forall v: Vector n, A × v = B × v).

In Figure \[fig:laws\] we list some laws that are often useful in simplifying circuits before showing that they are equivalent.
Let us verify the validity of the laws. Take the first one as an example. Its validity is stated in Lemma [unit$\_$X]{}. In order to prove that lemma, we apply [MatrixEquiv$\_$spec]{} and reduce it to Lemma [unit$\_$X’]{}, which can be easily proved by the strategy [operate$\_$reduce]{}.

Lemma unit\_X : X × X = I\_2.
Lemma unit\_X’ : forall v : Vector 2, X × X × v = I\_2 × v.

In the right column of Figure \[fig:laws\], the subscripts of $X,Y,Z$ and $H$ indicate on which qubits the quantum gates are applied. For example, $X_2$ means that the Pauli-X gate is applied to the second qubit. Thus, the operation $Y_1 \times X_2$ actually stands for $(Y \otimes I_2) \times (I_2 \otimes X)$.

In Figure \[Cir\], we display some equivalent circuits. In diagram (a), on the right of $=$ is a schematic specification of swapping two qubits, which is implemented by the circuit on the left. In diagram (b), there is a controlled operation performed on the second qubit, conditioned on the first qubit being set to zero. It is equivalent to a $CX$ gate enclosed by two Pauli-X gates on the first qubit. In diagram (c), the controlled phase shift gate on the left is equivalent to the two-qubit circuit on the right. In diagram (d), a controlled gate with two targets is equivalent to the concatenation of two $CX$ gates. They are formalized as follows, and all of them can be proved by using the strategy [operate$\_$reduce]{} in conjunction with [MatrixEquiv$\_$spec]{}.

Definition SWAP := B0 ⊗ B0 .+ B1 ⊗ B2 .+ B2 ⊗ B1 .+ B3 ⊗ B3.
Definition XC := X ⊗ B3 .+ I\_2 ⊗ B0.
Lemma Eq1 : SWAP = CX × XC × CX.
Definition not\_CX := B0 ⊗ X .+ B3 ⊗ I\_2.
Lemma Eq2 : not\_CX = (X ⊗ I\_2) × CX × (X ⊗ I\_2).
Definition CE (u: R) := B0 ⊗ I\_2 .+ B3 ⊗ (Cexp u .\* B0 .+ Cexp u .\* B3).
Lemma Eq3 : CE u = (B0 .+ Cexp u .\* B3) ⊗ I\_2.
Definition CXX := B0 ⊗ I\_2 ⊗ I\_2 .+ B3 ⊗ X ⊗ X.
Definition CIX := B0 ⊗ I\_2 ⊗ I\_2 .+ B3 ⊗ I\_2 ⊗ X.
Lemma Eq4 : CXX = CIX × (CX ⊗ I\_2).
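The laws of Figure \[fig:laws\] and the circuit equivalences above can also be checked by brute-force computation. The Python sketch below (our own illustration; the helper names are ours) builds the concrete matrices and verifies three representative identities, including the SWAP decomposition stated in Lemma [Eq1]{}.

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def kron(A, B):  # tensor product
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def eq(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A[0])))

s = 1 / math.sqrt(2)
I2 = [[1, 0], [0, 1]]
H = [[s, s], [s, -s]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
B0, B1 = [[1, 0], [0, 0]], [[0, 1], [0, 0]]
B2, B3 = [[0, 0], [1, 0]], [[0, 0], [0, 1]]
CX = add(kron(B0, I2), kron(B3, X))
XC = add(kron(X, B3), kron(I2, B0))          # the XC defined above
SWAP = add(add(kron(B0, B0), kron(B1, B2)),
           add(kron(B2, B1), kron(B3, B3)))
I4 = kron(I2, I2)

assert eq(mul(mul(H, X), H), Z)              # HXH = Z
assert eq(mul(CX, CX), I4)                   # CX x CX = I_4
assert eq(SWAP, mul(mul(CX, XC), CX))        # Lemma Eq1
```

All three checks pass; of course, such numeric tests complement rather than replace the Coq proofs.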
\[page:cix\] In Section \[sec:symbolic\] we have formalized the preparation of the 3-qubit GHZ state (cf. Figure \[GHZ\]). Now let us have a look at the Bell states. Depending on the input states, the circuit in Figure \[Bell\] gives four possible output states. The correctness of the circuit is validated by the four lemmas below, where the states are given in terms of density matrices and the circuit is described by a super-operator. It is easy to prove them by using our strategy [super$\_$reduce]{}. $$\Qcircuit @C=.8em @R=1.2em { & & & \\ \lstick{x} & \qw & \gate{H} & \qw & \ctrl{2} & \qw & \qw & \\ & & & & & & & \rstick{\beta_{xy}} \\ \lstick{y} & \qw & \qw & \qw & \targ & \qw & \qw & }$$ $$\begin{array}{rcl} |\beta_{00}{\rangle}& = & \frac{1}{\sqrt{2}} {\cdot}|0{\rangle}\otimes |0{\rangle}+ \frac{1}{\sqrt{2}} {\cdot}|1{\rangle}\otimes |1{\rangle}\\ |\beta_{01}{\rangle}& = & \frac{1}{\sqrt{2}} {\cdot}|0{\rangle}\otimes |1{\rangle}+ \frac{1}{\sqrt{2}} {\cdot}|1{\rangle}\otimes |0{\rangle}\\ |\beta_{10}{\rangle}& = & \frac{1}{\sqrt{2}} {\cdot}|0{\rangle}\otimes |0{\rangle}- \frac{1}{\sqrt{2}} {\cdot}|1{\rangle}\otimes |1{\rangle}\\ |\beta_{11}{\rangle}& = & \frac{1}{\sqrt{2}} {\cdot}|0{\rangle}\otimes |1{\rangle}- \frac{1}{\sqrt{2}} {\cdot}|1{\rangle}\otimes |0{\rangle}\end{array}$$

Definition b00 := /√2 .\* (∣0,0⟩) .+ /√2 .\* (∣1,1⟩).
Definition b01 := /√2 .\* (∣0,1⟩) .+ /√2 .\* (∣1,0⟩).
Definition b10 := /√2 .\* (∣0,0⟩) .+ (-/√2) .\* (∣1,1⟩).
Definition b11 := /√2 .\* (∣0,1⟩) .+ (-/√2) .\* (∣1,0⟩).
Lemma pb00 : super (CX × (H ⊗ I\_2)) (density ∣0,0⟩) = density b00.
Lemma pb01 : super (CX × (H ⊗ I\_2)) (density ∣0,1⟩) = density b01.
Lemma pb10 : super (CX × (H ⊗ I\_2)) (density ∣1,0⟩) = density b10.
Lemma pb11 : super (CX × (H ⊗ I\_2)) (density ∣1,1⟩) = density b11.
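As a quick numerical cross-check of the four lemmas above, the following Python sketch (our own illustration; the helper names and the [beta]{} constructor are ours) applies the matrix of $CX \times (H \otimes I_2)$ to the four computational basis states and compares the results with the $\beta_{xy}$ defined above.

```python
import math

ket0, ket1 = [[1], [0]], [[0], [1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def scale(c, A):
    return [[c * x for x in r] for r in A]

def eq(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A[0])))

r = 1 / math.sqrt(2)
I2 = [[1, 0], [0, 1]]
H = [[r, r], [r, -r]]
X = [[0, 1], [1, 0]]
B0, B3 = [[1, 0], [0, 0]], [[0, 0], [0, 1]]
CX = add(kron(B0, I2), kron(B3, X))
circuit = mul(CX, kron(H, I2))            # CX x (H (x) I_2)
kets = {0: ket0, 1: ket1}

def beta(x, y):                           # |beta_xy> from the displayed equations
    first = scale(r, kron(ket0, kets[y]))
    second = scale(-r if x == 1 else r, kron(ket1, kets[1 - y]))
    return add(first, second)

for x in (0, 1):
    for y in (0, 1):
        assert eq(mul(circuit, kron(kets[x], kets[y])), beta(x, y))
```

The loop succeeds for all four inputs, mirroring Lemmas [pb00]{}–[pb11]{} at the vector level.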
Operator equivalence
--------------------

An alternative, more abstract way of interpreting a quantum circuit without measurements is to consider it as an operator that changes input quantum states into output ones. Therefore, we present a notion of operator equivalence to be used for circuit equivalence. Formally, we define an operator on $n$ qubits as a square matrix of dimension $2^n$, and a state (in vector form) on $n$ qubits as a $2^n$-dimensional vector. Applying an operator to a state yields another state. The relations of operator equivalence [OperatorEquiv]{} and state equivalence [StateEquiv]{} are then defined in terms of the phase equivalence $\approx$ introduced in Section \[sec:symbolic\], because global phases are ignored as far as quantum states are concerned.

Definition Operator n := Matrix (Nat.pow 2 n) (Nat.pow 2 n).
Definition State n := Matrix (Nat.pow 2 n) 1.
Definition Apply [n]{} (o: Operator n) (s: State n): State n := @Mmult (Nat.pow 2 n) (Nat.pow 2 n) 1 o s.
Definition OperatorEquiv [n]{} (o1 o2: Operator n): Prop := @phase\_equiv (Nat.pow 2 n) (Nat.pow 2 n) o1 o2.
Definition StateEquiv [n]{} (s1 s2: State n): Prop := @phase\_equiv (Nat.pow 2 n) 1 s1 s2.

The following lemma provides a functional view of operator equivalence; it is a counterpart of [MatrixEquiv$\_$spec]{}. Let $A, B$ be two operators; then $A\approx B$ if and only if $A|\psi{\rangle}\approx B|\psi{\rangle}$ for any state $|\psi{\rangle}$.

Lemma OperatorEquiv\_spec: forall [n]{} (o1 o2: Operator n),
  OperatorEquiv o1 o2 <-> (forall s: State n, StateEquiv (Apply o1 s) (Apply o2 s)).

Furthermore, two states are equivalent, i.e. equal modulo a global phase ($|\psi{\rangle}\approx |\phi{\rangle}$), if and only if their density matrices are exactly the same, i.e. $|\psi{\rangle}{\langle}\psi| = |\phi{\rangle}{\langle}\phi|$.
Lemma StateEquiv\_spec: forall [n]{} (s1 s2: State n),
  StateEquiv s1 s2 <-> @Mmult (Nat.pow 2 n) 1 (Nat.pow 2 n) s1 (s1 †) = @Mmult (Nat.pow 2 n) 1 (Nat.pow 2 n) s2 (s2 †).

Although both matrix equivalence $=$ and operator equivalence $\approx$ can be used for circuit equivalence, the former is strictly finer than the latter. Therefore, in the rest of the paper we relate circuits by $=$ whenever possible, as they are then also related by $\approx$. Moreover, it is not difficult to see that $=$ is a congruence relation. For example, if $A, B$ are two quantum gates and $A=B$, then we can add a control qubit to form controlled-$A$ and controlled-$B$ gates, which are still identified by $=$. However, the relation $\approx$ does not satisfy such a congruence property. In Section \[sec:deu\] we will see a concrete example of using $\approx$, where quantum states are identified by purposefully ignoring their global phases.

Case studies {#sec:casestudy}
============

To illustrate the power of our symbolic approach to reasoning about quantum circuits, we conduct a few case studies and compare the approach with the computational one in [@PRZ17].

Deutsch’s algorithm {#sec:deu}
-------------------

Given a Boolean function $f : \{0,1\} \rightarrow \{0,1\}$, Deutsch [@Deu85] presented a quantum algorithm that can compute $f(0)\oplus f(1)$ in a single evaluation of $f$. The algorithm can tell whether $f(0)$ equals $f(1)$ or not, without giving any information about the two values individually. The quantum circuit in Figure \[Deu\] gives an implementation of the algorithm. It makes use of a quantum oracle that maps any state $|x{\rangle}\otimes|y{\rangle}$ to the state $|x{\rangle}\otimes |y\oplus f(x){\rangle}$, where $x,y\in\{0,1\}$.
More specifically, the unitary operator $U_f$ can be in one of the following four forms:

- if $f(0)=f(1)=0$, then $U_f=U_{f00}=I_2 \otimes I_2$;

- if $f(0)=f(1)=1$, then $U_f=U_{f11}=I_2\otimes X$;

- if $f(0)=0$ and $f(1)=1$, then $U_f=U_{f01}=CX$;

- if $f(0)=1$ and $f(1)=0$, then $U_f=U_{f10}=B_0 \otimes X + B_3 \otimes I_2$.

$$\Qcircuit @C=0.8em @R=1.8em { \lstick{\ket{0}} & \qw & \qw & \gate{H} & \qw & \qw & \multigate{2}{\mathcal{ \begin{array}{cccc} x & & & x \\ & & &\\ & & U_f &\\ & & &\\ y & & & y \oplus f(x) \\ \end{array} }} & \qw & \gate{H} & \qw & \qw \\ & & &\\ \lstick{\ket{1}} & \qw & \qw & \gate{H} & \qw & \qw & \ghost{\mathcal{ \begin{array}{cccc} x & & & x \\ & & &\\ & & U_f &\\ & & &\\ y & & & y \oplus f(x) \\ \end{array} }} & \qw & \qw & \qw & \qw \\ & \rstick{\ket{\psi_0}} & & & \rstick{\ket{\psi_1}} & & & & \lstick{\ket{\psi_2}} & & & \lstick{\ket{\psi_3}} }$$ We formalize Deutsch’s algorithm in Coq and use our symbolic approach to prove its correctness. Let us suppose that $|\psi_0{\rangle}= \ket{01}$ is the input state. There are three phases in this quantum circuit. The first phase applies the Hadamard gate to each of the two qubits. So we define the initial state and express the state after the first phase as follows:

Definition ψ0 := ∣0⟩ ⊗ ∣1⟩.
Definition ψ1 := (H ⊗ H) × ψ0.
Lemma step1 : ψ1 = ∣+⟩ ⊗ ∣-⟩.

Lemma [step1]{} claims that the intermediate state after the first phase is $|+{\rangle}\otimes |-{\rangle}$. We can use the strategy [operate$\_$reduce]{} designed in Section \[sec:symbolic\] to prove its correctness. The second phase applies the unitary operator $U_f$ to $|\psi_1{\rangle}$. Since $U_f$ has four possible forms, we consider four cases.

Definition ψ20 := (I\_2 ⊗ I\_2) × ψ1.
Definition ψ21 := (I\_2 ⊗ X) × ψ1.
Definition ψ22 := CX × ψ1.
Definition ψ23 := (B0 ⊗ X .+ B3 ⊗ I\_2) × ψ1.
Lemma step20 : ψ20 = ∣+⟩ ⊗ ∣-⟩.
Lemma step21 : ψ21 = -1 .\* ∣+⟩ ⊗ ∣-⟩.
Lemma step22 : ψ22 = ∣-⟩ ⊗ ∣-⟩.
Lemma step23 : ψ23 = -1 .\* ∣-⟩ ⊗ ∣-⟩.

Each of the above four lemmas corresponds to one case. They claim that after the second phase the intermediate state $|\psi_{2}{\rangle}$ is $\pm1 {\cdot}|+{\rangle}\otimes |-{\rangle}$ when $f(0) = f(1)$, and $\pm1 {\cdot}|-{\rangle}\otimes |-{\rangle}$ when $f(0) \neq f(1)$. We prove the four lemmas by rewriting $|\psi_1{\rangle}$ with Lemma [step1]{} and using the strategy [operate$\_$reduce]{} again. The last phase applies the Hadamard gate to the first qubit of $|\psi_{2}{\rangle}$. So we still have four cases.

Definition ψ30 := (H ⊗ I\_2) × ψ20.
...
Lemma step30 : ψ30 = ∣0⟩ ⊗ ∣-⟩.
Lemma step31 : ψ31 = -1 .\* ∣0⟩ ⊗ ∣-⟩.
Lemma step32 : ψ32 = ∣1⟩ ⊗ ∣-⟩.
Lemma step33 : ψ33 = -1 .\* ∣1⟩ ⊗ ∣-⟩.

Observe that the only difference between $|\psi_{30}{\rangle}$ and $|\psi_{31}{\rangle}$ lies in the global phase $-1$, which can be ignored. Similarly for $|\psi_{32}{\rangle}$ and $|\psi_{33}{\rangle}$. Formally, we can prove the following lemmas.

Lemma step31’ : ψ31 ≈ ∣0⟩ ⊗ ∣-⟩.
Lemma step33’ : ψ33 ≈ ∣1⟩ ⊗ ∣-⟩.

Therefore, after the last phase, we have $|\psi_{3}{\rangle}= |0{\rangle}\otimes |-{\rangle}$ when $f(0) = f(1)$, and $|\psi_{3}{\rangle}= |1{\rangle}\otimes |-{\rangle}$ when $f(0) \neq f(1)$. This is proved by using the intermediate results obtained in the first two phases and the strategy [operate$\_$reduce]{}. The above reasoning about Deutsch’s algorithm proceeds step by step and shows all the intermediate states in each phase. Alternatively, one may be interested only in the output state of the circuit once an input state is fed in. In other words, we would like to show a property like $$|\psi_{3ij}{\rangle}= (H\otimes I_2)\times U_{fij}\times (H\otimes H) \times |\psi_0{\rangle}.$$ Formally, we need to prove four equations, depending on the form of $U_f$.

Lemma deutsch00 : (H ⊗ I\_2) × (I\_2 ⊗ I\_2) × (H ⊗ H) × (∣0⟩ ⊗ ∣1⟩) = ∣0⟩ ⊗ ∣-⟩ .
Lemma deutsch01 : (H ⊗ I\_2) × (I\_2 ⊗ X) × (H ⊗ H) × (∣0⟩ ⊗ ∣1⟩) = -1 .\* ∣0⟩ ⊗ ∣-⟩ .
Lemma deutsch10 : (H ⊗ I\_2) × CX × (H ⊗ H) × (∣0⟩ ⊗ ∣1⟩) = ∣1⟩ ⊗ ∣-⟩ .
Lemma deutsch11 : (H ⊗ I\_2) × (B0 ⊗ X .+ B3 ⊗ I\_2) × (H ⊗ H) × (∣0⟩ ⊗ ∣1⟩) = -1 .\* ∣1⟩ ⊗ ∣-⟩.

The second and fourth equations can be written in a simpler form as follows.

Lemma deutsch01’ : (H ⊗ I\_2) × (I\_2 ⊗ X) × (H ⊗ H) × (∣0⟩ ⊗ ∣1⟩) ≈ ∣0⟩ ⊗ ∣-⟩ .
Lemma deutsch11’ : (H ⊗ I\_2) × (B0 ⊗ X .+ B3 ⊗ I\_2) × (H ⊗ H) × (∣0⟩ ⊗ ∣1⟩) ≈ ∣1⟩ ⊗ ∣-⟩ .

Using our symbolic reasoning, these lemmas can be easily proved. Thus, we know that the first qubit of the resulting state is $|0{\rangle}$ when $f(0) = f(1)$, and $|1{\rangle}$ otherwise. That is, $|\psi_{3ij}{\rangle}= |f(0) \oplus f(1){\rangle}|-{\rangle}$ as expected. In the above reasoning, states are described by vectors. We can obtain a similar result if states are written in density matrix form.

Teleportation
-------------

Quantum teleportation [@BB93] is one of the most important protocols in quantum information theory. It teleports an unknown quantum state by sending only classical information, making use of a maximally entangled state. Let the sender and the receiver be $Alice$ and $Bob$, respectively. The quantum teleportation protocol goes as follows, as illustrated by the quantum circuit in Figure \[Tele\].

1.  $Alice$ and $Bob$ prepare an EPR state $|\beta_{00}\rangle_{q_2,q_3}$ together. Then they share the qubits, $Alice$ holding $q_2$ and $Bob$ holding $q_3$.

2.  To transmit the state $|\psi{\rangle}$ of qubit $q_1$, $Alice$ applies a $CX$ operation on $q_1$ and $q_2$, followed by an $H$ operation on $q_1$.

3.  $Alice$ measures $q_1$ and $q_2$ and sends the outcome $x$ to $Bob$.

4.  When $Bob$ receives $x$, he applies appropriate Pauli gates to his qubit $q_3$ to recover the original state $|\psi{\rangle}$ of $q_1$.
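The four protocol steps can be replayed numerically. The Python sketch below (our own illustration; the helper names and the sample amplitudes $a=0.6$, $b=0.8$ are our choices, not part of the Coq development) applies, for each measurement outcome $(i,j)$, the same operators as in the protocol and checks that the unnormalized result is $\frac{1}{2}|i{\rangle}\otimes|j{\rangle}\otimes|\psi{\rangle}$.

```python
import math

ket0, ket1 = [[1], [0]], [[0], [1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def scale(c, A):
    return [[c * x for x in r] for r in A]

def eq(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A[0])))

r = 1 / math.sqrt(2)
I2 = [[1, 0], [0, 1]]
H = [[r, r], [r, -r]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
B0, B3 = [[1, 0], [0, 0]], [[0, 0], [0, 1]]
CX = add(kron(B0, I2), kron(B3, X))
N = {0: B0, 1: B3}            # measurement operators N_0, N_1
kets = {0: ket0, 1: ket1}

a, b = 0.6, 0.8               # an arbitrary (real) choice for |psi> = a|0> + b|1>
psi = add(scale(a, ket0), scale(b, ket1))
bell00 = add(scale(r, kron(ket0, ket0)), scale(r, kron(ket1, ket1)))

# |psi_2> = (H (x) I (x) I) x (CX (x) I) x (|psi> (x) |beta_00>)
psi2 = mul(kron(kron(H, I2), I2), mul(kron(CX, I2), kron(psi, bell00)))

def power(M, e):              # M^e for e in {0, 1}
    return M if e == 1 else I2

for i in (0, 1):
    for j in (0, 1):
        out = mul(kron(kron(N[i], N[j]), I2), psi2)       # measurement branch
        out = mul(kron(kron(I2, I2), power(X, j)), out)   # Bob applies X^j
        out = mul(kron(kron(I2, I2), power(Z, i)), out)   # then Z^i
        # unnormalized outcome: (1/2) |i> (x) |j> (x) |psi>
        assert eq(out, scale(0.5, kron(kron(kets[i], kets[j]), psi)))
```

In all four branches, Bob's qubit ends up carrying the original state $|\psi{\rangle}$, as the protocol promises.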
$$\Qcircuit @C=0.8em @R=2em { \lstick{\ket{\psi}} & \qw & \qw & \qw & \ctrl{1} & \qw & \qw & \qw & \gate{H} & \qw & \qw & \qw & \measureD{M_1} & \cw & \cw & \cw & \cw & \control \cw \cwx[2] \\ \lstick{} & \qw & \qw & \qw & \targ & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \measureD{M_2} & \cw & \cw & \control \cw \cwx[1] \\ \lstick{} & \qw & \qw & \qw &\qw & \qw &\qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \gate{X^{M_2}} & \qw & \gate{Z^{M_1}} & \qw & \qw & \rstick{\ket{\psi}} \inputgroupv{2}{3}{0.5em}{1.5em}{\ket{\beta_{00}}}\\ & \rstick{\ket{\psi_0}} & & & & \rstick{\ket{\psi_1}} & & & & & & \lstick{\ket{\psi_2}} & & & \lstick{\ket{\psi_3}} & & & & & & \lstick{\ket{\psi_4}}}$$ We formalize the quantum teleportation algorithm in Coq and use our symbolic approach to prove its correctness. Let $|\psi{\rangle}=\alpha \ket{0}+\beta \ket{1}$ be an arbitrary vector used as a part of the input state. The other part is $|\beta_{00}{\rangle}$, which needs extra preparation. For simplicity, we directly represent $|\beta_{00}{\rangle}$ as a combination of $|0{\rangle}$ and $|1{\rangle}$. So we define the input state $|\psi_0{\rangle}$ in Coq as follows.

Definition ψ (a b : C) : Vector 2 := a .\* ∣0⟩ .+ b .\* ∣1⟩.
Definition bell00 := /√2 .\* (∣0⟩ ⊗ ∣0⟩) .+ /√2 .\* (∣1⟩ ⊗ ∣1⟩).
Definition ψ0 := ψ a b ⊗ bell00.

The input state goes through the quantum circuit, which comprises four phases. We define the quantum states immediately after phases 2, 3 and 4 as follows.
$$\begin{array}{rcl} |\psi_2{\rangle}& := & (H\otimes I_2\otimes I_2)\times(CX \otimes I_2)\times (|\psi{\rangle}\otimes |\beta_{00}{\rangle})\\ |\psi_{3ij}{\rangle}& := & (N_i\otimes N_j\otimes I_2)\times |\psi_2{\rangle}\\ |\psi_{4ij}{\rangle}& := & (I_2 \otimes I_2 \otimes Z^i) \times (I_2 \otimes I_2 \otimes X^j) \times |\psi_{3ij}{\rangle}\end{array}$$ In the third phase, owing to the measurement with measurement operators $\{N_0,N_1\}$, where $N_0=B_0$ and $N_1=B_3$, there are four possible cases for the state $|\psi_3{\rangle}$, and the probability of each case can be calculated as $${\langle}\psi_2|\times (N_i \otimes N_j\otimes I_2)^\dagger\times (N_i\otimes N_j\otimes I_2)\times |\psi_2{\rangle}~=~ 1/4.$$ Let $i,j\in\{0,1\}$ be the measurement outcomes for the top two qubits. The quantum state after the fourth phase becomes $|\psi_{4ij}{\rangle}$. Using the simplification strategies discussed in Section \[sec:symbolic\], it is easy to prove that $|\psi_{4ij}{\rangle}$ simplifies to $\frac{1}{2}|i{\rangle}\otimes|j{\rangle}\otimes|\psi{\rangle}$, which is the correct final state without normalization. Instead of step-by-step reasoning, we may be concerned only with the final state after the circuit and would like to show the following equality: $$\begin{array}{cl} |\psi_{4ij}{\rangle}= (I_2 \otimes I_2 \otimes Z^i) \times (I_2 \otimes I_2 \otimes X^j) \times (N_i\otimes N_j\otimes I_2)\times\\ (H\otimes I_2\otimes I_2)\times(CX \otimes I_2) \times (|\psi{\rangle}\otimes |\beta_{00}{\rangle}). \end{array}$$ In the formal proof, we have four cases to consider. They correspond to the four lemmas below.

Lemma tele00 : forall (a b : C), (N0 ⊗ N0 ⊗ I\_2) × (H ⊗ I\_2 ⊗ I\_2) × (CX ⊗ I\_2) × (ψ a b ⊗ bell00) = / 2 .\* ∣0⟩ ⊗ ∣0⟩ ⊗ (ψ a b) .
Lemma tele01 : forall (a b : C), (I\_2 ⊗ I\_2 ⊗ X) × (N0 ⊗ N1 ⊗ I\_2) × (H ⊗ I\_2 ⊗ I\_2) × (CX ⊗ I\_2) × (ψ a b ⊗ bell00) = / 2 .\* ∣0⟩ ⊗ ∣1⟩ ⊗ (ψ a b) .
Lemma tele10 : forall (a b : C), (I\_2 ⊗ I\_2 ⊗ Z) × (N1 ⊗ N0 ⊗ I\_2) × (H ⊗ I\_2 ⊗ I\_2) × (CX ⊗ I\_2) × (PS a b ⊗ bell00) = / 2 .\* ∣1⟩ ⊗ ∣0⟩ ⊗ (PS a b) . Lemma tele11 : forall (a b : C), (I\_2 ⊗ I\_2 ⊗ Z) × (I\_2 ⊗ I\_2 ⊗ X) × (N1 ⊗ N1 ⊗ I\_2) × (H ⊗ I\_2 ⊗ I\_2) × (CX ⊗ I\_2) × (PS a b ⊗ bell00) = / 2 .\* ∣1⟩ ⊗ ∣1⟩ ⊗ (PS a b) . The above lemmas can be quickly proved using our symbolic approach. They show that the third qubit of the result state is always equal to $|\psi{\rangle}$, the state to be teleported from Alice to Bob. If the input state is given by a density matrix, we can also prove that the final output state of the circuit is in the correct form. The lemma below is a counterpart of Lemma [tele00]{}, with states represented by density matrices instead of vectors. Lemma Dtele00 : forall (a b : C), super ((N0 ⊗ N0 ⊗ I\_2) × (H ⊗ I\_2 ⊗ I\_2) × (CX ⊗ I\_2)) (density (PS a b ⊗ bell00)) = density (/ 2 .\* ∣0⟩ ⊗ ∣0⟩ ⊗ (PS a b)) . Simon’s Algorithm ----------------- Simon’s problem was posed in 1994 [@Simon97]. Although it is an artificial problem, it inspired Shor to discover a polynomial-time algorithm for the integer factorization problem. Given a function $f : \{0,1\}^n \rightarrow \{0,1\}^n$, suppose there exists a string $s \in \{0,1\}^n$ such that the following property is satisfied: $$\label{eq:v}\begin{array}{cl} f(x) = f(y) ~~\Leftrightarrow~~ x=y\ \mbox{ or }\ x \oplus y =s \end{array}$$ for all $x,y \in \{0,1\}^n$. Here $\oplus$ is the bit-wise modulo-2 addition of two $n$-bit strings. The goal of Simon’s algorithm is to find the string $s$. The algorithm consists of iterating the quantum circuit and then performing some classical post-processing. 1. Set an initial state $|0^n{\rangle}\otimes |0^n{\rangle}$, and apply Hadamard gates to the first $n$ qubits respectively. 2. Apply an oracle $U_f$ to all the $2n$ qubits, where $U_f: |x{\rangle}|y{\rangle}\mapsto |x{\rangle}|f(x) \oplus y{\rangle}$. 3. 
Apply Hadamard gates to the first $n$ qubits again and then measure them. When $s \neq 0^n$, the probability of obtaining each string $y\in\{0,1\}^n$ is $$p_y =\left\{ \begin{array}{ll} 2^{-(n-1)} \qquad & {\rm if} \quad s {\cdot}y = 0\\ 0 & {\rm if} \quad s {\cdot}y = 1. \end{array} \right.$$ Therefore, the outcome string $y$ always satisfies $s {\cdot}y = 0$, and all such strings occur with equal probability. Repeating this process $n-1$ times, we get $n-1$ strings $y_1,\cdots,y_{n-1}$ such that $y_i \cdot s=0$ for $1\leq i\leq n-1$. Thus we have $n-1$ linear equations with $n$ unknowns ($n$ is the number of bits in $s$). The goal is to solve this system of equations to get $s$. If we are lucky and $y_1,...,y_{n-1}$ are linearly independent, we obtain a unique non-zero solution $s$. Otherwise, we repeat the entire process and will find a linearly independent set with high probability. $$\Qcircuit @C=1em @R=1em { & & & & & & & & &\\ \lstick{\ket{0}} & \qw & \gate{H} & \qw & \ctrl{2} & \qw & \qw & \gate{H} & \qw &\qw \\ \lstick{\ket{0}} & \qw & \gate{H} & \qw & \qw & \ctrl{1} & \qw & \gate{H} & \qw &\qw \\ \lstick{\ket{0}} & \qw & \qw & \qw & \targ & \targ & \qw & \qw & \qw &\qw\\ \lstick{\ket{0}} & \qw & \qw & \qw & \gate{X} & \qw & \qw & \qw & \qw &\qw \\ & & & & & & & & & \quad \gategroup{1}{4}{6}{7}{.7em}{.} \\ }$$ As an example, we consider Simon’s algorithm with $n = 2$. The quantum circuit is displayed in Figure \[Simon11\]. We design the oracle as the gates in the dotted box $U_f = (I_2 \otimes CX \otimes I_2) \times (CIX \otimes X)$, where the gate $CIX$ was defined earlier. For this oracle, $s = 11$ satisfies property (\[eq:v\]). 
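Before walking through the state evolution, the oracle construction can be sanity-checked numerically. The sketch below (Python with numpy; a numeric illustration of ours, not part of the Coq development) assumes that $CIX$ denotes a CNOT whose control is the first and whose target is the third qubit, with the middle wire idle, and confirms that the whole circuit maps $|0000{\rangle}$ to the state derived next.

```python
import numpy as np
from functools import reduce

kron = lambda *ops: reduce(np.kron, ops)

ket = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]  # |0>, |1>
basis = lambda bits: kron(*[ket[x] for x in bits])
I2 = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
B0 = ket[0] @ ket[0].T  # |0><0|
B3 = ket[1] @ ket[1].T  # |1><1|

# Assumption: CIX = CNOT with control on qubit 1 and target on qubit 3
CIX = kron(B0, I2, I2) + kron(B3, I2, X)
Uf = kron(I2, CX, I2) @ kron(CIX, X)

out = kron(H, H, I2, I2) @ Uf @ kron(H, H, I2, I2) @ basis([0, 0, 0, 0])
expected = 0.5 * (basis([0, 0, 0, 1]) + basis([1, 1, 0, 1])
                  + basis([0, 0, 1, 1]) - basis([1, 1, 1, 1]))
assert np.allclose(out, expected)
```

The assertion reproduces, for this concrete oracle, exactly the final state proved symbolically in Coq below.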
The change of states can be seen as follows: $$\begin{array}{rl} |0000{\rangle}& \xrightarrow{H \otimes H \otimes I_2 \otimes I_2} |++{\rangle}|00{\rangle}\\ & \xrightarrow{U_f} \frac{1}{2}[(|00{\rangle}+|11{\rangle})|01{\rangle}+(|01{\rangle}+|10{\rangle})|11{\rangle}]\\ & \xrightarrow{H \otimes H \otimes I_2 \otimes I_2} \frac{1}{2}[(|00{\rangle}+|11{\rangle})|01{\rangle}+(|00{\rangle}-|11{\rangle})|11{\rangle}] \end{array}$$ We can establish the following lemma with our symbolic approach: Lemma simon : super ((H ⊗ H ⊗ I\_2 ⊗ I\_2) × (I\_2 ⊗ CX ⊗ I\_2) × (CIX ⊗ X) × (H ⊗ H ⊗ I\_2 ⊗ I\_2)) (density ∣0,0,0,0⟩) = density (/2 .\* ∣0,0,0,1⟩ .+ /2 .\* ∣1,1,0,1⟩ .+ /2 .\* ∣0,0,1,1⟩ .+ -/2 .\* ∣1,1,1,1⟩). We analyze the cases where the last two qubits are in the state $|01{\rangle}$ or $|11{\rangle}$. In either case, the corresponding first two qubits are in $|00{\rangle}$ or $|11{\rangle}$, each occurring with equal probability. By property (\[eq:v\]), this means that $x \oplus y = 00$ or $11$ for the two preimages of each function value, so we obtain $s = 11$. Grover’s algorithm ------------------ In this section we consider Grover’s search algorithm. The algorithm starts from the initial state $|0{\rangle}^{\otimes n}$. It first uses $H^{\otimes n}$ (the $H$ gate applied to each of the $n$ qubits) to obtain a uniform superposition state, and then applies the Grover iteration repeatedly. An implementation of the Grover iteration has four steps: 1. Apply the oracle $O$. 2. Apply the Hadamard transform $H^{\otimes n}$. 3. Perform a conditional phase shift on $|x{\rangle}$ if $|x{\rangle}\neq |0{\rangle}$. 4. Apply the Hadamard transform $H^{\otimes n}$ again. Here the conditional phase-shift unitary operator in the third step is $2|0{\rangle}{\langle}0|-I$. 
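The conditional phase shift in step 3 is exactly the diagonal operator $\mathrm{diag}(1,-1,\dots,-1)$: it leaves $|0{\rangle}$ untouched and flips the sign of every other basis state. A one-line numeric check (Python/numpy, our own sketch) for $n=2$:

```python
import numpy as np

n = 2
N = 2 ** n
e0 = np.zeros((N, 1)); e0[0, 0] = 1.0     # |0...0>
phase = 2 * (e0 @ e0.T) - np.eye(N)       # 2|0><0| - I
# |0> keeps its sign; every other basis state |x> is multiplied by -1
assert np.allclose(phase, np.diag([1.0, -1.0, -1.0, -1.0]))
```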
We can merge the last three steps as follows: $$\begin{array}{cl} H^{\otimes n} \times (2|0{\rangle}{\langle}0|-I) \times H^{\otimes n} = 2|\phi{\rangle}{\langle}\phi|-I \end{array}$$ where $|\phi{\rangle}= \frac{1}{\sqrt{N}}\sum\limits_{x=0}^{N-1} |x{\rangle}$ with $N=2^n$. Therefore, the Grover iteration becomes $G=(2|\phi{\rangle}{\langle}\phi|-I) \times O$. As a concrete example, we consider Grover’s algorithm with two qubits. The size of the search space of this algorithm is four, so we need to consider four search cases with $x^* = 0,1,2,3$. The oracle must satisfy $f(x^*)=1$ and $f(x)=0$ for all $x \neq x^*$. So, in accordance with $x^* = 0,1,2,3$, we design four oracles $ORA_0, ..., ORA_3$, which are implemented by the four circuits in Figure \[G3\]. Definition ORA0 := B0 ⊗ (B0 ⊗ X .+ B3 ⊗ I\_2) .+ B3 ⊗ I\_2 ⊗ I\_2. Definition ORA1 := B0 ⊗ CX .+ B3 ⊗ I\_2 ⊗ I\_2. Definition ORA2 := B0 ⊗ I\_2 ⊗ I\_2 .+ B3 ⊗ (B0 ⊗ X .+ B3 ⊗ I\_2). Definition ORA3 := B0 ⊗ I\_2 ⊗ I\_2 .+ B3 ⊗ CX. $$\Qcircuit @C=.8em @R=1.7em { & \qw & \gate{H} & \qw & \multigate{2}{ORA} & \qw & \gate{H} & \qw & \gate{X} & \qw & \qw & \qw & \ctrl{1} & \qw & \qw & \qw & \gate{X} & \qw & \gate{H} & \qw\\ & \qw & \gate{H} & \qw & \ghost{\mathcal{ORA}} & \qw & \gate{H} & \qw & \gate{X} & \qw & \gate{H} & \qw & \targ & \qw & \gate{H} & \qw & \gate{X} & \qw & \gate{H} & \qw\\ & \qw & \gate{H} & \qw & \ghost{\mathcal{ORA}} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \gate{H} & \qw \gategroup{1}{9}{2}{18}{.7em}{.} }$$ The whole algorithm is illustrated by the circuit in Figure \[G4\]. The gates in the dotted box perform the conditional phase shift operation $2|0{\rangle}{\langle}0|-I$. We then merge the front and back $H \otimes H$ gates into it and obtain the operation $CPS$ as follows. Definition MI := (B0 .+ B1 .+ B2 .+ B3) ⊗ (B0 .+ B1 .+ B2 .+ B3). Definition CPS := (((/2 .\* MI) .+ (-1) .\* (I\_2 ⊗ I\_2)) ⊗ I\_2). 
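Both the merging identity $H^{\otimes n} \times (2|0{\rangle}{\langle}0|-I) \times H^{\otimes n} = 2|\phi{\rangle}{\langle}\phi|-I$ and the four search lemmas stated next can be cross-checked numerically for $n=2$. The following sketch (Python/numpy; our own illustration mirroring the Coq definitions above) does both:

```python
import numpy as np
from functools import reduce

kron = lambda *ops: reduce(np.kron, ops)

ket = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
basis = lambda bits: kron(*[ket[x] for x in bits])
I2 = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
B0 = ket[0] @ ket[0].T
B3 = ket[1] @ ket[1].T

# CPS as defined above: MI is the all-ones matrix, since B0 + B1 + B2 + B3 = ones
CPS = kron(0.5 * np.ones((4, 4)) - np.eye(4), I2)

# merging identity for n = 2: H^(x)2 (2|0><0| - I) H^(x)2 = 2|phi><phi| - I
H2 = kron(H, H)
e0 = basis([0, 0])
phi = np.full((4, 1), 0.5)  # uniform superposition over the 4 basis states
assert np.allclose(H2 @ (2 * e0 @ e0.T - np.eye(4)) @ H2, 2 * phi @ phi.T - np.eye(4))
assert np.allclose(CPS, kron(2 * phi @ phi.T - np.eye(4), I2))

# one Grover iteration finds x* exactly for each oracle ORA_0 ... ORA_3
ORA = [kron(B0, kron(B0, X) + kron(B3, I2)) + kron(B3, I2, I2),
       kron(B0, CX) + kron(B3, I2, I2),
       kron(B0, I2, I2) + kron(B3, kron(B0, X) + kron(B3, I2)),
       kron(B0, I2, I2) + kron(B3, CX)]
for k in range(4):
    out = kron(I2, I2, H) @ CPS @ ORA[k] @ kron(H, H, H) @ basis([0, 0, 1])
    assert np.allclose(out, basis([k >> 1, k & 1, 1]))
```

The final loop is the numeric counterpart of Lemmas Gro0 to Gro3 below: a single iteration suffices because the search space has size four.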
So we have the Grover iteration $G=(2|\phi{\rangle}{\langle}\phi|-I) \times O = CPS \times ORA_i$. Let the initial state be $|0{\rangle}\otimes |0{\rangle}\otimes |1{\rangle}$. After the Hadamard transform $H^{\otimes3}$, we perform the Grover iteration only once to get the search solution. In summary, we formalize Grover’s algorithm with two qubits in vector form as follows, and use our symbolic approach to prove the lemmas. The reasoning can also be carried out with density matrices. Lemma Gro0: (I\_2 ⊗ I\_2 ⊗ H) × CPS × ORA0 × (H ⊗ H ⊗ H) × ∣0,0,1⟩ = ∣0,0,1⟩. Lemma Gro1: (I\_2 ⊗ I\_2 ⊗ H) × CPS × ORA1 × (H ⊗ H ⊗ H) × ∣0,0,1⟩ = ∣0,1,1⟩. Lemma Gro2: (I\_2 ⊗ I\_2 ⊗ H) × CPS × ORA2 × (H ⊗ H ⊗ H) × ∣0,0,1⟩ = ∣1,0,1⟩. Lemma Gro3: (I\_2 ⊗ I\_2 ⊗ H) × CPS × ORA3 × (H ⊗ H ⊗ H) × ∣0,0,1⟩ = ∣1,1,1⟩. Experiments -----------

                Deutsch   Simon    Teleportation   Secret sharing   QFT     Grover
--------------- --------- -------- --------------- ---------------- ------- --------
Symbolic        2860      36560    40712           58643            34710   363160
Computational   25190     183230   46450           168680           68730   966140

: Comparison of two approaches with verification time in milliseconds[]{data-label="t:result"}

We have conducted experiments on Deutsch’s algorithm, Simon’s algorithm, quantum teleportation, the quantum secret sharing protocol, the quantum Fourier transform (QFT) with three qubits, and Grover’s search algorithm with two qubits. In Table \[t:result\], we record the execution time of these examples in milliseconds in CoqIDE 8.10.0 running on a PC with an Intel Core i5-7200 CPU and 8 GB RAM. As we can see in the table, our symbolic approach always outperforms the computational one in [@PRZ17]. The computational approach is slow because of the explicit representation of matrices and inefficient tactics for evaluating matrix multiplications. Let us consider a simple example. 
In the computational approach, the Hadamard gate $H$ is defined by [ha]{} below: Definition ha : Matrix 2 2 := fun x y => match x, y with | 0, 0 => (1 / √2) | 0, 1 => (1 / √2) | 1, 0 => (1 / √2) | 1, 1 => -(1 / √2) | \_, \_ => 0 end. Since $H$ is unitary, we have $HH = I$ and the following property becomes straightforward. Lemma H3\_ket0: (ha ⊗ ha ⊗ ha) × (ha ⊗ ha ⊗ ha) × (∣0,0,0⟩) = (∣0,0,0⟩). However, proving the above lemma with the computational approach is far from trivial. To see why, let us go through the main steps. Firstly, we apply the associativity of matrix multiplication on the left-hand side of the equation so as to rewrite it into $$\begin{array}{rl} & (H \otimes H \otimes H) \times ((H \otimes H \otimes H) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle})) . \end{array}$$ Secondly, each explicitly represented matrix is converted into a two-dimensional list and matrix multiplications are calculated in order. Finally, we need to show that each of the eight elements in the vector on the left is equal to the corresponding element on the right. Let $A_0 = (H \otimes H \otimes H) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle})$ and $A_1 = (H \otimes H \otimes H) \times A_0$. With the computational approach, obvious simplifications such as multiplication and addition with $0$ and $1$ are carried out for the elements in $A_0$ and $A_1$, but no more sophisticated simplification is effectively handled. 
So $A_0$ is a two-dimensional list with each element in the form $\frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}}$ and $A_1$ is a two-dimensional list whose first element is $$\begin{array}{l} (\frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times (\frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}})) \\ + (\frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times (\frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}})) \\ + \ ... \\ + (\frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times (\frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}} \times \frac{1}{\sqrt{2}})),\end{array}$$ which is a summation of eight identical summands with $\frac{1}{\sqrt{2}}$ multiplied by itself six times; the other elements are in similar forms. From this simple example, we can already see that the explicit matrix representation and ineffective simplification in matrix multiplication make the intermediate expressions very cumbersome. On the contrary, in the symbolic approach we have $$\begin{array}{rcl} A_1 & = & (H \otimes H \otimes H) \times (H \otimes H \otimes H) \times (|0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle})\\ & = & (H \times H \times |0{\rangle}) \otimes (H \times H \times |0{\rangle}) \otimes (H \times H \times |0{\rangle}) \\ & = & (H \times |+{\rangle})\otimes (H \times |+{\rangle})\otimes (H \times |+{\rangle}) \\ & = & |0{\rangle}\otimes |0{\rangle}\otimes |0{\rangle}.\end{array}$$ Notice that here we have kept the structure of tensor products rather than eliminating them. In fact, we evaluate tensor products lazily because they are expensive to calculate, and preserving higher-level structure opens more opportunities for rewriting. The symbolic reasoning not only renders the intermediate expressions more readable, but also greatly reduces the time cost of arithmetic calculations. 
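The factor-wise rewriting above relies on the mixed-product property $(A\otimes B)\times(C\otimes D)=(A\times C)\otimes(B\times D)$. A small numeric illustration (Python/numpy, our own sketch) confirms that the explicit $8\times 8$ route and the factor-wise route agree:

```python
import numpy as np
from functools import reduce

kron = lambda *ops: reduce(np.kron, ops)

ket0 = np.array([[1.0], [0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# computational route: multiply explicit 8x8 matrices
full = kron(H, H, H) @ (kron(H, H, H) @ kron(ket0, ket0, ket0))
# symbolic route: simplify inside each tensor factor first
factorwise = kron(H @ (H @ ket0), H @ (H @ ket0), H @ (H @ ket0))

assert np.allclose(full, factorwise)
assert np.allclose(factorwise, kron(ket0, ket0, ket0))  # since H * H = I
```

The factor-wise route only ever multiplies $2\times 2$ matrices, which is the source of the performance gap reported in Table \[t:result\].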
In general, in the computational approach a multiplication of two $N \times N$ matrices of $O(k)$-length expressions results in a matrix of $O(Nk)$-length expressions, and those expressions are not effectively simplified. At the end of the computation, a matrix of $O(N^m)$-length expressions is obtained if $m+1$ matrices of size $N \times N$ are multiplied together, which takes exponential time to simplify. In our approach, we represent matrices symbolically and simplify intermediate expressions effectively on the fly, which yields much better performance. Related work {#sec:related} ============ Formal verification in quantum computing has been growing rapidly, especially in Coq. Boender et al. [@BKN15] presented a framework for modeling and analyzing quantum protocols using Coq. They made use of the Coq repository C-CoRN [@LCHF04] and built a matrix library with dependent types. Cano et al. [@CCDMS16] designed CoqEAL, a library built on top of ssreflect [@ssref], to develop efficient computer algebra programs with proofs of correctness. They represented a matrix as a list of lists for efficient generic matrix computation in Coq, but they did not consider optimizations specific to the matrices commonly used in quantum computation. Rand et al. [@PRZ17] defined a quantum circuit language QWIRE in Coq, and formally verified some quantum programs expressed in that language [@RPZ18; @RPLZ19]. Reasoning using their matrix library usually requires explicit computation, which does not scale well, as discussed in Section \[sec:casestudy\]. Hietala et al. [@HRHWH19] developed a quantum circuit compiler VOQC in Coq, which uses several peephole optimization techniques such as replacement, propagation, and cancellation, as proposed by Nam et al. [@YNYA18], to reduce the number of unitary transformations. It is very different from our symbolic approach of simplifying matrix operations using the Dirac notation. Mahmoud et al. 
[@MF19] formalized the semantics of Proto-Quipper in Coq and formally proved the type soundness property. They developed a linear logical framework within the Hybrid system [@FM12] and used it to represent and reason about the linear type system of Quipper [@GLRSV13]. Note that although sparse matrix computation is well studied in other areas of Computer Science, we are not aware of any library in Coq dedicated to sparse matrices. We consider the symbolic approach proposed in the current work as a contribution in this respect. Apart from Coq, other proof assistants have also been used to verify quantum circuits and programs. Liu et al. [@LZWYLLYZ19] used the theorem prover Isabelle/HOL [@NPW02] to formalize a quantum Hoare logic [@Yin16] and verify its soundness and completeness for partial correctness. Unruh \[6\] developed a relational quantum Hoare logic and implemented an Isabelle-based tool to prove the security of post-quantum cryptography and quantum protocols. Beillahi et al. [@BMT19] verified quantum circuits with up to 190 two-qubit gates in HOL Light. Their work relies on the formalization of Hilbert spaces in HOL Light proposed by Mahmoud et al. in [@MAT13], where a number of laws about complex functions and linear operators are proved. Although linear operators correspond to matrices in the finite-dimensional case, our results are not implied by those in [@BMT19; @MAT13] because we are in a different setting, i.e., Coq. Furthermore, one of our main contributions is to represent sparse matrices using the Dirac notation, which improves readability and makes it convenient to cancel zero matrices arising from the orthogonality of basis vectors. Notice that the laws in Table \[t:core\] play an important role in our symbolic reasoning about quantum circuits. Although they resemble some laws of a ring, the matrices under our consideration can be of various dimensions and they do not form a ring. It is also critical that the multiplication of two matrices, e.g. 
a row vector and a column vector, could be a scalar number (and even zero). Thus, rings are not enough here. The proof-by-reflection technique for rings might be useful but is usually hard to develop. We have shown that the tactic-based method is already efficient in our application scenario, and also flexible for both fully-automated and interactive proofs. Conclusion and future work {#sec:concl} ========================== We have proposed a symbolic approach to reasoning about quantum circuits in Coq. It is based on a small set of equational laws which are exploited to design some simplification strategies. According to our case studies, the approach scales better than the usual one of explicitly representing matrices and is well suited to be automated in Coq. Dealing with quantum circuits is our intermediate goal. More interesting algorithms such as Shor’s algorithm [@Sho94] also require classical computation. In the near future, we plan to formalize in Coq the semantics of a quantum programming language with both classical and quantum features. [8]{} C.H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. Wootters. Teleporting an unknown quantum state via dual classical and EPR channels. [*Physical Review Letters*]{}, 70:1895–1899, 1993. Nielsen M A, Chuang I L. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, 2011. Coq Development Team. The Coq Proof Assistant Reference Manual. Electronic resource, available from <http://coq.inria.fr>. David Deutsch. Quantum theory, the Church-Turing principle, and the Universal Quantum Computer. [*Proceedings of the Royal Society of London A*]{}, 400:97-117, 1985. Robert Rand, Jennifer Paykin, Steve Zdancewic. QWIRE Practice: Formal Verification of Quantum Circuits in Coq. In [*Proceedings of the 14th International Conference on Quantum Physics and Logic*]{}, EPTCS 266: 119-132, 2018. Dirac P A M. A New Notation for Quantum Mechanics. 
[*Mathematical Proceedings of the Cambridge Philosophical Society*]{} 35(3): 416-418, 1939. Shor P W. Algorithms for Quantum Computation: Discrete Log and Factoring. In [*Proc. FOCS 1994*]{}, 124-133, IEEE Computer Society, 1994. Sylvie Boldo, Catherine Lelay, Guillaume Melquiond. Coquelicot. Available at\ <http://coquelicot.saclay.inria.fr/>. Daniel M. Greenberger, Michael A. Horne, Anton Zeilinger. Bell’s theorem, Quantum Theory, and Conceptions of the Universe. pp. 73-76, Kluwer Academics, Dordrecht, The Netherlands, 1989. Jennifer Paykin, Robert Rand, Steve Zdancewic. QWIRE: A Core Language for Quantum Circuits. In [*Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages*]{}, 52: 846-858, 2017. Jaap Boender, Florian Kammüller, Rajagopal Nagarajan. Formalization of Quantum Protocols using Coq. In [*Proceedings of the 12th International Workshop on Quantum Physics and Logic*]{}, EPTCS 195:71-83, 2015. Cruz-Filipe, Herman Geuvers, Freek Wiedijk. C-CoRN, the Constructive Coq Repository at Nijmegen. [*Mathematical Knowledge Management*]{}, Lecture Notes in Computer Science 3119: 88-103, 2004. Guillaume Cano, Cyril Cohen, Maxime Dénès, Anders Mörtberg, Vincent Siles. CoqEAL - The Coq Effective Algebra Library. <https://github.com/CoqEAL/CoqEAL>, 2016. Mathematical Components team. Mathematical Components. <https://math-comp.github.io>. Robert Rand, Jennifer Paykin, Dong-Ho Lee, Steve Zdancewic. ReQWIRE: Reasoning about Reversible Quantum Circuits. In [*Proceedings of the 15th International Conference on Quantum Physics and Logic*]{}, EPTCS 287: 299-312, 2019. Kesha Hietala, Robert Rand, Shih-Han Hung, Xiaodi Wu, Michael Hicks. Verified Optimization in a Quantum Intermediate Representation. In [*Proceedings of the 16th International Conference on Quantum Physics and Logic*]{}, CoRR abs/1904.06319, 2019. Yunseong Nam, Neil J. Ross, Yuan Su, Andrew M. Childs, Dmitri Maslov. 
Automated Optimization of Large Quantum Circuits with Continuous Parameters. [*npj Quantum Information*]{}, 4(1): 23, 2018. Mahmoud M Y, Felty A P. Formalization of Metatheory of the Quipper Quantum Programming Language in a Linear Logic. [*Journal of Automated Reasoning*]{}, 63(4): 967-1002, 2019. Felty A P, Momigliano A. Hybrid: A Definitional Two-Level Approach to Reasoning with Higher-Order Abstract Syntax. [*Journal of Automated Reasoning*]{}, 48(1): 43-105, 2012. Green A S, Lumsdaine P L, Ross N J, Selinger P, Valiron B. Quipper: A Scalable Quantum Programming Language. [*ACM SIGPLAN Notices*]{}, 48(6): 333-342, 2013. Anticoli L, Piazza C, Taglialegne L, Zuliani P. Towards Quantum Programs Verification: From Quipper Circuits to QPMC. In [*International Conference on Reversible Computation*]{}, LNCS 9720: 213-219, 2016. Junyi Liu, Bohua Zhan, Shuling Wang, Shenggang Ying, Tao Liu, Yangjia Li, Mingsheng Ying, Naijun Zhan. Formal Verification of Quantum Algorithms Using Quantum Hoare Logic. In [*Proc. CAV 2019*]{}, LNCS 11562: 187-207. Springer, 2019. Tobias Nipkow, Lawrence C. Paulson, Markus Wenzel. Isabelle/HOL: A Proof Assistant for Higher-Order Logic. Lecture Notes in Computer Science. Springer, 2002. J. von Neumann. States, Effects and Operations: Fundamental Notions of Quantum Theory. Princeton University Press, 1955. Mingsheng Ying. Foundations of Quantum Programming. Morgan Kaufmann, 2016. Beillahi S M, Mahmoud M Y, Tahar S. A Modeling and Verification Framework for Optical Quantum Circuits. [*Formal Aspects of Computing*]{} 31: 321-351, 2019. Mahmoud M Y, Aravantinos V, Tahar S. Formalization of Infinite Dimension Linear Spaces with Application to Quantum Theory. In [*Nasa Formal Methods Symposium*]{}, LNCS 7871: 413-427, 2013. Daniel R. Simon. On the power of quantum computation. [*SIAM Journal on Computing*]{}, 26(5): 1474-1483, 1997. [^1]: Nevertheless, we keep our Coq scripts in the GitHub repository more rigid. 
There we use $\equiv$ to stand for the relaxed notion of matrix equivalence and reserve $=$ for the stronger notion of equality, in the sense that $A=B$ means the two matrices $A$ and $B$ are equal component-wise both within and outside their dimensions.
--- abstract: 'Current theories assume that the low intensity of the stellar extragalactic background light (stellar EBL) is caused by the finite age of the Universe: the finite-age factor limits the number of photons that galaxies have pumped into space, and thus the sky is dark at night. We oppose this opinion and show that two main factors are responsible for the extremely low intensity of the observed stellar EBL. The first factor is the low mean surface brightness of galaxies, which causes a low luminosity density in the local Universe. The second factor is light extinction due to absorption by galactic and intergalactic dust. Dust produces a partial opacity of galaxies and of the Universe. The galactic opacity reduces the intensity of light from more distant background galaxies obscured by foreground galaxies. The inclination-averaged value of the effective extinction $A_V$ for light passing through a galaxy is about 0.2 mag. As a result, distant background galaxies appear faint and do not contribute to the EBL significantly. In addition, the light of distant galaxies is dimmed due to absorption by intergalactic dust. Even a minute intergalactic opacity of $1 \times 10^{-2}$ mag per Gpc is high enough to produce significant effects on the EBL. As a consequence, the EBL is comparable with or lower than the mean surface brightness of galaxies. Comparing both extinction effects, the impact of the intergalactic opacity on the EBL is more significant than the obscuration of distant galaxies by partially opaque foreground galaxies, by a factor of 10 or more. The absorbed starlight heats up the galactic and intergalactic dust and is re-radiated in the IR, FIR and microwave spectra. Assuming a static infinite universe with no galactic or intergalactic dust, the stellar EBL would be as high as the surface brightness of stars. 
However, if dust is considered, the predicted stellar EBL is about $290 \, \mathrm{nW\,m}^{-2}\,\mathrm{sr}^{-1}$, which is only 5 times higher than the observed value. Hence, the presence of dust has a higher impact on the EBL than currently assumed. In the expanding universe, the calculated value of the EBL is further decreased, because the obscuration effect and the intergalactic absorption become more pronounced at high redshifts, when the matter was concentrated in a smaller volume than at present.' author: - 'V. Vavryčuk' bibliography: - 'paper1.bib' title: Impact of galactic and intergalactic dust on the stellar EBL --- Introduction ============ The extragalactic background light (EBL) covers the near-ultraviolet, visible and infrared wavelengths from 0.1 to 1000 $\mu$m. Measurements of the EBL are provided by data from the Cosmic Background Explorer (COBE) mission, by the Infrared Space Observatory (ISO) instruments and by the Submillimeter Common User Bolometer Array (SCUBA) instrument (for reviews, see [@Hauser2001; @Lagache2005]). The direct measurements are supplemented by analysing the integrated light from extragalactic source counts, which provides a lower limit on the EBL [@Madau2000; @Hauser2001]. The upper limits are provided by the attenuation of gamma rays from distant blazars due to scattering on the EBL [@Kneiske2004; @Dwek2005; @Primack2011; @Gilmore2012; @Biteau2015]. The spectral energy distribution of the EBL has two distinct maxima: at visible-to-near-infrared wavelengths in the range from 0.7 to 2 $\mu$m, associated with the radiation of stars, and at far-infrared wavelengths from 100 to 200 $\mu$m, associated with the thermal radiation of cold and warm dust in galaxies [@Schlegel1998; @Calzetti2000]. Despite the extensive number of measurements of the EBL, the uncertainties in their peak values are still large (see Fig. 1). 
The total EBL from 0.1 to 1000 $\mu$m lies roughly in the range from 40 to 200 $\mathrm{n W m}^{-2}\mathrm{sr}^{-1}$, and half of this value comes from the visible-to-near-infrared part of the spectrum [@Hauser2001; @Bernstein2002a; @Bernstein2002b; @Bernstein2002c; @Matsumoto2005; @Bernstein2007; @Dwek2013]. This value is quite low and reflects the fact that the sky is dark at night. Current theories assume that the low intensity of the EBL is caused primarily by the finite age of the Universe and by its expansion [@Bondi1961; @Wesson1987; @Wesson1989; @Knutsen1997]. According to the Olbers’ paradox, it is argued that an infinite static universe predicts a bright sky with an intensity of the EBL comparable with the surface brightness of stars, thus exceeding the observed value by more than 10 orders of magnitude. Consequently, the dark sky is taken as an important piece of evidence for an expanding universe of finite age. The finite age of the Universe implies that galaxies have not had time to populate the intergalactic space with enough photons to make it bright [@Wesson1989]. This argument is not, however, fully correct, because it neglects the impact of light absorption by interstellar and intergalactic dust on the intensity of the EBL. The light extinction due to the presence of absorbing interstellar dust has been observed, measured and numerically modelled by many authors [@Mathis1990; @Charlot2000; @Draine2003; @Draine2011; @Tuffs2004; @Draine2007; @Cunha2008; @Somerville2012; @Popescu2011]. The rate of light extinction is roughly 0.7-1.0 mag/kpc in the Milky Way [@Milne1980; @Koppen1998], but this value can vary, as traced, for example, by dust temperature mapping [@Bernard2010]. Obviously, light extinction is observed also in other galaxies, and its rate depends on the type of the galaxy, its dust content and the galaxy inclination [@Goudfrooij1994; @Calzetti2001; @Holwerda2005a; @Holwerda2007; @Lisenfeld2008; @Finkelman2008; @Finkelman2010]. 
The light extinction due to interstellar dust affects the EBL in two ways. First, it reduces the light radiated by galaxies and hence their surface brightness. Second, the presence of dust in nearby foreground galaxies causes obscuration of distant background galaxies [@Gonzalez1998; @Alton2001; @Holwerda2005a; @Holwerda2005b]. Consequently, the distant obscured galaxies become faint and do not contribute to the EBL significantly. In addition, the brightness of all distant galaxies is decreased because of light absorption by intergalactic dust. Obviously, accurate calculations of the EBL should take these effects into account. In this paper, we calculate the impact of light absorption by the galactic and intergalactic dust on the intensity of the stellar EBL. We show that the key factor responsible for the observed low stellar EBL is not the finite age or the expansion of the Universe [@Harrison1984; @Harrison1990; @Wesson1989; @Knutsen1997] but the low surface brightness of galaxies and the partial opacity of galaxies and of the Universe due to absorbing dust. Light extinction ================ Energy emitted by light sources and received at the Earth’s surface is controlled by three basic factors: the distance of the light sources, the extinction of light along a light ray by scattering and dust absorption, and the obscuration of distant light sources by those at closer distances. The obscuration depends on the number of light sources in a unit volume, their size and their transparency. For fully opaque sources like stars, the obscuration is most effective; for partially opaque sources like galaxies, the obscuration is suppressed. The energy received at the Earth’s surface is calculated by summing the contributions of all light sources in a specified universe model. 
Energy flux $I$ received per unit area and time from light sources in the static universe is expressed as follows $$\label{eq1} I = \iiint_V \frac{n L}{4\pi r^2} e^{-\left(\kappa/\gamma\right)r} e^{-\lambda r} dV$$ where $r$ is the distance, $n$ is the mean number density of light sources (i.e., the mean number of light sources per unit volume), $L$ is the mean energy radiated by a light source per time (in W), $\kappa$ is the mean opacity of a light source, $\lambda$ is the mean absorption coefficient along a ray path, $dV$ is the volume element $$\label{eq2} dV = 4\pi r^2 dr$$ and coefficient $\gamma$ is the mean free path of a light ray between light sources (i.e., the mean travelling distance of a photon emitted by one light source and reaching another light source) $$\label{eq3} \gamma=\frac{1}{n \pi a^2}$$ where $a$ is the mean radius of light sources. The opacity $\kappa$ in Eq. (1) quantifies how much energy is absorbed by a light source when external light goes through the source. Hence, $\kappa$ is 1 for a fully opaque galaxy and 0 for a fully transparent galaxy. Terms $e^{-\lambda r}$ and $e^{-\left(\kappa/\gamma\right)\,r}$ in Eq. (1) describe the light extinction and the obscuration [@Harrison1990; @Knutsen1997] weighted by the opacity. Integrating Eq. (1) we get $$\label{eq4} I = n L \int_{r=0}^{\infty} e^{-\left(\kappa/\gamma + \lambda\right)\,r} dr = \frac{\gamma}{\kappa+\lambda \gamma} n L =\varepsilon j$$ where $$\label{eq5} \varepsilon=\frac{\gamma}{\kappa+\lambda \gamma}$$ and $$\label{eq6} j = nL$$ is the luminosity density (in $\mathrm{W m}^{-3}$). Alternatively, Eq. (4) can be expressed as $$\label{eq7} I = \frac{4l}{\kappa+\lambda \gamma}$$ where $$\label{eq8} l = \frac{L}{4\pi a^2}$$ is the mean surface energy density (in $\mathrm{W m}^{-2}$) radiated by a light source. For a universe with uniformly distributed stars (opacity $\kappa$ of stars is 1) and for zero interstellar absorption, $\lambda = 0$, Eq. 
(7) yields the equality between the energy received per unit area at the Earth $l_E$ and the mean surface energy $l_S$ radiated by a star (in $\mathrm{W m}^{-2}$) $$\label{eq9} l_E = l_S$$ which is the mathematical formulation of the well-known Olbers’ paradox [@Harrison1990; @Knutsen1997]. For a universe with uniformly distributed galaxies with opacity $\kappa$ and for zero intergalactic absorption, $\lambda = 0$, Eq. (7) yields $$\label{eq10} l_E = \frac{l_G}{\kappa}$$ where $l_G$ is the mean surface energy radiated by a galaxy. Note that factor 4 in Eq. (7) is missing in Eqs. (9) and (10) because $l_E$ means the flux coming from the upper hemisphere and received at a unit area with a fixed (vertical) normal. Hence the integration in Eq. (1) differs slightly from that in Eqs. (9) and (10) and yields a value which is four times lower [@Knutsen1997]. In the case of fully transparent galaxies ($\kappa = 0$) in a fully transparent static universe ($\lambda = 0$), the received energy in Eq. (10) diverges. Obviously, this is not realistic: stars, interstellar dust and intergalactic dust are fully or partially opaque, so $\kappa$ and $\lambda$ are non-zero and cause the intensity of light observed at the Earth to be low. For example, if 1% of light energy is absorbed when a ray passes through a partially opaque galaxy and no energy is lost in the intergalactic space, Eq. (10) predicts the observed intensity of the EBL to be 100 times higher than the mean surface energy radiated by galaxies. Since the mean surface brightness of galaxies is extremely low, such an intensity of the EBL effectively corresponds to a dark sky at night. In order to calculate an accurate value of the intensity of the stellar EBL using Eqs. (4)-(6), we need values of the number density $n$, the radius of galaxies $a$, the luminosity density $j$, the galactic opacity $\kappa$ and the intergalactic absorption $\lambda$ (also called the intergalactic or universe opacity).
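The closed form of Eq. (4) follows from the elementary integral of the exponential attenuation factor. A short numerical check (the parameter values are illustrative only, in arbitrary consistent units):

```python
import math

# Check the closed form of Eq. (4):
#   I = n*L * integral_0^inf exp(-(kappa/gamma + lambda) r) dr
#     = n*L * gamma / (kappa + lambda*gamma)
kappa, lam, gamma, nL = 0.22, 0.02, 160.0, 1.0

def integrand(r):
    return nL * math.exp(-(kappa / gamma + lam) * r)

mu = kappa / gamma + lam            # total attenuation coefficient
r_max, steps = 50.0 / mu, 50000     # integrate out to ~50 e-folding lengths
h = r_max / steps
numeric = h * (0.5 * integrand(0.0) +
               sum(integrand(i * h) for i in range(1, steps)))

closed_form = nL * gamma / (kappa + lam * gamma)
print(numeric, closed_form)  # agree to better than 0.01%
```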
We briefly review observational estimates of these parameters in the next sections.

------------- ----------------- ----------------- -----
 Galaxy type   $A_V$             $\kappa$          $w$
               (mag)                               (%)
 Elliptical    0.06 $\pm$ 0.02   0.05 $\pm$ 0.02   35
 Spiral        0.70 $\pm$ 0.20   0.48 $\pm$ 0.15   20
 Lenticular    0.30 $\pm$ 0.10   0.24 $\pm$ 0.08   45
------------- ----------------- ----------------- -----

: Effective opacity of galaxies[]{data-label="table:1"}

$A_V$ is the effective inclination-averaged visual extinction, $\kappa$ is the mean visual galactic opacity of the individual galaxy types, and $w$ is the relative frequency of the galaxy types taken from Table 4 (Typical galactic content of regular clusters) of [@Bahcall1999].

Number density, galaxy size and luminosity density
==================================================

The number density is fairly variable because of galaxy clustering and the presence of voids in the universe [@Peebles2001; @Jones2004; @vonBenda_Beckmann2008]. In clusters, at distances of up to 15-20 Mpc, the number density might be ten or more times higher than the density averaged over larger distances. The mean value of the number density over hundreds of Mpc is, however, stable. It is derived from the Schechter luminosity function [@Schechter1976] and lies in the range of $0.010 - 0.025 \, h^3 \mathrm{Mpc}^{-3}$ [@Peebles1993; @Peacock1999; @Blanton2001; @Blanton2003]. The size distribution of galaxies depends on their luminosity, stellar mass and morphological type. Observed galaxies cover luminosities between $\sim10^8 L_{\sun}$ and $\sim10^{12} L_{\sun}$ with effective radii between $\sim 0.1\, h^{-1}$ kpc and $\sim 10 \,h^{-1}$ kpc. For late-type galaxies, the characteristic absolute magnitude in the R-band is $-20.5$ [@Shen2003]. The corresponding Petrosian half-light radius is $\sim 2.5-3.0\, h^{-1}$ kpc and the $R90$ radius is about 3 times larger, $R90 = 7.5 - 9\, h^{-1}$ kpc [@Graham2005].
This is close to a commonly assumed value $R = 10 \,h^{-1}$ kpc [@Peebles1993; @Peacock1999]. The luminosity density is a fundamental quantity in observational cosmology appearing in the Schechter luminosity function [@Schechter1976]. The most recent determinations of the optical luminosity function come from large flux-limited redshift surveys such as the Two Degree Field Galaxy Redshift Survey (2dFGRS; [@Cross2001]), the Sloan Digital Sky Survey (SDSS; [@Blanton2001; @Blanton2003]) or the Century Survey (CS; [@Geller1997; @Brown2001]). Independent estimates of the luminosity density in the R-band are mutually consistent, being $(1.84 \pm 0.04) \times 10^8\, h\, L_{\sun}\, \mathrm{Mpc}^{-3}$ for the SDSS data [@Blanton2003] and $(1.9 \pm 0.6) \times 10^8 \,h\, L_{\sun}\, \mathrm{Mpc}^{-3}$ for the CS data [@Brown2001].

------------- --------------- ---------- ---------- ----------- ---------------------------------------- ---------------------- --------------------
               $n$             $\gamma$   $\kappa$   $\lambda$   $j^R$                                    $I_{\mathrm{theor}}$   $I_{\mathrm{obs}}$
               (1/Mpc$^{3}$)   (Gpc)                 (mag/Gpc)   ($10^8\,L_{\sun} \,/\mathrm{Mpc}^{3}$)   (nW/m$^{2}$/sr)        (nW/m$^{2}$/sr)
 Minimum EBL   0.015           130        0.30       0.03        1.80                                     190                    20
 Maximum EBL   0.025           210        0.14       0.01        1.88                                     560                    140
 Optimum EBL   0.020           160        0.22       0.02        1.84                                     290                    60
------------- --------------- ---------- ---------- ----------- ---------------------------------------- ---------------------- --------------------

$n$ is the number density of galaxies, $\gamma$ is the mean free path of light between galaxies defined in Eq. (3), $\kappa$ is the mean opacity of galaxies, $\lambda$ is the intergalactic absorption, $j^R$ is the R-band luminosity density [@Blanton2003], $I_{\mathrm{theor}}$ is the predicted intensity of the stellar EBL, and $I_{\mathrm{obs}}$ is the observed intensity of the stellar EBL. The mean effective radius of galaxies $a$ is considered to be 10 kpc.
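As a sketch of how the entries of Table 2 combine, the snippet below reproduces the "Optimum EBL" row from Eqs. (3)-(6). The solar luminosity and the megaparsec are standard constants; the factor $\ln 10/2.5 \approx 0.9211$ converts magnitudes to optical depth, and the division by $4\pi$ converts flux to intensity per steradian. Small differences from the tabulated $290\,\mathrm{nW\,m^{-2}\,sr^{-1}}$ come from rounding:

```python
import math

L_SUN = 3.828e26          # W
MPC = 3.0857e22           # m
MAG = math.log(10) / 2.5  # magnitudes -> optical depth (~0.9211)

n = 0.020                 # galaxies per Mpc^3
a = 0.010                 # mean effective galaxy radius, Mpc (10 kpc)
kappa = 0.22              # mean galactic opacity
lam_mag = 0.02            # intergalactic absorption, mag/Gpc
j = 1.84e8                # R-band luminosity density, L_sun/Mpc^3

gamma = 1.0 / (n * math.pi * a**2)   # mean free path, Mpc (Eq. 3)
lam = lam_mag * MAG / 1000.0         # absorption coefficient, 1/Mpc

# Eq. (4): I = gamma/(kappa + lambda*gamma) * j, converted to nW m^-2 sr^-1
I_flux = gamma / (kappa + lam * gamma) * j * L_SUN / MPC**2  # W/m^2
I_sr = I_flux / (4 * math.pi) * 1e9                          # nW/m^2/sr
print(round(gamma / 1000), round(I_sr))  # ~160 Gpc, ~290 nW/m^2/sr
```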
Galactic opacity
================

The galactic opacity can be measured by a variety of methods, usually applied to large samples. The most widely used methods are tests of the dependence of the surface brightness on inclination, multi-wavelength comparisons, and statistical analyses of the colour and number-count variations induced by a foreground galaxy on background sources [@Calzetti2001]. The most transparent galaxies are the ellipticals. [@Goudfrooij1994] and [@Goudfrooij1995] found that the observed infrared luminosities are compatible with central optical depths of the diffuse component $\tau \left( 0 \right) \leq 0.7$, with a typical value of $\tau \left( 0 \right) \sim 0.2$. The corresponding effective extinction $A_V$ is $0.04 - 0.08$ mag. The giant elliptical galaxies found at the centres of cooling-flow clusters are often surrounded by extended ($\approx 10 - 100$ kpc) dusty emission-line nebulae [@Voit1997; @Donahue2000]. Optical and UV emission-line studies give extinction values in the range $A_V \approx 0.3 - 2.0$ mag for the dust associated with the nebula. However, such galaxies are not statistically significant in the population of elliptical galaxies [@Calzetti2001]. The dust extinction in spiral and irregular galaxies is higher than in elliptical galaxies. For estimating the extinction by dust in spiral galaxies, [@Holwerda2005b] used the so-called Synthetic Field Method (SFM), which counts the number of background galaxies seen through a foreground galaxy [@Gonzalez1998; @Holwerda2005b]. The advantage of the SFM is that it yields the average opacity over the area of a galaxy disk without making assumptions about the distribution of either the absorbers or the disk starlight. [@Holwerda2005a] found that the dust opacity of the disk in the face-on view apparently arises from two distinct components: an optically thicker component ($A_I = 0.5 - 4$ mag) associated with the spiral arms and a relatively constant, optically thinner disk ($A_I = 0.5$ mag).
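The opacities in Table 1 follow from the inclination-averaged extinctions via $\kappa = 1 - e^{-0.9211 A_V}$ (note the negative exponent, which reproduces the tabulated values), and the overall mean galactic opacity is the frequency-weighted average over galaxy types. A minimal check:

```python
import math

# Reproduce Table 1 and the weighted mean opacity.
# type: (A_V in mag, relative frequency w)
galaxies = {
    "elliptical": (0.06, 0.35),
    "spiral":     (0.70, 0.20),
    "lenticular": (0.30, 0.45),
}

MAG = math.log(10) / 2.5  # ~0.9211, magnitudes -> optical depth

kappas = {t: 1 - math.exp(-MAG * A) for t, (A, w) in galaxies.items()}
kappa_mean = sum(kappas[t] * w for t, (A, w) in galaxies.items())
print({t: round(k, 2) for t, k in kappas.items()}, round(kappa_mean, 2))
# -> {'elliptical': 0.05, 'spiral': 0.48, 'lenticular': 0.24} 0.22
```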
The early-type spiral disks show less extinction than the later types. As regards the inclination-averaged extinction, the typical values according to [@Calzetti2001] are $0.5 - 0.75$ mag for Sa-Sab galaxies, $0.65 - 0.95$ mag for Sb-Scd galaxies and $0.3 - 0.4$ mag for irregular galaxies in the $B$-band. In summary, galaxies in the local universe are moderately opaque, and extreme values of the opacity are found only in statistically insignificant, more active systems [@Calzetti2001]. Adopting estimates of the relative frequency of specific galaxy types in the Universe and their average extinctions (see Table 1), we can calculate their mean visual opacities $$\label{eq11} \kappa = 1 - \exp \left(-0.9211 A_V \right) \, ,$$ and finally the overall mean galactic opacity using the weighted average $$\label{eq12} \kappa = w_1 \kappa_1 + w_2 \kappa_2 + w_3 \kappa_3 \, ,$$ where the subscripts 1, 2 and 3 stand for quantities of the elliptical, spiral and lenticular galaxies, respectively. According to Eq. (12) and Table 1, the average value of the visual opacity $\kappa$ is about $0.22 \pm 0.08$. A more accurate approach would take into account the statistical distributions of the galaxy size and of the mean galaxy surface brightness for the individual types of galaxies.

Intergalactic opacity
=====================

Observations indicate that the absorption of light is not limited to the interstellar medium within galaxies but is present also in the intergalactic space [@Nickerson1971; @Margolis1977; @Chelouche2007]. It is several orders of magnitude lower than in galaxies and depends on the distance from galaxies. High values of attenuation are detected in galaxy halos [@Menard2010a] and in cluster centres. The attenuation due to dust in galaxy clusters has been measured by the reddening of background objects behind the clusters [@Chelouche2007; @Bovy2008; @Muller2008; @Menard2010a].
The attenuation can also be investigated through correlations between the positions of high-redshift QSOs and low-redshift galaxies using catalogues of UVX objects. The excess of high-redshift QSOs around low-redshift galaxies is explained by a model in which dust situated in foreground clusters of galaxies obscures the QSOs lying behind them [@Boyle1988; @Romani1992]. Based on constructing extinction curves around galaxies, [@Menard2010a] found a visual attenuation of $A_V = (1.3 \pm 0.1) \times 10^{-2}$ mag at distances from a galaxy of up to 170 kpc and $A_V = (1.3 \pm 0.3) \times 10^{-3}$ mag on large scales at distances of up to 1.7 Mpc. Values of the same order are reported for the average visual extinction by intracluster dust also by [@Muller2008] and [@Chelouche2007]. In addition, a consistent opacity was recently reported by [@Xie2015], who studied the luminosity and redshifts of the quasar continuum on a data sample of $\sim 90{,}000$ objects and estimated the effective dust density to be $n\sigma_V \approx 0.02\, h\, \mathrm{Gpc}^{-1}$ at $z < 1.5$. However, the intergalactic absorption is redshift dependent. According to [@Davies1997], the intergalactic extinction increases with redshift and the transparent universe becomes significantly opaque (optically thick) at redshifts of $z = 1 - 3$. The increase of intergalactic extinction with redshift is confirmed by the results of [@Menard2010a], who estimated $A_V$ to be about 0.03 mag at $z = 0.5$ but about $0.05 - 0.09$ mag at $z = 1$.

Predicted and observed stellar EBL
==================================

Taking into account the estimates of the galactic and intergalactic opacity and other cosmological parameters (see Table 2), the intensity of the stellar EBL calculated by Eqs. (4) and (5) lies, for wavelengths between 300 and 3500 nm, in the range of $190-560 \,\mathrm{n W m}^{-2}\mathrm{sr}^{-1}$ with the optimum value of $$\label{eq13} I_{\mathrm{theor}} \approx 290 \,\mathrm{n W m}^{-2}\mathrm{sr}^{-1} \, .$$ Fig.
2 indicates that the EBL is rather insensitive to the mean free path between galaxies $\gamma$ but quite sensitive to the intergalactic opacity $\lambda$ and the luminosity density $j$. Obviously, high values of the EBL are produced by a high luminosity density and a low intergalactic opacity. The observed intensity of the stellar EBL [@Hauser2001; @Bernstein2002a; @Bernstein2002b; @Bernstein2002c; @Bernstein2007] lies in the range of $20-140 \,\mathrm{n W m}^{-2}\mathrm{sr}^{-1}$ (see Fig. 1) with the optimum value of $$\label{eq14} I_{\mathrm{obs}} \approx 60 \,\mathrm{n W m}^{-2}\mathrm{sr}^{-1} \, .$$ Hence the predicted stellar EBL is about 5 times higher than the observed EBL. This result is surprising because it is commonly assumed that the EBL calculated for an infinite static universe must be higher than the observed EBL by more than 10 orders of magnitude. The rather low value of the EBL in Eq. (13) shows that the key factor for a successful prediction of the EBL is the inclusion of the effects of the galactic and intergalactic absorption of light by dust. In fact, considering an expanding universe in the EBL calculations is not essential. Substituting the infinite static universe by an expanding universe of finite age in the EBL calculations further reduces the predicted EBL, but only by a factor of about 5.

Obscuration by galaxies versus intergalactic opacity
====================================================

As shown above, the low value of the stellar EBL is caused by two effects: (1) obscuration of background galaxies by partially opaque foreground galaxies, and (2) the intergalactic opacity. Considering observations of the number density of galaxies and of the galactic and intergalactic opacity, we can determine which effect has a more significant impact on the EBL. We calculate the ratio $k$ $$\label{eq15} k = \frac{\lambda \gamma}{\kappa} \, ,$$ which is higher (lower) than 1 when the intergalactic opacity (obscuration by galaxies) is predominant.
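With the optimum $\lambda$ and $\gamma$ of Table 2, the ratio $k$ of Eq. (15) evaluates as follows for three representative galactic opacities (the magnitude-to-optical-depth factor $\ln 10/2.5$ is applied to $\lambda$):

```python
import math

# Eq. (15): k = lambda*gamma/kappa decides whether the intergalactic
# opacity (k > 1) or the obscuration by galaxies dominates the EBL.
MAG = math.log(10) / 2.5
lam = 0.02 * MAG   # optimum intergalactic absorption, converted to 1/Gpc
gamma = 160.0      # optimum mean free path between galaxies, Gpc

ks = {kappa: round(lam * gamma / kappa) for kappa in (0.30, 0.22, 0.14)}
print(ks)  # -> {0.3: 10, 0.22: 13, 0.14: 21}
```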
Figure 3 shows the ratio $k$ as a function of the intergalactic opacity $\lambda$ and the mean free path $\gamma$. The ratio is calculated for three values of the mean galactic opacity, $\kappa = 0.30$, 0.22 and 0.14. The corresponding values of $k$ for the optimum values of $\lambda$ and $\gamma$ are 10, 13 and 21. This shows that the EBL is affected predominantly by the intergalactic opacity. The impact of the obscuration effect on the EBL is almost negligible for the majority of combinations of realistic values of $\lambda$ and $\kappa$.

Discussion
==========

The calculations prove that the impact of light absorption by galactic and intergalactic dust on the stellar EBL is significant. It is even more significant than the expansion, the redshift or the finite age of the Universe. The absorbed starlight heats up the dust and is re-radiated in the IR, FIR and microwave spectra. Although the presented calculations are rough approximations, the estimate of the stellar EBL is robust and reliable. It is based on: (1) the luminosity density measurements [@Blanton2001; @Blanton2003; @Brown2001], (2) the estimate of the effective inclination-averaged galactic opacity [@Calzetti2001], and (3) the estimate of the intergalactic opacity [@Xie2015]. Even a minute intergalactic opacity of $1 \times 10^{-2} \, \mathrm{mag \, Gpc}^{-1}$ is high enough to produce significant effects on the EBL. In addition, galaxies are partially opaque due to galactic dust. Consequently, the opacity of foreground galaxies reduces the intensity of light radiated by distant background galaxies. The background galaxies become apparently faint and do not contribute significantly to the stellar EBL. As a result, the stellar EBL is comparable with or even lower than the mean surface brightness of galaxies. Comparing the impact of the intergalactic opacity and of the obscuration by galaxies on the EBL, the intergalactic opacity is more significant by a factor of 10 or more.
If the intergalactic opacity and the obscuration effects are neglected, the EBL predicted for the static universe should, according to Olbers’ paradox, be as high as the surface brightness of stars. This value is higher by almost 14 orders of magnitude than the observed one. Hence, the galactic and intergalactic absorption is the most important factor behind the observed low stellar EBL. The corrections for the expansion of the Universe and for the redshift alter the predictions by less than one order of magnitude. I thank Benne W. Holwerda for his detailed and very helpful comments on the paper and Alberto Domínguez for kindly providing me with Fig. 1.
--- title: '**Subject: Comparison of transport coefficients for weakly coupled multi-component plasmas obtained with different formalisms**' ---

\ *From:*  Grigory Kagan (kagan@lanl.gov)\
*Date:*  December 10, 2016

**Overview**
============

In recent years LANL has been supporting a substantial body of research on ion multi-species effects. The cornerstone of both qualitative and quantitative consideration of the relevant issues is the transport properties of plasmas with multiple ion species. A number of well-established formalisms for weakly coupled multi-component plasmas with an arbitrary number of ion species can be found in the literature and used readily to obtain all the transport coefficients of interest [@hirschfelder1954molecular; @devoto1966transport; @ferziger1972mathematical; @zhdanov2002transport]. In particular, Kagan & Tang and Kagan, Baalrud & Daligault [@kagan2014thermo; @kagan2015TBI; @kagan2016influence] used the existing formalisms by Zhdanov [@zhdanov2002transport] and Ferziger & Kaper [@ferziger1972mathematical]. On the other hand, in their subsequent work Simakov & Molvig [@simakov2016hydrodynamic-1; @simakov2016hydrodynamic-2] developed a new, self-consistent formalism based on a properly ordered perturbation theory. It can be noticed, however, that all the existing and newly developed formalisms for weakly coupled plasmas rely on the same physical assumptions and mathematical approximations: the particles are assumed to interact via the Debye-shielded Coulomb potential, and the linearized kinetic equation is solved by expanding the correction to the species’ distribution functions over a set of orthogonal polynomials.
In different sources the calculation starts with either the Boltzmann [@hirschfelder1954molecular; @devoto1966transport; @ferziger1972mathematical] or the Fokker-Planck [@zhdanov2002transport; @simakov2016hydrodynamic-1] kinetic equation, but the above-mentioned assumption of the Debye-shielded Coulomb potential makes the two equations equivalent. These calculations utilize either the so-called “Chapman-Enskog" [@hirschfelder1954molecular; @devoto1966transport; @ferziger1972mathematical; @simakov2016hydrodynamic-1] or “Grad" [@zhdanov2002transport] method to solve for the distribution functions, but it is straightforward to observe that the same Sonine orthogonal polynomials are employed in both types of calculations. Hence, the local transport coefficients obtained with all these formalisms must be identical. A direct comparison between the diffusion coefficients obtained with the Zhdanov and Ferziger & Kaper formalisms was carried out in Ref. [@kagan2015TBI], which indeed found them identical. A comparison between the diffusion coefficients obtained with Zhdanov’s and Simakov & Molvig’s formalisms was carried out in Ref. [@simakov2006verification], which also found them identical. Since these results have not been distributed to the public, in this Note we reproduce the comparison for the diffusion coefficients and demonstrate that the results for the electron and remaining ion transport coefficients for weakly coupled plasmas are identical as well.

**Electron transport**
======================

Electron transport coefficients are provided in Section 8.2 of Ref. [@zhdanov2002transport] and their counterparts obtained with Simakov & Molvig’s formalism were presented in Ref. [@simakov2014electron]. Both sources make the observation that the electron transport coefficients depend on the effective ion charge $Z_{\bf eff}$ only and provide explicit expressions in terms of $Z_{\bf eff}$.
Zhdanov addresses the more general case of magnetized plasmas, so to retrieve the unmagnetized results one should take the longitudinal transport coefficients from [@zhdanov2002transport]. With this notion, the electron-ion dynamic friction and thermal force coefficients and the electron heat conductivity by Zhdanov are written as $$\begin{aligned} \label{eq: heat-forces} \alpha_{||} &= 1 - \frac{0.22 + 0.73 Z^{-1}}{0.31+ 1.20 Z^{-1} + 0.41 Z^{-2}},\\ \beta_{||} &= \frac{0.47 + 0.94 Z^{-1}}{0.31+ 1.20 Z^{-1} + 0.41 Z^{-2}},\\ \gamma_{||} &= \frac{3.9 + 2.3 Z^{-1}}{0.31+ 1.20 Z^{-1} + 0.41 Z^{-2}},\end{aligned}$$ respectively. In the same Chapman-Enskog approximation Simakov & Molvig find $$\begin{aligned} \label{eq: heat-forces-SM} \alpha_{0} &= \frac{4 (16 Z^2 + 61 \sqrt{2} Z +72 ) }{217 Z^2 + 604 \sqrt{2} Z +288 },\\ \beta_{0} &= \frac{30 Z (11Z+15\sqrt{2}) }{217 Z^2 + 604 \sqrt{2} Z +288},\\ \gamma_{0} &= \frac{25 Z (433 Z+180 \sqrt{2}) }{4 (217 Z^2 + 604 \sqrt{2} Z +288)},\end{aligned}$$ respectively. Zhdanov finds the dimensionless electron viscosity coefficient to be $$\begin{aligned} \label{eq: eta-Zh} \eta_e^{({0})} = \frac{1.46 + 1.04 Z^{-1}}{0.82 + 1.82 Z^{-1} + 0.72 Z^{-2}}\end{aligned}$$ and Simakov & Molvig find it to be $$\begin{aligned} \label{eq: eta-SM} \epsilon_0 = \frac{5 Z (408 Z + 205 \sqrt{2} ) }{6 (192 Z^2 + 301 \sqrt{2} Z +178) }.\end{aligned}$$ It is straightforward to observe that for any given coefficient Simakov & Molvig’s expression is identical to its Zhdanov counterpart, except that Zhdanov evaluates the square roots in decimals.

**Ion heat conductivity and viscosity**
=======================================

The ion transport coefficients are usually presented in implicit form: by providing a set of linear algebraic equations whose solution gives the coefficient(s) of interest. In particular, this is how the ion heat conductivities and viscosities are given by the Simakov & Molvig, Zhdanov and Ferziger & Kaper formalisms.
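Before turning to the ion coefficients, the electron expressions above can be cross-checked numerically. This is a sketch with the formulas transcribed directly from the displayed equations; at, e.g., $Z=1$ the two sets agree to within the rounding of Zhdanov's decimal coefficients (a few per cent):

```python
import math

s2 = math.sqrt(2)
Z = 1.0

# Zhdanov (longitudinal, unmagnetized limit)
den_z = 0.31 + 1.20 / Z + 0.41 / Z**2
alpha_par = 1 - (0.22 + 0.73 / Z) / den_z
beta_par = (0.47 + 0.94 / Z) / den_z
gamma_par = (3.9 + 2.3 / Z) / den_z
eta_par = (1.46 + 1.04 / Z) / (0.82 + 1.82 / Z + 0.72 / Z**2)

# Simakov & Molvig (exact rationals in Z and sqrt(2))
den_sm = 217 * Z**2 + 604 * s2 * Z + 288
alpha0 = 4 * (16 * Z**2 + 61 * s2 * Z + 72) / den_sm
beta0 = 30 * Z * (11 * Z + 15 * s2) / den_sm
gamma0 = 25 * Z * (433 * Z + 180 * s2) / (4 * den_sm)
eps0 = 5 * Z * (408 * Z + 205 * s2) \
    / (6 * (192 * Z**2 + 301 * s2 * Z + 178))

# agreement within the two-digit rounding of Zhdanov's decimals
for zh, sm in ((alpha_par, alpha0), (beta_par, beta0),
               (gamma_par, gamma0), (eta_par, eps0)):
    assert abs(zh - sm) / sm < 0.05
```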
The first two formalisms consider the case of a weakly coupled plasma specifically, whereas the Ferziger & Kaper formalism gives more general results for an arbitrary binary interaction potential. The information about the interaction potential enters the transport coefficients through the standard gas-kinetic cross-sections, the so-called “$\Omega$-integrals". To apply this formalism to a weakly coupled ionic mixture, one needs to insert into the appropriate expressions the $\Omega$-integrals for the Debye-shielded Coulomb potential, which are given by $$\begin{aligned} \label{eq: Omega-integrals} \Omega_{i,j}^{(l,r)} = l(r-1)! \frac{ \pi^{1/2} Z_i^2 Z_j^2 \ln \Lambda }{\mu_{ij}^{1/2} (2T)^{3/2}},\end{aligned}$$ where subscripts $i$ and $j$ denote the ion species and superscript $(l,r)$ the order of the $\Omega$-integral. In other respects, the formalisms by Simakov & Molvig and by Ferziger & Kaper are structurally identical and are the most natural to compare. In particular, the ion heat conductivity is calculated as follows. First, the coefficients $\Lambda_{ij}^{pq}$ for the linear set of algebraic equations (6.4-32) are obtained from Eq. (6.4-15), where the expressions for the bracket integrals in terms of $\Omega_{i,j}^{(l,r)}$ are given in Table 7.5. With the help of Eq. (\[eq: Omega-integrals\]) of this Note, this gives $\Lambda_{ij}^{pq}$ in terms of the ion species densities $n_i$, particle masses $m_i$ and charge numbers $Z_i$. The set of equations (6.4-32) is then solved for the matrix of coefficients $a_{j,q}^{(n)}$ and the heat conductivity $\lambda'$ is recovered from Eq. (6.4-45). To compare the results with their counterparts obtained with the Simakov & Molvig formalism, we digitize the data from Figs. 2 and 5 of Ref. [@simakov2016hydrodynamic-2], showing the dimensionless heat conductivities for the DT and DAu mixtures, respectively, and normalize $\lambda'$ by Ferziger & Kaper according to Eq. (6) of Ref. [@simakov2016hydrodynamic-2]. The two results are shown in Fig.
\[fig: heat\] of this Note demonstrating that the two formalisms give identical predictions for the heat conductivity as expected. ![The normalized ion heat conductivities for the DT (left) and DAu (right) mixtures as functions of the relative density of the ion species. The definitions of the relative density for the DT and DAu cases are taken to be the same as in Figs. 2 and 5 of Ref. [@simakov2016hydrodynamic-2], respectively, to facilitate the comparison. Solid red lines show the results obtained with the Ferziger & Kaper formalism and blue circles show the data obtained by digitizing Figs. 2 and 5 of Ref. [@simakov2016hydrodynamic-2].[]{data-label="fig: heat"}](figs_2_5_combined-eps-converted-to.pdf) ![The normalized ion viscosities for the DT (left) and DAu (right) mixtures as functions of the relative density of the ion species. The definitions of the relative density for the DT and DAu cases are taken to be the same as in Figs. 3 and 6 of Ref. [@simakov2016hydrodynamic-2], respectively, to facilitate the comparison. Solid red lines show the results obtained with the Ferziger & Kaper formalism and blue circles show the data obtained by digitizing Figs. 3 and 6 of Ref. [@simakov2016hydrodynamic-2].[]{data-label="fig: visc"}](figs_3_6_combined-eps-converted-to.pdf) The viscosity is calculated as follows. First, the coefficients $H_{ij}^{pq}$ for the linear set of algebraic equations (6.4-39) are obtained from Eq. (6.4-36), where the expressions for the bracket integrals in terms of $\Omega_{i,j}^{(l,r)}$ are given in Table 7.6. With the help of Eq. (\[eq: Omega-integrals\]) of this Note, this gives $H_{ij}^{pq}$ in terms of the ion species densities $n_i$, particle masses $m_i$ and charge numbers $Z_i$. The set of equations (6.4-39) is then solved for the matrix of coefficients $b_{j,q}^{(n)}$ and the viscosity $\eta$ is recovered from Eq. (6.4-47). 
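In both the heat-conductivity and viscosity procedures, the only dependence of the $\Omega$-integrals of Eq. (\[eq: Omega-integrals\]) on the order $(l,r)$ is through the combinatorial prefactor $l(r-1)!$, so ratios between orders are pure integers. A minimal sketch (the species parameters are illustrative placeholders, in units where the dimensional prefactor is absorbed):

```python
from math import pi, sqrt, factorial

# Omega-integrals for the Debye-shielded Coulomb potential,
# Eq. (Omega-integrals) of this Note.
def omega(l, r, Zi=1.0, Zj=1.0, mu=1.0, T=1.0, lnLambda=10.0):
    return l * factorial(r - 1) * sqrt(pi) * Zi**2 * Zj**2 * lnLambda \
        / (sqrt(mu) * (2 * T)**1.5)

# ratios between orders are independent of the species parameters
print(omega(1, 3) / omega(1, 1), omega(2, 1) / omega(1, 1))  # 2.0 2.0
```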
To compare the results with their counterparts obtained with the Simakov & Molvig formalism, we digitize the data from Figs. 3 and 6 of Ref. [@simakov2016hydrodynamic-2], showing the dimensionless viscosities for the DT and DAu mixtures, respectively, and normalize $\eta$ by Ferziger & Kaper according to Eq. (7) of Ref. [@simakov2016hydrodynamic-2]. The two results are shown in Fig. \[fig: visc\] of this Note, demonstrating that the two formalisms give identical predictions for the viscosity as well.

**Ion diffusion**
=================

Finally, for this Note to be self-contained, here we reproduce the comparison for the DT mixture, which was considered both in Ref. [@kagan2014thermo] by Kagan & Tang and in the subsequent work by Simakov & Molvig [@simakov2016hydrodynamic-2]. In Ref. [@kagan2014thermo], the diffusive mass flux is written in the form $$\label{eq: canonical-flux} \vec{i} = - \rho D \Bigl( \nabla c +k_p \nabla \log{p_i} + \frac{e k_E}{T_i}\nabla \Phi + k_T^{(i)} \nabla \log{T_i} + k_T^{(e)} \nabla \log{T_e}\Bigr),$$ where $$\begin{aligned} \label{eq: diffusion-coeff} &D= \frac{\rho T_i}{A_{lh}\mu_{lh} n_l \nu_{lh}} \times \frac{c(1-c)}{cm_h+(1-c)m_l}, \\ \label{eq: baro-diff-ratio} &k_p = c(1-c)(m_h-m_l)\Bigl( \frac{c}{m_l} + \frac{1-c}{m_h} \Bigr),\\ \label{eq: electro-diff-ratio} &k_E = m_lm_h c(1-c) \Bigl( \frac{c}{m_l} + \frac{1-c}{m_h} \Bigr) \Bigl( \frac{Z_l}{m_l} - \frac{Z_h}{m_h} \Bigr),\\ \label{eq: thermo-diff-ratio-el} &k_T^{(e)} = - m_lm_h c(1-c) \Bigl( \frac{c}{m_l} + \frac{1-c}{m_h} \Bigr) \Bigl( \frac{Z_l^2}{m_l} - \frac{Z_h^2}{m_h} \Bigr) \frac{T_e }{T_i } \frac{\beta_{||}}{Z_{\bf eff}}.\end{aligned}$$ The thermo-diffusion ratio $k_T^{(i)}$ and the dynamic friction coefficient $A_{lh}$, which is needed to retrieve the classical diffusion coefficient $D$ from Eq. (\[eq: diffusion-coeff\]), were evaluated numerically and presented in Figs. 2 and 1, respectively, of Ref. [@kagan2014thermo].
In the above equations, $c$ is the light-species mass fraction, $\Phi$ is the electrostatic potential and $\nu_{lh}$ is the collision frequency between the ion species. ![Dynamic friction coefficient $A_{lh}$ (left) and thermo-diffusion ratio $k_T^{(i)}$ (right) for the DT mixture. Solid red lines show the results obtained in Ref. [@kagan2014thermo] with Zhdanov’s formalism and blue circles show the corresponding results obtained by digitizing Fig. 1 of Ref. [@simakov2016hydrodynamic-2].[]{data-label="fig: diff"}](diff_friction_combined-eps-converted-to.pdf) Simakov and Molvig use a different representation for the diffusive flux. In particular, they operate with the gradient of the number fraction of the lighter species, $\nabla x$, instead of the mass fraction gradient $\nabla c$. To set the correspondence between the two expressions for the diffusive flux, we notice that $$\label{eq: grad-x} \nabla x = \frac{1}{m_l m_h} \Bigl(\frac{\rho}{n_i} \Bigr)^2 \nabla c,$$ where $\rho$ and $n_i$ are the total mass and number densities of the ionic mixture, respectively. Then it is straightforward to see that the Simakov & Molvig results for $k_p$, $k_E$ and $k_T^{(e)}$ are identical to Eqs. (\[eq: baro-diff-ratio\])-(\[eq: thermo-diff-ratio-el\]). To compare $A_{lh}$ and $k_T^{(i)}$, we consider the DT case, for which the Simakov & Molvig results can be retrieved by digitizing the data from Fig. 1 of Ref. [@simakov2016hydrodynamic-2]. We then plot them in Fig. \[fig: diff\] of this Note over the corresponding results of Ref. [@kagan2014thermo] to see that the predictions of Simakov & Molvig are again identical to the earlier results.
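The gradient relation between the number fraction $x$ and the mass fraction $c$ (Eq. (\[eq: grad-x\])) can be verified by differentiating $x(c)$ directly; a short numerical check with illustrative masses (e.g. D and T in atomic mass units):

```python
# Check dx/dc = (rho/n_i)^2 / (m_l * m_h) against a numerical derivative.
m_l, m_h = 2.0, 3.0  # illustrative light/heavy ion masses

def x_of_c(c):
    # number fraction of the light species for mass fraction c
    return (c / m_l) / (c / m_l + (1 - c) / m_h)

c, eps = 0.4, 1e-6
dx_dc = (x_of_c(c + eps) - x_of_c(c - eps)) / (2 * eps)

# rho/n_i is the mean mass per ion: 1 / (c/m_l + (1-c)/m_h)
rho_over_ni = 1.0 / (c / m_l + (1 - c) / m_h)
analytic = rho_over_ni**2 / (m_l * m_h)
assert abs(dx_dc - analytic) / analytic < 1e-6
```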
Hirschfelder J O, Curtiss C F and Bird R B 1954 [*Molecular Theory of Gases and Liquids*]{} (Wiley, New York)

Devoto R 1966 [*Physics of Fluids*]{} [**9**]{} 1230–1240

Ferziger J H and Kaper H G 1972 [*Mathematical Theory of Transport Processes in Gases*]{} (North Holland)

Zhdanov V M 2002 [*Transport Processes in Multicomponent Plasma*]{} (CRC Press)

Kagan G and Tang X Z 2014 [*Physics Letters A*]{} [**378**]{} 1531–1535

Kagan G 2015 “Transport Calculations in Multi-component Plasmas", TBI seminar, January 21, 2015

Kagan G, Baalrud S D and Daligault J 2016 “Influence of coupling on thermal forces and dynamic friction in plasmas with multiple ion species", arXiv:1609.00742

Simakov A N and Molvig K 2016 [*Physics of Plasmas*]{} [**23**]{} 032115

Simakov A N and Molvig K 2016 [*Physics of Plasmas*]{} [**23**]{} 032116

Simakov A N and Albright B J 2016 “Verification of the Hydrodynamic Description of Unmagnetized Plasma with Multiple Ion Species", Los Alamos National Laboratory Research Memorandum XCP-6:16-010, April 28, 2016

Simakov A N and Molvig K 2014 [*Physics of Plasmas*]{} [**21**]{} 024503
--- abstract: 'We suggest a new model for the structure of a magnetic field embedded in a plasma whose average turbulent and magnetic energy densities are both much less than the gas pressure. This model is based on the popular notion that the magnetic field will tend to separate into individual flux tubes. We point out that interactions between the flux tubes will be dominated by coherent effects stemming from the turbulent wakes created as the fluid streams by the flux tubes. Balancing the attraction caused by shielding effects with turbulent diffusion we find that flux tubes have typical radii comparable to the local Mach number squared times the large scale eddy length, are arranged in a one dimensional fractal pattern, have a radius of curvature comparable to the largest scale eddies in the turbulence, and have an internal magnetic pressure comparable to the ambient pressure. When the average magnetic energy density is much less than the turbulent energy density the radius, internal magnetic field, and curvature scale of the flux tubes will be smaller than these estimates. Allowing for resistivity changes these properties, but does not alter the macroscopic properties of the fluid or the large scale magnetic field. In either case we show that the Sweet-Parker reconnection rate is much faster than an eddy turnover time. Realistic stellar plasmas are expected to either be in the ideal limit (e.g. the solar photosphere) or the resistive limit (the bulk of the solar convection zone). Allowing for significant viscosity drastically changes the macroscopic properties of the magnetic field. We find that all current numerical simulations of three dimensional MHD turbulence are in the viscous regime and are inapplicable to stars or accretion disks. However, these simulations are in good quantitative agreement with our model in the viscous limit. 
With the exception of radiation pressure dominated environments, flux tubes are no more, and often less, buoyant than a diffuse field of comparable energy density.' author: - 'Ethan T. Vishniac' title: 'The Dynamics of Flux Tubes in a High [$\beta$]{} Plasma' --- Preprint Introduction ============ The study of magnetized plasmas in astrophysics is complicated by a number of factors, not the least of which is that in considering magnetic fields in stars or accretion disks, we are considering plasmas with densities well above those we can study in the laboratory. In particular, whereas laboratory plasmas are dominated by the confining magnetic field pressure, stars, and probably accretion disks, have magnetic fields whose $\beta$ (ratio of gas pressure to magnetic field pressure) is much greater than one. Observations of the Sun suggest that under such circumstances the magnetic field breaks apart into discrete flux tubes with a small filling factor. This trend has also been seen in three dimensional simulations of MHD turbulence ([@nbjrrst92]). On the other hand, theoretical treatments of MHD turbulence in high $\beta$ plasmas tend to assume that the field is more or less homogeneously distributed throughout the plasma (e.g. [@k65], and [@dc90]). At the other extreme, there have been papers (e.g. [@do93]) which treat the magnetic field as a passively advected vector field. These papers indicate an increasingly complex substructure, but these calculations are unlikely to be relevant when considering fields capable of acting back on the surrounding fluid. Note that although numerical simulations indicate the existence of strong substructure ([@nbjrrst92], and [@tcv93]), its exact nature is sensitive to details of the simulation algorithms and the nature of the large scale flows. An example of strongly contrasting results can be found in numerical simulations by Tao et al. 
(1993) in which a turbulent flow with an imposed helicity and a weak diffuse field led to a largely stagnant and still weak field with substructure, as compared to the numerical simulations of Hawley & Balbus (1991) in which a diffuse field in a shearing flow led to a final state in which the magnetic pressure was large and continued to drive strong turbulence. There are at least three reasons for considering the possibility of substructure in the magnetic field. First, the mobility of magnetic field lines in a highly conducting plasma is an important issue, affecting the dynamics of fluid motion in stars and accretion disks. Second, the suggestion that turbulent diffusivity does not occur raises important issues concerning the possibility of creating and maintaining magnetic fields in astrophysical objects. For example, the obvious point that such fields do exist does not ensure that mean field dynamo theory is a useful tool for describing their generation. Third, the possibility that magnetic field lines tend to concentrate into partially evacuated flux tubes raises important questions regarding the speed at which such tubes can rise out of the dynamo region in a star or accretion disk. If we assume that an evacuated flux tube of radius $r_t$ is rising through a medium with a turbulent velocity $V_T$ then equating the turbulent drag with the buoyant acceleration we have $$V_b\approx {r_t g \Delta\rho\over V_T \rho},$$ where $\Delta\rho/\rho$ is the fractional density depletion of the flux tube, $V_b$ is the buoyant velocity, and $g$ is the local gravity. (We have assumed that $V_b\le V_T$ in this expression.) Clearly we need to know $r_t$ before we can consider the nature of buoyant magnetic flux loss. Here we give a qualitative discussion of a simple model for the distribution of magnetic flux tubes in a turbulent medium. This paper falls very far short of a derivation of this model from first principles. 
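The drag-buoyancy balance above reduces to one line of arithmetic. The following is a minimal sketch (function names and all numerical values are assumptions for illustration, not figures from the paper):

```python
def buoyant_velocity(r_t, g, delta_rho_over_rho, V_T):
    """Buoyant rise speed V_b ~ r_t * g * (drho/rho) / V_T, from equating
    turbulent drag with buoyancy; valid only while V_b <= V_T."""
    return r_t * g * delta_rho_over_rho / V_T

# Assumed toy cgs-style numbers: a 1 km tube, solar-like surface gravity,
# a half-evacuated interior, and V_T = 1 km/s.
V_b = buoyant_velocity(r_t=1e5, g=2.7e4, delta_rho_over_rho=0.5, V_T=1e5)
```

For these (assumed) values $V_b\approx 1.4\times 10^4\,{\rm cm\,s^{-1}}$, comfortably below $V_T$, so the balance is self-consistent.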
Instead, we simply explore the consequences of some simple ideas regarding the formation and interaction of magnetic flux tubes. We will see that although these ideas cannot be tested directly in the most interesting regime, i.e. the one applicable to realistic astrophysical objects, they do yield quantitative predictions for the current generation of numerical experiments. In §2 we discuss the mechanism by which small inhomogeneities evolve into discrete flux tubes, and the size and distribution of such flux tubes. In §3 we allow for the effect of viscosity and resistivity, both for their intrinsic interest and because no comparison to numerical results is possible without a quantitative understanding of their effects. In §4 we discuss reconnection between the flux tubes and show that it always occurs in less than an eddy turnover time, even if we calculate the reconnection rate using the Sweet-Parker rate. In §5 we discuss the implications of this work for magnetic buoyancy in astrophysical objects. We find that our model is consistent with observations of the small scale structure of the solar magnetic field. We also show that magnetic flux loss from accretion disks proceeds at the same slow rate previously estimated for a diffuse field, except for radiation pressure dominated disks. Finally, in §6 we conclude with a discussion of some of the broader issues involved in this work, including the possibility that the magnetic field fibrils of this model are an example of a dissipative structure. In the appendix we compare this model to numerical simulations of MHD turbulence. We will see that there are at least three important consequences of this model for dynamos and numerical simulations of dynamos. First, an initially diffuse field in a turbulent medium, e.g. a uniform field in a shearing flow, will initially show exponential growth as the flux tubes form. 
This growth saturates when the flux tube formation is complete and cannot be used as the basis for a self-sustaining dynamo effect. Since the typical state of the magnetic field is a collection of intense flux tubes, this effect is of limited interest. Second, the organization of the magnetic field into flux tubes turns out to allow the field lines to migrate relative to the fluid and to reconnect efficiently. In this sense, this model for the magnetic field substructure implies that the dynamics of fast dynamos are very much like those of slow dynamos. Third, this work suggests that the current crop of three dimensional MHD turbulence simulations is entirely dominated by viscosity and can be understood in terms of effects which are negligible in a star or accretion disk. In other words, these numerical simulations are inapplicable to realistic astrophysical objects. Throughout this paper we take the simplest possible model for fluid turbulence, i.e. the existence of a stochastic velocity field with a power spectrum taken from the work of Kolmogorov. It is likely that intermittency effects will change the details of the model proposed here. Magnetic Field Line Distribution in an Ideal Turbulent Fluid ============================================================ We begin by considering an idealized situation in which there exists a turbulent cascade with a well defined large eddy scale $L_T\equiv 2\pi/k_T$ and a turbulent velocity, on that scale, of $V_T$. The fluid is assumed to be inviscid and perfectly conducting, although we will also assume that reconnection between magnetic field lines is efficient. (We will return to the consistency of these assumptions later on.) We will also assume that there is a certain amount of magnetic flux which crosses a turbulent cell, with an associated rms Alfvén speed $V_A$. If $V_A\gg V_T$ then the field will suppress the turbulence. We will therefore assume that $V_A\le V_T$.
For example, if the magnetic field is in a shearing flow, surrounded by turbulence of its own creation, then the near equality of $V_T$ and $V_A$ is guaranteed, as well as the curvature of the magnetic field lines on the scale $L_T$. Why should we expect to find flux tubes in a highly conductive fluid? Normally, one appeals to flux-freezing to establish that matter entrained on magnetic field lines will remain entrained. However, this ignores the possibility that an infinitesimal resistivity can lead to strong collective effects. As an example we can consider a flux tube which is thin enough that it is strongly affected by the motions of the surrounding fluid. Such a field line will tend to stretch at a rate $\gtrsim k_TV_T$. If the plasma is highly conducting then the same amount of matter will be entrained on a progressively longer and longer flux tube. In a stationary state this stretching will be balanced by the pinching off of closed loops. These loops will have some characteristic diameter $l\lesssim L_T$ and a compressive force per unit length of $\sim \rho_t l^{-1} V_{At}^2\pi r_t^2$, where $\rho_t$ is the density in the tube and $V_{At}$ is the rms Alfvén velocity in the tube. The scale $l$ is determined by the scale on which the flux tube is just weak enough to bend at large angles in response to velocities on that scale. This tension will be opposed, usually, by turbulent stretching with an averaged force per unit length of $\sim C_d\rho V_T^2 r_t $, which by hypothesis is large enough to have a significant, but not overwhelming effect. Some large fraction of the time the loops will collapse (cf. [@dfp93]) before they can be reabsorbed by the neighboring flux tubes. 
Regardless of whether the internal pressure of the loop is dominated by the magnetic field or gas pressure, the magnetic tension will decrease more slowly than the turbulent stretching force and the loop will collapse to a plasmoid ball, whose energy is lost either to microscopic dissipation or the buoyant loss of such magnetic bubbles. This process will tend to remove matter from the flux tubes at a rate of $\gtrsim k_TV_T$. On the other hand, matter will move into the flux tubes through ohmic diffusion, at a rate $\sim(\Delta\rho/\rho)\eta/r_t^2$, where $\Delta \rho/\rho$ is the fractional depletion of matter from the flux tube. In the limit in which $\eta\rightarrow 0$ we see that magnetic flux tubes will be perfectly empty, provided that reconnection isn’t suppressed in this limit. More realistically, how evacuated these flux tubes are will depend on the efficiency of these loss mechanisms and whether or not mass loading can take place in the stellar or disk atmosphere. If we start from a uniform, or nearly uniform, field in an extremely highly conducting fluid, this process will end when the same amount of flux is divided into some number of intense flux tubes with a magnetic pressure equal to the ambient pressure and a local $\beta$ of order unity or less. The final rms Alfvén velocity will be the geometric mean between its initial value and the local sound speed. This initial field amplification will occur at a rate comparable to $k_TV_T$, in agreement with the results of numerical experiments ([@hgb94], [@nbjrrst92]). What will be the typical radius, $r_t$, of the flux tubes? Will they show correlations for $r>r_t$ or will they be distributed uniformly?
We begin by noting that a flux tube will resist being deformed by turbulent forces acting on a scale $l$ provided that $${B_t^2\over 4\pi R_c}\pi r_t^2>C_d\rho V_l^2 r_t, \label{eq:stiff}$$ where $B_t$ is the magnetic field strength in the flux tube, $C_d$ is the coefficient of turbulent drag, $V_l$ is the turbulent velocity on the scale $l$, and $R_c$ is the radius of curvature of the tube. If the tube is resisting turbulent motions on a scale $l$ then we can take $R_c=l/4\equiv k^{-1}\pi/2$. In addition, if there are bulk forces acting on both the fluid and the flux tubes, and exciting the turbulent motions, then the flux tubes can resist them only if $B_t^2> 4\pi\rho V_T^2$. As the magnetic field lines become more and more evacuated, the typical turbulent force per unit length will scale as $r_t$, whereas the tube stiffness will scale as $B_t^{3/2} (B_t r_t^2)^{1/2} r_t$, where $B_tr_t^2$ is proportional to the magnetic flux and is therefore conserved. We see that stretching the flux tubes makes them stiffer as $B_t$ increases, a process that will continue as long as they can respond effectively to the turbulent motions in the fluid. We conclude that they will evolve until they are relatively straight, in the sense that their transverse excursions will be small compared to the wavelength of these disturbances (in the direction of the magnetic field) for all wavelengths much less than $l$. Whether or not $l\rightarrow L_T$ will depend on the presence, or absence, of some dynamo mechanism and the amount of magnetic flux crossing the boundaries of the system. What are the forces acting on a collection of stiff flux tubes embedded in a turbulent medium? First, we note that there is a purely hydrodynamic attractive force acting between neighboring flux tubes.
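The stiffness criterion of eq. (\[eq:stiff\]), with $R_c=l/4$, is easy to check numerically. A minimal sketch follows; the function name and all parameter values are illustrative assumptions in cgs-style units:

```python
import math

def is_stiff(B_t, r_t, l, C_d, rho, V_l):
    """True if the magnetic tension per unit length beats the turbulent
    drag per unit length on scale l (eq. stiff, with R_c = l/4)."""
    R_c = l / 4.0
    tension = B_t**2 / (4.0 * math.pi * R_c) * math.pi * r_t**2
    drag = C_d * rho * V_l**2 * r_t
    return tension > drag

# Assumed toy values: the same tube resists weak eddies but yields to
# strong ones.
stiff_weak = is_stiff(B_t=1e3, r_t=1e5, l=1e8, C_d=1.0, rho=1e-7, V_l=1e4)   # True
stiff_strong = is_stiff(B_t=1e3, r_t=1e5, l=1e8, C_d=1.0, rho=1e-7, V_l=1e6) # False
```

The crossover velocity marks the scale on which the tube is "marginally flexible" in the sense used below.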
Given a bulk flow with velocity $V_l$ streaming by a flux tube there will be a spreading turbulent wake, within which the bulk flow will be diminished by roughly $V_l (r_t/r)^{1/2}$, where $r$ is the distance downstream from the flux tube. The wake width will be roughly $(r r_t)^{1/2}$. This implies that a flux tube situated downstream from its nearest neighbor and possessing a similar large scale curvature will be subjected to a less intense ram pressure and will feel a force per volume of $\sim \rho V_l^2 (r_t r)^{-1/2}$ pushing the tube upstream. However, the full effect of this force will be felt only by a fraction of order $(r_t/r)^{1/2}$ of the downstream flux tubes. Averaged over a loose collection of flux tubes this gives an upstream force density on the downstream flux tubes of roughly $\sim \rho V_l^2/r$ per upstream flux tube. Conversely, given a collection of flux tubes the upstream tubes will feel more pressure than their downstream companions and experience a similar average excess bulk force directed downstream. This is a two dimensional version of ‘mock gravity’, the attractive force created by shielding effects in the presence of an isotropic repulsive flux. In this case however, the external force is only statistically isotropic. At any moment it will have a well defined direction, and the induced attraction can only act along that axis. The turbulent wake of a single flux tube will fade into the turbulence of the fluid if the shear across it is comparable to, or less than, the shear of the turbulence on the same scale. Since the strength of the wake diminishes in proportion to its width, which is proportional to the square root of its length, this implies that the wake will persist as long as $${V_w \over w}< {V_l\over w}\left({r_t\over w}\right),$$ where $w\approx (r r_t)^{1/2}$ is the wake width. Given $V_w\propto w^{1/3}$, i.e. assuming a Kolmogorov power spectrum, this condition will be satisfied if $$r< (r_t l)^{1/2}. 
\label{eq:wake}$$ In other words, the wake of a single flux tube will persist for a distance approximately equal to the geometric mean between its width and the scale at which it is marginally flexible. However, if there are other flux tubes within this distance then their turbulent wakes will combine and persist to larger scales. We note that the normal shearing of the large scale flow will create a dispersive force density of order $\rho k_T^2V_T^2r$, which is smaller than the attractive force due to the turbulent wakes. The shear associated with small scale eddies, e.g. on a scale $r_t$, is much greater, but there will be many such eddies along the length of each bundle of flux tubes. Their effects will add incoherently and their net dispersive effect will be of order $\rho V_{Tr}^2/L_T\sim \rho V_T^2 (r/L_T)^{2/3}/L_T$, which is still negligible. We conclude that the flux tubes will tend to aggregate, at least up to the point that the attractive force saturates due to strong shadowing, i.e. when there are $N$ flux tubes in a region of size $r$ so that $$\tau\equiv C_d Nr_t/r\sim 1. \label{eq:shadow}$$ When $\tau$ is below this limit the individual flux tubes feel an attractive force density towards the center of the bundle of order $$C_d \tau \rho {V_T^2\over r_t}. \label{eq:attract}$$ This force decreases when the flux tubes become so stiff that they are essentially straight regardless of the degree of mutual shadowing (i.e. when the condition set forth in eq. (\[eq:stiff\]) is satisfied by a wide margin). On the other hand, when eq. (\[eq:shadow\]) is satisfied then it is unrealistic to treat the interaction of a bundle of semi-rigid flux tubes with a large scale flow purely in terms of the separate turbulent wakes created by individual flux tubes. In particular, in this situation we can expect to see a collective wake in which the streaming velocity is reduced by a factor of roughly $1- \tau$ immediately downstream from the flux tube bundle.
This implies the existence of a Kelvin-Helmholtz instability with a characteristic growth rate of roughly $$\Gamma_{KH}\approx {V_T\over r}\tau. \label{eq:kh}$$ The velocity associated with the vortices created immediately behind the flux tube bundle is roughly $$V_{vortex}\sim r\Gamma_{KH}\approx V_T\tau. \label{eq:vkh}$$ Since these vortices will be coherent along most of the length of the flux tube bundle we expect that they will be particularly important in causing such bundles to disperse. They will cause the region immediately downstream from any flux tube to have significantly larger turbulent pressure than the surrounding flow. However, the downstream vortices will tend to be advected away at a velocity at least as great as the fluid velocity immediately behind the flux tube bundle. Consequently, when $\tau\ll1$ the coherent perturbed velocity near the flux tube bundle will be less than $V_{vortex}$ by a factor of roughly $\Gamma_{KH} (r/V_T)$ so that the flux tubes within the bundle will feel a dispersive bulk force of order $$C_d\rho \left({V_{vortex}\Gamma_{KH}r\over V_T}\right)^2 r_t^{-1},$$ or $$C_d\rho \left(V_T \tau^2\right)^2 r_t^{-1}. \label{eq:disp}$$ By comparing eqs. (\[eq:attract\]) and (\[eq:disp\]) we see that a flux tube bundle in equilibrium will have $\tau\approx 1$ or $N\approx C_d^{-1} r/r_t$. Similarly, a collection of $\tilde N$ flux tube bundles, each consisting of $N$ flux tubes clustered within a radius of $\tilde r$, will tend to cluster so as to produce an aggregate structure with $\tilde N \tilde r/r\sim 1$ or $\tilde N N C_d r_t/r\sim 1$. In other words, the distribution of flux tubes will evolve towards a fractal of dimension 1, with $N(r)$ flux tubes within a distance $r$ from any given flux tube where $$N(r)\approx {r\over C_d r_t}. \label{eq:num1}$$ This leads to overlapping turbulent wakes, such that the attractive force between the flux tubes persists throughout a bundle of flux tubes.
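The equilibrium just described amounts to two small formulas: the shadowing optical depth of eq. (\[eq:shadow\]) and the fractal count of eq. (\[eq:num1\]). A sketch, with all scales chosen purely for illustration:

```python
def fractal_count(r, r_t, C_d):
    """Equilibrium number of flux tubes within distance r (eq. num1)."""
    return r / (C_d * r_t)

def optical_depth(N, r_t, r, C_d):
    """Shadowing optical depth tau = C_d * N * r_t / r (eq. shadow)."""
    return C_d * N * r_t / r

# Assumed scales: tubes of radius 1 km clustered within 1000 km.
N = fractal_count(r=1e8, r_t=1e5, C_d=1.0)
tau = optical_depth(N, r_t=1e5, r=1e8, C_d=1.0)
```

By construction $\tau$ comes out of order unity at the equilibrium count, which is the statement that the attraction saturates exactly when the shadowing becomes strong.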
This fractal distribution of flux tubes persists up to the scale where the turbulent wake of a flux tube bundle extends for a distance comparable to its size. We see from eq. (\[eq:wake\]) that this upper limit on $r$ is $\sim l$. The size of an individual flux tube can be derived from the condition that it be marginally stiff with respect to the surrounding turbulent motions on the scale $l$, i.e. $${B_t^2\over \pi l}\pi r_t^2\approx C_d\rho V_l^2 r_t, \label{eq:stiff1}$$ and that $B_t$ have the maximum sustainable value. For a perfect fluid the latter condition is given by pressure equilibrium, i.e. $${B_t^2\over 8\pi}=P. \label{eq:bideal}$$ Then the typical radius of a single flux tube is just $$r_t\approx l{C_d\gamma{\cal M}_l^2\over 8\pi}, \label{eq:radiusa}$$ where ${\cal M}_l$ is the Mach number of the turbulent flow on the scale $l$ and $\gamma$ is its adiabatic index. Since $l\le L_T$ this gives an upper limit on the size of the typical flux tube. To do better we need to invoke the global properties of the magnetic field. Summing up the magnetic energy in a collection of flux tubes we see that if the rms value of the Alfvén speed is $V_A$, and the field consists of $N(l)$ flux tubes per turbulent cell of size $l$, then $${1\over 2}\rho V_A^2\approx {N(l)w\pi r_t^2P\over l^2}, \label{eq:num}$$ where $w$ is a geometric factor describing the amount by which a typical flux tube has its length increased over $l$ as it crosses the turbulent cell. In a saturated state in which the flux tubes are constantly producing closed loops as the turbulent stresses change this factor will be of order 3 or 4. (A flux tube which is almost, but not quite, stretched enough to produce a closed loop will have $w$ slightly greater than 2. Allowing for loops that are not yet pinched off, or have not yet collapsed, should give a slightly larger value.) The radius of each of the flux tubes will be given by eq.
(\[eq:radiusa\]), but a typical flux tube will have a neighbor about that distance away. Note that this implies very little large scale segregation between the magnetic field and the turbulent flow. The magnetic field fills only a small fraction of the total volume, but the flux tubes are broadly distributed in the fluid. Of course, this neglects the existence of coherent structures in the flow, which, if they exist, will tend to collect flux tubes in some fraction of the total volume. Eq. (\[eq:num1\]) implies that on a scale $l$ the number of flux tubes is approximately $l/(C_d r_t)$. Combining this result with eqs. (\[eq:num\]) and (\[eq:radiusa\]) we find that $$V_l^2\approx {4\over w} V_A^2. \label{eq:equi}$$ In other words, the scale $l$ is defined by the condition that the magnetic field and the turbulent flow be in equipartition on that scale (any difference between $w$ and 4 being comparable to the cumulative errors in our estimates of proportionality constants). If $V_l^2\propto l^n$, where $n=2/3$ for Kolmogorov turbulence, then we can invert eq. (\[eq:equi\]) to obtain $$l\approx L_T\left({4E_B\over wE_T}\right)^{1/n}, \label{eq:scale}$$ where $E_B$ and $E_T$ are the magnetic and turbulent energy densities respectively. This can be combined with eq. (\[eq:radiusa\]) to yield our final answer for the typical flux tube radius in MHD turbulence in an ideal fluid. We get $$r_t\approx L_T{C_d\gamma{\cal M}_T^2\over 8\pi}\left({4E_B\over wE_T}\right)^{1+1/n}. \label{eq:radius}$$ The number of flux tubes on a scale $l$, which represents the upper limit of the fractal clustering pattern, is $$N(l)={l\over C_d r_t}={8\pi\over C_d^2\gamma {\cal M}_T^2} \left({wE_T\over 4E_B}\right). \label{eq:numx}$$ The total magnetic flux applied across a turbulent cell required to produce some given $E_B$ depends on the nature of the turbulent flow.
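Before turning to the flux budget, the chain of estimates in eqs. (\[eq:scale\])-(\[eq:radius\]) can be evaluated end to end. The sketch below computes $l$, $r_t$, and $N=l/(C_d r_t)$ from eq. (\[eq:radiusa\]); all parameter values are illustrative assumptions:

```python
import math

def flux_tube_properties(L_T, M_T, EB_over_ET, C_d=1.0, gamma=5.0/3.0,
                         w=4.0, n=2.0/3.0):
    """Equipartition scale l (eq. scale), tube radius r_t (eq. radiusa,
    using M_l^2 = M_T^2 (l/L_T)^n since V_l^2 = V_T^2 (l/L_T)^n), and the
    tube count N = l/(C_d*r_t) per turbulent cell."""
    ratio = 4.0 * EB_over_ET / w
    l = L_T * ratio**(1.0 / n)
    M_l2 = M_T**2 * ratio
    r_t = l * C_d * gamma * M_l2 / (8.0 * math.pi)
    N = l / (C_d * r_t)
    return l, r_t, N

# Equipartition case (E_B = E_T, so l = L_T) with an assumed M_T = 0.3:
l, r_t, N = flux_tube_properties(L_T=1e8, M_T=0.3, EB_over_ET=1.0)
```

At equipartition this reduces to $N=8\pi/(C_d^2\gamma{\cal M}_T^2)$, of order a hundred tubes per cell for mildly subsonic turbulence.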
If we assume the absence of any strong dynamo then the flux on the scale $L_T$ is related to the flux on the scale $l$ by the square root of the number of independent volumes of length $l$ contained in a two dimensional slice of a turbulent cell of length $L_T$, so that $$\Phi_{tot}\approx {L_T\over l} \Phi_l. \label{eq:phran}$$ Combining this with eqs. (\[eq:num1\]) and (\[eq:radius\]) we obtain $$\Phi_{tot}\approx \left(\sqrt{4\pi\rho} V_A L_T^2\right) \left[{V_A(\gamma/2)^{1/2} \over w c_s}\right]\left({4E_B\over wE_T}\right)^{1/n}, \label{eq:flux}$$ where the first term in parentheses is the net flux one would expect from a diffuse field of the same total energy density. Of course, if there is a strong dynamo operating in each cell then the total flux might well be much less, even zero. On the other hand, in that case we expect $E_B\approx E_T$ and $l\approx L_T$. It’s interesting to note that in the limit of an ideal incompressible fluid, the amount of flux necessary to produce a dynamically significant magnetic field goes to zero. It is important to note that in this model the existence of numerous small fibrils of magnetic field does [*not*]{} imply a large number of small scale field reversals. Given the dynamic nature of the processes that shape the equilibrium distribution, and our assumption of rapid reconnection, any complex interweaving of magnetic field lines pointed in opposing directions will rapidly relax to a state where the magnetic field direction is the same for neighboring flux tubes. We see that even in the absence of a dynamo an initially uniform field can give rise to a final state with a greatly amplified magnetic energy density. It follows that a computer simulation which starts with a uniform field in a turbulent medium will see an exponential rise in the magnetic field energy at a rate of $\gtrsim V_T/L_T$ regardless of whether or not the system in question is actually capable of supporting a dynamo.
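In code, the suppression of the net flux relative to an equivalent diffuse field is just the bracketed factor in eq. (\[eq:flux\]) times the energy-ratio factor. A sketch with assumed illustrative values:

```python
import math

def flux_ratio(V_A, c_s, EB_over_ET, gamma=5.0/3.0, w=4.0, n=2.0/3.0):
    """Phi_tot divided by the diffuse-field flux sqrt(4*pi*rho)*V_A*L_T^2
    (eq. flux): the factor [V_A (gamma/2)^(1/2) / (w c_s)] * ratio^(1/n)."""
    ratio = 4.0 * EB_over_ET / w
    return (V_A * math.sqrt(gamma / 2.0) / (w * c_s)) * ratio**(1.0 / n)

# Assumed values: V_A = 0.01 c_s, a near-equipartition field.
suppression = flux_ratio(V_A=1e4, c_s=1e6, EB_over_ET=1.0)
```

The suppression scales with $V_A/c_s$, illustrating the remark above that the flux required for a dynamically significant field vanishes in the incompressible limit.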
If we define a minimal magnetic energy, $E_i$, by $$E_i\equiv \left({\Phi_{L_T}\over L_T^2}\right)^2 {1\over 8\pi},$$ then from eq. (\[eq:flux\]) we can show that the system will evolve to a state where $$E_B\approx {w\over 4} E_T \left({4PE_i\over E_T^2}\right)^{{n\over 2(1+n)}}.$$ Naturally, when this formula gives $E_B>E_T$ it fails, since the imposed flux gives a flux correlation between volumes of size $l$. For Kolmogorov turbulence this implies a net amplification of the magnetic energy equal to $(E_T/E_i)^{4/5}$ times the Mach number of the turbulence to the $-0.4$ power. In a certain sense this is a ‘turbulent dynamo’, since statistically symmetric turbulence is driving a large increase in the magnetic energy, but there is no net production of magnetic flux. The increase is a consequence of using a highly artificial initial condition. It is difficult to think of realistic circumstances where this effect could be important, although the very early evolution of the galactic magnetic field might be one. If the turbulent energy density is kept equal to the magnetic energy density, e.g. as in the Velikhov-Chandrasekhar instability ([@v59], [@c61]), the final magnetic energy density is roughly the geometric mean between $E_i$ and $P$. (In practice this follows only if the applied flux is in the direction of the shearing flow ([@vd94]) since a field applied in the direction of the shear vector will drive the creation of a larger magnetic field in the direction of the flow even in the absence of flux tube formation.) Finally, we note that the preceding discussion assumes that the magnetic flux does not simply migrate to some part of the fluid where the flow is consistently directed along the field lines. This will certainly happen if the velocity field is stationary and well-organized. Implicit in our assumption of turbulence is that these conditions are not met and that no such equilibrium is possible. 
Consequently, it is difficult to make any direct comparisons with simulations of ‘ABC-flows’ ([@a65], [@c70]) such as those performed by Galanti, Sulem, & Pouquet (1992). Imperfect Fluids ================ In most astrophysical applications one can assume that the viscosity and magnetic diffusivity are essentially zero. However, we will see that in realistic situations, for example most of the convective region of the Sun, the resistivity of the fluid is large enough to affect the conclusions of the preceding section. It is somewhat more difficult to find cases where viscosity is important, but it is usually the dominant effect in numerical simulations, which are our only direct means of testing theories of high $\beta$ MHD turbulence. In this section we will define the regions of parameter space in which viscous and diffusive effects dominate. The boundaries between the various regimes need to be defined in a four dimensional parameter space since the Reynolds number, the magnetic Reynolds number, the Mach number of the turbulence, and the ratio of magnetic to turbulent energies are all physically significant independent variables. We start with diffusive effects. A flux tube of radius $r_t$ and ohmic diffusivity $\eta$ will spread radially at a rate given by $$\tau^{-1}_{diff}\sim \eta r_t^{-2},$$ where we have taken $\nabla^2 B_t\approx B_t/r_t^2$, which is a reasonable approximation for a flux tube with a Gaussian profile. This spreading will dominate the evolution of the flux tube if this rate exceeds the stretching rate for the flux tube (which is also the large scale shearing rate). This allows us to define a diffusive radius of $$r_{diff}\equiv \left({\eta \over kV_l}\right)^{1/2}, \label{eq:rdiff}$$ where $k$ here refers to the wavenumber corresponding to $l$. If $r_{diff}<r_t$, where $r_t$ is given in eq. (\[eq:radius\]), then this defines the skin depth of the flux tube, within which the density drops sharply from its ambient value.
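Eq. (\[eq:rdiff\]) maps directly into code. The following sketch uses assumed cgs-style values chosen only for illustration:

```python
import math

def diffusive_radius(eta, l, V_l):
    """r_diff = (eta / (k * V_l))**0.5 with k = 2*pi/l (eq. rdiff)."""
    k = 2.0 * math.pi / l
    return math.sqrt(eta / (k * V_l))

# Assumed toy values; if r_diff exceeds the ideal-fluid r_t, resistivity
# sets the flux tube radius and truncates the fractal distribution there.
r_diff = diffusive_radius(eta=1e4, l=1e8, V_l=1e5)
```

For these (assumed) numbers $r_{diff}\sim 10^3\,$cm, to be compared with the ideal-fluid $r_t$ of eq. (\[eq:radius\]).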
On the other hand, if $r_{diff}>r_t$, then typical flux tubes are larger than our previous estimates, with smaller values of $B_t$. Combining eqs. (\[eq:equi\]), (\[eq:scale\]), (\[eq:radius\]) and (\[eq:rdiff\]) we see that resistivity can be ignored if $${\cal M}_T^4 \left({V_T\over k_T\eta}\right)\left({4E_B\over w E_T}\right)^{1/n+5/2} >\left({4\over C_d\gamma}\right)^2. \label{eq:diffcrit}$$ In other words, the magnetic Reynolds number has to exceed ${\cal M}_T^{-4}$ by about an order of magnitude in order to be safely into the ideal fluid limit when the magnetic and turbulent energy densities are comparable. For a Kolmogorov spectrum the ratio of the energies enters into this condition with an exponent of $4$. In practice this means that the regime where $E_B\ll E_T$ is inaccessible to direct simulation, if the results depend on resolving the smallest flux tubes. Note that we have defined the Reynolds number here using the inverse wavenumber. Another common convention is to use the wavelength of the turbulence, which gives a larger Reynolds number by a factor of $2\pi$. When this criterion is not satisfied the fractal distribution of magnetic flux will be truncated at the scale given by $r_{diff}$. However, we can still invoke the notion of marginal resistance to turbulent stretching at larger scales, so the properties of the system at $r>r_{diff}$ are unchanged. It is convenient to define a constant $\Psi$ which contains the environmental parts of the factor by which a fluid fails to satisfy the criterion set forth in eq. (\[eq:diffcrit\]), i.e. $$\Psi\equiv \left({\gamma C_d\over 4}\right)^2{\cal M}_T^4 \left({V_T\over k_T\eta}\right). \label{eq:psidef}$$ We conclude that the typical magnetic field intensity in a flux tube is $$B_t=\sqrt{8\pi P}\Psi^{1/4}\left({4E_B\over wE_T}\right)^{(1/n+5/2)/4}.
\label{eq:btres}$$ Consequently, assuming there is no temperature gradient across the flux tubes, the flux tubes will have a fractional density depletion of $${\Delta\rho\over\rho}\approx \Psi^{1/2}\left({4E_B\over wE_T}\right)^{(1/n+5/2)/2}. \label{eq:diffrho}$$ The total number of flux tubes in a region of size $l$ will be $$N={l\over C_d r_t}={2\pi\over C_d}\left({V_l\over k\eta}\right)^{1/2}= {2\pi\over C_d}\left({V_T\over k_T\eta}\right)^{1/2}\left({4E_B\over wE_T}\right)^{1/4+1/2n} \label{eq:numr}$$ and the net flux for a turbulent cell of size $L_T$ will be (assuming once again that the flux adds incoherently) $$\Phi_{tot}\approx \left(\sqrt{4\pi\rho} V_A L_T^2\right) \left[{V_A(\gamma/2)^{1/2} \over w c_s}\right]\left({4E_B\over wE_T}\right)^{1/(2n)-5/4}\Psi^{-1/4}. \label{eq:fluxa}$$ Comparing to eq. (\[eq:flux\]) we see that when resistivity is important the amount of flux necessary to produce a given amount of magnetic energy increases relative to the ideal fluid result. In the regime described above, which we refer to as the resistive limit, the dynamics of the magnetic field are not strikingly different just because the smallest flux tubes are no longer as small, or as completely evacuated. The only macroscopic field property that changes is the total magnetic flux associated with a field of a given average energy density, and it is not clear how significant this quantity is in realistic situations. This insensitivity to diffusive effects does not carry over to the case in which viscous effects dominate the flux tube dynamics. This will happen when viscous damping of the turbulent wakes behind flux tubes prevents the formation of strong coherent vortices which can spread apart rigid magnetic field lines. 
Since the trailing vortices will have radii of about $r_t/2$, and since the fluid has to complete a full revolution in order to push apart the field lines, we can approximate the criterion for ignoring viscosity as $$\left({\pi\over r_t}\right)^2\nu< {V_l\over \pi r_t}. \label{eq:visc}$$ When this inequality is violated the lack of a strongly turbulent wake will cause flux tubes to aggregate until $r_t$ is large enough to marginally satisfy this inequality, or until the flux tubes become unresponsive to the surrounding fluid motions. Consequently, we can distinguish two regimes where viscosity has a significant impact and resistivity is negligible. In the weakly viscous regime there exists a scale $l<L_T$ such that flux tubes can be marginally resistant to turbulent motions on that scale with a radius $r_t$ that just satisfies eq. (\[eq:visc\]). In this regime we expect to see fewer, and larger, flux tubes than we would expect in the ideal fluid or resistive regimes. However, flux tubes retain their mobility since they are still neither completely rigid nor completely entrained in the surrounding fluid. In the strongly viscous regime it is possible to form almost completely rigid flux tubes with typical radii that are small enough that viscosity can partially damp their turbulent wakes, i.e. that do not satisfy eq. (\[eq:visc\]) for $l=L_T$. In this regime the magnetic field lines are incapable of undergoing the kind of deformation necessary to drive a dynamo. Note that for $E_T\sim E_B$ we expect $l\sim L_T$ in the resistive and ideal fluid regimes. Consequently, as the viscosity increases one passes directly into the strongly viscous regime. The weakly viscous regime is relevant only when $E_B\ll E_T$. The boundary between the weakly viscous and ideal fluid regimes can be derived from eqs. (\[eq:radius\]) and (\[eq:visc\]). We are in the ideal fluid regime when eq. (\[eq:diffcrit\]) is satisfied and $$1<\chi \left({4E_B\over wE_T}\right)^{3/2+1/n}. 
\label{eq:diffvis}$$ where $$\chi\equiv \left({C_d\gamma\over 4\pi^3}\right){\cal M}_T^2 \left({V_T\over k_T\nu}\right). \label{eq:chidef}$$ In other words, the Reynolds number has to exceed ${\cal M}_T^{-2}$ by roughly two orders of magnitude in order to be in the ideal fluid limit when $E_B\approx E_T$. Assuming a Kolmogorov spectrum, this criterion becomes harder to satisfy for small magnetic energies as the energy ratio to the third power. This makes the direct numerical exploration of magnetic field dynamics in the weak field limit extremely difficult, even setting aside the point that current simulations do not satisfy this condition for $E_B=E_T$. In the weakly viscous regime eqs. (\[eq:stiff1\]) and (\[eq:visc\]) can be combined to yield $$l\le L_T \chi^{{-1\over 1+3n/2}}, \label{eq:lwv}$$ where the upper limit is obtained by taking $r_t$ as large as possible. It is reasonable to assume that we are in this limit, provided that there is more than one flux tube in a volume of size $l$. If not, then consolidation of flux tubes will not drive $r_t$ up to the limit given by eq. (\[eq:visc\]). If we are at this limit then eq. (\[eq:lwv\]) indicates that the scale of curvature for the magnetic field lines is only a function of the Mach and Reynolds numbers of the turbulent flow. Combining eqs. (\[eq:lwv\]), (\[eq:visc\]) and (\[eq:chidef\]) we find, after some manipulation, that $$N(l)={8\pi\over C_d^2\gamma {\cal M}_T^2} \left({4E_B\over wE_T}\right) \chi^{{2n\over 1+3n/2}}. \label{eq:numwv}$$ However, in this regime the fractal distribution does not extend all the way from $r_t$ to $l$. Instead we can show from eqs. 
(\[eq:num1\]), (\[eq:visc\]), (\[eq:chidef\]), (\[eq:lwv\]), and (\[eq:numwv\]) that it extends to a scale $l_c<l$ given by $$l_c=l\left({4E_B\over wE_T}\right)\chi^{{n\over 1+3n/2}}.$$ In the weakly viscous regime the magnetic field is segregated from the bulk of the fluid not only on small scales, but also on the scale of the curvature of the field lines. When eq. (\[eq:numwv\]) gives $N(l)\le 1$, we take $N(l)=1$ and obtain the value of $r_t$ from eqs. (\[eq:stiff1\]) and the definition of the energy ratio. We find $$r_t={C_d\gamma {\cal M}_T^2\over 4 k_T}\left[\left({4E_B\over w E_T}\right) {8\pi\over C_d^2\gamma {\cal M}_T^2}\right]^{{1+n\over 2n}},$$ and $$l=L_T\left[\left({4E_B\over w E_T}\right) {8\pi\over C_d^2\gamma {\cal M}_T^2}\right]^{{1\over 2n}}.$$ This weak field extension to the weakly viscous regime obtains when $N(l)$ in eq. (\[eq:numwv\]) gives $N(l)<1$ or $$\chi<\left[\left({4E_B\over w E_T}\right) {8\pi\over C_d^2\gamma {\cal M}_T^2}\right]^{{-1-3n/2\over 2n}}.$$ Its boundary with the strongly viscous limit is defined by $l\rightarrow L_T$ or $$\left({4E_B\over w E_T}\right)< {C_d^2\gamma {\cal M}_T^2\over 8\pi}, \label{eq:svwv}$$ where the inequality is satisfied on the weakly viscous side of the boundary. The boundary between the resistive and weakly viscous regimes is more complicated and involves the appearance of yet another regime where $\eta$ and $\nu$ are of comparable importance in determining the typical flux tube radius. We will refer to this limit as the mixed regime. From eqs. (\[eq:rdiff\]) and (\[eq:visc\]) we see that everywhere in this regime $${\eta\over\nu}=\pi^6\left({\nu\over k_TV_T}\right)\left({V_T\over V_l}\right) \left({L_T\over l}\right). \label{eq:mixc}$$ From eq.
(\[eq:scale\]) we see that the boundary between the mixed and resistive regimes is defined by $${\eta\over\nu}<\pi^6\left({\nu\over k_TV_T}\right) \left({4E_B\over wE_T}\right)^{-1/2-1/n},$$ where the inequality is satisfied on the resistive side of the boundary. Within the mixed regime we have the condition of marginal stability, eq. (\[eq:stiff1\]), as well as eq. (\[eq:mixc\]). The former implies that within this regime $${B_t^2\over 8\pi P}=\chi\left({l\over L_T}\right)\left({V_l\over V_T}\right)^3. \label{eq:mixd}$$ Combining eqs. (\[eq:psidef\]), (\[eq:chidef\]), (\[eq:mixc\]) and (\[eq:mixd\]) we get $$l=L_T\left({\Psi\over\chi^2}\right)^{{1\over 1+n/2}},$$ and $${B_t^2\over 8\pi P}=\left({\Psi\over\chi^2}\right)^{{3n/2+1\over1+n/2}}\chi.$$ As we move into the mixed regime, in the direction of decreasing resistivity, the curvature scale of the flux tubes increases, the radius of the individual tubes decreases (following eq. (\[eq:visc\])) and the ratio of magnetic pressure in the flux tubes to the ambient pressure increases. Ultimately we either reach the limit where $l=L_T$ or $B_t^2=8\pi P$. The former defines the boundary with the strongly viscous regime. The condition that we are on the mixed regime side of the boundary is $\Psi<\chi^2$ or $${\eta\over\nu}>\pi^6\left({\nu\over k_TV_T}\right). \label{eq:svmr}$$ The latter limit defines the boundary with the weakly viscous regime, which can be reached from the mixed regime if, and only if, $\chi>1$. This boundary is defined by $$\Psi>\chi^{{1+5n/2\over 1+3n/2}},$$ where the inequality is satisfied on the weakly viscous side of the boundary. Before describing the properties of the strongly viscous regime, it is useful to state the conditions under which it can be avoided. From eqs. 
(\[eq:lwv\]), (\[eq:svwv\]), and (\[eq:svmr\]) we see that the boundaries of the strongly viscous regime can be expressed by the condition that $$1>\chi^2>\Psi,$$ and that the ratio of the magnetic energy to turbulent energy exceeds the Mach number squared divided by something like 25. In the limit of incompressibility, this implies that eq. (\[eq:svmr\]) is the only boundary to the strongly viscous regime, and that satisfying this inequality is the most important goal for numerical simulations of MHD turbulence. This condition can be expressed as the requirement that the magnetic Prandtl number has to be less than $\sim 10^{-3}$ times the Reynolds number in order to avoid the strongly viscous regime. The difficulty we face in constructing numerical simulations which will not end up in the strongly viscous regime is more serious than the failure to reach the ideal fluid regime. In the strongly viscous regime the magnetic field will tend to settle into a configuration where the individual flux tubes are rigid, and yet do not break apart into smaller structures. Once the field has reached such a configuration it will be largely insensitive to the surrounding turbulent velocity field. The rate at which the field lines stretch to compensate for ohmic diffusion will be small. This implies that such a magnetic field will not grow exponentially due to some net helicity in the velocity field, even if such growth were expected from mean-field dynamo theory. From an astrophysical viewpoint this is not a particularly interesting limit. However, it is the limit most likely to apply to current numerical simulations of three dimensional MHD turbulence. In the appendix we present a detailed comparison between various simulations and the predictions of our model for this limit.
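The regime boundaries collected above reduce to comparisons among $\Psi$, $\chi$, and the energy ratio, and it is straightforward to evaluate them numerically. The Python sketch below is purely illustrative: the helper names and the default parameter values ($\gamma=5/3$, $C_d=1$, $n=2/3$, $w=2$) are our own assumptions, not prescriptions from the text.

```python
import math

# Illustrative evaluation of the environmental factors and regime tests.
# Psi follows eq. (psidef), chi follows eq. (chidef); the default
# parameter choices below are assumptions for the sketch.

def psi_factor(mach_t, mag_reynolds, gamma=5.0 / 3.0, c_d=1.0):
    # Psi = (gamma*C_d/4)^2 * M_T^4 * (V_T/(k_T*eta))
    return (gamma * c_d / 4.0) ** 2 * mach_t ** 4 * mag_reynolds

def chi_factor(mach_t, reynolds, gamma=5.0 / 3.0, c_d=1.0):
    # chi = (C_d*gamma/(4*pi^3)) * M_T^2 * (V_T/(k_T*nu))
    return (c_d * gamma / (4.0 * math.pi ** 3)) * mach_t ** 2 * reynolds

def ideal_fluid(psi, energy_ratio, n=2.0 / 3.0, w=2.0):
    # eq. (diffcrit) rewritten as Psi * (4E_B/(w E_T))^(1/n + 5/2) > 1
    return psi * (4.0 * energy_ratio / w) ** (1.0 / n + 2.5) > 1.0

def on_mixed_side(psi, chi):
    # Psi < chi^2 places the system on the mixed-regime side of the
    # strongly viscous boundary (cf. eq. svmr)
    return psi < chi ** 2
```

For example, near equipartition a modest magnetic Reynolds number suffices (`ideal_fluid(14.0, 1.0)` is true), while at $E_B/E_T=10^{-2}$ the same check fails by many orders of magnitude, which is the sense in which the weak field regime is inaccessible to direct simulation.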
At the edge of the strongly viscous limit the typical flux tube radius is $$r_t\approx {\pi^3\nu\over V_l}=L_T{C_d\gamma{\cal M}_T^2\over 8\pi}\chi^{-1}. \label{eq:rvisc}$$ We note that the fact that viscosity dominates on scales slightly below $r_t$ does not prevent us from assuming turbulent drag, although the appropriate value of $C_d$ will be the value for low Reynolds numbers. We can then use eq. (\[eq:stiff1\]) to derive the magnetic field inside a flux tube. We find that $$B_t\approx \sqrt{8\pi P \chi}. \label{eq:btvisc}$$ The fractional density depletion will be $${\Delta\rho\over\rho}\approx \chi.$$ The number of flux tubes per turbulent cell will be $$N={E_B\over E_T}{\cal M}_T^{-2}\left[{32\pi\over \gamma C_d^2w}\right]\chi, \label{eq:nvisc}$$ where $w$ will be of order 2, since these flux tubes are just thick enough to bend significantly, but not quite enough to produce loops. The total magnetic flux will be $$\Phi_{tot}=\sqrt{4\pi\rho}V_A L_T^2\left({V_A\over c_s}\right) \left[{\sqrt{\gamma} \over\sqrt{2} w}\right]\chi^{-1/2}.$$ In any particular simulation $N$ can be smaller than one. That is, it may be that the initial conditions are such that all the flux in the computational box can be contained in a single flux tube, regardless of the number of turbulent cells in the box. However, it is more likely that given an initially weak diffuse field, and some local helicity, there will be at least one flux tube per turbulent cell. Large scale correlations in the field direction, which will also be a consequence of some imposed helicity, will tend to prevent the cancellation of flux tubes in adjacent turbulent cells. Each flux tube will contain a magnetic flux of $$\Phi_{tube}\approx \sqrt{8\pi P} L_T^2 \chi^{-3/2} {\cal M}_T^4 {C_d^2\gamma^2\over 64\pi}.$$ In this case the magnetic energy density will be $$E_B\equiv {1\over 2}\rho V_A^2 \approx {1\over 2}\rho V_T^2 \left({\nu k_T\over V_T}\right) {\pi^2\over 8}wC_d.
\label{eq:z7}$$ For $\nu$ small this can be arbitrarily far below equipartition, even if the velocity field of the fluid is capable (under other circumstances) of driving a strong dynamo. For moderate Reynolds numbers, i.e. when $$\left({V_T\over k_T\nu}\right)<\left({2\pi^3\over C_d}\right) \label{eq:modvisc}$$ eq. (\[eq:btvisc\]) implies $V_{At}<V_T$. This is misleading since the presence of bulk forces driving the turbulence can impose $V_{At}\ge V_T$ as a separate condition. In this case we replace eq. (\[eq:btvisc\]) with $V_{At}=V_T$. The condition of marginal stiffness then implies a radius less than the one given in eq. (\[eq:rvisc\]), i.e. $$r_t\approx {L_T C_d\over 4\pi}. \label{eq:rvisc1}$$ The actual radius of a typical flux tube will lie between this value and the one given in eq. (\[eq:rvisc\]) depending on the flux threading each turbulent cell. If $\Phi_{tot}$ lies in the range $$\pi\left({C_d\over 2k_T}\right)^2\sqrt{4\pi\rho}V_T<\Phi_{tot}< \pi\sqrt{4\pi\rho}V_T\left({\pi^3\nu\over V_T}\right)^2 \label{eq:zz}$$ there will be one flux tube per turbulent cell with a radius $$r_t\approx\left({\Phi_{tot}\over\pi\sqrt{4\pi\rho}V_T}\right)^{1/2}.$$ For fluxes beyond the upper limit in eq. (\[eq:zz\]) the number of flux tubes per turbulent cell is $$N\approx {V_A^2L_T^2\over\pi^7\nu^2w},$$ and the total flux is $$\Phi_{tot}=\sqrt{4\pi\rho}V_A L_T^2\left({V_A\over V_T}\right) w^{-1}.$$ In this limit the minimal stationary state for a numerical simulation with an initially weak diffuse field has a magnetic energy density of $${1\over 2}\rho V_A^2\approx {1\over 2}\rho V_T^2 {wC_d^2\over 16\pi}. \label{eq:min}$$ It may seem surprising that this is insensitive to the value of $\nu$, but this is somewhat misleading. For $\nu$ so large that the flux tube wakes become entirely dominated by viscosity the appropriate value of $C_d$ will scale as $\nu^{1/2}$. Eq. (\[eq:min\]) will only apply over a limited range of moderate Reynolds numbers. 
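The strongly viscous estimates above can be collected in the same illustrative fashion; again the function names and defaults ($\gamma=5/3$, $C_d=1$, $w=2$) are our own assumptions.

```python
import math

# Sketch of the strongly viscous estimates, eqs. (btvisc), (nvisc) and (min).

def b_tube(pressure, chi):
    # B_t ~ sqrt(8*pi*P*chi); the fractional density depletion is ~ chi
    return math.sqrt(8.0 * math.pi * pressure * chi)

def n_tubes(energy_ratio, mach_t, chi, gamma=5.0 / 3.0, c_d=1.0, w=2.0):
    # N = (E_B/E_T) * M_T^(-2) * (32*pi/(gamma*C_d^2*w)) * chi
    return energy_ratio * mach_t ** -2 * \
        (32.0 * math.pi / (gamma * c_d ** 2 * w)) * chi

def minimal_energy_fraction(c_d=1.0, w=2.0):
    # E_B/E_T for the minimal stationary state of eq. (min):
    # (1/2) rho V_A^2 ~ (1/2) rho V_T^2 * w C_d^2/(16 pi)
    return w * c_d ** 2 / (16.0 * math.pi)
```

With the nominal defaults the minimal stationary state sits at $E_B/E_T\approx 1/(8\pi)\approx 0.04$, a few percent of equipartition, independent of $\nu$ over the range where these estimates hold.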
At low Reynolds numbers, when $V_{At}\sim V_T$ and eq. (\[eq:rvisc1\]) is valid, resistivity is negligible provided that $${V_T\over k_T\eta}>{4\over C_d^2}. \label{eq:q1}$$ There is one other limit on the magnetic Prandtl number which is important. If the resistivity, $\eta$, is much larger than $\max[\nu, C_d r_t V_T]$, then the magnetic flux tubes will tend to resist deforming in response to turbulent fluid motions (cf. [@b50]). In the ideal fluid limit we can see from eq. (\[eq:radius\]) that avoiding this requires $$\eta< {C_d^2\gamma\over 4k_T} V_T{\cal M}_T^2.$$ Using the definition of $\Psi$ in eq. (\[eq:psidef\]) we can rephrase this as $$\Psi>{\gamma\over 4}{\cal M}_T^2,$$ which is always satisfied since, by definition, a system in the ideal fluid regime will have $\Psi>1$. In the resistive regime we need to replace eq. (\[eq:radius\]) with eq. (\[eq:rdiff\]) so that our limit on $\eta$ becomes $$\eta<C_d^2{V_T\over k_T},$$ i.e. the magnetic Reynolds number has to be greater than $C_d^{-2}$, which is of order unity. Again, this will be trivially satisfied in any case of interest. On the other hand, if we turn our attention to the viscous regime we see that for high Reynolds numbers (when the flux tube radius is given by eq. (\[eq:rvisc\])), the upper limit on $\eta$ becomes $$\eta<C_d \pi^3\nu. \label{eq:evlim}$$ At moderate Reynolds numbers the minimum value of $r_t$ is given by eq. (\[eq:rvisc1\]) and the limit on $\eta$ becomes $$\eta<{C_d^2\over 2k_T}V_T. \label{eq:evlim1}$$ Of course, at very small Reynolds numbers we recover the condition $\eta<\nu$. The original work by Batchelor proposed this as the only limit, but based on a very different conceptual model for the distribution of magnetic flux. For our purposes this limit is important only for very small Reynolds numbers. The less stringent limits given in eqs. (\[eq:evlim\]) and (\[eq:evlim1\]) are the important ones.
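Since different upper limits on $\eta$ apply in different Reynolds number ranges, it can help to mirror them in code. The conditional below simply restates eqs. (\[eq:evlim\]) and (\[eq:evlim1\]); the function name and the default $C_d=1$ are our own.

```python
import math

# Upper limits on the resistivity for flux tubes to remain deformable
# in the viscous regime: eq. (evlim) at high Reynolds number,
# eq. (evlim1) at moderate Reynolds number.

def eta_limit(nu, v_t, k_t, high_reynolds, c_d=1.0):
    if high_reynolds:
        return c_d * math.pi ** 3 * nu      # eq. (evlim)
    return c_d ** 2 * v_t / (2.0 * k_t)     # eq. (evlim1)
```

Both limits are far less stringent than the Batchelor condition $\eta<\nu$ except at very small Reynolds numbers.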
In this context, it is interesting to note that simulations done with $\eta\ge\nu$ have shown a strong suppression of the growth of the magnetic field energy ([@nbjrrst92]). In fact, these simulations seem to show that in the viscous regime the critical value of $\eta/\nu$ is close to one (although no such suppression was seen in the work of Tao et al. (1992) which had $\eta=\nu$). Nordlund et al. ascribed the difference to the different heat conductivities used. Here we suggest that it is instead due to the different boundary conditions used in the two simulations. Tao et al. started with a weak nonzero flux and a static fluid which gradually responded to the large scale forcing. Nordlund et al. used a weak flux with large scale structure which averaged to zero over the simulation volume, and began from fully developed turbulence. Both simulations started from a uniform field. Apparently these differences made the simulation of Nordlund et al. more vulnerable to immediate turbulent dissipation. The argument in the preceding paragraph suggests that Nordlund et al. would have seen little effect if they had taken the final state of a low $\eta$ run and used it as the initial condition for a high $\eta$ run (assuming that $\eta$ still satisfies the limit given in eq. (\[eq:evlim\])). These results suggest that the optimum strategy for designing a code which can simulate 3D MHD turbulence in the resistive regime is to take the largest resistivity consistent with some reasonable ability to resolve flux tubes. Since the ratio of flux tube radius to eddy size is roughly the inverse square root of the magnetic Reynolds number this implies a magnetic Reynolds number of $\sim10^2$. Fixing this and maximizing the Reynolds number should give the easiest route out of the viscous regime. Reconnection ============ In the preceding section we have assumed that reconnection is rapid, in the sense that flux tubes reconnect much faster than an eddy turnover time.
In fact, this is a controversial point. The actual rate of reconnection depends on the structure of the flux tubes. Even in the context of a particular model for their structure the rate is not well understood. Parker (1957) and Sweet (1958) proposed that reconnection should cross a flux tube at a speed of $$V_{rec}\approx V_A \left({V_A r_t\over \eta}\right)^{-1/2}. \label{eq:r1}$$ The physical basis for this estimate is that at the interface between two reconnecting flux tubes the gas builds up an excess pressure, of order $\rho V_A^2$, preventing the opposing field lines from reconnecting efficiently. The rate of reconnection is then controlled by the rate at which particles escape from the reconnection region, presumably by moving a distance of order $r_t$ along the field lines. The excess pressure is maintained through the heat released by the dissipation of magnetic field energy. If we identify $V_A$ with its rms value, or with the local turbulent velocity, then this rate is quite slow. Magnetized regions on scales comparable to the size of a convective cell are unable to reconnect efficiently in one eddy turnover time. This has led to a number of proposals for mechanisms that will increase reconnection rates. Several of these ([@ce77], [@d84], and [@s88]) appeal to plasma effects which will be heavily suppressed in a strongly collisional plasma, like the kind we are considering here. One process that can apply in a collisionally dominated plasma is the Petscheck reconnection mechanism ([@p64]) which has the effect of replacing the denominator of eq. (\[eq:r1\]) with the logarithm of the magnetic Reynolds number. It remains unclear whether or not this rate is attainable under realistic conditions inside a star or accretion disk (e.g. see [@b86]). Another possibility is that once reconnection gets under way the rate is determined by nonlinear hydrodynamic processes that increase the reconnection speed by some large factor ([@ml86]). 
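For reference, the Sweet-Parker estimate of eq. (\[eq:r1\]) amounts to dividing the Alfvén speed by the square root of the Lundquist number $S=V_A r_t/\eta$; the one-line sketch below (with a function name of our own) makes the scaling explicit.

```python
# Sweet-Parker reconnection speed, eq. (r1):
# V_rec ~ V_A * (V_A * r_t / eta)^(-1/2) = V_A / sqrt(S),
# with S the Lundquist number.
def sweet_parker_speed(v_alfven, r_t, eta):
    lundquist = v_alfven * r_t / eta
    return v_alfven * lundquist ** -0.5
```

For a Lundquist number of order $10^{12}$, say, this gives $V_{rec}\sim 10^{-6}V_A$, which quantifies how slow the unmodified rate is.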
Here we will assume that the Sweet-Parker rate is essentially correct. Given that our purpose is to show that reconnection is rapid in the model for MHD turbulence we propose here, this is the conservative strategy. We will also assume that the internal structure of a typical flux tube is given by balancing the effects of stretching along the field lines with radial ohmic diffusion. In other words, we neglect turbulent diffusion. We will defer discussion of this point to a later paper. Here we note only that the large magnetic field strength in the flux tubes may lower the effective dimensionality of the flow. Since turbulent diffusion is largely suppressed in two dimensions ([@cv91]) it seems plausible to neglect it here ([@d94]). If we model the flux tube as having infinite extent in the $\hat z$ direction, with $\vec B=B(r) \hat z$ and $\partial_z v_z=\tau^{-1}$ then the stationary solution for a tube in an isothermal fluid satisfies $${1\over r}\partial_r (rv_r \rho)+{\rho\over \tau}=0, \label{eq:cont}$$ $${1\over r}\partial_r (rv_r B)={1\over r}\partial_r(r\eta\partial_r B), \label{eq:bdiff}$$ and $${B^2\over 8\pi}+\rho {k_B T\over \mu}=P_{tot}, \label{eq:press}$$ where $\mu$ is the mean mass per particle, $P_{tot}$ is the total pressure in the fluid, $\eta$ is the resistivity (assumed to be independent of density), and $v_r$ is the radial velocity. Eq. (\[eq:bdiff\]) can be integrated, assuming that the magnetic field and its derivative vanish at large radii, to obtain $$v_r=\eta\partial_r \ln B={\eta\over 2} \partial_r \ln P_{mag}, \label{eq:vdiff}$$ where $P_{mag}$ is the magnetic pressure. Combining eqs.
(\[eq:cont\]), (\[eq:press\]) and (\[eq:vdiff\]) we find that $$\left[\partial_x\ln(x\rho)\right]\left[\partial_x\ln(1-\rho/\rho_{\infty}) \right] +\partial_x^2\ln(1-\rho/\rho_{\infty})+2=0, \label{eq:flux1}$$ where $\rho_{\infty}$ is the density at large distances from the flux tube, and $x$ is the dimensionless radial distance defined by $$x\equiv {r\over \sqrt{\eta \tau}}.$$ Eq. (\[eq:flux1\]) can be rewritten in terms of $P_{mag}$ as $$\left[\partial_x\ln(x(1-P_{mag}/P_{tot}))\right]\left[\partial_x\ln P_{mag}\right] +\partial_x^2\ln P_{mag}+2=0. \label{eq:flux2}$$ A flux tube in the resistive regime will have $P_{mag}\ll P_{tot}$ everywhere. In this case eq. (\[eq:flux2\]) implies $$P_{mag}\approx P_{mag}(x=0)\exp \left[-{r^2\over 2\eta\tau}\right].$$ A flux tube in the ideal fluid regime has the curious feature that it consists, in this approximation, of a thin shell with a radius comparable to $\sqrt{\eta\tau}$ surrounding an interior where $\rho=0$. If the radius of the tube is much greater than the thickness of the shell then the full equation for $\rho$ reduces to an integral solution of the form $$x-x_0=\int_0^{\rho/\rho_{\infty}} {qdq\over 2(1-q)\sqrt{-\ln(1-q)-q-q^2/2}},$$ where $x_0$ is the dimensionless radius of the evacuated interior. Close to $x_0$ this becomes $$\rho={\rho_{\infty}\over 3}(x-x_0)^2 \left(1-{5\over 36}(x-x_0)^2+\cdots\right).$$ At large distances we find $$\rho\approx \rho_{\infty}\left(1-\exp[ -(x-x_0-1.45)^2]\right).$$ Now we consider reconnection involving two isolated flux tubes in the ideal fluid limit. Since the magnetic pressure in the tubes equals the ambient pressure the Alfvén velocity at the edge of the flux tubes will be roughly $c_s$. However, subject to the approximation that $\eta$ is really independent of density and that the flux tube reaches a stationary internal state, the Alfvén velocity goes to infinity within a distance of $r_{skin}\sim \sqrt{\eta\tau}$. 
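As a numerical check on the thin-shell solution, the integral expression for $x-x_0$ as a function of $q=\rho/\rho_{\infty}$ can be evaluated by direct quadrature and compared with the expansion near $x_0$. The sketch below is ours: it substitutes $q=u^2$ to remove the endpoint singularity, and sums the series form $-\ln(1-q)-q-q^2/2=\sum_{k\ge3}q^k/k$ to avoid cancellation at small $q$.

```python
import math

def a_of(q, terms=30):
    # -ln(1-q) - q - q^2/2 summed as a series, avoiding cancellation
    return sum(q ** k / k for k in range(3, terms))

def x_of_q(q, panels=2000):
    # Simpson's rule after substituting q = u^2; the transformed
    # integrand tends smoothly to sqrt(3) at u = 0.
    u_max = math.sqrt(q)

    def g(u):
        if u == 0.0:
            return math.sqrt(3.0)
        return u ** 3 / ((1.0 - u ** 2) * math.sqrt(a_of(u ** 2)))

    h = u_max / panels
    total = g(0.0) + g(u_max)
    for i in range(1, panels):
        total += (4.0 if i % 2 else 2.0) * g(i * h)
    return total * h / 3.0
```

Inverting the quadrature reproduces $\rho=(\rho_{\infty}/3)(x-x_0)^2\left(1-{5\over 36}(x-x_0)^2\right)$ to better than one part in $10^3$ at $q=0.01$, in agreement with the series quoted above.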
Realistically the Alfvén velocity will reach a value exponentially higher than $c_s$. Consequently, the reconnection rate is just $${V_{rec}\over r_{skin}}\approx \left({c_s \eta\over r_t r_{skin}^2}\right)^{1/2}, \label{eq:rec1b}$$ or using eqs. (\[eq:scale\]) and (\[eq:radius\]) $${V_{rec}\over r_{skin}}\approx k_Tc_s{\cal M}_T^{-1/2} \left({4E_B\over wE_T}\right)^{-{1\over4}-{1\over n}}. \label{eq:rec1a}$$ We note that this estimate is insensitive to $\eta$. One loophole in this argument is the assumption that reconnection in the outer layers of the flux tube, where the magnetic field is very small, has a negligible effect on the overall speed of reconnection. Since $V_{rec}\rightarrow 0$ exponentially in this region it is easy to imagine that the actual rate of reconnection is controlled in the flux tube envelope. This is the case, but fortunately this doesn’t change our basic conclusion that reconnection is rapid. Neglecting reconnection for the moment, we can consider the dynamics of the collision between two flux tubes. As the flux tubes try to move past one another at a speed $\sim V_l$, they create sharp bends over some region of typical size $r_t$. At the contact point the local pressure rises by roughly $\bar B^2 \sin\phi$, where $\phi$ is the bending angle and $\bar B$ is the root mean square value of the magnetic field in the flux tube. This implies that eq. (\[eq:rec1a\]) should be corrected by a factor of $\sim \phi^{1/4}$. In the ideal fluid limit the flux tube core is empty, and in that region $V_A=\infty$ (neglecting a breakdown of our assumption that $\eta$ is independent of density). However, bending waves in a flux tube will involve moving the mass contained in the flux tube skin and so the effective Alfvén speed, which is also the speed at which bending waves can travel down the flux tube, will be approximately $$V_{At}\approx \left({2 r_{skin}\over r_t}\right)^{-1/2}c_s \sim \Psi^{1/4}c_s \left({4E_B\over wE_T}\right)^{{1\over4n}+{5\over 8}}.
\label{eq:speed}$$ When $k V_{At}$ exceeds the reconnection rate, then the Alfvén speed is effectively infinite and the bending angle $\phi$ is just $k V_l t$. Equating the time in this expression with the characteristic reconnection time we find that in this limit the reconnection rate given in eq. (\[eq:rec1a\]) is modified to $${V_{rec}\over r_{skin}}\approx k_Tc_s{\cal M}_T^{-{1\over 5}} \left({4E_B\over wE_T}\right)^{-{1\over10}-{1\over n}}. \label{eq:rec1aa}$$ Large scale reconnection events will involve bundles of such flux tubes, each consisting of $N(l)$ individual flux tubes in a bundle of radius $l$. These tubes do not need to reconnect serially, but simultaneous reconnection is also unlikely. The total reconnection rate should be down from the single reconnection rate given in eq. (\[eq:rec1aa\]) by at most a factor of $N^{1/2}$. If the rate of reconnection is slow enough that tubes undergo some compression before the reconnection front reaches them then this can be an underestimate of the true reconnection rate. In no case can the reconnection rate exceed the bulk flow rate across a bundle radius. Using eqs. (\[eq:numx\]), and (\[eq:rec1aa\]) we conclude that the bundle reconnection rate divided by the eddy turnover rate on the scale of curvature of the flux tubes will be $${1\over kV_l\tau_{rec}}= \min\left[{\cal M}_T^{-1/5} \left({4E_B\over wE_T}\right)^{-{1\over 10}},1\right], \label{eq:reca1}$$ where the minimum of 1 for this ratio arises from the fact that the bundles cannot reconnect in less time than it takes for the flux tubes to move across a distance $l$. This minimum rate may be overly conservative, if the conditions that lead to flux bundle collisions tend to concentrate them in the process, or if the process of reconnection itself results in bulk motions that accelerate the collision. 
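The capped bundle rate of eq. (\[eq:reca1\]) is simple enough to state as a one-line function; the name and the default $w=2$ are ours.

```python
# Bundle reconnection rate over the eddy turnover rate, eq. (reca1):
# min[M_T^(-1/5) * (4 E_B/(w E_T))^(-1/10), 1].  The cap at unity
# reflects the fact that bundles cannot reconnect in less time than
# it takes flux tubes to cross a distance l at the bulk flow speed.
def bundle_rate(mach_t, energy_ratio, w=2.0):
    return min(mach_t ** -0.2 * (4.0 * energy_ratio / w) ** -0.1, 1.0)
```

The weak exponents mean the ratio is pinned near unity over a broad range of Mach numbers and energy ratios.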
Setting aside this possible acceleration, we see that in this limit of the ideal fluid regime, where the mass loading on the flux tubes is insignificant, the limiting rate in the reconnection of flux tube bundles is set by the bulk motion of the bundles, not by the actual reconnection of the flux tubes. When the distance a signal can travel down a flux tube in the time for a pair of isolated flux tubes to reconnect is less than $l$, the bending angle becomes $V_l/V_{At}$. From eqs. (\[eq:speed\]) and (\[eq:rec1aa\]) this happens when $$\Psi<{\cal M}_T^{-4/5}\left({4E_B\over wE_T}\right)^{-2.9-{1\over n}}.$$ In this case eq. (\[eq:rec1a\]) needs to be corrected by multiplying the LHS by $(V_l/V_{At})^{1/4}$. This gives $${V_{rec}\over r_{skin}}\approx k_Tc_s{\cal M}_T^{-{1\over 4}}\Psi^{-1/16} \left({4E_B\over wE_T}\right)^{-{9\over32}-{17\over 16n}}. \label{eq:rec1ab}$$ We see that the reconnection rate increases slightly as we approach the boundary of the ideal fluid regime. In this regime the bundle reconnection rate is $${1\over kV_l\tau_{rec}}= \min\left[{\cal M}_T^{-1/4}\Psi^{-1/16} \left({4E_B\over wE_T}\right)^{-{9\over 32}-{1\over16n}},1\right]. \label{eq:reca2}$$ At the limit of the ideal fluid regime, given by eq. (\[eq:diffcrit\]), the bundle reconnection rate estimate obtained by summing up individual reconnection events can be as large as $kV_l {\cal M}_T^{-1/4}$ times the energy ratio to the $-1/8$ power. In the resistive limit $r_{skin}=r_t=\sqrt{\eta\tau}$. Although the flux tubes are solidly filled in, the magnetic field within the tubes is well below equipartition with the exterior pressure. Consequently $V_{At}$ is still given by eq. (\[eq:speed\]) and the reconnection rate for a pair of isolated flux tubes becomes $${V_{rec}\over r_{skin}}\approx k_Tc_s{\cal M}_T^{-{1\over 4}}\Psi^{5/16} \left({4E_B\over wE_T}\right)^{{21\over32}-{11\over 16n}}. \label{eq:rec1ac}$$ Note that now the reconnection rate decreases as the resistivity increases. Using eq.
(\[eq:numr\]) we see that the bundle reconnection rate becomes $${1\over kV_l\tau_{rec}}= \min\left[{\cal M}_T^{-1/4}\Psi^{1/16} \left({4E_B\over wE_T}\right)^{-{1\over 32}-{1\over16n}},1\right]. \label{eq:reca3}$$ We note that ${\cal M}_T^{-1/4}\Psi^{1/16}$ is more or less the sixteenth root of the magnetic Reynolds number. We see that in the resistive regime the reconnection rate for the individual tubes in a bundle is slow enough that the eddy turnover rate wins by only a modest factor. Nevertheless, it does win. We conclude that assuming the Sweet-Parker rate for magnetic reconnection in a turbulent medium gives reconnection which happens faster than the eddy turnover time, so that magnetic reconnection is primarily limited by the rate at which flux tubes move across the fluid. Paradoxically, the tube reconnection rate actually slows as the resistivity in the fluid increases.

Buoyancy
========

How quickly will a single flux tube rise? Each flux tube will feel a bulk upward acceleration of $\Delta\rho g/\rho$, where $g$ is the local gravity. They will tend to drift upward as fast as allowed by their coupling to the surrounding turbulent medium. The turbulent drag per unit length on a long flux tube moving upward with a systematic velocity $V_b\hat z$ is $$F_{drag}=C_d|\vec V_T-\vec V_b|r_t(\vec V_T-\vec V_b).$$ If we assume that $|\vec V_b|\ll|\vec V_T|$ then this becomes $$F_{drag}\approx -C_d{4\over 3} V_T V_b r_t.$$ Equating this to the buoyant force we find that $$V_b \approx {r_tg\over V_T} {3\pi\over 4C_d} {\Delta\rho\over\rho}.$$ In the ideal fluid limit $\Delta\rho=\rho$. Using eq. (\[eq:radius\]) we get $$V_b \approx {1\over k_T l_p} {3\pi\over 16} V_T \left({4E_B\over wE_T}\right)^{1+1/n}, \label{eq:vb}$$ where $l_p\equiv P/\rho g$ is the pressure scale height. In the resistive limit we see from eqs. (\[eq:rdiff\]) and (\[eq:diffrho\]) that the product of $\Delta\rho$ and $r_t$ is not a function of the resistivity.
This is a consequence of the condition for marginally stiff flux tubes. As a result the speed with which flux tubes rise is insensitive to whether or not they are in the resistive regime. The conventional picture of the magnetic field distribution is that it tends to segregate from the surrounding gas to the extent that $V_A$ rises to $V_T$. For a star the usual assumption is that flux is lost at a speed of approximately $V_A\sim V_T$, subject to uncertainties about removing mass from the magnetic field lines. We see that this roughly agrees with eq. (\[eq:vb\]) when the magnetic field is in equipartition with the turbulent energy density and $k_Tl_p$ is of order unity, which is what we expect for turbulence driven by a convective instability. On the other hand, eq. (\[eq:vb\]) indicates that the flux tubes that comprise weak magnetic fields will rise very slowly, with a speed proportional to $V_A^5$ (assuming that $n=2/3$). This still leaves open the possibility that the rate at which magnetic flux rises is dominated by collective modes or by diffusion. We will see that a similar degree of agreement between the predictions of this model and expectations based on a diffuse field obtains for buoyancy speeds in accretion disks. This concordance, when $E_B\approx E_T$, between buoyancy estimates based on the flux tube model proposed here and the rates derived from a diffuse field hides some important conceptual differences between the two pictures. Vainshtein & Rosner (1991) have shown that the conventional picture of magnetic field distribution leads to the expectation that magnetic flux is rarely lost from astrophysical objects. The model proposed here allows magnetic flux to be lost at the rate given by dividing $V_b$ by the scale of the system.
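The steepness of the weak field suppression can be read off eq. (\[eq:vb\]) directly; the sketch below (helper name ours, with assumed defaults $n=2/3$ and $w=2$) evaluates the ideal fluid rise speed.

```python
import math

# Rise speed of an ideal-fluid flux tube, eq. (vb):
# V_b ~ (3 pi/16) * (V_T/(k_T l_p)) * (4 E_B/(w E_T))^(1 + 1/n).
def buoyant_speed(v_t, k_t_lp, energy_ratio, n=2.0 / 3.0, w=2.0):
    return (3.0 * math.pi / 16.0) * (v_t / k_t_lp) * \
        (4.0 * energy_ratio / w) ** (1.0 + 1.0 / n)

# For n = 2/3 the exponent is 5/2, so halving the energy ratio cuts
# V_b by 2^(5/2) ~ 5.7: flux tubes comprising weak fields rise slowly.
```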
Depending on the resistivity of the surrounding gas, mass is continuously unloaded from the individual flux tubes and the escape of almost completely empty flux tubes, bearing significant amounts of magnetic flux, poses no particular problem.

Magnetic Buoyancy in the Sun and Other Stars
--------------------------------------------

If the turbulence is driven by convection, as we expect in stars, and the magnetic field is strong, as it appears to be in the Sun, then $V_b$ is a large fraction of $V_T$ and this result can only be taken as a rough indication of the value of $V_b$. The picture that this suggests is one in which the magnetic field of the star is generated in some turbulent zone of width $\Delta Z$ such that $\Delta Z\Gamma>V_T$, where $\Gamma$ is the dynamo growth rate. In this case the magnetic field will grow to equipartition in this layer. The flux generated in this zone rises through the convective layer, gradually breaking up into separate flux tubes and acquiring structure on smaller and smaller scales as the scale of the local turbulence shrinks. The tendency of the flux loops to acquire more and more small scale structure should keep the magnetic energy density close to the kinetic energy density as the flux tubes rise. A full calculation of how this would work must be dynamical, in the sense that the speed with which the magnetic flux tubes rise is fast enough that we should expect some deviation from results derived under the assumption that the magnetic field is in equilibrium with the local turbulence. In this paper we will limit ourselves to pointing out some of the qualitative features we expect to see based on our model for MHD turbulence. How would this work in the Sun? In fig. (1) we show the value of $\Psi^{1/2}C_d^{-1}$ as a function of the local temperature for a mixing length model of the solar convection zone ([@s89]). We note from eq.
(\[eq:diffrho\]) that when this is less than one it is the fractional density depletion (times $C_d$) within a flux tube when $\Psi<1$ (assuming $E_B\approx E_T$). When it exceeds one it is roughly 2 divided by the fraction of the flux tube volume occupied by the skin layer. In the spirit of mixing length theory we have assumed that the local pressure scale height is the diameter of a turbulent eddy (or half the dominant wavelength). We have also calculated the resistivity as though the solar plasma were entirely ionized, which is only a crude approximation near the surface. We see that for the Sun $\Psi^{1/2}C_d^{-1}$ runs from a few times $10^{-3}$ at the base of the convection zone to greater than $1$ near the solar surface. However, the tubes are significantly evacuated only near the surface, at temperatures less than $\sim 18,000$ K. For small values of $C_d$, say $0.1$, the temperature at which the tubes become evacuated drops below $10^4$ K. We note that there is evidence that the SuperFine Structure (SFS) of the Sun consists of unresolved flux tubes whose magnetic pressure is roughly equal to the ambient pressure (for a review see [@vbt93], or [@s94]). The novel feature of this picture is that the existence of largely evacuated flux tubes on the surface of the Sun would appear to be a coincidence, marginally achieved on the Sun, and not necessarily to be expected for stars with significantly different structure. Of course, none of this should be taken as directly contradicting the idea that large flux tubes that penetrate the photosphere can undergo convective collapse ([@p78], [@s83]). It may be that the latter process happens independently of any of the mechanisms discussed in this paper. Given that the magnetic field in the bulk of the solar convection zone is in the resistive regime, the magnetic flux per flux tube is given by eqs. 
(\[eq:scale\]), (\[eq:rdiff\]) and (\[eq:btres\]) as $$\Phi_t=\sqrt{8\pi P} \pi\left({\eta\over k_T V_T}\right)\Psi^{1/4} \left({4E_B\over wE_T}\right)^{{5\over 4n}+{1\over8}}. \label{eq:sflux}$$ In the narrow layer where the magnetic field is in the ideal fluid regime this becomes (from eqs. (\[eq:bideal\]) and (\[eq:radius\])) $$\Phi_t=\sqrt{8\pi P} \pi \left({C_d\rho V_T^2\over 4Pk_T}\right)^2 \left({4E_B\over wE_T}\right)^{2+{2\over n}}. \label{eq:sflux1}$$ $\Phi_t$ is shown for the solar convection zone as a function of the local temperature in fig. (2) assuming equipartition. We note that the flux per tube drops monotonically from the base of the convection zone to the point where the resistive regime ends, and rises thereafter. If the properties of the local flux tubes stay in equilibrium with the surrounding turbulence then each flux tube at the base of the solar convection zone will make $\sim 10^2$ flux tubes near the layer where $\Psi\sim1$, and $\sim 20$ flux tubes in the photosphere. In the same spirit we expect from eq. (\[eq:radius\]) that the flux tubes making up the SFS have radii of about 1.3 km. More realistically we should expect our assumption of strict equilibrium with the local turbulence to break down near the top of the convection zone and interpret this as a prediction of flux tube radii on the order of a kilometer or so in the top of the convection zone. As the magnetic flux tubes rise through the comparatively thin layer separating the top of the convection zone from the photosphere they will tend to aggregate (since larger flux tubes will be more buoyant) and speed up, giving rise to an exponentially decreasing mean magnetic energy density and a slightly greater typical flux tube radius. Is our assumption of equipartition justified? Let’s consider the ratio of magnetic energy density to turbulent energy density as a function of height. As the magnetic field lines rise their total flux is conserved. 
However the ratio of the size of the local turbulent cells to the original scale of organization of the field drops sharply. If we assume that the field is disordered on intermediate scales, as a consequence of bending on those scales as the flux tubes rise, then the flux threading a turbulent cell is proportional to the length of the cell divided by the local buoyant velocity. In other words $$\Phi_{tot}\sim \left({l_p r_0\over r l_0}\right)\Phi_0 \left({rl_0\over r_0 V_bt_0}\right)\propto {l_p\over V_b}, \label{eq:q2}$$ where the subscript ‘$0$’ denotes the conditions at the base of the convection zone, where the dynamo operates, $t_0$ is the characteristic time for flux tubes to escape from the dynamo region, and the repeated factor of $r/r_0$ reflects the transverse spreading of flux tubes imposed by the spherical geometry. Using eqs. (\[eq:fluxa\]) and (\[eq:vb\]) we see from this that in the resistive parts of the solar convection zone $$\left({4E_B\over wE_T}\right)\approx \left({\Psi^{1/2}P^{1/2}\over l_p\rho V_T^3}\right)^{{4n\over 3(2+n)}}. \label{eq:en1}$$ Near the top of the convection zone, where $\Psi\gtrsim 1$ we use eq. (\[eq:flux\]) instead to obtain $$\left({4E_B\over wE_T}\right)\approx \left({P^{1/2}\over l_p\rho V_T^3}\right)^{{n\over 2(1+n)}}. \label{eq:en2}$$ Evaluating eqs. (\[eq:en1\]) and (\[eq:en2\]) for the solar model given in Stix (1989) gives a ratio of magnetic to turbulent energy that rises monotonically, by a factor of about 6 between the bottom of the convection zone and the end of the resistive regime, and by an additional factor of 30 at the top of the convection zone. Clearly, if the dynamo produces a magnetic field in anything like equipartition with the local turbulence at the bottom of the convection zone, then the magnetic field will be in equipartition with the turbulence throughout the convective region, and with less energy on scales between $l_p$ and $l_0$ than one would expect from a randomly twisted field on those scales. 
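The two scalings in eqs. (\[eq:en1\]) and (\[eq:en2\]) can be encoded directly. A minimal sketch in Python (inputs in any consistent unit system; the Kolmogorov value $n=2/3$, for which the exponents reduce to $1/3$ and $1/5$, is an assumption):

```python
def ratio_resistive(P, rho, V_T, l_p, Psi, n=2.0 / 3.0):
    # eq. (en1): (4 E_B / w E_T) ~ (Psi^(1/2) P^(1/2) / (l_p rho V_T^3))^(4n/(3(2+n)))
    x = Psi ** 0.5 * P ** 0.5 / (l_p * rho * V_T ** 3)
    return x ** (4.0 * n / (3.0 * (2.0 + n)))

def ratio_ideal(P, rho, V_T, l_p, n=2.0 / 3.0):
    # eq. (en2): same base quantity without Psi, exponent n/(2(1+n))
    x = P ** 0.5 / (l_p * rho * V_T ** 3)
    return x ** (n / (2.0 * (1.0 + n)))

# For n = 2/3 the exponents are 1/3 and 1/5 respectively.
```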
This prediction of approximate equipartition is supported by helioseismological data ([@gmwk91]). If the large scale poloidal field of the Sun is due to the coherent generation of magnetic flux in a solar dynamo, which seems probable given its quasi-periodic oscillations, then the strength of the field should be related to the magnetic flux which passes through the dynamo region. The appropriate measure of the strength of this field should be the average field strength, $\langle \vec B\rangle$, in quiet regions of the Sun, far from eruptions of flux ropes and at an altitude where $\beta$ is small enough that the magnetic field fills a large fraction of space. Choosing high latitudes for comparison also simplifies the geometry since we need only consider the flux threading the layer of turbulent cells in the dynamo region and ignore questions of radial transport of poloidal flux. Measurements of the magnetic field strength above the polar regions of the Sun suggest a value in the range 1 to 2 gauss ([@a73]). If the dynamo takes place in the first pressure scale height above the bottom of the convection zone, then the flux per unit area can be obtained by dividing the RHS of eq. (\[eq:fluxa\]) by $L_T^2$. An estimate of the surface poloidal magnetic field strength follows if we multiply this number by $(r_{dynamo}/R_{\sun})^2$. Following this procedure we obtain a value between $0.6$ and $5$ gauss depending on which fiducial radius near the bottom of the solar convection zone we use. Evidently the dynamo is about as efficient in generating a large scale poloidal field as it can be, at least at high latitudes, given the strength of convection in the Sun. 
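The geometric part of this estimate is simple to reproduce. A sketch (Python), taking the mean flux density threading the dynamo layer as an input, since eq. (\[eq:fluxa\]) is not reproduced here; the choice $r_{dynamo}\approx 0.7\,R_\odot$ is a standard value for the base of the convection zone, and the input flux density is illustrative:

```python
R_SUN = 6.96e10  # solar radius in cm

def surface_poloidal_field(flux_density_dynamo, r_dynamo):
    """Dilute the mean flux density (gauss) threading the dynamo layer
    out to the solar surface by the factor (r_dynamo / R_sun)^2."""
    return flux_density_dynamo * (r_dynamo / R_SUN) ** 2

# e.g. a few gauss at 0.7 R_sun is diluted by ~0.5 at the surface,
# in the neighborhood of the measured 1-2 gauss polar field.
```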
A crude estimate of the distribution of magnetic field energy as a function of scale can be obtained by calculating the fraction of the total magnetic energy, as a function of solar radius, contributed by the flux rising from the bottom of the convection zone assuming that those flux tubes rise at a fixed fraction of $V_T$ and are essentially unbent. The remainder of the magnetic energy will be due to structures on scales intermediate between $l_0$ and $l_p$. We can combine eqs. (\[eq:btres\]) and (\[eq:q2\]) to get this fraction as a function of $r$. It is $$F_{B0}\propto {wB_t\Phi_{tot}\over \rho V_T^2 l_p^2} \propto {w(r)\sqrt{P}\over \rho V_T^3 r}\Psi^{1/4}. \label{eq:q3}$$ In the top layer of the solar convection zone, where the ideal fluid regime applies, this scaling should be replaced by one that drops the factor of $\Psi^{1/4}$. The value of this scaling factor, normalized to one at the base of the convective zone, is shown in fig. (3). We note that without some twisting of the magnetic field lines as they rise, the ratio of magnetic to turbulent energy in the photosphere would be very small, and rise sharply with decreasing $r$. We see that the steady drop in $F_{B0}$ as $r$ increases implies that each level in the solar convection zone impresses structure on the magnetic field lines as they rise, and that the energy contained in magnetic field line curvature on a scale $L$ is related to the energy necessary to keep the magnetic energy and turbulent energy in equipartition as the field lines rise through that layer of the Sun in which $l_p\approx L$. We can see from fig. (3) that it is necessary to stretch the magnetic field lines by a factor of slightly more than $10^2$, or about $5$ e-foldings as they rise in order to maintain equipartition. 
Since the number of pressure scale heights in the convective zone is of order a few dozen, and since each flux tube stays in a given eddy for at least an eddy turnover time, this is not an unreasonable amount of stretching. Assuming that the rising magnetic field lines are in approximate equipartition with the local turbulence, we can estimate the upward flux of entrained matter by multiplying the volume filling factor of the magnetic field times $4\pi\rho r^2 V_b$ in the resistive regime, i.e. most of the convective zone. In the top of the convection region, where the ideal fluid regime applies, this must be corrected by a factor of $\sim (2\sqrt{\eta\tau}/r_t)$, since only the ‘skin’ of each flux tube is carrying matter. Using eqs. (\[eq:psidef\]) and (\[eq:btres\]) and taking $V_b\sim V_T$ we find that $$\dot M\approx {8\pi r^2\rho V_T\over C_d\sqrt{V_T/k_T\eta}}$$ in the resistive regime. The same result (to within a factor of 2 or so) can be obtained in the ideal fluid regime using eq. (\[eq:radius\]). This mass flux drops from $\sim 4\times 10^{20}$ gm/sec at the base of the convection zone to $\sim 1.3\times 10^{19}$ gm/sec at the top. In other words, the amount of mass entrained on rising flux tubes drops by a factor of 30 as they cross the convection zone, assuming that the flux tubes stay in equilibrium with their environment. This is still $\sim 10^{-8} M_{\sun}$ per year, substantially more than the mass flux in the solar wind, implying that there is considerable mass unloading from the flux tubes above the convection zone. One major omission in this model is that we have treated the velocity field as sufficiently chaotic that the distribution of flux tubes can be described entirely in terms of flux tube interactions. However, there are slowly shifting convective cells on the solar surface, i.e. solar granules. 
It follows that these relatively stable flow patterns will tend to collect vertical flux tubes wherever the fluid velocities are largely vertical, a feature which is beyond the scope of the simple model described here. Smaller scale features should still be described in terms of this model. Finally we note again that these predictions are all based on the assumption that the magnetic flux tubes are always able to reach equilibrium with their environment as they rise. However, for a stellar magnetic field in equipartition the buoyant velocity is some large fraction of $V_T$. It follows that large deviations from local equilibrium are possible. They are less likely for accretion disks, since $V_b$ is typically much less than $V_T$, except (as we shall see) for thick, or radiation pressure dominated, accretion disks.

Magnetic Buoyancy in Accretion Disks
------------------------------------

In an ionized accretion disk the magnetic field drives the turbulence through the Velikhov-Chandrasekhar shearing instability ([@v59], [@c61], [@bh91], [@hb91], and [@hgb94]) so that $E_T\sim E_B$ ([@vd92]). The transport of angular momentum in accretion disks via this process is known as the Balbus-Hawley mechanism. If we assume that the internally generated field has a large scale azimuthal component and a small scale random component induced by the turbulence then we have $k_T\sim \Omega/V_A$, where $\Omega$ is the local rotational frequency. Since for a disk, $l_p\sim H$, where $H$ is the disk thickness, and $c_s\sim H\Omega$ the flux tubes drift upward with a systematic velocity given by $$V_b \approx {V_A^2\over c_s}\approx {V_T^2\over c_s}\sim \alpha c_s, \label{eq:diskrise}$$ where we have used the fact that the dimensionless viscosity $\alpha$ is approximately $V_T^2/c_s^2$. In other words, the magnetic flux tubes rise at a speed which is less than the local turbulent velocity by a factor of the Mach number. 
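These disk relations can be sketched numerically. The identifications $\alpha\approx V_T^2/c_s^2$ and the flux-loss rate $V_b/H\sim\alpha\Omega$ follow the text, with all order-unity factors dropped:

```python
def disk_rise_speed(V_A, c_s):
    # eq. (diskrise): V_b ~ V_A^2 / c_s ~ alpha * c_s
    return V_A ** 2 / c_s

def flux_loss_rate(V_A, c_s, H):
    # magnetic flux leaves the disk at roughly V_b / H ~ alpha * Omega
    return disk_rise_speed(V_A, c_s) / H

# With V_A = 0.1 c_s (i.e. alpha ~ 0.01) the tubes rise at ~1% of c_s,
# slower than the turbulence by one factor of the Mach number.
```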
A similar result can be obtained from a qualitative argument based on the nonlinear interaction between the shearing and buoyant modes of a diffuse magnetic field ([@vd92]). Consequently one predicts that magnetic flux is lost from the disk at a rate of $V_A^2/(c_s H)\sim\alpha\Omega$. Note that we have dropped all constants of order unity in this argument. So far we have assumed that the ambient pressure is supplied by charged particles. The modest resistivity of most astrophysical plasmas then allows us to propose that the magnetic field pressure can be very large inside the flux tubes, with a compensating deficit of gas pressure. However, in radiation pressure dominated environments the diffusion of photons into flux tubes will prevent the magnetic field pressure from ever dominating even small volumes in the plasma. This implies large and weak flux tubes which, if effectively evacuated of matter, will be much more buoyant than a diffuse field would be. Consequently the magnetic dynamo in a radiation pressure dominated disk will saturate at a lower level, giving rise to a smaller effective viscosity. We can make this point more quantitative by observing that in this situation $B_t^2$ is limited to $8\pi P_{gas}$. Assuming equipartition, we can calculate the typical flux tube radius by truncating the fractal distribution of flux tubes in an ideal fluid at the scale where the mean magnetic pressure is at this limit. Eq. (\[eq:radius\]) becomes $$r_t\approx {C_d \rho V_T^2\over 4k_T P_{gas}}. \label{eq:rdisk}$$ Neglecting resistivity we still expect that $\Delta\rho=\rho$, i.e. these flux tubes are virtually empty. Therefore $$V_b \approx {P_{tot}\over P_{gas}}{1\over k_T l_p} {3\pi\over 16} V_T, \label{eq:vb1}$$ or for an accretion disk $$V_b \sim {P_{tot}\over P_{gas}}\alpha c_s. 
\label{eq:vb1a}$$ Since the magnetic flux lost to buoyancy must be replaced we can equate the dynamo growth rate to $V_b/H$, implying $$\alpha\sim \left({\Gamma_{dynamo}\over\Omega}\right)\left({P_{gas}\over P_{tot}}\right). \label{eq:alpha}$$ If the dynamo is unaffected by the dominance of radiation pressure, then this implies that the vertically integrated heating rate in a radiation pressure dominated disk is $$Q_+\sim \alpha P_{tot} c_s\sim \left({\Gamma_{dynamo}\over\Omega}\right) P_{gas}c_s,$$ which depends only on the gas pressure. This result applies only if the dissipation rate remains dominated by stresses induced by the Velikhov-Chandrasekhar instability and if $V_b$ remains less than $V_T$. Is it reasonable to treat the radiation pressure as uniform across the flux tubes? If we are in the ideal fluid limit then the speed with which photons will diffuse into a flux tube is approximately $${c\over \sigma_Tn_e\sqrt{\eta\tau}}.$$ Since the photons, like the gas, are eliminated from the flux tubes through the process of stretching, folding, and pinching off loops it follows that $$\Delta P_\gamma {c\over \sigma_Tn_e\sqrt{\eta\tau}}r_t^{-1}\approx {P_\gamma-\Delta P_\gamma\over\tau}, \label{eq:raddiff}$$ where $\Delta P_\gamma$ is the photon pressure differential and $P_\gamma$ is the external photon pressure. In the limit where $\Delta P_{\gamma}<P_{gas}$ we have $\Delta P_{\gamma}\ll P_{\gamma}$. Using eq. (\[eq:rdisk\]) we can rewrite this as $$\Delta P_{\gamma}\sim {P_{\gamma}\sigma_T n_e\over c k_T} \sqrt{\eta\Omega}{C_d\over 4}\left({\rho V_T^2\over P_{gas}}\right)$$ Since $V_T^2\approx\alpha c_s^2$ this is $$\Delta P_{\gamma}\sim Re_B^{-1/2}\alpha^2 {P_\gamma^2\over P_{gas}} {\sigma_T n_e Hc_s\over c},$$ where $Re_B$ is the magnetic Reynolds number and we have discarded $C_d/4$ as a factor of order unity. 
For radiation pressure dominated disks $$\sigma_Tn_eH \sim{c\over c_s\alpha},$$ which implies that $$\Delta P_\gamma\sim Re_B^{-1/2} \alpha P_{gas}\left({P_\gamma\over P_{gas}} \right)^2.$$ Finally, in assuming that we were in the ideal fluid limit, as we did at the beginning of this discussion, we implied that $Re_B\alpha^2\gg P_{gas}^2/P_\gamma^2$. This in turn implies that $$\Delta P_{\gamma}\ll P_{gas}\alpha^2\left({P_\gamma\over P_{gas}}\right)^3 \sim P_{gas}\left({\Gamma\over\Omega}\right)^2 {P_\gamma\over P_{gas}} \sim {\Gamma\over\Omega}\left({V_b\over V_T}\right)^2. \label{eq:raddiff2}$$ We conclude that in the ideal gas limit $\Delta P_\gamma\ll P_{gas}$ unless $V_b\gg V_T$. However, in this limit our assumption regarding the ordering of the velocities is violated. Moreover it is unclear that one can apply the Velikhov-Chandrasekhar instability to this case. Fortunately, this limit only obtains when $P_\gamma/P_{gas}$ exceeds $\Omega/\Gamma_{dynamo}$. In the internal wave driven dynamo model ([@vjd90], [@vd92], and [@vd94]) the angular momentum deposited by the nonlinear dissipation of the internal waves gives a minimum for $\alpha$ which will dominate over the value derived from the Velikhov-Chandrasekhar instability at smaller values of $P_\gamma/P_{gas}$. In the resistive limit eq. (\[eq:raddiff\]) becomes $$\Delta P_\gamma {c\over \sigma_Tn_e\eta\tau}\approx {P_\gamma-\Delta P_\gamma\over\tau}. \label{eq:raddiff3}$$ Following the same line of reasoning as above we can replace eq. (\[eq:raddiff2\]) with $$\Delta P_\gamma\sim {P_\gamma\over Re_B}\sim P_{gas} {\eta\Omega\over\alpha c_{gas}^2},$$ where $c_{gas}^2$ is the sound speed of the hot gas alone (i.e. $P_{gas}/\rho$). Assuming that the magnetic diffusivity is dominated by electron-ion collisions then $\eta/c_{gas}^2\approx 2\times 10^{-11} T_6^{-5/2}$ seconds. 
Consequently, for AGN, for which $T_6$ is of order unity, $\alpha$ is of order $10^{-2}$ to $10^{-3}$, and $\Omega$ is no more than $10^{-3}$, we conclude that $\Delta P_\gamma\ll P_{gas}$ even if the resistive limit applies. Disks around smaller mass black holes will tend to have lower disk temperatures (by roughly a factor of $M_{BH}^{1/4}$) and larger values of $\Omega$ (by a factor of $M_{BH}^{-1}$) so even if we consider a solar mass black hole we still find that $\Delta P_\gamma$ will be less than $P_{gas}$ by at least a factor of $10^{-8}$. We conclude that our assumption that the radiation pressure does not vary significantly across a flux tube boundary is satisfied for all accretion disks likely to be dominated by radiation pressure. In fact, we can show that it is quite likely that such disks are always in the ideal fluid regime. Taking into account that the magnetic pressure is limited to the gas pressure alone in such disks, eq. (\[eq:diffcrit\]) can be modified to yield a criterion for the resistive regime. It is $$\left({\alpha P_\gamma\over P_{gas}}\right)^3 \left({c_{gas}^2\over \Omega\eta}\right)\lesssim 16. \label{eq:diffcrit1}$$ Taking into account eq. (\[eq:alpha\]) and assuming that the resistivity is dominated by electron-ion collisions this implies that $$\left({\Gamma_{dynamo}\over \Omega}\right)^3 \Omega^{-1} T_6^{5/2} \lesssim 3\times10^{-10}, \label{eq:diffcrit2}$$ where $\Omega$ is given in radians per second. Since $\Gamma_{dynamo}/\Omega\sim\alpha$ for normal, i.e. gas pressure dominated, accretion disks, and phenomenological determinations of $\alpha$ in such disks tend to give values in the range $10^{-1}$ to $10^{-2}$ this makes it seem relatively unlikely that a realistic model of a radiation pressure dominated accretion disk could be in the resistive regime. We still need to determine whether or not the magnetic dynamo is suppressed by viscosity in radiation pressure dominated disks. Applying eq. 
(\[eq:diffvis\]) to accretion disks and once again remembering that the matter pressure, rather than the total pressure, limits the local magnetic field strength, we find that the criterion for ignoring viscosity is $$4\pi^3< \left({\alpha P_\gamma\over P_{gas}}\right)^2 {c_{gas}^2\over \Omega\nu}, \label{eq:z9}$$ where we have ignored $C_d$ and $\gamma$ as being close to unity. The viscosity is apt to be dominated by the photon shear viscosity which is ([@t30], [@smt71]) $$\nu={8\over 9} {P_\gamma\over\rho c^2} {c\over n_e\sigma_T}.$$ Once again discarding constants of order unity we see that since $${P_\gamma c\over n_e\sigma_T H}\sim\alpha\Sigma c_s^2\Omega,$$ where $\Sigma$ is the matter column density in the disk, it follows that $$\nu\sim {\alpha H c_s^3\over c^2}. \label{eq:z10}$$ Combining eqs. (\[eq:z9\]) and (\[eq:z10\]) we find that viscosity will not dominate the magnetic field dynamics when $$4\pi^3\lesssim \left({\alpha P_\gamma\over P_{gas}}\right)\left({c\over c_s}\right)^2, \label{eq:z11}$$ or $$4\pi^3\lesssim {\Gamma_{dynamo}\over \Omega}\left({c\over r\Omega}\right)^2 \left({r\over H}\right)^2. \label{eq:z12}$$ If the magnetic dynamo is an $\alpha$-$\Omega$ dynamo driven by purely local processes, then $\Gamma_{dynamo}/\Omega$ is some number less than one. In the internal wave driven dynamo model $\Gamma_{dynamo}\sim (H/r)^k\Omega$, where $k$ is between 1 and $1.5$, a result which is also consistent with a number of phenomenological studies of accretion disk models ([@mo83], [@mm84], [@m84], [@s84], [@mo85], [@lpf85], [@cwp86], [@hw89], [@mw89], [@mwa89], and [@c94]). In either case we see that whereas the right hand side of eq. (\[eq:z12\]) is apt to be quite large in most disks, near the event horizon of a black hole accreting near the Eddington limit, so that $r\Omega$ can approach $c$ and $H$ can approach $r$, this inequality may not be satisfied. 
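Eq. (\[eq:z12\]) is easy to evaluate for particular disk parameters. A sketch (Python); the two example parameter sets are illustrative, not fits to any particular source, and order-unity factors are dropped as in the text:

```python
import math

def dynamo_survives_viscosity(gamma_over_omega, v_orb_over_c, h_over_r):
    """Check eq. (z12): 4*pi^3 <~ (Gamma/Omega) * (c/(r*Omega))^2 * (r/H)^2.
    v_orb_over_c is r*Omega/c, h_over_r is the disk aspect ratio H/r."""
    rhs = gamma_over_omega / (v_orb_over_c ** 2 * h_over_r ** 2)
    return rhs > 4.0 * math.pi ** 3

# A thin disk far from the hole easily satisfies the inequality;
# near the horizon (r*Omega -> c, H -> r) it can fail.
```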
In other words, near the very inner edge of accretion disks around black holes the magnetic dynamo could fail altogether due to photon viscosity. One consequence of these results is that we can estimate the fraction of the energy generated by dissipation within the disk which is ejected in the form of rising magnetic flux tubes. The magnetic energy density, $E_B$, is roughly $\alpha P_{tot}$. The magnetic energy flux is just $$F_B\sim \alpha P_{tot}V_b\sim \alpha P_{tot}c_s\left({V_b\over c_s}\right), \label{eq:z6}$$ where $\alpha P_{tot} c_s$ is the vertically integrated energy generation rate (and therefore approximately equal to the radiative flux from the disk). Comparing eqs. (\[eq:diskrise\]) and (\[eq:z6\]) we see that the fraction of energy carried away by rising flux tubes is roughly $\alpha$ for a normal disk. This energy is likely to be eventually dissipated as nonthermal radiation from the disk chromosphere and corona. For a radiation pressure dominated disk eq. (\[eq:diskrise\]) must be replaced by eq. (\[eq:vb1a\]) and the fraction of a disk’s energy budget carried by rising flux tubes is $\alpha P_\gamma/P_{gas}$ or $\Gamma_{dynamo}/\Omega$. The latter expression will be approximately correct regardless of whether or not the disk is dominated by radiation pressure. We conclude that unless $P_\gamma>P_{gas}(\Gamma_{dynamo}/\Omega)^{-1}$ we can model a radiation pressure dominated disk using a dimensionless viscosity which couples only to the gas pressure, provided that purely hydrodynamic effects do not contribute a significant viscosity (as they will in the internal wave driven dynamo model). Since $\alpha\sim\Gamma_{dynamo}/\Omega$ in a gas pressure dominated disk this is equivalent to limiting the radiation pressure to a value no more than two or three orders of magnitude greater than the gas pressure. 
However, this is less of a limit than it might appear, since coupling dissipation to gas pressure alone causes the ratio $P_{\gamma}/P_{gas}$ to rise quite slowly with decreasing radius. In a sense this result is anti-climactic. This kind of model has been previously proposed ([@le74]) as a way to avoid the severe instabilities which would otherwise occur in an accretion disk with a local viscosity coupled to the total pressure ([@prp73], [@le74], [@ss76]). In fact, magnetic buoyancy has been specifically cited as a mechanism which might limit dissipation to a rate proportional to the gas pressure rather than the total pressure ([@el75], [@c81], [@sc81], and [@sr84]). In a similar vein, Sakimoto & Coroniti (1989) claimed that any model for angular momentum transport due to global magnetic stresses proportional to the total pressure could not be internally self-consistent. However, their model assumed that angular momentum transport was due to global Reynolds stresses rather than the Velikhov-Chandrasekhar instability. Moreover, they lacked any clear criterion for the flux tube radius. Consequently, their result was expressed as a preference for coupling to gas pressure rather than total pressure, given a choice between the two, rather than a derivation of the correct coupling. What we have shown here is that given our model for MHD turbulence, and the assumption that the Balbus-Hawley mechanism is responsible for angular momentum transport in accretion disks, the dissipation rate is proportional to the product of the dynamo growth rate and the gas pressure.

Conclusions
===========

We have proposed a model for the distribution of the magnetic field in a highly conducting, turbulent medium with a high $\beta$. The basic feature of the model is that the magnetic flux is distributed in bundles of small radius and large Alfvén velocity. 
The typical scale of flux tube curvature is the scale at which the turbulent kinetic energy density and the average magnetic field energy density are in equipartition. This is much larger than the typical flux tube radius, which is set by the condition that the tubes be marginally stiff to fluid motions on the curvature scale. The skin depth of the flux tubes can be smaller still, depending on the regime being considered. The direction of the magnetic field inside the flux tubes is strongly correlated over all scales less than the curvature scale, i.e. the magnetic field does not show numerous reversals on small scales. This structure implies efficient reconnection, allows the magnetic field and the bulk of the plasma to move independently, and yet retains enough coupling between the two that the basic notion of a fast dynamo remains plausible. We have not attempted to rederive mean-field dynamo theory from this model, nor find a replacement for it based on the dynamics of the flux tubes. We note only that the basic features of mean field dynamo theory, twisting due to forcing by the surrounding fluid flow and reconnection, are inevitable parts of this model. Applying the model to stars we can see that the kind of substructure observed in the sun is the inevitable result of a dynamo buried at the base of the convective zone. Applying the model to accretion disks we see that, as previously claimed, magnetic flux loss from accretion disks is relatively inefficient and proceeds at a rate that scales with the rate for vertical turbulent diffusion. In addition, we have found that for values of $P_\gamma/P_{gas}$ moderately greater than one the dissipation couples only to the gas pressure. We have made no attempt to apply this model to the galactic magnetic field. There are several reasons for this. 
First, the mean magnetic pressure in the disk of the galaxy probably exceeds the thermal pressure, although it is in rough equipartition with the turbulent pressure and the cosmic ray pressure. This violates our assumption of large $\beta$. Second, the disk is filled with supersonic turbulence whereas we have taken $V_T/c_s$ as a small parameter. Third, the magnetic field of the galaxy interacts with both the gas in the disk, and the cosmic rays. It is not clear to what extent the latter can be treated as a fluid, nor what their role might be in the galactic dynamo. Fourth, the galactic disk is a highly inhomogeneous environment, with many local sources of outflow, complete with entrained particles and fields. This makes it unlikely that the model we have presented here, based on turbulent cells whose internal properties are statistically homogeneous, can be applied. It is intriguing to note that the proposed distribution of magnetic flux in an ideal, turbulent fluid seems to maximize the dissipation of the turbulent energy, at least in the ideal fluid regime. A qualitative argument along these lines is as follows. Consider a turbulent cell threaded by some fixed amount of flux. The rate at which turbulent energy is dissipated in an unmagnetized eddy is fixed by the large scale eddy turnover rate. In order to enhance this rate of dissipation the magnetic field needs to absorb energy directly from the large scale eddies and transfer it to some much smaller scale on a short time scale. There are basically two ways this could happen. First, the magnetic field might be gathered into flux tubes which are stiff on the largest scale of turbulence. In this case kinetic energy in the large scale eddies is dissipated in the turbulent wakes behind the flux tubes at a rate $\sim V_T/r_t\gg V_T/L_T$. 
Second, the magnetic field might be pliant on large scales, but constantly reconnecting on smaller scales so that energy absorbed by the field is immediately transferred to smaller scale field loops which rapidly collapse or are folded into smaller and smaller loops. In the first case the energy dissipation rate induced by the magnetic field is $$\dot E_T\approx N_T \rho V_T^2 (r_t^2L_T){V_T\over r_t},$$ where $N_T$ is the total number of flux tubes. The constraint imposed by the external magnetic flux can be expressed in this case as $N_TV_{At} r_t^2\equiv \Phi_T$, where $\Phi_T$ is a constant. Consequently, $$\dot E_T\approx {\rho V_T^3 L_T\Phi_T\over r_tV_{At}}. \label{eq:z1}$$ The condition that these flux tubes be stiff is, ignoring constants of order unity, the condition that $$V_{At}^2r_t\ge V_T^2L_T. \label{eq:z2}$$ Comparing eqs. (\[eq:z1\]) and (\[eq:z2\]) we see that the energy dissipation rate is maximized if (\[eq:z2\]) is just marginally satisfied and if $V_{At}$ is as large as possible, i.e. $V_{At}\approx c_s$. In this limit the energy dissipation rate due to the presence of the magnetic field is $$\dot E_T\approx \rho c_s V_T L_T\Phi_T. \label{eq:z3}$$ In the second case the magnetic field will absorb and dissipate energy at a rate proportional to its total energy times the shear due to large scale flows, i.e. $$\dot E_T\approx E_B {V_T\over L_T}.$$ The problem of maximizing the energy dissipation rate is equivalent to maximizing the magnetic energy density subject to the constraints implied by $\Phi_T$ and the dynamics of the turbulence. We can express the total magnetic energy as $$E_B\approx \Phi_l l V_{At}\left({L_T\over l}\right)^3,$$ where $l$ is the scale on which the flux tubes can resist the local fluid motions, and $\Phi_l$ is the magnetic flux, divided by a factor of $(4\pi\rho)^{1/2}$, across a typical volume of size $l$. Since the magnetic field lines are essentially random walks on larger scales we can use eq. 
(\[eq:phran\]) to obtain $$\dot E_T\approx \Phi_T V_{At}V_T\left({L_T\over l}\right).$$ Comparing this result to eq. (\[eq:z3\]) we see that if $V_{At}=c_s$ then weaker and more numerous flux tubes dissipate energy more efficiently than a few rigid flux tubes, provided the more numerous flux tubes do not interfere with the turbulent cascade. The latter condition is particularly important, since a uniform magnetic field with an energy density greater than the energy density of local eddies can be shown to strongly inhibit energy dissipation ([@k65], [@dc90]) by replacing the usual turbulent cascade with Alfvénic turbulence. When the number of flux tubes in turbulent eddies of size $l$, $N_l$ exceeds $l/r_t$, then even if the flux tubes are not completely stiff on this scale, their mutual shadowing implies that most of the energy on this scale goes into the magnetic field, which can dissipate this energy only through Alfvénic turbulence. Mutual shadowing is moderate if $N_lr_t\lesssim l$. Since $$\Phi_l\sim N_l V_{At} r_t^2\sim V_{At}r_t l\sim \Phi_T {l\over L_T},$$ it follows that $$V_{At}r_t \sim {\Phi_T\over L_T}\approx {V_l^2 l\over V_{At}},$$ where the last expression is just a restatement of the fact that $l$ is defined as the scale of marginal stiffness. The ratio of $V_{At}$ to $l$, which must be maximized to maximize energy dissipation, is therefore proportional to $V_l^2$. We conclude that the energy dissipation rate is maximized when $l$ is maximized. Our last equation implies that $V_l^2l$, and therefore $V_l^2$, is maximized when $V_{At}$ is maximized, which brings us back to the condition that $V_{At}\approx c_s$. Combining this with our previous results we find $$E_B\sim \rho V_l^2 L_T^3,$$ $$\Phi_T\sim {lV_l^2 L_T\over c_s},$$ and $$r_t\sim {\Phi_T\over L_Tc_s}\sim l{\cal M}_l^2.$$ Adding the assumption that $V_l^2\propto l^n$ will allow us to rederive, in less exact form, the major results of §II. 
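The chain of estimates above can be condensed into a short sketch (Python). The relations follow the text ($V_{At}\approx c_s$ and marginal stiffness $V_{At}^2 r_t\approx V_l^2 l$); the normalizations are order-of-magnitude only, with all constants of order unity dropped:

```python
def tube_radius(l, V_l, c_s):
    # marginal stiffness at scale l with V_At ~ c_s:
    # V_At^2 r_t ~ V_l^2 l  =>  r_t ~ l * (V_l / c_s)^2 = l * M_l^2
    return l * (V_l / c_s) ** 2

def total_flux(l, V_l, L_T, c_s):
    # Phi_T ~ l V_l^2 L_T / c_s, from the last set of relations above
    return l * V_l ** 2 * L_T / c_s
```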
The only remaining result is the correlation between flux tubes on scales smaller than $l$. This plausibly follows from the condition that the turbulent wakes of the individual flux tubes can be made maximally efficient at dissipation if they overlap at every scale. It appears that the model proposed here is equivalent to claiming that the magnetic field in a turbulent flow is an example of a [*dissipative structure*]{}, an ordered state which promotes the dissipation of energy and the production of entropy. This presents a sharp contrast to the effect of a more smoothly distributed magnetic field ([@k65], [@dc90]) which inhibits the decay of turbulent eddies and their eventual dissipation. This model contains several implications for numerical MHD calculations. First, a simulation which starts from a diffuse magnetic field (e.g. [@hgb94]) embedded in a turbulent fluid will show a strong initial growth in the mean magnetic energy density, [*even if there is no dynamo at work*]{}, with an e-folding rate close to the eddy turnover rate. If the calculation is compressible with a Mach number close to one (in the sense of satisfying eq. (\[eq:diffcrit\])) then the final magnetic energy density will be roughly the geometric mean between its initial value and the thermal pressure of the surrounding fluid times $w$, i.e. times a factor of order three or four. A true dynamo can be distinguished from this effect only by a careful examination of the large scale distribution of magnetic flux, or by starting from a state consisting of flux tubes with $V_A\sim c_s$. Claims regarding the ability of the Velikhov-Chandrasekhar instability to support a dynamo should be evaluated with this point in mind, especially since current simulations show a strong dependence on the initial state, suggesting that no dynamo is present. However, given the current state of three dimensional MHD codes, eq. (\[eq:diffvis\]) represents a major obstacle to producing realistic simulations. 
The minimum value of ${\cal M}^2$ which avoids the viscous regime is of order $120$ divided by the Reynolds number. This means that the current generation of codes is limited to very slightly subsonic turbulence, or a value of the resistivity high enough to satisfy eq. (\[eq:svmr\]). We have already noted that satisfying this criterion is also difficult. The presence of a large scale shear evidently prevents the strongly viscous regime from producing a completely stagnant situation ([@hgb94]); nevertheless the flux tube dynamics should still be very different in this regime. The field amplification due to flux tube formation is unlikely to play a major role in astrophysical objects, since the initial state is normally unrealistic. (The early galaxy may be one exception to this rule.) Second, the failure to create a simulation in which the fluid is in the resistive or ideal fluid regimes will prevent the magnetic energy density from reaching equipartition with the turbulent flow, even in the presence of a strong dynamo. Current MHD simulations of stellar convection suffer from this difficulty, and should not be taken as realistic models of stellar dynamos. Third, such failures can occur even for large values of the magnetic and fluid Reynolds numbers. Success is most likely for compressible codes with large values of $V_T/c_s$, i.e. not too much less than one, or for incompressible codes with $\eta/\nu\gg 1$. In particular, one can go from the viscous regime to the resistive regime for a simulation of incompressible MHD turbulence most easily by allowing the value of $\eta$ to be much larger than the minimum value given by the nature of the computer code. Improving the code by lowering $\eta$ can actually result in [*less*]{} realistic results. On the other hand, current numerical codes are likely to prove essential in testing the model proposed in this paper, even if they can’t be used to simulate realistic situations. 
The model proposed here makes some specific predictions concerning the saturated state of the magnetic field in simulations with strong dynamos. (The presence of a dynamo may be necessary to prevent the magnetic field from disappearing when the imposed magnetic flux is very small, or zero). In particular, we predict the existence of a critical Reynolds number $\sim 60/C_d$ (where the Reynolds number is defined with the inverse wavenumber of the strongest flows as the fiducial length). Simulations with smaller Reynolds numbers should show flux tubes with internal Alfvén velocities $\sim V_T$, and a flux tube radius (or alternatively, the magnetic Taylor microscale) between $C_d/2k_T$ and $30 \nu/V_T$. Simulations that start with a very weak uniform magnetic flux will evolve toward a state in which each large scale eddy has one flux tube, with the minimum flux tube radius, and an average magnetic field energy which is a fraction, approximately equal to $0.04C_d^2$, of the kinetic energy density. If the imposed magnetic flux is too large to be accommodated within so few flux tubes, then the magnetic energy density will exceed its minimal value by the ratio of the total magnetic flux to the maximum value consistent with the minimal state. Initially, the increased magnetic energy will be reflected in a proportional increase in the area of each flux tube (or an increase in the magnetic Taylor microscale proportional to the square root of the flux). However, once the average flux tube radius reaches its maximum value it will stabilize and any further increases in magnetic energy will be accommodated in additional flux tubes of the same large size. We note again that the ratio of the maximum and minimum flux tube radii is inversely proportional to the Reynolds number of the simulation. 
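As a rough aid to comparing these predictions with a given simulation, the quoted quantities can be tabulated as functions of the simulation parameters. This is a hedged sketch: the input values of $C_d$, $k_T$, $\nu$, and $V_T$ are placeholders, and the formulas are simply the scalings quoted in the paragraph above.

```python
# Hedged sketch: evaluate the predicted saturated-state quantities quoted in
# the text for assumed simulation parameters (all input values are placeholders).
C_d = 1.5          # drag coefficient (assumed)
k_T = 3.0          # wavenumber of the strongest flows (assumed)
nu = 1.0 / 130.0   # kinematic viscosity (assumed)
V_T = 1.0          # rms turbulent velocity (assumed)

Re = V_T / (k_T * nu)          # Reynolds number as defined in the text
Re_crit = 60.0 / C_d           # predicted critical Reynolds number
r_small = C_d / (2.0 * k_T)    # one bound on the flux tube radius
r_visc = 30.0 * nu / V_T       # the other (viscous) bound on the radius
E_frac = 0.04 * C_d**2         # minimal-state magnetic/kinetic energy fraction

print(Re, Re_crit, r_small, r_visc, E_frac)
```

For these placeholder values the simulation sits just above the predicted critical Reynolds number, and the two bounding radii are comparable, which is the regime discussed in the appendix below.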
The value of the resistivity will have little effect on these predictions as long as the magnetic Reynolds number is large, and the resistivity is insufficient to damp the dynamo before the field has had a chance to form flux tubes. As we move to higher Reynolds numbers, i.e. larger than the critical value cited above, we should see both the ratio of the average magnetic energy density to the turbulent kinetic energy density, and the ratio of the magnetic Taylor scale to the eddy size, fall off inversely with the Reynolds number. The maximum magnetic flux consistent with $\sim 1$ flux tube per eddy will drop at the slightly faster rate of $Re^{-3/2}$ as the Reynolds number is increased. Initial states with greater magnetic flux will saturate at a higher magnetic energy density (scaling linearly with the initial magnetic flux) but with flux tubes of the same radius. Everywhere in the strongly viscous regime, which is the one accessible to current numerical simulations, the radius of curvature of the field lines will be $\sim L_T/4$. The contrast between this scale, which may be approximated by $[\int B^4dV/\int ((\vec B\cdot\vec\nabla)\vec B)^2 dV]^{1/2}$, and the magnetic Taylor microscale will become increasingly obvious for Reynolds numbers above the critical value. The simulations of Nordlund et al. (1992) and Tao et al. (1993) are consistent with these predictions (modulo some uncertainty about the actual effects of varying $\eta$) but have similar Reynolds numbers, bracketing the dividing line given in eq. (\[eq:modvisc\]). The division between large and small scale magnetic fields, an initial step in traditional mean-field theory, is not particularly useful here. The small scale features of this model, the flux tube radius and the flux tube skin depth, are intimately connected to the dynamics of the large scale field. This makes it difficult to compare this treatment of magnetic field dynamics to recent work in mean-field theory (e.g. [@cv91], [@gd94]). 
However, the two approaches do have different predictions for numerical simulations. For example, Gruzinov & Diamond predict that the dynamical behavior of the large scale magnetic field will change dramatically once its amplitude exceeds $(\rho V_T^2/R_m)^{1/2}$, while the small scale magnetic field reaches equipartition with the turbulent energy density. Their theory does not predict the existence of a critical value of the Reynolds number, or a decline in $E_B/E_T$ at high Reynolds number. Moreover, it suggests a final state which is quite sensitive to $\eta$. It follows that a confirmation of the predictions in the preceding paragraph would be a strong argument for using the flux tube dynamics suggested here rather than mean-field theory; conversely, the failure of those predictions, together with a confirmation of the predictions of Gruzinov and Diamond, would have the opposite effect. It is appropriate to pause at this moment and remember what is not included here. We have included only the most basic features of fluid turbulence, meaning that we have characterized the fluid motion by a large scale eddy size and the velocity on that scale. We have argued that the smaller eddies play very little role in the dynamics of the magnetic field, except for the turbulent wakes generated by the flux tubes themselves. Real turbulence is often characterized by intermittent coherent structures. We have made no allowance for the effects of such structures and make no predictions concerning simulations which explicitly include coherent flows. Realistic astrophysical situations may include cases where turbulent motion is driven on a variety of scales. Our results suggest that the most important scale is the one where most of the turbulent energy resides, at least for magnetic fields in equipartition with the turbulence, but in cases where the energy peak is broadly distributed, or where there are two or more competing peaks in the Fourier spectrum, the magnetic field may show more complicated structure. 
Nevertheless, within the limitations imposed by our neglect of these points, our model appears to explain many of the features observed in numerical situations and some aspects of the solar magnetic field. It is appropriate to regard it as a crude sketch of a more complete theory. This work was supported by NASA grant NAGW-2418. I would like to thank several people for useful discussions, including Fausto Cattaneo, Jung-Yeon Cho, Patrick Diamond, Robert Duncan, Russell Kulsrud, Norman Murray, Stefano Migliuolo, Christopher Thompson, and Samuel Vainshtein. I would also like to thank the anonymous referee of a previous, unpublished note, who persuaded me that magnetic buoyancy cannot be understood without a theory for predicting flux tube radii under different physical conditions. The initial impetus for this work came from a visit to the Canadian Institute for Theoretical Astrophysics. A Comparison to Numerical Work ============================== In a recent paper Tao et al. ([@tcv93]) simulated three dimensional incompressible MHD turbulence with an imposed helicity. The simulation took place in a box with sides of length $2\pi$. The fluid had $\nu=\eta=1/130$. The turbulence was driven by imposed bulk forces tuned so that the rms fluid velocity was one. The turbulence was supported by forces distributed in phase space from $|k|=2$ to $|k|=4$. They found that the magnetic field energy density tended to saturate at values far below equipartition with the fluid kinetic energy. Here we show that their simulation was solidly inside the viscous regime, and that their results can be understood in terms of the formulae given in this paper. We begin by noting that $c_s=\infty$ so that ${\cal M}=0$. Taking $k_T\approx 3$ we note that $$Re\equiv\left( {V_T\over k_T\nu}\right)\approx 43.$$ Comparing to eq. 
(\[eq:modvisc\]) we see that this places us in, or close to, the regime of moderate Reynolds numbers where we expect the Alfvén velocity in the flux tubes to be close to $V_T$. Following eq. (\[eq:q1\]) resistivity will be negligible provided that the Reynolds number exceeds $4/C_d^2$, which will certainly be true here. It follows that we are well inside the viscous regime. In all of their simulations they started with a uniform magnetic flux. In the first two cases the starting value of $V_A$ was $32^{-1}$ and $256^{-1}$ and the simulations evolved towards the same final state, which we can identify with our minimal viscous state. In the last case they took an initial value of $30^{-1/2}$, and reached a somewhat different state. We may anticipate their results to the point of noting that the typical magnetic Taylor microscale they found in their minimal state was $\sim 0.2$, which should be close to the typical flux tube radius. The appropriate value of $C_d$ for the flow past the flux tubes is $\sim 1.5$ ([@r46]). This implies that the dividing line for the moderate Reynolds number regime is (from eq. (\[eq:modvisc\])) at $Re=41$. In other words, we expect the minimal flux tubes to be close in size to the maximal flux tubes (given in eq. (\[eq:rvisc\])) with a typical size of $$r_t\approx 0.25$$ We note that this is 25% larger than the observed value, but considering the crudity of our calculations this counts as excellent agreement. The minimal state should have one flux tube per turbulent cell, for a total energy density of $$E_B\approx {w\over 2} V_T^2 {\pi r_t^2\over L_T^2}\approx 0.044,$$ where we have taken $w=2$, the appropriate value for the viscous regime. The observed value is $E_B=0.05$. We note that the flux per turbulent cell in the minimal state (defined in terms of the area integral of $V_A$) is 0.13. The sign of this flux will vary from one turbulent cell to the next. 
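The arithmetic in this comparison is straightforward to reproduce. The sketch below assumes the convention $L_T=2\pi/k_T$ for the eddy scale, which is not stated explicitly in the text but is consistent with the numbers quoted:

```python
import math

# Hedged sketch reproducing the Tao et al. comparison above. The convention
# L_T = 2*pi/k_T is an assumption chosen to match the quoted numbers.
V_T = 1.0              # rms fluid velocity in the simulation
nu = 1.0 / 130.0       # viscosity used by Tao et al.
k_T = 3.0              # effective forcing wavenumber
w = 2.0                # appropriate value for the viscous regime
r_t = 0.25             # estimated maximal flux tube radius from the text

Re = V_T / (k_T * nu)              # ~43, solidly inside the viscous regime
L_T = 2.0 * math.pi / k_T          # assumed eddy scale
E_B = (w / 2.0) * V_T**2 * math.pi * r_t**2 / L_T**2   # minimal-state energy

print(Re, E_B)   # Re ~ 43.3, E_B ~ 0.045 (observed value: 0.05)
```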
However, if the initial value of $V_A$ exceeds $\sim 0.05$ (which is the lower limit from eq. (\[eq:zz\]) for this simulation) then the simulation will be unable to fit its total flux into a minimal state configuration. In other words, the simulation with $V_A=32^{-1}$ initially is below the dividing line by a factor of about 1.7. The simulation with $V_A$ initially set to $30^{-1/2}=0.18$ will, according to our predictions, settle into a state with flux tubes of approximately the same size, but with an energy density equal to $(w/2) V_T\Phi_{tot}$, or $E_B\approx 0.18$. Tao et al. find an energy of about $0.18$ and a radius of about $0.3$. The fact that the area per flux tube goes up in this case by a factor of about 2, whereas the total magnetic energy rises by a factor of almost 4, is due to the increased number of flux tubes per turbulent cell. The discrepancy in radius in this third case can be understood as following from this simulation being slightly deeper into the regime of moderate Reynolds numbers than we have estimated, so that a larger initial flux increases the flux tube radius slightly. Since the minimal and maximal flux tube radii differ by a factor proportional to $\nu$ in this regime, we expect that the two will converge for a similar simulation with $\nu\approx 0.005$, or smaller by a factor of $3/2$. To be more specific, we expect that the larger of the two radii will shrink down to the smaller as $\nu$ decreases. Still smaller values of $\nu$ should produce an Alfvén velocity in the flux tubes which is larger than $V_T$ by a factor that rises inversely as the square root of $\nu$, while the flux tube radius falls with $\nu$. If $\nu$ becomes small enough, then the inequality in eq. (\[eq:svmr\]) will be satisfied and the simulations will finally reach the resistive limit. In their present form the simulations of Tao et al. fail to reach this limit by a factor of roughly $25$. 
Since we require that $\eta$ must be no larger than $\nu$, this implies that $\nu$ has to be lowered by a factor of roughly 25 if we keep $\eta=\nu$. A better strategy would be to lower $\nu$ and leave $\eta$ fixed, in which case the Reynolds number need only go up by a factor $\sim5$ to reach the more realistic resistive regime. Another useful comparison can be made with the work of Nordlund et al. (1992) who simulated a three dimensional dynamo in a convective flow. They found no field amplification for $\eta\ge\nu$, but for $\nu=2\eta$ and $\nu=4\eta$ their simulations evolved towards a unique stationary state with a kinetic energy $\sim 16$ times the magnetic energy. The Mach number of the flow was of order $10^{-2}$ and the Reynolds number was $\sim 300$, based on the thickness of the convective layer, which is roughly $95$ by the definition we have used here. We see from these parameters that the flow was in the viscous regime, a bit above the dividing line between moderate Reynolds numbers (where $V_A\approx V_T$) and large Reynolds numbers. According to the model given here their final state should be a minimal energy state with roughly one flux tube per turbulent cell and an energy ratio (from eq. (\[eq:z7\])) of $${V_A^2\over V_T^2}\approx {C_dw\pi^2\over 8} {\pi\over 300}\approx 0.039,$$ where we have used $w\approx 2$ and $C_d\approx 1.5$. This is low by a factor of $\sim 1.6$, but given the more complicated nature of their simulation this is still reasonable agreement with our model. In fact, given that the simulation of Tao et al. indicated that we underestimated the Reynolds number dividing the viscous regime into moderate and strong Reynolds numbers by a factor of $\sim 1.5$, and that the ratio of magnetic energy to kinetic energy should drop linearly with the inverse of the Reynolds number above this threshold, the entire discrepancy may be due to this same point. This would imply that the right hand side of eq. 
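The energy ratio quoted for the Nordlund et al. comparison can be checked the same way. This is a sketch of the quoted formula only; the factor $\pi/300$ is the inverse Reynolds number exactly as it appears in the equation above:

```python
import math

# Hedged check of the Nordlund et al. energy-ratio estimate quoted above.
C_d = 1.5                  # drag coefficient used in the text
w = 2.0                    # viscous-regime value used in the text
Re_inv = math.pi / 300.0   # inverse Reynolds number factor as written

ratio = C_d * w * math.pi**2 / 8.0 * Re_inv   # V_A^2 / V_T^2
print(ratio)   # ~0.039, vs. the observed magnetic/kinetic ratio of 1/16 ~ 0.063
```

The observed ratio $1/16\approx 0.063$ is indeed larger by a factor of $\sim 1.6$, as the text states.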
(\[eq:modvisc\]) should be multiplied by a factor of 1.5. Nordlund et al. do not quote an average flux tube radius or Alfvén velocity in the flux tubes. These two simulations have Reynolds numbers which are roughly similar, which makes it harder to draw firm conclusions from their agreement with our model. Moreover, the details of convective turbulence are different from turbulence driven at large scales by a random external force. However, we note that these simulations do show the expected qualitative behavior, which is that for the minimal state of a viscously dominated simulation the ratio of magnetic energy to kinetic energy drops as the Reynolds number increases. Allen, C.W. 1973, Astrophysical Quantities, Athlone Press: London Arnold, V.I. 1965, C.R. Acad. Sci. Paris, 261, 17 Balbus, S.A. & Hawley, J.F.  1991, , 376, 214 Batchelor, G.K. 1950, Proc. Roy. Soc. Lond., 201, 405 Biskamp, D. 1986, Phys. Fluids, 29, 1520 Cannizzo, J.K. 1994, , 435, 000 Cannizzo, J.K., Wheeler, J.C., & Polidan, R.S. 1986, , 330, 327 Cattaneo, F. & Vainshtein, S.I. 1991, , 376, L21 Chandrasekhar, S. 1961, Hydrodynamic and Hydromagnetic Stability, Oxford University Press: London Childress, S. 1970, J. Math. Phys., 11, 3063 Coroniti, F.V. 1981, , 244, 587 Coroniti, F.V. & Eviatar, A.1977, , 33, 189 DeLuca, E.E., Fisher, G.H., & Patten, B.M. 1993, , 411, 383 Diamond, P.H. 1994, private communication Drake, J.F. 1984, in Magnetic Reconnection in Space and Laboratory Plasma, ed. E.W. Hones, Geophysical American Union: Washington D.C., p. 61 Diamond, P.H. & Craddock, G.G.  1990, Comments Plasma Phys. Controlled Fusion, 13, 287 Du, Y. & Ott, E.  1993, J. Fluid Mech., 257, 265 Eardley, D.M. & Lightman, A.P. 1975, , 200, 187 Galanti, B., Sulem, P.L., & Pouquet, A. 1992, Geophys. Astrophys. Fluid Dynamics, 66, 183 Goldreich, P., Murray, N., Willette, G., & Kumar, P. 1991, , 370, 752 Gruzinov, A. & Diamond, P.H. 1994, Phys. Rev. Lett., 72, 1651 Hawley, J.F. & Balbus, S.A. 
1991, , 376, 223 Hawley, J.F., Gammie, C.F., & Balbus, S.A. 1994, submitted to Huang, M. & Wheeler, J.C. 1989, , 343, 229 Kraichnan, R.H. 1965, Phys. Fluids, 8, 1385 Lightman, A.P. & Eardley, D.M. 1974, , 187, 897 Lin, D.N.C., Papaloizou, J., & Faulkner, J. 1985, , 212, 105 Matthaeus, W. & Lamkin, S.L. 1986, Phys. Fluids, 29, 2513 Meyer, F. 1984, , 131, 303 Meyer, F. & Meyer-Hofmeister, E. 1984, , 104, L10 Mineshige, S. & Osaki, Y. 1983, , 35, 377 Mineshige, S. & Osaki, Y. 1983, , 37, 1 Mineshige, S. & Wheeler, J.C. 1989, , 343, 241 Mineshige, S. & Wood, J. 1989, , 241, 259 Nordlund, Å., Brandenburg, A., Jennings, R.L., Rieutord, M., Ruokolainen, J., Stein, R., & Tuominen, I. 1992, , 392, 647 Parker, E.N. 1957, J. Geophys. Res., 62, 509 Parker, E.N. 1978, , 221, 368 Petschek, H.E. 1964, in Proceedings of AAS-NASA Symp. on the Physics of Solar Flares, ed. W.W. Hess, NASA Spec. Pub. No. SP-50: New York, p. 425 Pringle, J.E., Rees, M.J., & Pacholczyk, A.G. 1973, , 29, 179 Rouse, H. 1946, Elementary Mechanics of Fluids, John Wiley & Sons: New York Sakimoto, P.J. & Coroniti, F.V. 1981, , 247, 19 Sakimoto, P.J. & Coroniti, F.V. 1989, , 342, 49 Sato, H., Matsuda, T., & Takeda, H. 1971, Prog. Theor. Phys. Suppl., 49, 11 Shakura, N.I. & Sunyaev, R. 1973, , 175, 613 Smak, J. 1984, Acta Astr., 34, 161 Spruit, H.C. 1983, in Solar and Stellar Magnetic Fields: Origin and Coronal Effects, ed. J.O. Stenflo, Reidel: Dordrecht, p. 41 Stella, L. & Rosner, R. 1984, , 277, 312 Stenflo, J.O. 1994, Solar Magnetic Fields, Kluwer Acad. Pub.: Dordrecht Stix, M. 1989, The Sun: an Introduction, Springer-Verlag: New York Strauss, H.R. 1988, , 326, 412 Sweet, P.A. 1958, in (IAU Symp. No. 6) Electromagnetic Phenomena in Cosmical Plasma, ed. B. Lehnert, Cambridge University Press: New York, p. 123 Tao, L., Cattaneo, F., & Vainshtein, S.I. 1993, to be published in Theory of Solar and Planetary Dynamos, ed. M.R.E. Proctor, P.C. Matthews, and A.M. 
Rucklidge, NATO ASI, Isaac Newton Lecture Note series Thomas, L.H. 1930, Q.J. Math., 1, 239 Vainshtein, S., Bykov, A.M., & Toptygin, I.N. 1993, Turbulence, Current Sheets, and Shocks in Cosmic Plasma, Gordon and Breach: Amsterdam Vainshtein, S.I. & Rosner, R. 1991, , 376, 199 Velikhov, E.P. 1959, Soviet JETP, 35, 1398 Vishniac, E.T. & Diamond, P.H. 1992, , 398, 561 Vishniac, E.T. & Diamond, P.H. 1994, in Accretion Disks in Compact Stellar Systems, ed. J.C. Wheeler, World Scientific Press: Singapore Vishniac, E.T., Jin, L., & Diamond, P.H. 1990, , 365, 552
--- abstract: 'This is an expository article on the theory of algebraic stacks. After introducing the general theory, we concentrate on the example of the moduli stack of vector bundles, giving a detailed comparison with the moduli scheme obtained via geometric invariant theory.' author: - | Tomás L. Gómez\ \ Tata Institute of Fundamental Research\ Homi Bhabha road, Mumbai 400 005 (India)\ `tomas@math.tifr.res.in` date: 9 November 1999 title: Algebraic stacks --- Introduction ============ The concept of algebraic stack is a generalization of the concept of scheme, in the same sense that the concept of scheme is a generalization of the concept of projective variety. In many moduli problems, the functor that we want to study is not representable by a scheme. In other words, there is no fine moduli space. Usually this is because the objects that we want to parametrize have automorphisms. But if we enlarge the category of schemes (following ideas that go back to Grothendieck and Giraud, and were developed by Deligne, Mumford and Artin) and consider algebraic stacks, then we can construct the “moduli stack”, which captures all the information that we would like in a fine moduli space. The idea of enlarging the category of algebraic varieties to study moduli problems is not new. In fact A. Weil invented the concept of abstract variety to give an algebraic construction of the Jacobian of a curve. These notes are an introduction to the theory of algebraic stacks. I have tried to emphasize ideas and concepts through examples instead of detailed proofs (I give references where these can be found). In particular, section \[sectionversus\] is a detailed comparison between the moduli *scheme* and the moduli *stack* of vector bundles. First I will give a quick introduction in subsection \[quick\], just to give some motivations and get a flavour of the theory of algebraic stacks. Section \[sectionstacks\] has a more detailed exposition. 
There are mainly two ways of introducing stacks. We can think of them as 2-functors (I learnt this approach from N. Nitsure and C. Sorger, cf. subsection \[subsfunctors\]), or as categories fibered on groupoids (this is the approach used in the references, cf. subsection \[subsgroupoids\]). From the first point of view it is easier to see in which sense stacks are generalizations of schemes, and the definition looks more natural, so conceptually it seems more satisfactory. But since the references use categories fibered on groupoids, after we present both points of view, we will mainly use the second. The concept of stack is merely a categorical concept. To do geometry we have to add some conditions, and then we get the concept of algebraic stack. This is done in subsection \[subsalgebraic\]. In subsection \[subsgroupspaces\] we introduce a third point of view to understand stacks: as groupoid spaces. In subsection \[subsproperties\] we define for algebraic stacks many of the geometric properties that are defined for schemes (smoothness, irreducibility, separatedness, properness, etc.). In subsection \[subspoints\] we introduce the concept of point and dimension of an algebraic stack, and in subsection \[subssheaves\] we define sheaves on algebraic stacks. In section \[sectionversus\] we study in detail the example of the moduli of vector bundles on a scheme $X$, comparing the moduli stack with the moduli scheme. Appendix A is a brief introduction to Grothendieck topologies, sheaves and algebraic spaces. In appendix B we define some notions related to the theory of 2-categories. Quick introduction to algebraic stacks {#quick} -------------------------------------- We will start with an example: vector bundles (with fixed prescribed Chern classes and rank) on a projective scheme $X$ over an algebraically closed field $k$. What is the moduli stack ${{\mathcal{M}}}$ of vector bundles on $X$?
I don’t know a short answer to this, but instead it is easy to define what a morphism from a scheme $B$ to the moduli stack ${{\mathcal{M}}}$ is. It is just a family of vector bundles parametrized by $B$. More precisely, it is a vector bundle $V$ on $B\times X$, flat over $B$, such that the restrictions to the slices $b\times X$ have prescribed Chern classes and rank. In other words, ${{\mathcal{M}}}$ has the property that we expect from a fine moduli space: the set of morphisms ${\operatorname{Hom}}(B,{{\mathcal{M}}})$ is equal to the set of families parametrized by $B$. We will say that a diagram $$\begin{aligned} \label{commdiag} \xymatrix{ {B} \ar[r]^f \ar[rd]_{g} & {B'} \ar[d]^{g'} \\ & {{{\mathcal{M}}}} }\end{aligned}$$ is commutative if the vector bundle $V$ on $B\times X$ corresponding to $g$ is isomorphic to the vector bundle $(f\times {\operatorname{id}}_X)^*V'$, where $V'$ is the vector bundle corresponding to $g'$. Note that in general, if $L$ is a line bundle on $B$, then $V$ and $V\otimes p^*_B L$ won’t be isomorphic, and then the corresponding morphisms from $B$ to ${{\mathcal{M}}}$ will be different, as opposed to what happens with moduli schemes. A $k$-point in the stack ${{\mathcal{M}}}$ is a morphism $u:{\operatorname{Spec}}k \to{{\mathcal{M}}}$, in other words, it is a vector bundle $V$ on $X$, and we say that two points are isomorphic if they correspond to isomorphic vector bundles. But we shouldn’t think of ${{\mathcal{M}}}$ just as a set of points, it should be thought of as a category. The objects of ${{\mathcal{M}}}$ are points[^1], i.e. vector bundles on $X$, and a morphism in ${{\mathcal{M}}}$ is an isomorphism of vector bundles. This is the main difference between a scheme and an algebraic stack: a scheme is a *set* of points, but an algebraic stack is a *category*, in fact a *groupoid* (i.e. a category in which all morphisms are isomorphisms). Each point comes with a group of automorphisms. 
Roughly speaking, a scheme (or more generally, an algebraic space [@Ar1], [@K]) can be thought of as an algebraic stack in which these groups of automorphisms are all trivial. If $p$ is the $k$-point in ${{\mathcal{M}}}$ corresponding to a vector bundle $V$ on $X$, then the group of automorphisms associated to $p$ is the group of vector bundle automorphisms of $V$. This is why algebraic stacks are well suited to serve as moduli of objects that have automorphisms. An algebraic stack has an atlas. This is a scheme $U$ and a surjective morphism $u:U \to {{\mathcal{M}}}$ (with some other properties). As we have seen, such a morphism $u$ is equivalent to a family of vector bundles parametrized by $U$, and we say that $u$ is surjective if for every vector bundle $V$ over $X$ there is at least one point in $U$ whose corresponding vector bundle is isomorphic to $V$. The existence of an atlas for an algebraic stack is the analogue of the fact that for a scheme $B$ there is always an *affine* scheme $U$ and a surjective morphism $U \to B$ (if $\{U_i\to B\}$ is a covering of $B$ by affine subschemes, take $U$ to be the disjoint union $\coprod U_i$). Many local properties (smooth, normal, reduced...) can be studied by looking at the atlas $U$. It is true that in some sense an algebraic stack looks, locally, like a scheme, but we shouldn’t take this too far. For instance the atlas of the classifying stack $BG$ (parametrizing principal $G$-bundles, cf. example \[quotient\]) is just a single point. The dimension of an algebraic stack ${{\mathcal{M}}}$ will be defined as the dimension of $U$ minus the relative dimension of the morphism $u$. The dimension of an algebraic stack can be negative (for instance, $\dim (BG)=-\dim(G)$). 
A coherent sheaf $L$ on an algebraic stack ${{\mathcal{M}}}$ is a law that, for each morphism $g:B \to {{\mathcal{M}}}$, gives a coherent sheaf $L_B$ on $B$, and for each commutative diagram like (\[commdiag\]), gives an isomorphism between $f^* L_{B'}$ and $L_B$. The coherent sheaf $L_B$ should be thought of as the pullback “$g^*L$” of $L$ under $g$ (the compatibility condition for commutative diagrams is just the condition that $(g'\circ f)^*L$ should be isomorphic to $f^* {g'}^* L$). Let’s look at another example: the moduli quotient (example \[quotient\]). Let $G$ be an affine algebraic group acting on $X$. For simplicity, assume that there is a normal subgroup $H$ of $G$ that acts trivially on $X$, and that $\overline G=G/H$ is an affine group acting freely on $X$ and furthermore there is a quotient by this action $X \to B$ and this quotient is a principal $\overline G$-bundle. We call $B=X/G$ the *quotient scheme*. Each point corresponds to a $G$-orbit of the action. But note that $B$ is also equal to the quotient $X/\overline G$, because $H$ acts trivially and then $G$-orbits are the same thing as $\overline G$-orbits. We can say that the quotient scheme “forgets” $H$. One can also define the *quotient stack* $[X/G]$. Roughly speaking, a point $p$ of $[X/G]$ again corresponds to a $G$-orbit of the action, but now each point comes with an automorphism group: given a point $p$ in $[X/G]$, choose a point $x\in X$ in the orbit corresponding to $p$. The automorphism group attached to $p$ is the stabilizer $G_x$ of $x$. With the assumptions that we have made on the action of $G$, the automorphism group of any point is always $H$. Then the quotient stack $[X/G]$ is not a scheme, since the automorphism groups are not trivial. The action of $H$ is trivial, but the moduli stack still “remembers” that there was an action by $H$. Observe that the stack $[X/\overline G]$ is not isomorphic to the stack $[X/G]$ (as opposed to what happens with the quotient schemes). 
Since the action of $\overline G$ is free on $X$, the automorphism group corresponding to each point of $[X/\overline G]$ is trivial, and it can be shown that, with the assumptions that we made, $[X/\overline G]$ is represented by the scheme $B$ (this terminology will be made precise in section \[sectionstacks\]). Stacks {#sectionstacks} ====== Stacks as 2-functors. Sheaves of sets. {#subsfunctors} -------------------------------------- Given a scheme $M$ over a base scheme $S$, we define its (contravariant) functor of points ${\operatorname{Hom}}_S(-,M)$ $$\begin{array}{rccc} {{\operatorname{Hom}}_S(-,M):} &{({{Sch}}/S)}& \longrightarrow &{({{Sets}})} \\ & {B} &\longmapsto &{{\operatorname{Hom}}_S(B,M)} \end{array}$$ where $({{Sch}}/S)$ is the category of $S$-schemes, $B$ is an $S$-scheme, and ${\operatorname{Hom}}_S(B,M)$ is the set of $S$-scheme morphisms. If we give $({{Sch}}/S)$ the étale topology, ${\operatorname{Hom}}_S(-,M)$ is a sheaf. A sheaf of sets on $({{Sch}}/S)$ with the étale topology is called a space. Then schemes can be thought of as sheaves of sets. Moduli problems can usually be described by functors. We say that a sheaf of sets $F$ is representable by a scheme $M$ if $F$ is isomorphic to the functor of points ${\operatorname{Hom}}_S(-,M)$. The scheme $M$ is then called the fine moduli scheme. Roughly speaking, this means that there is a one to one correspondence between families of objects parametrized by a scheme $B$ and morphisms from $B$ to $M$. \[defvectorbundle\] None of these examples are sheaves (then none of these are representable), because of the presence of automorphisms. They are just presheaves (=functors). For instance, given a curve $C$ over $S$ with nontrivial automorphisms, it is possible to construct a family $f:{{\mathcal{C}}}\to B$ such that every fiber of $f$ is isomorphic to $C$, but ${{\mathcal{C}}}$ is not isomorphic to $B \times C$. This implies that $M_g$ doesn’t satisfy the monopresheaf axiom. 
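In contrast with these moduli functors, the functor of points of a scheme is always a sheaf. As a standard illustration (added here; it is not in the original text), consider the affine line:

```latex
% Standard example: the functor of points of the affine line.
% For any S-scheme B one has, functorially in B,
\[
  \operatorname{Hom}_S(B,\mathbb{A}^1_S)\;\cong\;\Gamma(B,\mathcal{O}_B),
\]
% and since regular functions glue over any (\'etale) cover of B, the functor
% B \mapsto \Gamma(B,\mathcal{O}_B) is a sheaf on (Sch/S), as the general
% statement above asserts for the functor of points of an arbitrary scheme M.
```

Here there are no automorphisms to keep track of, which is exactly what fails for the moduli examples.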
This can be solved by taking the sheaf associated to the presheaf (sheafification). In the examples, this amounts to changing isomorphism classes of families to equivalence classes of families, where two families are equivalent if they are locally (using the étale topology over the parametrizing scheme $B$) isomorphic. In the case of vector bundles, this is the reason why one usually declares two vector bundles $V$ and $V'$ on $X \times B$ equivalent if $V{\cong}V'\otimes p_B^* L$ for some line bundle $L$ on $B$. The functor obtained with this equivalence relation is denoted ${\underline{{\mathfrak{M}}}}$ (and analogously for ${\underline{{\mathfrak{M}}}}^{s}$ and ${\underline{{\mathfrak{M}}}}^{ss}$). Note that if two families $V$ and $V'$ are equivalent in this sense, then they are locally isomorphic. The converse is only true if the vector bundles are simple (the only automorphisms are scalar multiplications). This will happen, for instance, if we are considering the functor ${\underline{{\mathfrak{M}}}}^{s}$ of stable vector bundles, since stable vector bundles are simple. In general, if we want the functor to be a sheaf, we have to use a weaker notion of equivalence, but this is not done because for other reasons there is only hope of obtaining a fine moduli space if we restrict our attention to stable vector bundles. Once this modification is made, there are some situations in which these examples are representable (for instance, stable vector bundles on curves with coprime rank and degree), but in general they will still not be representable, because in general we don’t have a universal family: Let $F$ be a representable functor, and let $\phi:F \to {\operatorname{Hom}}_S(-,X)$ be the isomorphism. The object of $F(X)$ corresponding to the element ${\operatorname{id}}_X$ of ${\operatorname{Hom}}_S(X,X)$ is called the universal family.
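The name "universal family" is justified by the following Yoneda computation (a sketch in the notation just introduced): every family $V\in F(B)$ is induced from $\mathcal{U}$ by pullback along its classifying morphism.

```latex
% Write U = phi(X)^{-1}(id_X) for the universal family and, for V in F(B),
% let f_V = phi(B)(V) : B -> X be the classifying morphism.
% Naturality of phi applied to U gives
\[
  \phi(B)\bigl(F(f_V)(\mathcal{U})\bigr)
  \;=\;\phi(X)(\mathcal{U})\circ f_V
  \;=\;{\operatorname{id}}_X\circ f_V
  \;=\;f_V
  \;=\;\phi(B)(V),
\]
% and since phi(B) is a bijection, F(f_V)(U) = V: every family is the
% pullback of the universal family along a unique morphism B -> X.
```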
When a moduli functor $F$ is not representable, so that there is no scheme $X$ whose functor of points is isomorphic to $F$, one can still try to find a scheme $X$ whose functor of points is an approximation to $F$ in some sense. There are two different notions. We say that a scheme $M$ corepresents the functor $F$ if there is a natural transformation of functors $\phi:F \to {\operatorname{Hom}}_S(-,M)$ such that - Given another scheme $N$ and a natural transformation $\psi:F \to {\operatorname{Hom}}_S(-,N)$, there is a unique natural transformation $\eta: {\operatorname{Hom}}_S(-,M)\to {\operatorname{Hom}}_S(-,N)$ with $\psi= \eta \circ \phi$. $$\xymatrix{ {F} \ar[d]^{\phi} \ar[rd]^{\psi} \\ {{\operatorname{Hom}}_S(-,M)} \ar[r]^{\eta} & {\operatorname{Hom}}_S(-,N)\\ }$$ This characterizes $M$ up to unique isomorphism. Let $({{Sch}}/S)'$ be the functor category, whose objects are contravariant functors from $({{Sch}}/S)$ to $(Sets)$ and whose morphisms are natural transformations of functors. Then $M$ represents $F$ iff ${\operatorname{Hom}}_S(Y,M)= {\operatorname{Hom}}_{({{Sch}}/S)'}({{\mathcal{Y}}},F)$ for all schemes $Y$, where ${{\mathcal{Y}}}$ is the functor represented by $Y$. On the other hand, one can check that $M$ corepresents $F$ iff ${\operatorname{Hom}}_S(M,Y)= {\operatorname{Hom}}_{({{Sch}}/S)'}(F,{{\mathcal{Y}}})$ for all schemes $Y$. If $M$ represents $F$, then it corepresents it, but the converse is not true. From now on we will usually denote a scheme and the functor that it represents by the same letter. A scheme $M$ is called a coarse moduli scheme if it corepresents $F$ and furthermore - For any algebraically closed field $k$, the map $\phi(k):F({\operatorname{Spec}}k) \to {\operatorname{Hom}}_S({\operatorname{Spec}}k, M)$ is bijective. In both cases, given a family of objects parametrized by $B$ we get a morphism from $B$ to $M$, but we don’t require the converse to be true.
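A standard example of the coarse notion (our illustration, not discussed above) is the $j$-line for elliptic curves: over $\mathbb{C}$ it corepresents the moduli functor and is bijective on geometric points, but it is not a fine moduli scheme, because the automorphism $-1$ of every elliptic curve obstructs the existence of a universal family.

```latex
% Coarse but not fine: the j-line for elliptic curves over C. The map
\[
  \phi(\mathbb{C}):\ \{\text{elliptic curves over }\mathbb{C}\}/\!\cong
  \ \xrightarrow{\ \sim\ }\
  {\operatorname{Hom}}({\operatorname{Spec}}\mathbb{C},\mathbb{A}^1),
  \qquad E\longmapsto j(E),
\]
% is a bijection, so the extra condition in the definition of coarse moduli
% scheme holds; but a family with all fibers isomorphic need not be pulled
% back from a single family, so the moduli functor is only corepresented,
% not represented, by the j-line.
```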
\[vb1\] Let $({{groupoids}})$ be the 2-category whose objects are groupoids, 1-morphisms are functors between groupoids, and 2-morphisms are natural transformations between these functors. A presheaf in groupoids (also called a quasi-functor) is a contravariant 2-functor ${{\mathcal{F}}}$ from $({{Sch}}/S)$ to $({{groupoids}})$. For each scheme $B$ we have a groupoid ${{\mathcal{F}}}(B)$, and for each morphism $f:B'\to B$ we have a natural transformation of functors ${{\mathcal{F}}}(f)$ that is denoted by $f^*$ (usually it is actually defined by a pullback). \[bbund\] Now we will define the concept of stack. First we have to choose a Grothendieck topology on $(Sch/S)$, either the étale or the fppf topology. Later on, when we define algebraic stacks, the étale topology will lead to the definition of a Deligne-Mumford stack ([@DM], [@Vi], [@E]), and the fppf topology to an Artin stack ([@La]). For the moment we will give a unified description. In the following definition, to simplify notation we denote by $X|_i$ the pullback $f^*_i X$ where $f_i:U_i \to U$ and $X$ is an object of ${{\mathcal{F}}}(U)$, and by $X_i|_{ij}$ the pullback $f^*_{ij,i} X_i$ where $f_{ij,i}:U_i \times_U U_j \to U_i$ and $X_i$ is an object of ${{\mathcal{F}}}(U_i)$. We will also use the obvious variations of this convention, and will simplify the notation using remark \[B2\]. \[sheaf\] A stack is a sheaf of groupoids, i.e. a 2-functor (presheaf) that satisfies the following sheaf axioms. Let $\{U_i \to U\}_{i\in I}$ be a covering of $U$ in the site $({{Sch}}/S)$. Then 1. (Glueing of morphisms) If $X$ and $Y$ are two objects of ${{\mathcal{F}}}(U)$, and $\varphi_i:X|_i\to Y|_i$ are morphisms such that $\varphi_i|_{ij}=\varphi_j|_{ij}$, then there exists a morphism $\eta:X\to Y$ such that $\eta|_i=\varphi_i$. 2. (Monopresheaf) If $X$ and $Y$ are two objects of ${{\mathcal{F}}}(U)$, and $\varphi:X\to Y$, $\psi:X \to Y$ are morphisms such that $\varphi|_i=\psi|_i$, then $\varphi = \psi$. 3.
\[sheafthree\] (Glueing of objects) If $X_i$ are objects of ${{\mathcal{F}}}(U_i)$ and $\varphi_{ij}:X_j|_{ij} \to X_i|_{ij}$ are morphisms satisfying the cocycle condition $\varphi_{ij}|_{ijk}\circ \varphi_{jk}|_{ijk}= \varphi_{ik}|_{ijk}$, then there exists an object $X$ of ${{\mathcal{F}}}(U)$ and isomorphisms $\varphi_i:X|_i \stackrel{{\cong}}\to X_i$ such that $\varphi_{ji}\circ \varphi_i|_{ij}= \varphi_j|_{ij}$. Let’s stop for a moment and look at how we have enlarged the category of schemes by defining the category of stacks. We can draw the following diagram $$\xymatrix{ & {Algebraic\,Stacks} \ar[r] & {Stacks} \ar[r] &{Presheaves\,of\, groupoids} \\ {{{Sch}}/S} \ar[r] \ar[ur] & {Algebraic\,Spaces} \ar[r] \ar[u] &{Spaces} \ar[r] \ar[u] &{Presheaves\,of\,sets} \ar[u] }$$ where $A \to B$ means that the category $A$ is a subcategory of $B$. Recall that a presheaf of sets is just a functor from $({{Sch}}/S)$ to the category $({{Sets}})$, and a presheaf of groupoids is just a 2-functor to the 2-category $({{groupoids}})$. A sheaf (for example a space or a stack) is a presheaf that satisfies the sheaf axioms (these axioms are slightly different in the context of categories or 2-categories), and if this sheaf satisfies some geometric conditions (that we haven’t yet specified), we will have an algebraic stack or algebraic space. Stacks as categories. Groupoids {#subsgroupoids} ------------------------------- There is an alternative way of defining a stack. From this point of view a stack will be a category, instead of a functor. A category over $({{Sch}}/S)$ is a category ${{\mathcal{F}}}$ and a covariant functor $p^{}_{{\mathcal{F}}}:{{\mathcal{F}}}\to ({{Sch}}/S)$. If $X$ is an object (resp. $\phi$ is a morphism) of ${{\mathcal{F}}}$, and $p^{}_{{\mathcal{F}}}(X)=B$ (resp. $p^{}_{{\mathcal{F}}}(\phi)=f$), then we say that $X$ lies over $B$ (resp. $\phi$ lies over $f$). A category ${{\mathcal{F}}}$ over $({{Sch}}/S)$ is called a category fibered in groupoids (or just a groupoid) if 1.
\[groupoidone\] For every $f:B'\to B$ in $({{Sch}}/S)$ and every object $X$ with $p^{}_{{\mathcal{F}}}(X)=B$, there exists at least one object $X'$ and a morphism $\phi:X'\to X$ such that $p^{}_{{\mathcal{F}}}(X')=B'$ and $p^{}_{{\mathcal{F}}}(\phi)=f$. $$\xymatrix{ {X'} \ar@{-->}[r]^{\phi} \ar@{-->}[d] & {X} \ar[d] \\ {B'} \ar[r]^{f} & {B} }$$ 2. \[groupoidtwo\] For every diagram $$\xymatrix{ {X_3} \ar[rr]^{\psi} \ar[dd]& & {X_1} \ar[dd] \\ & {X_2} \ar[ru]^{\phi} \ar[dd] \\ {B_3} \ar '[r][rr]^{f\circ f'} \ar[rd]_{f'} & & {B_1} \\ & {B_2} \ar[ru]_f }$$ (where $p^{}_{{\mathcal{F}}}(X_i)=B_i$, $p^{}_{{\mathcal{F}}}(\phi)=f$, $p^{}_{{\mathcal{F}}}(\psi)=f\circ f'$), there exists a unique $\varphi:X_3 \to X_2$ with $\psi=\phi\circ \varphi$ and $p^{}_{{\mathcal{F}}}(\varphi)=f'.$ Condition \[groupoidtwo\] implies that the object $X'$ whose existence is asserted in condition \[groupoidone\] is unique up to canonical isomorphism. For each $X$ and $f$ we choose once and for all such an $X'$ and call it $f^*X$. Another consequence of condition \[groupoidtwo\] is that $\phi$ is an isomorphism if and only if $p^{}_{{\mathcal{F}}}(\phi)=f$ is an isomorphism. Let $B$ be an object of $({{Sch}}/S)$. We define ${{\mathcal{F}}}(B)$, the fiber of ${{\mathcal{F}}}$ over $B$, to be the subcategory of ${{\mathcal{F}}}$ whose objects lie over $B$ and whose morphisms lie over ${\operatorname{id}}_B$. It is a groupoid. The association $B\to {{\mathcal{F}}}(B)$ in fact defines a presheaf of groupoids (note that the 2-isomorphisms $\epsilon_{f,g}$ required in the definition of presheaf of groupoids are well defined thanks to condition \[groupoidtwo\]). 
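A basic example of a category fibered in groupoids, spelled out in this language (our illustration of the two conditions):

```latex
% F = category of pairs (B,V), with B a scheme and V a vector bundle on B.
% A morphism (B',V') -> (B,V) is a pair (f,alpha) with f : B' -> B a
% morphism of schemes and alpha : V' -> f^*V an isomorphism of bundles.
% The projection functor is
\[
  p^{}_{{\mathcal{F}}}(B,V)=B,\qquad p^{}_{{\mathcal{F}}}(f,\alpha)=f.
\]
% Condition 1 holds by taking X' = (B', f^*V) with the tautological morphism;
% condition 2 holds because alpha is invertible, so the lift of f' is forced.
% The fiber F(B) is the groupoid of vector bundles on B and their isomorphisms.
```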
Conversely, given a presheaf of groupoids ${{\mathcal{G}}}$ on $(Sch/S)$, we can define the category ${{\mathcal{F}}}$ whose objects are pairs $(B,X)$ where $B$ is an object of $({{Sch}}/S)$ and $X$ is an object of ${{\mathcal{G}}}(B)$, and whose morphisms $(B',X')\to (B,X)$ are pairs $(f,\alpha)$ where $f:B'\to B$ is a morphism in $(Sch/S)$ and $\alpha:f^* X \to X'$ is an isomorphism, where $f^*={{\mathcal{G}}}(f)$. This gives the relationship between both points of view. \[defstablecurve\] \[quotient\] A stack is a groupoid that satisfies 1. (*Prestack*). For every scheme $B$ and every pair of objects $X$, $Y$ of ${{\mathcal{F}}}$ over $B$, the contravariant functor $$\begin{array}{rccc} {\operatorname{Iso}}_B(X,Y): & ({{Sch}}/B)& \longrightarrow & ({{Sets}}) \\ & (f:B'\to B) & \longmapsto & {\operatorname{Hom}}(f^*X,f^*Y) \end{array}$$ is a sheaf on the site $(Sch/B).$ 2. Descent data is effective (this is just condition \[sheafthree\] in the definition \[sheaf\] of sheaf). From now on we will mainly use this approach. Now we will give some definitions for stacks. **Morphisms of stacks**. A morphism of stacks $f:{{\mathcal{F}}}\to {{\mathcal{G}}}$ is a functor between the categories, such that $p_{{\mathcal{G}}}\circ f= p^{}_{{\mathcal{F}}}$. A commutative diagram of stacks is a diagram $$\xymatrix{ & {{{\mathcal{G}}}} \ar[rd]^g \ar@2[d]^{\alpha} \\ {{{\mathcal{F}}}} \ar[ur]^f \ar[rr]_h & &{{{\mathcal{H}}}} }$$ such that $\alpha:g\circ f \to h$ is an isomorphism of functors. If $f$ is an equivalence of categories, then we say that the stacks ${{\mathcal{F}}}$ and ${{\mathcal{G}}}$ are isomorphic. We denote by ${\operatorname{Hom}}_S({{\mathcal{F}}},{{\mathcal{G}}})$ the category whose objects are morphisms of stacks and whose morphisms are natural transformations. **Stack associated to a scheme**. Given a scheme $U$ over $S$, consider the category $({{Sch}}/U)$.
Define the functor $p^{}_U:({{Sch}}/U)\to ({{Sch}}/S)$ which sends the $U$-scheme $f:B\to U$ to the composition $B\stackrel{f}\to U \to S$. Then $({{Sch}}/U)$ becomes a stack. Usually we denote this stack also by $U$. From the point of view of 2-functors, the stack associated to $U$ is the 2-functor that for each scheme $B$ gives the category whose objects are the elements of the set ${\operatorname{Hom}}_S(B,U)$, and whose only morphisms are identities. We say that a stack is represented by a scheme $U$ when it is isomorphic to the stack associated to $U$. We have the following very useful lemmas: \[nonrepresentable\] If a stack has an object with an automorphism other than the identity, then the stack cannot be represented by a scheme. Indeed, in the definition of the stack associated to a scheme we see that the only automorphisms are identities. \[yoneda\] Let ${{\mathcal{F}}}$ be a stack and $U$ a scheme. The functor $$u:{\operatorname{Hom}}_S(U,{{\mathcal{F}}}) \to {{\mathcal{F}}}(U)$$ that sends a morphism of stacks $f:({{Sch}}/U)\to {{\mathcal{F}}}$ to $f({\operatorname{id}}_U)$ is an equivalence of categories. This follows from the Yoneda lemma. This useful observation, which we will use very often, means that an object of ${{\mathcal{F}}}$ that lies over $U$ is equivalent to a morphism (of stacks) from $U$ to ${{\mathcal{F}}}$. **Fiber product**. Given two morphisms $f_1:{{\mathcal{F}}}_1\to {{\mathcal{G}}}$, $f_2:{{\mathcal{F}}}_2\to {{\mathcal{G}}}$, we define a new stack ${{\mathcal{F}}}_1 \times_{{\mathcal{G}}}{{\mathcal{F}}}_2$ (with projections to ${{\mathcal{F}}}_1$ and ${{\mathcal{F}}}_2$) as follows. The objects are triples $(X_1,X_2,\alpha)$ where $X_1$ and $X_2$ are objects of ${{\mathcal{F}}}_1$ and ${{\mathcal{F}}}_2$ that lie over the same scheme $U$, and $\alpha: f_1(X_1)\to f_2(X_2)$ is an isomorphism in ${{\mathcal{G}}}$ (equivalently, $p_{{\mathcal{G}}}(\alpha)={\operatorname{id}}_U$).
A morphism from $(X_1,X_2,\alpha)$ to $(Y_1,Y_2,\beta)$ is a pair $(\phi_1,\phi_2)$ of morphisms $\phi_i:X_i\to Y_i$ that lie over the same morphism of schemes $f:U \to V$, and such that $\beta \circ f_1(\phi_1) = f_2(\phi_2)\circ \alpha$. The fiber product satisfies the usual universal property. **Representability**. A stack ${{\mathcal{X}}}$ is said to be representable by an algebraic space (resp. scheme) if there is an algebraic space (resp. scheme) $X$ such that the stack associated to $X$ is isomorphic to ${{\mathcal{X}}}$. If “P” is a property of algebraic spaces (resp. schemes) and ${{\mathcal{X}}}$ is a representable stack, we will say that ${{\mathcal{X}}}$ has “P” iff $X$ has “P”. A morphism of stacks $f:{{\mathcal{F}}}\to {{\mathcal{G}}}$ is said to be representable if for all objects $U$ in $({{Sch}}/S)$ and morphisms $U\to {{\mathcal{G}}}$, the fiber product stack $U\times_{{\mathcal{G}}}{{\mathcal{F}}}$ is representable by an algebraic space. Let “P” be a property of morphisms of schemes that is local in nature on the target for the topology chosen on $({{Sch}}/S)$ (étale or fppf), and that is stable under arbitrary base change. For instance: separated, quasi-compact, unramified, flat, smooth, étale, surjective, finite type, locally of finite type,... Then we say that $f$ has “P” if for every $U\to {{\mathcal{G}}}$, the pullback $U\times_{{\mathcal{G}}}{{\mathcal{F}}}\to U$ has “P” ([@La p.17], [@DM p.98]). **Diagonal**. Let $\Delta_{{\mathcal{F}}}:{{\mathcal{F}}}\to {{\mathcal{F}}}\times_S {{\mathcal{F}}}$ be the obvious diagonal morphism. A morphism from a scheme $U$ to ${{\mathcal{F}}}\times_S {{\mathcal{F}}}$ is equivalent to a pair of objects $X_1$, $X_2$ of ${{\mathcal{F}}}(U)$.
Taking the fiber product of these we have $$\xymatrix{ {{\operatorname{Iso}}_U(X_1,X_2)} \ar[r] \ar[d]& {{{\mathcal{F}}}} \ar[d]^{\Delta_{{\mathcal{F}}}} \\ {U} \ar[r]^{(X_1,X_2)} & {{{\mathcal{F}}}\times_S {{\mathcal{F}}}}}$$ hence the group of automorphisms of an object is encoded in the diagonal morphism. \[diag\] The following are equivalent: 1. The morphism $\Delta_{{\mathcal{F}}}$ is representable. 2. The stack ${\operatorname{Iso}}_U(X_1,X_2)$ is representable for all $U$, $X_1$ and $X_2$. 3. For every scheme $U$, every morphism $U\to {{\mathcal{F}}}$ is representable. 4. For all schemes $U$, $V$ and morphisms $U\to {{\mathcal{F}}}$ and $V\to {{\mathcal{F}}}$, the fiber product $U\times_{{\mathcal{F}}}V$ is representable. The implications $1 \Leftrightarrow 2$ and $3 \Leftrightarrow 4$ follow easily from the definitions. $1 \Rightarrow 4$) Assume that $\Delta_{{\mathcal{F}}}$ is representable. We have to show that $U\times_{{\mathcal{F}}}V$ is representable for any $f:U\to {{\mathcal{F}}}$ and $g:V\to {{\mathcal{F}}}$. One checks that the following diagram is Cartesian $$\xymatrix{ {U\times_{{\mathcal{F}}}V} \ar[r] \ar[d]& {{{\mathcal{F}}}}\ar[d]^{\Delta_{{\mathcal{F}}}}\\ U\times_S V \ar[r]^{f\times g} &{{{\mathcal{F}}}\times_S {{\mathcal{F}}}}}$$ Hence $U\times_{{\mathcal{F}}}V$ is representable. $4 \Rightarrow 1$) First note that the Cartesian diagram defined by $h:U\to {{\mathcal{F}}}\times_S {{\mathcal{F}}}$ and $\Delta_{{\mathcal{F}}}$ factors as follows $$\xymatrix{ {U\times^{}_{{{\mathcal{F}}}\times_S {{\mathcal{F}}}} {{\mathcal{F}}}} \ar[r] \ar[d] & {U\times^{}_{{\mathcal{F}}}U} \ar[r] \ar[d] &{{{\mathcal{F}}}} \ar[d] \\ {U} \ar[r]^{\Delta_U} & {U\times_S U} \ar[r] & {{{\mathcal{F}}}\times_S {{\mathcal{F}}}}}$$ Both squares are Cartesian, and by hypothesis $U\times_{{\mathcal{F}}}U$ is representable, hence $U\times^{}_{{{\mathcal{F}}}\times_S {{\mathcal{F}}}} {{\mathcal{F}}}$ is also representable.
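For example (our illustration), take ${{\mathcal{F}}}=BG=[{\operatorname{Spec}}k/G]$ for an affine algebraic group $G$ over a field $k$, so that objects over $U$ are $G$-torsors. For two trivial torsors, the Iso sheaf is just $G$ itself:

```latex
% An isomorphism of trivial G-torsors over U is multiplication by a
% section of G over U, so
\[
  {\operatorname{Iso}}_U(P_{\mathrm{triv}},P_{\mathrm{triv}})\;\cong\; G\times_k U,
\]
% which is representable by a scheme. Since every torsor is locally
% trivial, Iso_U(P_1,P_2) is representable for all P_1, P_2, and hence
% the diagonal of BG is representable.
```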
Algebraic stacks {#subsalgebraic} ---------------- Now we will define the notion of algebraic stack. As we have said, first we have to choose a topology on $({{Sch}}/S)$. Depending on whether we choose the étale or the fppf topology, we get different notions. Let $({{Sch}}/S)$ be the category of $S$-schemes with the étale topology. Let ${{\mathcal{F}}}$ be a stack. Assume 1. The diagonal $\Delta_{{\mathcal{F}}}$ is representable, quasi-compact and separated. 2. There exists a scheme $U$ (called an atlas) and an étale surjective morphism $u:U\to {{\mathcal{F}}}$. Then we say that ${{\mathcal{F}}}$ is a Deligne-Mumford stack. The morphism of stacks $u$ is representable because of proposition \[diag\] and the fact that the diagonal $\Delta_{{\mathcal{F}}}$ is representable. Hence the notion of étale is well defined for $u$. In [@DM] this was called an algebraic stack. In the literature, algebraic stack usually refers to an Artin stack (which we will define later). To avoid confusion, we will use “algebraic stack” only when we refer in general to both notions, and we will use “Deligne-Mumford” or “Artin” stack when we want to be specific. Note that the definition of Deligne-Mumford stack is the same as the definition of algebraic space, but in the context of stacks instead of spaces. As with schemes, a stack such that the diagonal $\Delta_{{\mathcal{F}}}$ is quasi-compact and separated is called quasi-separated. We always assume this technical condition, as is usually done both with schemes and algebraic spaces. Sometimes it is difficult to find an étale atlas explicitly, and the following proposition is useful. \[represen\] Let ${{\mathcal{F}}}$ be a stack over the étale site $({{Sch}}/S)$. Assume 1. The diagonal $\Delta_{{\mathcal{F}}}$ is representable, quasi-compact, separated and **unramified**. 2. There exists a scheme $U$ of finite type over $S$ and a surjective morphism $u:U\to {{\mathcal{F}}}$. Then ${{\mathcal{F}}}$ is a Deligne-Mumford stack.
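To illustrate the definition (our example): let $G$ be a finite group, viewed as a constant group scheme over a field $k$. Then $BG$ is a Deligne-Mumford stack with a one-point atlas.

```latex
% Atlas: the morphism u corresponding to the trivial G-torsor over Spec k.
\[
  u:{\operatorname{Spec}}k \longrightarrow BG,
  \qquad
  {\operatorname{Spec}}k\times_{BG}{\operatorname{Spec}}k\;\cong\; G ,
\]
% Every torsor is locally trivial, so u is surjective; its base change
% along any Spec k -> BG is the finite etale k-scheme G, of degree |G|.
% Hence u is an etale atlas, as required in the definition above.
```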
Now we define the analogue for the fppf topology [@Ar2]. Let $({{Sch}}/S)$ be the category of $S$-schemes with the fppf topology. Let ${{\mathcal{F}}}$ be a stack. Assume 1. The diagonal $\Delta_{{\mathcal{F}}}$ is representable, quasi-compact and separated. 2. There exists a scheme $U$ (called an atlas) and a smooth (hence locally of finite type) and surjective morphism $u:U\to {{\mathcal{F}}}$. Then we say that ${{\mathcal{F}}}$ is an Artin stack. For propositions analogous to proposition \[represen\] see [@La 4]. If ${{\mathcal{F}}}$ is a Deligne-Mumford (resp. Artin) stack, then the diagonal $\Delta_{{\mathcal{F}}}$ is unramified (resp. of finite type). Recall that $\Delta_{{\mathcal{F}}}$ is unramified (resp. of finite type) if for every scheme $B$ and objects $X$, $Y$ of ${{\mathcal{F}}}(B)$, the morphism ${\operatorname{Iso}}_B(X,Y)\to B$ is unramified (resp. of finite type). If $B={\operatorname{Spec}}S$ and $X=Y$, then this means that the automorphism group of $X$ is discrete and reduced for a Deligne-Mumford stack, and just of finite type for an Artin stack. \[quotconstruction\] \[atlasquotient\] Algebraic stacks as groupoid spaces {#subsgroupspaces} ----------------------------------- We will introduce a third equivalent definition of stack. First consider a category $C$. Let $U$ be the set of objects and $R$ the set of morphisms. The axioms of a category give us four maps of sets $$\xymatrix{ {R} \ar@<0.5ex>[r]^{s} \ar@<-0.5ex>[r]_{t} & {U} \ar[r]^{e} & {R}} \qquad \xymatrix{ \save[]+<-5.5ex,-0.55ex>*{R\times^{}_{s,U,t} R}\restore \ar[r]^{m} & {R}}$$ where $s$ and $t$ give the source and target for each morphism, $e$ gives the identity morphism, and $m$ is composition of morphisms. If the category is a groupoid then we have a fifth morphism $$\xymatrix{{R} \ar[r]^i & {R}}$$ that gives the inverse. These maps satisfy 1. $s\circ e= t\circ e = {\operatorname{id}}_U$, $s\circ i=t$, $t\circ i=s$, $s\circ m=s\circ p_2$, $t\circ m=t\circ p_1$. 2. *Associativity*.
$m\circ (m\times {\operatorname{id}}_R)=m\circ ({\operatorname{id}}_R \times m)$. 3. *Identity*. Both compositions $$R=R\times^{}_{s,U} U=U\times^{}_{U,t}R \xymatrix{ {}\ar@<0.5ex>[r]^{{\operatorname{id}}_R \times e} \ar@<-0.5ex>[r]_{e \times {\operatorname{id}}_R} & {}} R\times^{}_{s,U,t} R \xymatrix{ {}\ar[r]^{m} & {R}}$$ are equal to the identity map on $R$. 4. *Inverse*. $m\circ (i\times {\operatorname{id}}_R)= e\circ s$, $m\circ ({\operatorname{id}}_R \times i)= e\circ t$. A groupoid space is a pair of spaces (sheaves of sets) $U$, $R$, with five morphisms $s$, $t$, $e$, $m$, $i$ satisfying the same properties as above. Given a groupoid space, define the groupoid over $({{Sch}}/S)$ as the category $[R,U]'$ over $({{Sch}}/S)$ whose objects over the scheme $B$ are elements of the set $U(B)$ and whose morphisms over $B$ are elements of the set $R(B)$. Given $f:B' \to B$ we define a functor $f^*: [R,U]'(B) \to [R,U]'(B')$ using the maps $U(B) \to U(B')$ and $R(B) \to R(B')$. The groupoid $[R,U]'$ is in general only a prestack. We denote by $[R,U]$ the associated stack. The stack $[R,U]$ can be thought of as the sheaf associated to the presheaf of groupoids $B \mapsto [R,U]'(B)$ ([@La 2.4.3]). Let $\delta:R\to U\times_S U$ be an equivalence relation in the category of spaces. One can define a groupoid space, and $[R,U]$ is to be thought of as the stack-theoretic quotient of this equivalence relation, as opposed to the quotient space, used for instance to define algebraic spaces (for more details and the definition of equivalence relation see appendix A). Properties of Algebraic Stacks {#subsproperties} ------------------------------ So far we have only defined scheme-theoretic properties for representable stacks and morphisms. We can define some properties for arbitrary algebraic stacks (and morphisms among them) using the atlas. Let “P” be a property of schemes, local in nature for the smooth (resp. étale) topology.
For example: regular, normal, reduced, of characteristic $p$,... Then we say that an Artin (resp. Deligne-Mumford) stack has “P” iff the atlas has “P” ([@La p.25], [@DM p.100]). Let “P” be a property of morphisms of schemes, local on source and target for the smooth (resp. étale) topology, i.e. for any commutative diagram $$\xymatrix{ {X'} \ar[r]^{p} \ar[dr]_{f''}& {Y'\times_Y X} \ar[r]^{g'} \ar[d]_{f'} & {X} \ar[d]^{f} \\ & {Y'} \ar[r]^{g} & {Y} }$$ with $p$ and $g$ smooth (resp. étale) and surjective, $f$ has “P” iff $f''$ has “P”. For example: flat, smooth, locally of finite type,... For the étale topology we also have: étale, unramified,... Then if $f:{{\mathcal{X}}}\to {{\mathcal{Y}}}$ is a morphism of Artin (resp. Deligne-Mumford) stacks, we say that $f$ has “P” iff for one (and then for all) commutative diagram of stacks $$\xymatrix{ {X'} \ar[r]^{p} \ar[dr]_{f''}& {Y'\times_{{\mathcal{Y}}} {{\mathcal{X}}}} \ar[r]^{g'} \ar[d]_{f'} & {{{\mathcal{X}}}} \ar[d]^{f} \\ & {Y'} \ar[r]^{g} & {{{\mathcal{Y}}}} }$$ where $X'$, $Y'$ are schemes and $p$, $g$ are smooth (resp. étale) and surjective, $f''$ has “P” ([@La pp. 27-29]). For Deligne-Mumford stacks it is enough to find a commutative diagram $$\xymatrix{ {X'} \ar[r]^{p} \ar[d]_{f''}& {{{\mathcal{X}}}} \ar[d]^{f} \\ {Y'} \ar[r]^{g} & {{{\mathcal{Y}}}} }$$ where $p$ and $g$ are étale and surjective and $f''$ has “P”. Then it follows that $f$ has “P” ([@DM p. 100]). Other notions are defined as follows. \[substack\] A stack ${{\mathcal{E}}}$ is a substack of ${{\mathcal{F}}}$ if it is a full subcategory of ${{\mathcal{F}}}$ and 1. If an object $X$ of ${{\mathcal{F}}}$ is in ${{\mathcal{E}}}$, then all isomorphic objects are also in ${{\mathcal{E}}}$. 2. For all morphisms of schemes $f:U\to V$, if $X$ is in ${{\mathcal{E}}}(V)$, then $f^* X$ is in ${{\mathcal{E}}}(U)$. 3. Let $\{U_i \to U\}$ be a cover of $U$ in the site $({{Sch}}/S)$. Then $X$ is in ${{\mathcal{E}}}$ iff $X|_i$ is in ${{\mathcal{E}}}$ for all $i$.
A substack ${{\mathcal{E}}}$ of ${{\mathcal{F}}}$ is called open (resp. closed, resp. locally closed) if the inclusion morphism ${{\mathcal{E}}}\to {{\mathcal{F}}}$ is **representable** and is an open immersion (resp. closed immersion, resp. locally closed immersion). An algebraic stack ${{\mathcal{F}}}$ is irreducible if it is not the union of two distinct and nonempty proper closed substacks. An algebraic stack ${{\mathcal{F}}}$ is separated if the (representable) diagonal morphism $\Delta_{{\mathcal{F}}}$ is universally closed (and hence proper, because it is automatically separated and of finite type). A morphism $f:{{\mathcal{F}}}\to {{\mathcal{G}}}$ of algebraic stacks is separated if for all $U \to {{\mathcal{G}}}$ with $U$ affine, $U\times_{{\mathcal{G}}}{{\mathcal{F}}}$ is a separated (algebraic) stack. For Deligne-Mumford stacks, $\Delta_{{\mathcal{F}}}$ is universally closed iff it is finite. There is a valuative criterion of separatedness, similar to the criterion for schemes. Recall that by the Yoneda lemma (lemma \[yoneda\]), a morphism $f:U\to {{\mathcal{F}}}$ between a scheme and a stack is equivalent to an object in ${{\mathcal{F}}}(U)$. Then we will say that $\alpha$ is an isomorphism between two morphisms $f_1,f_2:U\to {{\mathcal{F}}}$ when $\alpha$ is an isomorphism between the corresponding objects of ${{\mathcal{F}}}(U)$. An algebraic stack ${{\mathcal{F}}}$ is separated (over $S$) if and only if the following holds. Let $A$ be a valuation ring with fraction field $K$. Let $g^{}_1:{\operatorname{Spec}}A\to {{\mathcal{F}}}$ and $g^{}_2:{\operatorname{Spec}}A \to {{\mathcal{F}}}$ be two morphisms such that: 1. $f_{p^{}_{{\mathcal{F}}}}\circ g^{}_1= f_{p^{}_{{\mathcal{F}}}}\circ g^{}_2$. 2. There exists an isomorphism $\alpha: g^{}_1|_{{\operatorname{Spec}}K} \to g^{}_2|_{{\operatorname{Spec}}K}$.
$$\xymatrix{ & & {{{\mathcal{F}}}} \ar[d]^{p^{}_{{{\mathcal{F}}}}} \\ {{\operatorname{Spec}}K} \ar@(u,l)[rru] \ar[r]^{i} & {{\operatorname{Spec}}A} \ar@<0.5ex>[ru]^{g^{}_1} \ar@<-0.5ex>[ru]_{g^{}_2} \ar[r] & S }$$ Then there exists an isomorphism (in fact unique) $\tilde\alpha: g^{}_1\to g^{}_2$ that extends $\alpha$, i.e. $\tilde\alpha|_{{\operatorname{Spec}}K}=\alpha$. \[dvr\] The criterion for morphisms is more involved because we are working with stacks and we have to keep track of the isomorphisms. A morphism of algebraic stacks $f:{{\mathcal{F}}}\to {{\mathcal{G}}}$ is separated if and only if the following holds. Let $A$ be a valuation ring with fraction field $K$. Let $g^{}_1:{\operatorname{Spec}}A\to {{\mathcal{F}}}$ and $g^{}_2:{\operatorname{Spec}}A \to {{\mathcal{F}}}$ be two morphisms such that: 1. There exists an isomorphism $\beta: f\circ g^{}_1\to f\circ g^{}_2$. 2. There exists an isomorphism $\alpha: g^{}_1|_{{\operatorname{Spec}}K} \to g^{}_2|_{{\operatorname{Spec}}K}$. 3. $f(\alpha)=\beta|_{{\operatorname{Spec}}K}$. Then there exists an isomorphism (in fact unique) $\tilde\alpha: g^{}_1\to g^{}_2$ that extends $\alpha$, i.e. $\tilde\alpha|_{{\operatorname{Spec}}K}=\alpha$ and $f(\tilde\alpha)=\beta$. Remark \[dvr\] is also true in this case. An algebraic stack ${{\mathcal{F}}}$ is proper (over $S$) if it is separated and of finite type, and if there is a scheme $X$ proper over $S$ and a (representable) surjective morphism $X\to {{\mathcal{F}}}$. A morphism ${{\mathcal{F}}}\to {{\mathcal{G}}}$ is proper if for any affine scheme $U$ and morphism $U\to {{\mathcal{G}}}$, the fiber product $U\times_{{\mathcal{G}}}{{\mathcal{F}}}$ is proper over $U$. For properness we only have a satisfactory criterion for stacks (see [@La prop 3.23 and conj 3.25] for a generalization to morphisms). Let ${{\mathcal{F}}}$ be a separated algebraic stack (over $S$). It is proper (over $S$) if and only if the following condition holds.
Let $A$ be a valuation ring with fraction field $K$. For any commutative diagram $$\xymatrix{ & & {{{\mathcal{F}}}} \ar[d]^{p^{}_{{\mathcal{F}}}} \\ {{\operatorname{Spec}}K} \ar[r]^{i} \ar[rru]^{g} & {{\operatorname{Spec}}A} \ar[r] & S }$$ there exists a finite field extension $K'$ of $K$ such that $g$ extends to ${\operatorname{Spec}}(A')$, where $A'$ is the integral closure of $A$ in $K'$. $$\xymatrix{ & & {{{\mathcal{F}}}} \ar[dd]^{p^{}_{{\mathcal{F}}}} \\ {{\operatorname{Spec}}K'} \ar[rru]^{g\circ u} \ar[d]_{u} \ar[r] & {{\operatorname{Spec}}A'} \ar[d] \ar@{-->}[ru] \\ {{\operatorname{Spec}}K} \ar[r]^{i} & {{\operatorname{Spec}}A} \ar[r] & S }$$ Points and dimension {#subspoints} -------------------- We will introduce the concept of point of an algebraic stack and dimension of a stack at a point. The reference for this is [@La chapter 5]. Let ${{\mathcal{F}}}$ be an algebraic stack over $S$. The set of points of ${{\mathcal{F}}}$ is the set of equivalence classes of pairs $(K,x)$, with $K$ a field over $S$ (i.e. a field with a morphism of schemes ${\operatorname{Spec}}K \to S$) and $x:{\operatorname{Spec}}K \to {{\mathcal{F}}}$ a morphism of stacks. Two pairs $(K',x')$ and $(K'',x'')$ are equivalent if there is a field $K$, extension of both $K'$ and $K''$, and a commutative diagram $$\xymatrix{ {{\operatorname{Spec}}K} \ar[r] \ar[d] & {{\operatorname{Spec}}K'} \ar[d]^{x'} \\ {{\operatorname{Spec}}K''} \ar[r]^{x''} & {{\mathcal{F}}}}$$ Given a morphism ${{\mathcal{F}}}\to {{\mathcal{G}}}$ of algebraic stacks and a point of ${{\mathcal{F}}}$, we define the image of that point in ${{\mathcal{G}}}$ by composition. Every point of an algebraic stack is the image of a point of an atlas. To see this, given a point represented by ${\operatorname{Spec}}K \to {{\mathcal{F}}}$ and an atlas $X\to {{\mathcal{F}}}$, take any point ${\operatorname{Spec}}K' \to X\times_{{\mathcal{F}}}{\operatorname{Spec}}K$. The image of this point in $X$ maps to the given point.
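As an example (ours, a sketch): the points of the quotient stack $[\mathbb{A}^1/\mathbb{G}_m]$ over a field $k$. Over an algebraically closed field every torsor is trivial, so morphisms ${\operatorname{Spec}}K\to[\mathbb{A}^1/\mathbb{G}_m]$ correspond to orbits of $K$-points, and the stack has exactly two points.

```latex
% The G_m-orbits on A^1 are {0} and A^1 \ {0}; accordingly
\[
  \bigl|[\mathbb{A}^1/\mathbb{G}_m]\bigr|=\{\,\xi_0,\ \xi_1\,\},
  \qquad
  {\operatorname{Aut}}(\xi_0)=\mathbb{G}_m,\qquad
  {\operatorname{Aut}}(\xi_1)=\{1\},
\]
% where xi_0 is represented by the origin and xi_1 by any nonzero point.
% Both points are images of points of the atlas A^1 -> [A^1/G_m], as the
% argument above guarantees.
```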
To define the concept of dimension, recall that if $X$ and $Y$ are locally Noetherian schemes and $f:X\to Y$ is flat, then for any point $x\in X$ we have $$\dim_x(X)= \dim_x(f) + \dim_{f(x)}(Y),$$ with $\dim_x(f)=\dim_x(X_{f(x)})$, where $X_y$ is the fiber of $f$ over $y$. Let $f:{{\mathcal{F}}}\to {{\mathcal{G}}}$ be a representable morphism, locally of finite type, between two algebraic stacks. Let $\xi$ be a point of ${{\mathcal{F}}}$. Let $Y$ be an atlas of ${{\mathcal{G}}}$. Take a point $x$ in the algebraic space $Y\times_{{\mathcal{G}}}{{\mathcal{F}}}$ that maps to $\xi$, $$\xymatrix{ {Y\times_{{\mathcal{G}}}{{\mathcal{F}}}} \ar[r] \ar[d]_{\tilde f} & {{\mathcal{F}}}\ar[d]^{f} \\ {Y} \ar[r] & {{\mathcal{G}}}}$$ and define the dimension of the morphism $f$ at the point $\xi$ as $$\dim_\xi(f)=\dim_x(\tilde f).$$ It can be shown that this definition is independent of the choices made. Let ${{\mathcal{F}}}$ be a locally Noetherian algebraic stack and $\xi$ a point of ${{\mathcal{F}}}$. Let $u: X\to {{\mathcal{F}}}$ be an atlas, and $x$ a point of $X$ mapping to $\xi$. We define the dimension of ${{\mathcal{F}}}$ at the point $\xi$ as $$\dim_\xi({{\mathcal{F}}})=\dim_x(X)-\dim_x(u).$$ The dimension of ${{\mathcal{F}}}$ is defined as $$\dim({{\mathcal{F}}})=\operatorname{Sup}_{\xi} (\dim_\xi({{\mathcal{F}}})).$$ Again, this is independent of the choices made. Quasi-coherent sheaves on stacks {#subssheaves} -------------------------------- A quasi-coherent sheaf ${{\mathcal{S}}}$ on an algebraic stack ${{\mathcal{F}}}$ is the following set of data: 1. For each morphism $X\to {{\mathcal{F}}}$ where $X$ is a scheme, a quasi-coherent sheaf ${{\mathcal{S}}}_X$ on $X$. 2. For each commutative diagram $$\xymatrix{ {X} \ar[r]^f \ar[dr] & {Y} \ar[d] \\ & {{{\mathcal{F}}}} }$$ an isomorphism $\varphi^{}_f: {{\mathcal{S}}}_X \stackrel{{\cong}}{{\longrightarrow}} f^*{{\mathcal{S}}}_Y$, satisfying the cocycle condition, i.e.
for any commutative diagram $$\begin{aligned} \label{sheaf2} \xymatrix{ {X} \ar[r]^{f} \ar[dr] & {Y} \ar[d] \ar[r]^{g}& {Z} \ar[dl] \\ & {{{\mathcal{F}}}} }\end{aligned}$$ we have $\varphi^{}_{g\circ f} = f^* \varphi^{}_g \circ \varphi^{}_f$. We say that ${{\mathcal{S}}}$ is coherent (resp. of finite type, of finite presentation, locally free) if ${{\mathcal{S}}}_X$ is coherent (resp. of finite type, of finite presentation, locally free) for all $X$. A morphism of quasi-coherent sheaves $h:{{\mathcal{S}}}\to {{\mathcal{S}}}'$ is a collection of morphisms of sheaves $h^{}_X:{{\mathcal{S}}}^{}_X \to {{\mathcal{S}}}'_X$ compatible with the isomorphisms $\varphi$. Vector bundles: moduli stack vs. moduli scheme {#sectionversus} ============================================== In this section we will compare, in the context of vector bundles, the new approach of stacks with the standard approach of moduli schemes via geometric invariant theory (GIT). Fix a scheme $X$, a positive integer $r$ and classes $c_i\in H^{2i}(X)$. All vector bundles over $X$ in this section will have rank $r$ and Chern classes $c_i$. We will also consider vector bundles on products $B\times X$ where $B$ is a scheme. We will always assume that these vector bundles are flat over $B$, and that the restrictions to the slices $\{p\}\times X$ are vector bundles with rank $r$ and Chern classes $c_i$. Fix also a polarization on $X$. All references to stability or semistability of vector bundles will mean Gieseker stability with respect to this fixed polarization. Recall that the functor ${\underline{{\mathfrak{M}}}}^{s}$ (resp. ${\underline{{\mathfrak{M}}}}^{ss}$) is the functor from $(Sch/S)$ to $(Sets)$ that for each scheme $B$ gives the set of *equivalence* classes of vector bundles over $B\times X$, flat over $B$ and such that the restrictions $V|_b$ to the slices $\{b\}\times X$ are stable (resp.
semistable) vector bundles with fixed rank and Chern classes, where two vector bundles $V$ and $V'$ on $B\times X$ are considered *equivalent* if there is a line bundle $L$ on $B$ such that $V$ is isomorphic to $V'\otimes p^*_B L$. There are schemes ${\mathfrak{M}}^{s}$ and ${\mathfrak{M}}^{ss}$, called moduli schemes, corepresenting the functors ${{\underline{{\mathfrak{M}}}}}^{s}$ and ${{\underline{{\mathfrak{M}}}}}^{ss}$. The moduli scheme ${\mathfrak{M}}^{ss}$ is constructed using the Quot schemes introduced in example \[quotconstruction\] (for a detailed exposition of the construction, see [@HL]). Since the set of *semistable* vector bundles is bounded, we can choose once and for all $N$ and $m$ (depending only on the Chern classes and rank) with the property that for any semistable vector bundle $V$ there is a point in $R=R_{N,m}$ whose corresponding quotient is isomorphic to $V$. The scheme $R$ parametrizes vector bundles $V$ on $X$ together with a basis of $H^0(V(m))$ (up to multiplication by a scalar). Recall that $N=h^0(V(m))$. There is an action of ${{GL(N)}}$ on $R$, corresponding to change of basis, but since two bases that differ only by a scalar give the same point of $R$, this ${{GL(N)}}$ action factors through ${{PGL(N)}}$. Then the moduli scheme ${\mathfrak{M}}^{ss}$ is defined as the GIT quotient $R {{/\!\!/}}{{PGL(N)}}$. The closed points of ${\mathfrak{M}}^{ss}$ correspond to S-equivalence classes of vector bundles, so if there is a strictly semistable vector bundle, the functor ${{\underline{{\mathfrak{M}}}}}^{ss}$ is not representable. Now we will compare this scheme with the moduli stack ${{\mathcal{M}}}$ defined in example \[bbund\]. We will also consider the moduli stack ${{\mathcal{M}}}^{s}$ defined in the same way, but with the extra requirement that the vector bundles should be stable. The moduli stack ${{\mathcal{M}}}^{s}$ is a substack (definition \[substack\]) of ${{\mathcal{M}}}$.
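To make the S-equivalence phenomenon concrete, here is a standard example, not spelled out in the text, for rank $2$ and degree $0$ bundles on a smooth projective curve of genus $g\geq 1$.

```latex
% Standard example of S-equivalence (assumed setup: rank 2, degree 0 on a
% curve of genus at least 1, so that nonsplit extensions exist).
Every extension
\[
  0\longrightarrow \mathcal{O}\longrightarrow V\longrightarrow
  \mathcal{O}\longrightarrow 0
\]
is strictly semistable, with Jordan--H\"older graded object
$\operatorname{gr}(V)=\mathcal{O}\oplus\mathcal{O}$. Hence all such $V$
(the trivial bundle together with the nonsplit extensions, classified up to
scalar by $\mathbb{P}(H^1(\mathcal{O}))$) are S-equivalent, and so give the
\emph{same} closed point of $\mathfrak{M}^{ss}$, even though a nonsplit $V$
is not isomorphic to $\mathcal{O}\oplus\mathcal{O}$. The moduli stack
$\mathcal{M}$ keeps all of these bundles distinct.
```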
The following are some of the differences between the moduli scheme and the moduli stack: 1. The stack ${{\mathcal{M}}}$ parametrizes all vector bundles, but the scheme ${\mathfrak{M}}^{ss}$ only parametrizes semistable vector bundles. 2. From the point of view of the scheme ${\mathfrak{M}}^{ss}$, we identify two vector bundles if they are S-equivalent. On the other hand, from the point of view of the moduli stack, two vector bundles are identified only if they are isomorphic. 3. Let $V$ and $V'$ be two families of vector bundles parametrized by a scheme $B$, i.e. two vector bundles (flat over $B$) on $B\times X$. If there is a line bundle $L$ on $B$ such that $V$ is isomorphic to $V'\otimes p^*_B L$, then from the point of view of the moduli scheme, $V$ and $V'$ are identified as being the same family. On the other hand, from the point of view of the moduli stack, $V$ and $V'$ are identified only if they are isomorphic as vector bundles on $B\times X$. 4. The functor ${{\underline{{\mathfrak{M}}}}}^{s}$ corresponding to stable vector bundles is sometimes representable by a scheme, but the moduli stack ${{\mathcal{M}}}^{s}$ is never representable by a scheme. To see this, note that any vector bundle has automorphisms different from the identity (multiplication by scalars) and apply lemma \[nonrepresentable\]. Now we will restrict our attention to stable bundles, i.e. to the scheme ${\mathfrak{M}}^s$ and the stack ${{\mathcal{M}}}^s$. For stable bundles the notions of $S$-equivalence and isomorphism coincide, so the points of ${\mathfrak{M}}^s$ correspond to isomorphism classes of vector bundles. Consider $R^{s}\subset R$, the subscheme corresponding to stable bundles. There is a map $\pi :R^s \to {\mathfrak{M}}^s=R^s/{{PGL(N)}}$, and $\pi$ is in fact a principal ${{PGL(N)}}$-bundle (this is a consequence of Luna’s étale slice theorem).
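The contrast between the two quotient groups can be made quantitative; the following is a sketch of the bookkeeping (on the stable locus, using the definition of the dimension of a stack from the previous section), anticipating the dimension count discussed at the end of this section.

```latex
% Sketch of the dimension count (stable locus; definitions as in the text).
Since $\pi\colon R^{s}\to \mathfrak{M}^{s}$ is a principal $PGL(N)$-bundle,
\[
  \dim \mathfrak{M}^{s}=\dim R^{s}-\dim PGL(N)=\dim R^{s}-(N^{2}-1),
\]
while for the quotient stack, using the atlas $R^{s}\to [R^{s}/GL(N)]$,
whose fibers are copies of $GL(N)$,
\[
  \dim\,[R^{s}/GL(N)]=\dim R^{s}-\dim GL(N)=\dim R^{s}-N^{2}.
\]
So the stack has dimension exactly one less than the moduli scheme, the
discrepancy coming from the one-dimensional group of scalar automorphisms.
```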
Recall from example \[atlasquotient\] that there is a morphism $[R^{ss}/{{PGL(N)}}] \to {\mathfrak{M}}^{ss}$, and that the morphism $[R^{s}/{{PGL(N)}}] \to {\mathfrak{M}}^{s}$ is an isomorphism of stacks. \[versus\] There is a commutative diagram of stacks $$\xymatrix{ {[R^{s}/{{GL(N)}}]} \ar[rr]^{q} \ar[d]_{g}^{\simeq}& &{[R^{s}/{{PGL(N)}}]} \ar[d]^{h}_{\simeq} \\ {{{\mathcal{M}}}^{s}} \ar[rr]_{\varphi} & &{\;{\mathfrak{M}}^{s},} }$$ where $g$ and $h$ are isomorphisms of stacks, but $q$ and $\varphi$ are not. If we replace “stable” with “semistable” we still have a commutative diagram, but the corresponding morphism $h^{ss}$ is not an isomorphism of stacks. The morphism $\varphi$ is the composition of the natural morphism ${{\mathcal{M}}}^{s} \to {\underline{{\mathfrak{M}}}}^{s}$ (sending each category to the set of isomorphism classes of objects) and the morphism ${\underline{{\mathfrak{M}}}}^{s} \to {\mathfrak{M}}^{s}$ given by the fact that the scheme ${\mathfrak{M}}^{s}=R^s{{/\!\!/}}{{PGL(N)}}$ corepresents the functor. The morphism $h$ was constructed in example \[quotient\]. The key ingredient needed to define $g$ is the fact that the ${{GL(N)}}$ action on the Quot scheme lifts to the universal bundle, i.e. the universal bundle on the Quot scheme has a ${{GL(N)}}$-linearization. Let $$\xymatrix{ {{\widetilde}{B}} \ar[r]^{f} \ar[d] & R^{ss} \\ {B} }$$ be an object of $[R^{ss}/{{GL(N)}}]$. Since $R^{ss}$ is a subscheme of a Quot scheme, it carries the restriction of the universal bundle, and this universal bundle has a ${{GL(N)}}$-linearization. Let ${\widetilde}E$ be the vector bundle on ${\widetilde}B\times X$ defined by the pullback of this universal bundle under $f\times {\operatorname{id}}$. Since $f$ is ${{GL(N)}}$-equivariant, ${\widetilde}E$ is also ${{GL(N)}}$-linearized. Since ${\widetilde}B \times X \to B\times X$ is a principal bundle, the vector bundle ${\widetilde}E$ descends to give a vector bundle $E$ on $B\times X$, i.e. an object of ${{\mathcal{M}}}^{ss}$.
Let $$\xymatrix{ & & R^{ss}\\ {{\widetilde}{B}} \ar[r]_{\phi} \ar[d] \ar[rru]^{f} & {{\widetilde}{B}'} \ar[d] \ar[ru]_{f'} \\ {B} \ar@{=}[r] & {B} }$$ be a morphism in $[R^{ss}/{{GL(N)}}]$. Consider the vector bundles ${\widetilde}E$ and ${\widetilde}E'$ defined as before. Since $f'\circ \phi=f$, we get an isomorphism of ${\widetilde}E$ with $(\phi \times {\operatorname{id}})^* {\widetilde}E'$. Furthermore this isomorphism is ${{GL(N)}}$-equivariant, and then it descends to give an isomorphism of the vector bundles $E$ and $E'$ on $B\times X$, and we get a morphism in ${{\mathcal{M}}}^{ss}$. To prove that this gives an equivalence of categories, we construct a functor $\overline g$ from ${{\mathcal{M}}}^{ss}$ to $[R^{ss}/{{GL(N)}}]$. Given a vector bundle $E$ on $B\times X$, let $q:{\widetilde}B \to B$ be the ${{GL(N)}}$-principal bundle associated with the rank $N$ vector bundle ${p^{}_{B}}_*(E\otimes p^*_X {{\mathcal{O}}}_X(m))$ on $B$. Let ${\widetilde}E=(q\times {\operatorname{id}})^*E$ be the pullback of $E$ to ${\widetilde}B\times X$. It has a canonical ${{GL(N)}}$-linearization because it is defined as a pullback by a principal ${{GL(N)}}$-bundle. The vector bundle ${p^{}_{{\widetilde}B}}_*({\widetilde}E\otimes p^*_X {{\mathcal{O}}}_X(m))$ is canonically isomorphic to the trivial bundle ${{\mathcal{O}}}^N_{{\widetilde}B}$, and this isomorphism is ${{GL(N)}}$-equivariant, so we get an *equivariant* morphism ${\widetilde}B\to R^{ss}$, and hence an object of $[R^{ss}/{{GL(N)}}]$. If we have an isomorphism between two vector bundles $E$ and $E'$ on $B\times X$, it is easy to check that it induces an isomorphism between the associated objects of $[R^{ss}/{{GL(N)}}]$. It is easy to check that there are natural isomorphisms of functors $g\circ \overline g {\cong}{\operatorname{id}}$ and $\overline g\circ g {\cong}{\operatorname{id}}$, and then $g$ is an equivalence of categories.
The morphism $q$ is defined using the following lemma, with $G={{GL(N)}}$, $H$ the subgroup consisting of scalar multiples of the identity, $\overline G={{PGL(N)}}$ and $Y=R^{ss}$. Let $Y$ be an $S$-scheme and $G$ an affine flat group $S$-scheme, acting on $Y$ on the right. Let $H$ be a normal closed subgroup of $G$. Assume that $\overline G=G/H$ is affine. If $H$ acts trivially on $Y$, then there is a morphism of stacks $$[Y/G]{\longrightarrow}[Y/\overline G].$$ If $H$ is nontrivial, then this morphism is not faithful, so it is not an isomorphism. Let $$\xymatrix{ {E} \ar[r]^{f} \ar[d]^{\pi} & Y \\ {B} }$$ be an object of $[Y/G]$. There is a scheme $E/H$ such that $\pi$ factors $$E \stackrel{q}{\longrightarrow}E/H \stackrel{\pi'}{\longrightarrow}B.$$ To construct $E/H$, note that there is an étale cover $U_i$ of $B$ and isomorphisms $\phi_i:\pi^{-1}(U_i)\to U_i\times G$, with transition functions $\psi_{ij}=\phi^{}_i \circ \phi^{-1}_j$. Since these isomorphisms are $G$-equivariant, they descend to give isomorphisms $\overline{\psi}_{ij}:U_j\times G/H \to U_i\times G/H$, and using these transition functions we get $E/H$. This construction shows that $\pi'$ is a principal $\overline G$-bundle. Furthermore, $q$ is also a principal $H$-bundle ([@HL example 4.2.4]), and in particular it is a categorical quotient. Since $f$ is $H$-invariant, there is a morphism $\overline f: E/H \to Y$, and this gives an object of $[Y/\overline G]$. If we have a morphism in $[Y/G]$, given by a morphism $g:E\to E'$ of principal $G$-bundles over $B$, it is easy to see that it descends (since $g$ is equivariant) to a morphism $\overline{g}:E/H \to E'/H$, giving a morphism in $[Y/\overline G]$.
This morphism is not faithful, since the automorphism $E\stackrel{\cdot z}{{\longrightarrow}} E$ given by multiplication on the right by a nontrivial element $z\in H$ is sent to the identity automorphism $E/H \to E/H$, and then ${\operatorname{Hom}}(E,E)\to {\operatorname{Hom}}(E/H,E/H)$ is not injective. If $X$ is a smooth curve, then it can be shown that ${{\mathcal{M}}}$ is a smooth stack of dimension $r^2(g-1)$, where $r$ is the rank and $g$ is the genus of $X$. In particular, the open substack ${{\mathcal{M}}}^{ss}$ is also smooth of dimension $r^2(g-1)$, but the moduli scheme ${\mathfrak{M}}^{ss}$ is of dimension $r^2(g-1)+1$ and might not be smooth. Proposition \[versus\] explains the difference in the dimensions (at least on the smooth part): we obtain the moduli stack by taking the quotient by the group ${{GL(N)}}$, of dimension $N^2$, but the moduli scheme is obtained by a quotient by the group ${{PGL(N)}}$, of dimension $N^2-1$. The moduli scheme ${\mathfrak{M}}^{ss}$ is not smooth in general because in the strictly semistable part of $R^{ss}$ the action of ${{PGL(N)}}$ is not free. On the other hand, the smoothness of a stack quotient doesn’t depend on the freeness of the action of the group. Appendix A: Grothendieck topologies, sheaves and algebraic spaces {#grothendiecktopologies} ================================================================= The standard reference for Grothendieck topologies is SGA (*Séminaire de Géométrie Algébrique*). For an introduction see [@T] or [@MM]. For algebraic spaces, see [@K] or [@Ar1]. An open cover of a topological space $U$ can be seen as a family of morphisms in the category of topological spaces $f_i:U_i \to U$, with the property that each $f_i$ is an open inclusion and the union of their images is $U$, i.e. we are choosing a class of morphisms (open inclusions) in the category of topological spaces.
A Grothendieck topology on an arbitrary category is basically a choice of a class of morphisms that play the role of “open sets”. A morphism $f:V\to U$ in this class is to be thought of as an “open set” in the object $U$. The concept of intersection of open sets, for instance, can be replaced by the fiber product: the “intersection” of $f_1:U_1\to U$ and $f_2:U_2\to U$ is $f_{12}:U_1\times _U U_2 \to U$. A category with a Grothendieck topology is called a site. We will consider two topologies on $({{Sch}}/S)$. **fppf topology**. Let $U$ be a scheme. Then a cover of $U$ is a finite collection of morphisms $\{f_i:U_i\to U\}_{i\in I}$ such that each $f_i$ is a finitely presented flat morphism (for Noetherian schemes, this is equivalent to flat and finite type), and $U$ is the (set-theoretic) union of the images of the $f_i$. In other words, $\coprod U_i \to U$ is *“fidèlement plat de présentation finie”*. **Étale topology**. The same definition, but with étale morphisms in place of finitely presented flat ones. A presheaf of sets on $({{Sch}}/S)$ is a contravariant functor $F$ from $({{Sch}}/S)$ to $({{Sets}})$. Choose a topology on $({{Sch}}/S)$. We say that $F$ is a sheaf (or an $S$-space) with respect to that topology if for every cover $\{f_i:U_i\to U\}_{i\in I}$ in the topology the following two axioms are satisfied: 1. *(Mono)* Let $X$ and $Y$ be two elements of $F(U)$. If $X|_i=Y|_i$ for all $i$, then $X=Y$. 2. *(Glueing)* Let $X_i$ be an object of $F(U_i)$ for each $i$ such that $X_i|_{ij}=X_j|_{ij}$; then there exists $X \in F(U)$ such that $X|_i=X_i$ for each $i$. We have used the following notation: if $X\in F(U)$, then $X|_i$ is the element of $F(U_i)$ given by $F(f_i)(X)$, and if $X_i\in F(U_i)$, then $X_i|_{ij}$ is the element of $F(U_{ij})$ given by $F(f_{ij,i})(X_i)$, where $f_{ij,i}:U_i\times_U U_j \to U_i$ is the pullback of $f_j$. We can define morphisms of $S$-spaces as morphisms of sheaves (natural transformations of functors with the obvious conditions).
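The two sheaf axioms can be packaged into a single diagram; this is a standard equivalent formulation, not written out in the text.

```latex
% Equivalent formulation of (Mono) + (Glueing): for every cover {f_i : U_i -> U},
\[
  F(U)\;\longrightarrow\;\prod_{i} F(U_i)
  \;\rightrightarrows\;\prod_{i,j} F(U_i\times_U U_j)
\]
is an equalizer diagram of sets: the first map sends $X$ to the family
$(X|_i)_i$, and the two parallel maps send a family $(X_i)_i$ to
$(X_i|_{ij})_{i,j}$ and $(X_j|_{ij})_{i,j}$ respectively. Axiom (Mono) says
that the first map is injective, and axiom (Glueing) says that every family
equalized by the two parallel maps comes from some $X\in F(U)$.
```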
Note that a scheme can be viewed as an $S$-space via its functor of points, and a morphism between two such $S$-spaces is equivalent to a scheme morphism between the schemes (by the Yoneda embedding lemma), so the category of $S$-schemes is a full subcategory of the category of $S$-spaces. **Equivalence relation and quotient space**. An equivalence relation in the category of $S$-spaces consists of two $S$-spaces $R$ and $U$ and a monomorphism of $S$-spaces $$\delta:R \to U \times_S U$$ such that for every $S$-scheme $B$, the map $\delta(B):R(B)\to U(B)\times U(B)$ is the graph of an equivalence relation between sets. A quotient $S$-space for such an equivalence relation is by definition the sheaf cokernel of the diagram $$\xymatrix{ {R} \ar@<0.5ex>[r]^{p_2\circ \delta} \ar@<-0.5ex>[r]_{p_1\circ \delta} & {U}}$$ An $S$-space $F$ is called an algebraic space if it is the quotient $S$-space for an equivalence relation such that $R$ and $U$ are $S$-schemes, $p_1\circ \delta$, $p_2\circ \delta$ are étale (morphisms of $S$-schemes), and $\delta$ is a quasi-compact morphism (of $S$-schemes). Roughly speaking, an algebraic space is a quotient of a scheme by an étale equivalence relation. The following is an equivalent definition. An $S$-space $F$ is called an algebraic space if there exists a scheme $U$ (atlas) and a morphism of $S$-spaces $u:U\to F$ such that 1. (The morphism $u$ is étale) For any $S$-scheme $V$ and morphism $V \to F$, the (sheaf) fiber product $U\times_F V$ is representable by a scheme, and the map $U\times_F V\to V$ is an étale morphism of schemes. 2. (Quasi-separatedness) The morphism $U\times_F U \to U\times_S U$ is quasi-compact. We recover the first definition by taking $R=U\times_F U$. Then, roughly speaking, we can also think of an algebraic space as “something” that looks locally in the étale topology like an affine scheme, in the same sense that a scheme is something that looks locally in the Zariski topology like an affine scheme.
Algebraic spaces are used, for instance, to give an algebraic structure to certain complex manifolds (for instance Moishezon manifolds) that are not schemes, but can be realized as algebraic spaces. All smooth algebraic spaces of dimension 1 and 2 are actually schemes. An example of a smooth algebraic space of dimension 3 that is not a scheme can be found in [@H]. But the étale topology is useful even if we are only interested in schemes. The idea is that the étale topology is finer than the Zariski topology, and in many situations it is “fine enough” to do the analogue of the manipulations that can be done with the analytic topology of complex manifolds. As an example, consider the affine complex line ${\operatorname{Spec}}({\mathbb{C}}[x])$, and take a (closed) point $x_0$ different from $0$. Assume that we want to define the function ${\sqrt{x}}$ in a neighborhood of $x_0$. In the analytic topology we only need to take a neighborhood small enough so that it doesn’t contain a loop that goes around the origin, and then we choose one of the branches (a sign) of the square root. In the Zariski topology this cannot be done, because all open sets are too large (they have loops going around the origin, so the sign of the square root will change, and ${\sqrt{x}}$ will be multivalued). But take the 2:1 étale map $V= {\operatorname{Spec}}({\mathbb{C}}[y,x,x^{-1}]/(y^2-x)) \to {\operatorname{Spec}}({\mathbb{C}}[x])$. The function ${\sqrt{x}}$ can certainly be defined on $V$: it is just equal to the function $y$. It is in this sense that we say that the étale topology is finer: $V$ is a “small enough open subset” because the square root can be defined on it. Appendix B: 2-categories ======================== In this section we recall the notions of 2-category and 2-functor. A 2-category $\mathfrak{C}$ consists of the following data [@Hak]: 1. A class of objects ${\operatorname{ob}\mathfrak{C}}$ 2.
For each pair $X$, $Y \in {\operatorname{ob}\mathfrak{C}}$, a category ${\operatorname{Hom}}(X,Y)$ 3. *horizontal composition of 1-morphisms and 2-morphisms*. For each triple $X$, $Y$, $Z \in {\operatorname{ob}\mathfrak{C}}$, a functor $$\mu_{X,Y,Z}:{\operatorname{Hom}}(X,Y) \times {\operatorname{Hom}}(Y,Z) \to {\operatorname{Hom}}(X,Z)$$ with the following conditions 1. *(Identity 1-morphism)* For each object $X\in {\operatorname{ob}\mathfrak{C}}$, there exists an object ${\operatorname{id}}_X\in {\operatorname{Hom}}(X,X)$ such that $$\mu_{X,X,Y}({\operatorname{id}}_X,\;)=\mu_{X,Y,Y}(\;,{\operatorname{id}}_Y)={\operatorname{id}}_{{\operatorname{Hom}}(X,Y)},$$ where ${\operatorname{id}}_{{\operatorname{Hom}}(X,Y)}$ is the identity functor on the category ${\operatorname{Hom}}(X,Y)$ 2. *(Associativity of horizontal compositions)* For each quadruple $X$, $Y$, $Z$, $T\in {\operatorname{ob}\mathfrak{C}}$, $$\mu_{X,Z,T}\circ (\mu_{X,Y,Z}\times {\operatorname{id}}_{{\operatorname{Hom}}(Z,T)})= \mu_{X,Y,T}\circ ({\operatorname{id}}_{{\operatorname{Hom}}(X,Y)}\times\mu_{Y,Z,T})$$ The example to keep in mind is the 2-category $\mathfrak{Cat}$ of categories. The objects of $\mathfrak{Cat}$ are categories, and for each pair $X$, $Y$ of categories, ${\operatorname{Hom}}(X,Y)$ is the category of functors between $X$ and $Y$. Note that the main difference between a 1-category (a usual category) and a 2-category is that ${\operatorname{Hom}}(X,Y)$, instead of being a set, is a category. 
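In the example $\mathfrak{Cat}$ these data are completely explicit; the following is a routine unwinding (under the standard identification of 2-morphisms with natural transformations), not spelled out in the text.

```latex
% In Cat: Hom(X,Y) is the functor category; 2-morphisms are natural
% transformations, and vertical composition is computed objectwise.
For functors $f,g\colon X\to Y$, a morphism $\alpha\colon f\Rightarrow g$ of
$\operatorname{Hom}(X,Y)$ is a natural transformation, i.e. a family of
morphisms $(\alpha_A\colon f(A)\to g(A))_{A\in X}$ such that
\[
  g(u)\circ\alpha_A=\alpha_B\circ f(u)
  \qquad\text{for every } u\colon A\to B \text{ in } X .
\]
Composition inside $\operatorname{Hom}(X,Y)$ (the vertical composition of
2-morphisms) is computed objectwise,
\[
  (\beta\circ\alpha)_A=\beta_A\circ\alpha_A ,
\]
and the functor $\mu_{X,Y,Z}$ acts on objects simply as composition of
functors.
```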
Given a 2-category, an object $f$ of the category ${\operatorname{Hom}}(X,Y)$ is called a 1-morphism of ${\mathfrak{C}}$, and is represented by a diagram $$\xymatrix { {\bullet} \ar[r]^f \save[]+<0ex,2.5ex>*{X}\restore & {\bullet}\save[]+<0ex,2.5ex>*{Y}\restore}$$ and a morphism $\alpha$ of the category ${\operatorname{Hom}}(X,Y)$ is called a 2-morphism of ${\mathfrak{C}}$, and is represented as $$\xymatrix { {\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar @(dr,dl)[rr]_{f'}^{}="fp" \save[]+<0ex,2.5ex>*{X}\restore & &{\bullet} \save[]+<0ex,2.5ex>*{Y}\restore \ar @2^{\alpha} "f";"fp"}$$ Now we will rewrite the axioms of a 2-category using diagrams. 1. *(Composition of 1-morphisms)* Given a diagram $$\xymatrix {{\bullet} \ar[r]^f \save[]+<0ex,2.5ex>*{X}\restore & {\bullet} \ar[r]^g \save[]+<0ex,2.5ex>*{Y}\restore & {\bullet} \save[]+<0ex,2.5ex>*{Z}\restore} \quad\text{there exists}\quad \xymatrix {{\bullet} \ar[r]^{g\circ f} \save[]+<0ex,2.5ex>*{X}\restore & {\bullet}\save[]+<0ex,2.5ex>*{Z}\restore}$$ (this is (iii) applied to objects) and this composition is associative: $(h\circ g) \circ f= h\circ (g\circ f)$ (this is (ii’) applied to objects). 2. *(Identity for 1-morphisms)* For each object $X$ there is a 1-morphism ${\operatorname{id}}_X$ such that $f\circ {\operatorname{id}}_X ={\operatorname{id}}_Y \circ f=f$ (this is (i’)). 3. \[three\] *(Vertical composition of 2-morphisms)* Given a diagram $$\xymatrix {{\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar [rr]|g^{}="g"_{}="g2" \ar @(dr,dl)[rr]_h^{}="h" \save[]+<0ex,2.5ex>*{X}\restore & &{\bullet} \save[]+<0ex,2.5ex>*{Y}\restore \ar @2^{\alpha} "f";"g" \ar @2^{\beta} "g2";"h"} \quad\text{there exists}\quad \xymatrix { {\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar @(dr,dl)[rr]_h^{}="g" \save[]+<0ex,2.5ex>*{X}\restore & &{\bullet} \save[]+<0ex,2.5ex>*{Y}\restore \ar @2^{\beta\circ\alpha} "f";"g"}$$ and this composition is associative $(\gamma\circ\beta)\circ\alpha = \gamma\circ(\beta\circ\alpha)$. 4.
*(Horizontal composition of 2-morphisms)* Given a diagram $$\xymatrix { {\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar @(dr,dl)[rr]_{f'}^{}="fp" \save[]+<0ex,2.5ex>*{X}\restore & &{\bullet} \save[]+<0ex,2.5ex>*{Y}\restore \ar @(ur,ul)[rr]^{g}_{}="g" \ar @(dr,dl)[rr]_{g'}^{}="gp" & &{\bullet} \save[]+<0ex,2.5ex>*{Z}\restore \ar @2^{\alpha} "f";"fp" \ar @2^{\beta} "g";"gp"} \quad\text{there exists}\quad \xymatrix { {\bullet} \ar @(ur,ul)[rrr]^{g\circ f}_{}="gf" \ar @(dr,dl)[rrr] _{g'\circ f'}^{}="gpfp" \save[]+<0ex,2.5ex>*{X}\restore & & &{\bullet} \save[]+<0ex,2.5ex>*{Z}\restore \ar @2^{\beta\ast\alpha} "gf";"gpfp"}$$ (this is (iii) applied to morphisms) and it is associative $(\gamma\ast \beta)\ast\alpha=\gamma\ast(\beta\ast\alpha)$ (this is (ii’) applied to morphisms). 5. *(Identity for 2-morphisms)* For every 1-morphism $f$ there is a 2-morphism ${\operatorname{id}}_f$ such that $\alpha\circ{\operatorname{id}}_g={\operatorname{id}}_f\circ\alpha= \alpha$ (this and item \[three\] are (ii)). We have ${\operatorname{id}}_g \ast {\operatorname{id}}_f={\operatorname{id}}_{g\circ f}$ (this means that $\mu_{X,Y,Z}$ respects the identity). 6. *(Compatibility between horizontal and vertical composition of 2-morphisms)* Given a diagram $$\xymatrix {{\bullet} \ar @(ur,ul)[rr]^f_{}="f" \ar [rr]|{f'}^{}="f1"_{}="f2" \ar @(dr,dl)[rr]_{f''}^{}="fpp" \save[]+<0ex,2.5ex>*{X}\restore & & {\bullet} \ar @(ur,ul)[rr]^g_{}="g" \ar [rr]|{g'}^{}="g1"_{}="g2" \ar @(dr,dl)[rr]_{g''}^{}="gpp" \save[]+<0ex,2.5ex>*{Y}\restore & &{\bullet} \save[]+<0ex,2.5ex>*{Z}\restore \ar @2^{\alpha} "f";"f1" \ar @2^{\alpha'} "f2";"fpp" \ar @2^{\beta} "g";"g1" \ar @2^{\beta'} "g2";"gpp"}$$ then $(\beta'\circ \beta)\ast(\alpha'\circ \alpha)=(\beta'\ast\alpha') \circ(\beta\ast\alpha)$ (this is (iii) applied to morphisms). 
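Continuing with the example $\mathfrak{Cat}$, the horizontal composition and the compatibility axiom can be written down explicitly; this is a standard computation, not carried out in the text.

```latex
% In Cat: horizontal composite of natural transformations, and the
% interchange law.
For $\alpha\colon f\Rightarrow f'$ (with $f,f'\colon X\to Y$) and
$\beta\colon g\Rightarrow g'$ (with $g,g'\colon Y\to Z$), the horizontal
composite $\beta\ast\alpha\colon g\circ f\Rightarrow g'\circ f'$ is given at
an object $A$ of $X$ by the two equal diagonals of a naturality square:
\[
  (\beta\ast\alpha)_A
  \;=\;\beta_{f'(A)}\circ g(\alpha_A)
  \;=\;g'(\alpha_A)\circ\beta_{f(A)} .
\]
With these formulas the compatibility axiom
\[
  (\beta'\circ\beta)\ast(\alpha'\circ\alpha)
  =(\beta'\ast\alpha')\circ(\beta\ast\alpha)
\]
(often called the \emph{interchange law}) follows directly from the
naturality of $\beta$.
```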
Two objects $X$ and $Y$ of a 2-category are called equivalent if there exist two 1-morphisms $f:X\to Y$, $g:Y\to X$ and two 2-isomorphisms (invertible 2-morphisms) $\alpha:g\circ f \to {\operatorname{id}}_X$ and $\beta:f\circ g \to {\operatorname{id}}_Y$. A commutative diagram of 1-morphisms in a 2-category is a diagram $$\xymatrix{ & {\bullet} \ar[rd]^g \save[]+<0ex,2.5ex>*{Y}\restore \ar @2[d]^{\alpha} \\ {\bullet} \ar[ru]^f \ar[rr]_{h} \save[]-<3ex,0ex>*{X}\restore & & {\bullet} \save[]+<3ex,0ex>*{Z}\restore}$$ such that $\alpha:g\circ f \to h$ is a 2-isomorphism. On the other hand, a diagram of 2-morphisms will be called commutative only if the compositions are actually equal. Now we will define the concept of a covariant 2-functor (a contravariant 2-functor is defined in a similar way). A covariant 2-functor $F$ between two 2-categories ${\mathfrak{C}}$ and ${\mathfrak{C'}}$ is a law that assigns to each object $X$ in ${\mathfrak{C}}$ an object $F(X)$ in ${\mathfrak{C'}}$, to each 1-morphism $f:X\to Y$ in ${\mathfrak{C}}$ a 1-morphism $F(f):F(X)\to F(Y)$ in ${\mathfrak{C'}}$, and to each 2-morphism $\alpha:f\Rightarrow g$ in ${\mathfrak{C}}$ a 2-morphism $F(\alpha):F(f)\Rightarrow F(g)$ in ${\mathfrak{C'}}$, such that 1. *(Respects identity 1-morphism)* $F({\operatorname{id}}_X)={\operatorname{id}}_{F(X)}$. 2. *(Respects identity 2-morphism)* $F({\operatorname{id}}_f)={\operatorname{id}}_{F(f)}$. 3.
\[twoisom\] *(Respects composition of 1-morphisms up to a 2-isomorphism)* For every diagram $$\xymatrix {{\bullet} \ar[r]^f \save[]+<0ex,2.5ex>*{X}\restore & {\bullet} \ar[r]^g \save[]+<0ex,2.5ex>*{Y}\restore & {\bullet} \save[]+<0ex,2.5ex>*{Z}\restore}$$ there exists a 2-isomorphism $\epsilon_{g,f}:F(g)\circ F(f) \to F(g\circ f)$ $$\xymatrix{ & {\bullet} \ar[rd]^{F(g)} \save[]+<0ex,2.5ex>*{F(Y)}\restore \ar @2[d]^{\epsilon_{g,f}} \\ {\bullet} \ar[ru]^{F(f)} \ar[rr]_{F(g\circ f)} \save[]-<3ex,0ex>*{F(X)}\restore & & {\bullet} \save[]+<3ex,0ex>*{F(Z)}\restore}$$ 1. $\epsilon_{f,{\operatorname{id}}_X}=\epsilon_{{\operatorname{id}}_Y,f}={\operatorname{id}}_{F(f)}$ 2. $\epsilon$ *is associative*. The following diagram is commutative $$\xymatrix {F(h)\circ F(g)\circ F(f) \ar@2[rr]^{\epsilon_{h,g} \times {\operatorname{id}}} \ar@2[d]_{{\operatorname{id}}\times \epsilon_{g,f}} & & F(h\circ g)\circ F(f) \ar@2[d]^{\epsilon_{h\circ g,f}} \\ F(h)\circ F(g\circ f) \ar@2[rr]^{\epsilon_{h,g\circ f}} & & F(h\circ g\circ f)}$$ 4. *(Respects vertical composition of 2-morphisms)* For every pair of 2-morphisms $\alpha:f \to f'$, $\beta:g \to g'$, we have $F(\beta\circ \alpha)=F(\beta)\circ F(\alpha)$. 5. \[last\] *(Respects horizontal composition of 2-morphisms)* For every pair of 2-morphisms $\alpha:f \to f'$, $\beta:g \to g'$, the following diagram commutes $$\xymatrix {F(g)\circ F(f) \ar@2[rr]^{F(\beta)\ast F(\alpha)} \ar@2[d]_{\epsilon_{g,f}} & & F(g')\circ F(f') \ar@2[d]^{\epsilon_{g',f'}} \\ F(g\circ f) \ar@2[rr]^{F(\beta\ast\alpha)} & & F(g'\circ f')}$$ By a slight abuse of language, condition \[last\] is usually written as $F(\beta)\ast F(\alpha)=F(\beta\ast \alpha)$. Note that strictly speaking this equality doesn’t make sense, because the sources (and the targets) don’t coincide, but if we choose once and for all the 2-isomorphisms $\epsilon$ of condition \[twoisom\], then there is a unique way of making sense of this equality.
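A 2-functor of exactly this kind already implicitly appeared in the discussion of quasi-coherent sheaves; the following sketch (a standard reformulation, with the canonical isomorphisms named explicitly) shows why composition of 1-morphisms is only respected up to 2-isomorphism.

```latex
% Standard example of a 2-functor (pseudo-functor): pullback of
% quasi-coherent sheaves.
Let $F$ assign to each scheme $X$ the category $F(X)=\mathrm{QCoh}(X)$ of
quasi-coherent sheaves, and to each morphism $f\colon X\to Y$ the pullback
functor $F(f)=f^{*}$ (so $F$ is contravariant, with values in
$\mathfrak{Cat}$). For composable morphisms $f\colon X\to Y$ and
$g\colon Y\to Z$ there is a canonical natural isomorphism
\[
  \epsilon_{g,f}\colon f^{*}\circ g^{*}
  \stackrel{\cong}{\Longrightarrow}(g\circ f)^{*},
\]
which is not an equality: for a sheaf $\mathcal{S}$ on $Z$, the sheaves
$f^{*}g^{*}\mathcal{S}$ and $(g\circ f)^{*}\mathcal{S}$ are only canonically
isomorphic. These $\epsilon_{g,f}$ satisfy the unit and associativity
conditions of the definition, so $F$ is a (contravariant) 2-functor. This is
the same phenomenon as the isomorphisms $\varphi_f$ in the definition of a
quasi-coherent sheaf on a stack.
```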
\[B2\] Given a 1-category $C$ (a usual category), we can define a 2-category: we just have to make each set ${\operatorname{Hom}}(X,Y)$ into a category, and we do this by adding only the identity morphism for each element, i.e. we regard ${\operatorname{Hom}}(X,Y)$ as a discrete category. On the other hand, given a 2-category ${\mathfrak{C}}$ there are two ways of defining a 1-category. We have to make each category ${\operatorname{Hom}}(X,Y)$ into a set. The naive way is just to take the set of objects of ${\operatorname{Hom}}(X,Y)$, and then we obtain what is called the underlying category of ${\mathfrak{C}}$ (see [@Hak]). This has the problem that a 2-functor $F:{\mathfrak{C}}\to {\mathfrak{C'}}$ is not in general a functor of the underlying categories (because in item \[twoisom\] we only require the composition of 1-morphisms to be respected up to 2-isomorphism). The best way of constructing a 1-category from a 2-category is to define the set of morphisms between the objects $X$ and $Y$ as the set of isomorphism classes of objects of ${\operatorname{Hom}}(X,Y)$: two objects $f$ and $g$ of ${\operatorname{Hom}}(X,Y)$ are isomorphic if there exists a 2-isomorphism $\alpha:f \Rightarrow g$ between them. We call the category obtained in this way the 1-category associated to ${\mathfrak{C}}$. Note that a 2-functor between 2-categories then becomes a functor between the associated 1-categories. **Acknowledgments.** This article is based on a series of lectures that I gave in February 1999 in the Geometric Langlands programme seminar of the Tata Institute of Fundamental Research. First of all, I would like to thank N. Nitsure for proposing that I give these lectures. Most of my understanding of stacks comes from conversations with N. Nitsure and C. Sorger. I would also like to thank T.R. Ramadas for encouraging me to write these notes, and the participants in the seminar in TIFR for their active participation, interest, questions and comments.
In ICTP (Trieste) I gave two informal talks in August 1999 on this subject, and the comments of the participants, especially L. Brambila-Paz and Y.I. Holla, helped to remove mistakes and improve the original notes. This work was supported by a postdoctoral fellowship of Ministerio de Educación y Cultura (Spain). [EMG]{} Algebraic Spaces, Yale Math. Monographs 3, Yale University Press, 1971. *Versal deformations and algebraic stacks,* Invent. Math. **27**, 165–189 (1974). *The irreducibility of the space of curves of given genus,* Publ. Math. IHES **36**, 75–110 (1969). *Notes on the construction of the moduli space of curves,* Preprint 1999. Topos annelés et schémas relatifs, Ergebnisse der Math. und ihrer Grenzgebiete 64, Springer Verlag, 1972. Algebraic Geometry, Grad. Texts in Math. 52, Springer Verlag, 1977. *The geometry of moduli spaces of sheaves,* Aspects of Mathematics E31, Vieweg, Braunschweig/Wiesbaden 1997. Algebraic spaces, LNM 203, Springer Verlag, 1971. *Champs algébriques,* Prépublications **88-33**, U. Paris-Sud (1988). Sheaves in Geometry and Logic, Universitext, Springer-Verlag, 1992. *Moduli of representations of the fundamental group of a smooth projective variety I,* Publ. Math. I.H.E.S. **79**, 47–129 (1994). Introduction to Étale Cohomology, Universitext, Springer-Verlag, 1994. *Intersection theory on algebraic stacks and their moduli spaces,* Invent. Math. **97**, 613–670 (1989). [^1]: To be precise, we should consider also $B$-valued points, for any scheme $B$, but we will only consider $k$-valued points for the moment